I wanted to eliminate the need for server-side processing and Python dependencies. Porting the inference to the browser keeps everything 100% local, which is great for privacy and means no more waiting in queues for 'free' online tools.
It's still being optimized for different hardware, but I'm curious to hear how it performs on your machines!