AI OCR tools aren't LLMs (unless you use something like ChatGPT for this), so this argument seems invalid?
> AI feels grievously inefficient. It took 29.29 minutes to OCR the 4-page handwritten draft of this essay with Qwen3-VL:8B.
Isn't this using the wrong tool for the job?
> AI makes Nvidia rich, and I don’t like Nvidia because their Linux support sucks ♥
I don't think LLMs really benefit Nvidia or make them rich. (The author refers to LLMs as "AI" throughout this blog post.) They use completely different technologies.
> AI is centralized even though that’s bad architecture. Check what server you’re connecting to when working with local LLMs
None. If it's local, I'm not connecting to any server; I can run my models fully offline.
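If you do want to "check what server you're connecting to", a minimal sketch: verify that your client's API base URL resolves to a loopback address. The port 11434 below is Ollama's default and is an assumption about your setup; any local-only endpoint behaves the same way.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_local_endpoint(url: str) -> bool:
    """Return True when the API base URL resolves to this machine's loopback."""
    host = urlparse(url).hostname
    addr = socket.gethostbyname(host)  # resolves names like "localhost"
    return ipaddress.ip_address(addr).is_loopback

# Ollama's default local endpoint (assumed setup):
print(is_local_endpoint("http://localhost:11434"))    # → True
print(is_local_endpoint("http://127.0.0.1:11434/v1")) # → True
```

A `True` here means the traffic never leaves your machine, which is the whole point of running models locally.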