The article started as a response to two LinkedIn posts — Duda Bardavid writing about graduating from Lovable to Claude Code, and Gokul Rajaram warning vertical AI founders about long-horizon agents.
But the more I dug into it, the more it connected to Sutton's Bitter Lesson. For 70 years in AI research, general methods + compute beat handcrafted domain expertise. Every time. Chess, speech, vision, Go.
BloombergGPT seems like the product version of the same story — $10M training a domain-specific financial LLM, then GPT-4 beat it on most financial tasks with zero specialized training.
What I see in our product data matches: users don't stay in one vertical. They start narrow and expand into general-purpose workflows within weeks. The tool doesn't define the work — the person does.
Not claiming vertical SaaS is dying. Regulated industries with proprietary data moats are different. But for the "AI-augmented professional" — founders, builders, knowledge workers — I think general access beats specialized templates.
Would love to hear other opinions, especially from anyone building vertical AI tools: is this a concern for you or not? And if you're building a general-purpose tool, are you worried it isn't defensible?