Local models for classification/routing + frontier only for generation is the other move — but the latency tradeoff is real if you're in a user-facing flow.
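The routing pattern described above can be sketched roughly as follows. This is a minimal illustration, not a real implementation: `local_classify` stands in for a small on-device model (e.g. a distilled classifier) and `call_frontier` is a stub for an expensive remote API call; both names and the keyword heuristic are hypothetical.

```python
# Sketch of the "local router + frontier generator" pattern: a cheap
# local step labels each request, and only generation tasks pay the
# frontier model's latency and cost. All names here are illustrative.

def local_classify(query: str) -> str:
    """Stand-in for a small local classifier that routes requests."""
    generation_cues = ("write", "draft", "summarize", "explain")
    if any(cue in query.lower() for cue in generation_cues):
        return "generate"
    return "lookup"

def call_frontier(query: str) -> str:
    # Placeholder for a remote frontier-model call (adds network latency).
    return f"[frontier answer to: {query}]"

def handle(query: str) -> str:
    route = local_classify(query)
    if route == "generate":
        return call_frontier(query)       # frontier only for generation
    return f"[local answer to: {query}]"  # fast path stays local

print(handle("write a summary of this report"))
print(handle("order status for ticket 123"))
```

The user-facing latency tradeoff shows up in the `generate` branch: every request routed there still waits on the remote call, so the win is limited to the share of traffic the local path can absorb.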
I've heard a few companies ended up going back to hiring actual employees for work that was previously done by LLMs, so there's a chance we could see more of that too. Might also see a few try to make it work with outdated or local models instead.
Hopefully new ways to deliver similar quality will be discovered.
Stock market will pop.
Prices will go up for people inside the moat.
In my opinion, LLMs are useful for many things, but not anything and everything, and definitely not in the way the boosters are claiming. This is not a popular opinion when you are inside the bubble or have something to gain from it. So when there's a downturn, things will hopefully stabilize, with LLMs being another tool that can be used to automate certain things. It feels crazy saying this these days; I've been told I'm out of touch for thinking this way, and who knows, maybe that's true.
You'll pay the fucking danegeld is what you'll do, and keep paying it, because you reorganized your entire existence around and mortgaged your future on a closed proprietary third party service's business model that is now a single point of failure for our entire technological civilization, making its market value practically infinite.
That's a collective "you" there, by the way, not "you" personally.
It has been learned very well.
The brazen violation of intellectual property was a precondition of making this technology useful. Taking the risk of breaking the law at this unprecedented scale was an informed decision made based on this very lesson.