First, it's just too much too fast. Both the companies whose business is AI, like OpenAI, and the companies bolting AI onto everything have pushed it forcefully and abrasively. Normally a technology has more time to seep in and normalize organically before the hard sell begins, but this time the gas pedal was floored shortly after OpenAI shipped a usable MVP.
Second, the value is far from clear for a lot of people, partly because of lazy bolt-on integrations, but also because people don't actually want or need it for many of the tasks it's being sold for, and because it isn't good or reliable at others.
Third, as noted in the article, the surrounding environment isn't right. Many average people feel like the dog in the "this is fine" meme[0] and aren't really in the mood to be sold something that could ultimately further concentrate wealth and make their lives harder. It's like parking an ice cream truck in front of a burning office building and wondering why nobody running out is buying a cone.
I say this as someone who finds AI useful for some things. All of this is pretty plainly visible. Either the big names in the industry are horrifically out of touch, or they're pretending not to see it in hopes of faking it until they make it; I'm not sure which.
[0]: https://www.npr.org/2023/01/16/1149232763/this-is-fine-meme-...
It's hard to see AI as anything but the latest accelerant for that.
We don't know whether software products like the Adobe suite will become irrelevant or be cloned with vibe coding.
The assumption that SOTA inference won't run locally in 5 years is far from certain either.
We do know technological advancements will leave the data centers as stranded assets. There’s not enough money in the most optimistic revenue projections to pay for them, and models are simultaneously getting better and cheaper to operate.
Adobe (and similar companies) will either improve or be replaced by vibe coding. I think the assumption a lot of wall street and management is making is that Adobe can replace itself with vibe coding and vibe customer support, and then not be simultaneously out-innovated by a few dozen companies founded by folks they laid off.
Local inference is 6-12 months behind SOTA. If that holds, you can have 2029 SOTA locally on a Raspberry Pi 8, or 2030 SOTA for $500/month (in 2026 dollars). If 2030 SOTA is qualitatively better at that point, then we’ll be way past AGI, and the economy will be unrecognizable.
It's the other way around, software improvements make the hardware more valuable. Let's say that one unit of compute can generate one unit of value. As the software improves on any of the principal axes (cheaper cost for same quality, or new capabilities that you could previously not get for any price), that same unit of compute will produce more value.
What would threaten those compute investments? Basically, order-of-magnitude improvements in the hardware, but that kind of thing tends to take longer than the projected lifetime of the hardware. (Or the demand for AI evaporating, but that tends to be an issue of faith that is hard to have a useful discussion on.)
That's just not the world we live in currently.
It's even worse than that. I'm not aware of any tasks which it's good at. Even after several years of effort, LLMs suck at coding, the thing they are supposedly best at. Maybe it'll get good, but right now it just isn't.
> In Mr. Huang’s view, the critics want regulations that will hamper the A.I. industry and slow it down. Meanwhile, the skeptics are “scaring people from making the investments in A.I.” that would make it better.
What a weak, out-of-touch statement. This guy is at the helm of the most valuable company in world history, and according to him the thing threatening its growth is... negative vibes?
Where are the adults?
It seems that AI coding tools are very sensitive to codebase structure. A monolith with a relatively simple, straightforward structure is the happy path; a bird's nest of microservices is not. If your team has taken the time and effort to structure the codebase in a way that's amenable to AI, and you invest in the tooling, and you keep up that effort over time, then AI does seem to work. Not the "10x productivity gain" they try to sell us, but maybe >1.0x. It's not clear, though, that AI provides any speedup whatsoever for the vast majority of developers. That's the problem: if it only works for the top 5% or whatever, that addressable market is very, very small.
We can debate endlessly whether the horse and buggy is better than the car, or the cell phone will replace the film camera. But at the end of the day, history has shown that none of that matters. We're better off just agreeing to it and working to improve it.
The problem with your analogies is that there is no path where constant improvement to cars leads to anything but better outcomes for humans.
There is no realistic or likely path where improvements to cellphones lead to anything but better outcomes for humans.
However, if AI keeps getting better to the extent we can imagine, i.e. superintelligence, the outcomes are more likely to be extinction-level negative than positive.
That’s not up to you to decide. Whatever company’s service you are using can and will eventually pull the rug.
I don't know why people keep pointing to history to argue adoption is inevitable. Isn't history littered with no-code solutions that no one uses anymore?
The internet has been entwined in my life since 1991, when I got my first email. Before that it was BBSes. The context and parallels I'm witnessing now very much align with what I've seen over the last 35 years. I've bet on some history-based predictions in this cycle that few others saw, and they have absolutely come true.
This isn't a no-code solution, not even close. It is very much a more-code-than-ever solution.
I agree. Just make sure you're not cherry-picking your data. Make sure you include the NFT hype cycle in your corpus.
You have big tech oligarchs salivating at the idea of moar profits by firing a bunch of people.
You have elected officials who might mean well but won't be able to react quickly and don't understand the nuance of a lot of tech things.
You have ordinary people trying to figure out how to make use of this stuff without losing their own jobs. But they don't have a ton of influence.
For big tech to start relying on vibe coding without code reviews etc is a huge risk.
Big tech has so much red tape preventing people from getting stuff done: security reviews needed, etc. This inertia will hold back even a superintelligence from getting stuff done.
Some nerds in a garage trying to apply vibe coding to a problem won't have this red tape.
Red tape is necessary in big orgs because you can't have 100k people running around shipping half-broken, semi-supported software with security holes. So you establish release processes, approvals, code reviews, etc.
All I'm saying is: big tech is also at risk of being disrupted by AI.
I'm one of the actors and I sided with AMD early on.
Coal mining from 1950-1970: production up, coal cheaper, employment way down. The classic book "Night Comes to the Cumberlands" (1963) covers how Appalachia became really poor.
[1] https://www.gilderlehrman.org/history-resources/teacher-reso...
The AI boom is clearly anti-human. People fear for their jobs, livelihoods, and homes. I don't think anyone in their right mind would have accepted this had it not been marketed the way it has been:
A) AI gets very good and you'll lose your job.
OR
B) This whole thing is a bubble and because of how many eggs have been put in this single basket, when the bubble pops, you'll lose your job as we head into a recession.
It really does just seem like pure downside to the average person, not even to mention all the slop everywhere, deepfake revenge porn being democratized, and bad GPT wrappers generally being shoved down your throat.
Edit: There really isn't a sense that AI is going to help the common person. Inequality is rising and AI seems only to fuel this fire. I hope that we as a society can actually distribute the fruits of AI to everyone... but I'm not holding my breath.
This doesn't mean doxxing. I can have my identity verified with, for example, YouTube, but still have a handle/nick presented to end users. My real name need not be exposed.
However, without something like this, there's no real hope of curtailing what's coming down the pipe. And I say this without liking it or wanting it; I've fought for an anonymous internet my entire life. But I think that's just... over now.
Either the internet will die (no forum, comment section, or video site will survive), or we end up with identity verification and gated posting online.
I just don't see how else to deal with this.
I'm not even saying you can't use AI to write comments, although I think that's a dumb way to interact with other people. It's simply that within a year, there won't be any way to tell a single post from AI or human. A single video. Anything.
And preventing fake accounts and sock puppetry is the only way to even hope to stem that tide. And further, we'll need to be able to sue for defamation, fraudulent activity, foreign interference. The change all of this requires is frankly repugnant.
Yet... it's now here in front of us.
There was constant sneering at dot-com businesses and venture capitalists. There was FuckedCompany.com [0]. The Pets.com superbowl ad was seen as a cautionary tale.
Startup.com [1] portrayed paying parking tickets online as Sisyphean. People thought the internet was for porn and weirdos. Krugman famously said "By 2005 ... it will become clear that the Internet's impact on the economy has been no greater than the fax machine's." [2]
Clifford Stoll: "The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works." [3]
A lot of the above was from the mid to late 1990s but, in my opinion, having lived through it, it carried over into the 2000s, with people being highly skeptical and quick to engage in schadenfreude whenever a company didn't live up to the hype.
[0] https://en.wikipedia.org/wiki/Fucked_Company
[1] https://en.wikipedia.org/wiki/Startup.com
[2] https://web.archive.org/web/20030226083257/http://www.redher...
[3] https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirv...
The claims of "adopt Internet/AI or be left behind" were similar but for some reason the reactions are different.
Microsoft was in full swing with trying to strangle the computing space. "Embrace, extend, extinguish" was a term coined from that era. Ballmer called Linux "a cancer". [0]
People were in a panic about Napster and how the internet would steal billions of dollars.
It does seem like people are much more against AI now than they were against the dot-com boom then, but it all looks and sounds very familiar to me.
[0] https://www.theregister.com/2001/06/02/ballmer_linux_is_a_ca...
lol, absolutely not. The music industry was afraid of this, yes. The normies? Couldn't get enough of it.
A lot of those reporters are now in leadership at major newspapers like the NYT (e.g. Applebome, who linked Doom with Columbine and is now the Deputy National Editor for the NYT).
A large number of reporters (both techno-optimists and techno-pessimists) discussing technology today are literally boomers who have been fighting this battle against each other since the 1990s, taking all the airtime away from alternative younger voices on both sides.
[0] - https://www.nytimes.com/1999/05/02/weekinreview/the-nation-a...
Today, the message is that (dear leaders) your workers can be replaced by machines. Not that together you can do more with this new tool, but that you can slim down your operation. Maybe I'm just older, but the optimism I saw then is now divided into opportunity (AI consultants) and skepticism (workers).
This is a narrative the AI industry created, because they want to tap into the huge salary money pool. They tell a story of anti-innovation cost-cutting rather than "do more with these tools."
Well, they were right on that one.
His company has grown from ~10 million to ~900 million users in three years. If that's not fast enough, the problem is unreasonable expectations.
People like AI! They just don't want it shoved into absolutely every aspect of their lives at all times.
Personally, I have that feeling when I use ChatGPT. It consistently blows my mind. OpenClaw is even more incredible, and I'm certainly not any kind of power user. I'm just testing the waters.
So why not that feeling of amazement / wonder / shock / awe? If you asked me, I'd say two things: first, I think the "wonder cycle" on older products has made us a bit jaded. Consider again the smartphone. When it came out, everyone was blown away -- now, our smartphones are more like chains to work, life, etc., and all anyone can talk about is how badly they want to be rid of them (while, of course, they use them every moment of the day!).
I think there may be a bit of, "Great, another technological miracle -- how long until I hate this, too?"
Second, I think Silicon Valley / tech has lost a lot of trust over the years as an industry. I remember once upon a time really loving Google's products. But Google got creepier and creepier, less and less consumer-friendly and seemingly more focused on its bottom line, and... now I don't use any Google products. Same with Microsoft -- growing up near Seattle, Microsoft (like Boeing) was a "cool" company. Amazon was the same way. I even had a Facebook!
And now, not only do I view all of these companies with some combination of disgust, suspicion, and fear, I see pretty much any new tech company the same way. I would bet a lot of people feel this way. We're just waiting for the rug pull. I think the OpenAI ad thing was probably the first time I felt my skin crawl a bit, and I think that'll keep happening as time goes on and corporate drift makes these AI companies just like any other company out there.
Anyway, point being, I don't know if it's really tech itself that turns people off. It's the culture, the failed expectations, the lack of trust, everything, all smushed together.
All that, and the constant deluge of lies: coding agents will make you 10x as effective in six months, AGI is pretty much here, etc. When Jobs did his demos, he generally (maybe every time?) had something to show. It wasn't some empty promise; it was real. I want that kind of tech, not the imaginary stuff.
And (almost) everyone said how terrible it is - and yet they all use it.
Give it a bit of time for people to understand the limitations and where it shines, and it will become an indispensable part of life.
There was plenty of internet and computer pessimism at the time as well, with the Internet expected to lead to more coal being used [0], being viewed as a conduit for scams [1], the risk of moral panics [2], and being blamed for causing the Columbine Massacre [3].
Ironically, this same author at the NYT (David Streitfeld) was reporting negatively about the dot-com boom in the 1990s at the WashPo [4][5], as well as during the subsequent bust [6], and has been very public about his pride in being "low-tech" [7].
There is nothing wrong with that stance, but the article's entire premise, that techno-optimism was the norm and only later turned into techno-pessimism, is clearly written in bad faith, when a large portion of the intelligentsia was already techno-pessimistic in the 1990s and 2000s, just as they were in the 80s, 70s, and earlier.
[0] - https://www.forbes.com/forbes/1999/0531/6311070a.html?sh=286...
[1] - https://www.nytimes.com/1999/07/01/technology/internet-s-cha...
[2] - https://dl.acm.org/doi/pdf/10.1145/322796.322800
[3] - https://www.nytimes.com/1999/05/02/weekinreview/the-nation-a...
[4] - https://www.washingtonpost.com/archive/politics/1999/11/06/g...
[5] - https://www.washingtonpost.com/archive/business/1999/05/18/o...
[6] - https://www.latimes.com/archives/la-xpm-2001-jul-21-mn-24886...
[7] - https://www.nytimes.com/2018/07/18/technology/personaltech/t...
Meanwhile US government is overtly corrupt, criminal morons, they certainly don't care or have any sort of plan to distribute the gains from this technology evenly. Scott Bessent is saying with a smirk on his face that the tariff refunds will not go to consumers [1]. These people actively hate you and laugh at your powerlessness. Hating AI is the right response because the current political system ensures 10% of the benefits will accrue to most people and 90% to the elites, the power imbalance gets even more extreme and it will lead to techno-feudalism (as it has in the past).
[1] https://finance.yahoo.com/news/bessent-says-tariff-refund-ul...
Right now, SOTA models require a lot of iron.
It's possible that this will always be the case. But it is not a certainty!
We've seen software improvements shave orders of magnitude off compute requirements before. That could totally happen here. Iron could easily become a stranded asset.
But that said, models have already become commodities, well, somewhat. Is the value in running inference or in applying it?
Today, we dare not use vibe-coded libraries for mission-critical things, HTML sanitization for example.
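A toy sketch of why that caution is warranted (hypothetical code, not any real library): the kind of regex sanitizer a quick vibe-coding session tends to produce looks plausible, yet classic bypasses sail straight through.

```python
import re

def naive_sanitize(html: str) -> str:
    """Strip <script>...</script> blocks -- the kind of one-liner a
    quick vibe-coding session might produce (hypothetical example)."""
    return re.sub(r"(?is)<script.*?>.*?</script>", "", html)

# Bypass 1: event-handler attributes never involve a <script> tag,
# so this payload passes through completely untouched.
print(naive_sanitize('<img src=x onerror=alert(1)>'))
# -> <img src=x onerror=alert(1)>

# Bypass 2: a single non-recursive pass removes the inner
# <script></script>, and the leftover pieces reassemble into a
# working script tag.
print(naive_sanitize('<scr<script></script>ipt>alert(1)</script>'))
# -> <script>alert(1)</script>
```

Real sanitizers are allowlist-based HTML parsers precisely because these edge cases are endless; code that merely pattern-matches "remove script tags" has no concept of the threat model, which is why reviewing such output still matters.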
But one day, who is to say the industry won't be disrupted by a vibe-coded database with ~100% Oracle compatibility, made by a nerd in a garage?
Established code bases are a moat today. They might not be in 5 years. Big tech won't be well positioned to take advantage, because trusting vibe-coded crap is risky.
My point is mostly that the future is uncertain. Big established software companies might see their moat challenged by a nerd in a garage running LLMs in the cloud.
What about the Adobe suite? AutoCAD? Office, etc. (To be fair, it's possible that software never was the moat).
This is the answer to all of your questions. Network effect and brand recognition sell Oracle, Adobe, office etc. Alternatives to all of them already exist, with either feature parity or close enough for most people.
The existing brands keep going because big companies and institutions don't pay for products vibe coded by some guy in a garage, they buy products that have paid support that they know will continue to exist for years.
But what about 5 years from now?
But what about when the menus have the same layout and compatibility with the legacy binary file format is near perfect?
Today, alternatives exist, but they are not polished in the same way.
Based on the abysmal ability of LLMs to write code today, that's not likely to happen. One never knows. But I wouldn't put money on it.