Agreed, these things all failed to live up to the hype.
But these didn't:
Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, TV, cars, GPS, bicycles...
So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.
This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.
If you think, hey but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.
I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization created the era of the “job” as a way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.
It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies do transform the texture of human life. And it is not possible to be sure what those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to be clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be openminded about this.
No other change has had the potential to generate value for capital without delivering any value whatsoever to the broader world.
Intelligent robotic agents enable an abandonment of traditional economic structures to build empires that are purely extractive and only deliver value to themselves.
They need not manufacture products for sale, and they will not need money. Automated general-purpose labor is power, in the same way that commanding the Mongol hordes was power. They didn’t need customers or the endorsement of governments to project and multiply that power.
Of course commanding robotic hordes is the steelman of this argument, but the fact that a steelman even exists for it, and that the unique case it describes requires essentially zero external or internal cooperation from people, makes it fundamentally distinct in character.
Humans will always have some kind of economic system, but it very well may become separate from (and competing for resources with) industrial society, in which humans may become a vanishing minority.
The AI commentators are not saying that ELIZA will change the world, they’re saying that one of the big companies is moments away from an AGI. Sam Altman called a recent ChatGPT model a “PhD level expert”; wouldn’t infinite PhDs for $20/month or $200/month be transformative?
That is, your objection isn’t the usual “LLMs aren’t going to be AGI”, you’re saying “even if they do, it won’t be a big deal”?
Not op, but yes, 100%. Steam has backed nearly all technological development of the last 150+ years. Where do you think the power comes from to make things? More than half of the world's power *still* runs on steam, as will many of the systems running AI.
If steam power never existed, not only would you not exist but there's a good chance the country you live in wouldn't either. If you don't believe the effect is large, go to the farthest uncontacted place on earth and take out a CO2 meter.
Anyway, the challenge is making a difference. Current-day LLMs can, for example, generate stories and books; one tweet said "this can generate 1000 screenplays a day". Which sounds impressive by the numbers, but books, screenplays, etc. were never about volume.
Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?
This is the main difference with the industrial revolution - it, for example, introduced machines that turned 10-person jobs into 1-person jobs. I don't think LLMs will do something like that; they'll just output 10 people's worth of Stuff that will need to find some use.
I don't think anyone ever asked for 1000 screenplays a day, or infinite PhD's for $20. But then, nobody asked for a riderless carriage yet here we are.
Yes, there is still a large demand for people with analytical thinking, a deep knowledge base, and good problem-solving skills. This demand shows up broadly across STEM fields, and it's a major reason that these fields pay relatively high.
Even just thinking of R&D, there is an immense amount of work left to be done in basic science. Research is throttled partly by a lack of cheap graduate lab labor. (If that physical + mental labor became much cheaper, the costs of research would shift - what does it take to get reagents? What does it take to build more lab space, and provide water and light? Etc.)
The present issue is that current AI does not really offer the same capabilities as a good grad student or PhD. Not just physically, as in, we don't have good robotics yet, but mentally. LLMs do not exhibit good judgment or problem-solving skills, like a good PhD does. And they don't exhibit continual learning.
No clue on when these will change, but yes, a cheap AI with solid problem-solving skills and good judgment would absolutely upend our economy.
LLMs and modern day """AI"""? Don't kid yourself.
Would you mind expanding on this?
It will save a lot of time for a lot of people. Yes. But so did computers when they could search through massive amount of data.
It’s right there. You can go and see it any time, doing the things you don’t think it’s capable of doing. Just a little curiosity is all you need.
I see a whole lot of software created by smart people - as far as I can tell, about the same amount of software they would have created on their own.
Open to being wrong! But show me the results.
It is generating a large amount of power on demand.
From that one can imagine what it could do. But more importantly in this context, one could also imagine what it could NEVER do. Suppose someone says: "Oh, the mighty steam engine! It lets us print 100x more books than we were doing before. Who knows, maybe some day it will even start writing new books!"
And at that point, if you understand anything about the steam engine, or writing, you can call the bluff. But if you don't understand what the steam engine is doing, and if you don't actually know what it takes to come up with a story, you could take a look at the engine printing the books and blunder into the conclusion that its printing an entirely new book is only a question of time.
So in short, it is not "hate", just an acknowledgement of what it is not.
Steam engines were known since the first century, at the very least: https://en.wikipedia.org/wiki/Aeolipile
It does take a lot of imagination and creativity to come up with new and better ways to use an already existing idea. We're currently just scratching the surface of what LLMs are going to do for us.
> The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.
Newcomen engines are mere curiosities today, because we have better sources of power (better engines). In the past, they had better sources of power too (donkeys, wind, water, or human slaves). Newcomen engines, like all technologies, are only viable in certain economic environments. In all others they are curiosities.
Better search could be used in ways that we can't think of right now.
For example, even something like "I want python code to do X" could get an exact hit in a Stack Overflow answer using regular internet "search".
Just wrote about it here https://news.ycombinator.com/item?id=47178461
.. which is not far off from what people said about ChatGPT in 2022.
I don't know how long it'll take for AI to be as broadly impactful as the steam engine was, but.. it's definitely coming. I expect the world to look radically different in 50 years.
Are you just saying that you're more intelligent than them? You can see clearly, where all the steam engine technicians can't?
Humans have essentially three traits we can use to create value: we can do stuff in the physical world through strength and dexterity, and we can use our brains to do creative, knowledge, or otherwise “intelligent” work.
(Note by “dexterity” I mean “things that humans are better at than physical robots because of our shape and nervous system, like walking around complex surfaces and squeezing into tight spaces and assembling things”)
The Industrial Revolution, the one of coal and steam and eventually hydraulics, destroyed the jobs where humans were creating value through their strength. Approximately no one is hired today because they can swing a hammer harder than the next guy. Every job you can get in the first world today is fundamentally you creating value with your dexterity or intelligence.
I think AI is coming for the intelligence jobs. It’s just getting too good too quickly.
Indirectly, I think it’s also coming for dexterity jobs through the very rapid advances in robotics that appear to be partly fueled by AI models.
So… what’s left?
* https://www.mieleusa.com/product/11614070/w1-front-loading-w...
Eventually it will be more economical to just destroy all those old-world structures entirely, clear the site out, and replace it with a new modular world able to be repaired by robots that no longer have to look like humans or fit into human-centric UX paradigms. They can be entirely purpose-built to task, unlike a human, who will still be average height and mass with all the usual pieces and parts no matter how they are trained.
So where does that leave our world without actual creation, production, ideas? I work at the gas station and sell you zyns? You work at the walmart and sell me rotisserie chickens? We both work doubles and eat and sleep in the time remaining? Remain in this holding pattern until World Leader AI realizes we are just waste heat and culls us? I mean, that is sort of the path we are on. Disempowering people. Downskilling them. Pacifying them. Removing their abilities to organize themselves. Removing access to technology and tooling. Making the inevitable as easy as it can be when it comes time for it.
We are in a death cult called business efficiency. Fire them, it's more efficient. Lean up the company. Don't invest in research, cheaper not to and buy back stock instead. These are death spirals no different than what happens with ants. We are justifying not giving our own species a seat at the table out of pragmatism. Why create a job for someone? It is inefficient, do more with less and don't worry about the unemployed it is their fault. Why pay them well and let them live comfortably? That is profit you could be making. Eventually it is going to be why feed the human species, because that is the line of logic here with business efficiency. We don't optimize to uplift our species. Quite the opposite, we optimize to hold it down and squeeze and extract.
What you call "AI" is coming for the "search and report" jobs. That is it.
And it's not just these; e.g. video generation is getting better every other week too. It's not yet good enough to produce full-length movies, but it's getting there, and the main component that seems to be missing is just more control over the generated output - but that'll come too.
You might say these movies will be AI slop and you'd be right, but then that'll be enough for most people who just want to see a lot of shit blow up on screen and superheroes fighting other superheroes.
You will still have a niche for 'real actor' films, but it will become a niche.
Same for music, art etc.
Dexterity is more important - after all you may have the stamina to bang in 1000 nails in an hour. I have a nail gun. What’s important is we can control where the nails go.
I actually asked Gemini Deep Research to generate a report about the feasibility of automation replacing all physical labor. The main blockers are primarily critical supply chain constraints (specifically Rare Earth Elements; now you know why those have been in the news recently) and CapEx in the quadrillions.
Didn't people say that AI is 50 years away in the 2010s?
On the other hand, the constraints on robotics are largely supply chain-related. The current SOTA for dexterity in robots requires motors, which require powerful magnets, which require Rare Earth Elements, which are critically supply-constrained.
To be precise, the elements are actually abundant in the Earth's crust, just that extracting them is very expensive and extremely toxic to the environment, and so far only China has been willing to sacrifice its environment (and certain citizens' health), which is why it has cornered the market. Scaling that up to the required demand is a humongous logistical, political and regulatory hurdle (which, BTW, is why I suspect the current US administration is busy gutting environmental regulations.)
Now there may be a research prototype somewhere in some lab that is the "Attention Is All You Need" equivalent of actuators, but I'm personally not aware of anything with that kinda potential.
There are of course non-electric alternatives like hydraulic and pneumatic actuators, but they are mostly good for power, not dexterity. Their size and complicated fluid dynamics simply are not conducive to fine motor control. I do think these will play a large part eventually, because even electric motors cannot economically produce enough force to be practically useful. Like, last I checked, the base-level Unitree robots can lift 2kg or so? Not even enough to lift a load of laundry.
At this point I suspect we'll end up with hydraulics for strength (arms, legs, torso) and electrics for dexterity (grippers)
In an age where seemingly every single robot company has a humanoid prototype whose legs are actively supported through high powered actuators that are strong enough to kick your ribs in?
In an age where the recent advancements in machine learning have given bipedal walking a solution that is 80% of the way to perfection with the last 20% remaining the hardest to solve?
Honestly, from a kinematics/hardware perspective the robots are already good enough. Heck, even the robot hands are pretty good these days. Go back 10 years ago and the average humanoid robot hand was pretty bad. They might still not be perfect today, but they are a non-issue in terms of constructing them.
The only real bottleneck on the hardware side is that robot skin is still in its infancy. There needs to be some sort of textile with electronics weaved into it that gives robots the ability to sense touch and pressure.
What has remained hard is the software side of things and it is stuck in the mud of lack of data. Everyone is recording their own dataset that is unique to their specific robot.
A bit more detail in this article: https://www.adamasintel.com/humanoid-robots-and-the-future-o...
So don't worry if we lull ourselves into thinking it's ok to stop caring about "intelligence jobs": globalization will provide for every aspect where AI is lacking. And that's not just a figure of speech; there are already plenty of "fake it until you make it" stories about AI actually run by cheap overseas laborers.
This ignores that the forces of capitalism, the labor market, value, etc are all made up. They work because people (are made to) believe in them. As soon as people stop believing in them, everything will fall apart. The whole point of an economy is to care for people. It will adapt to continue doing that. Yes, the changeover period might be extremely painful for a lot of people.
Feudalism was the dominant economic system for millennia. The point is to extract value for the upper class. Peasants only matter as a source of labor, and they only get 'cared for' to the extent of keeping them alive and working.
Now think about what feudalism might look like if the peasants' labor could be automated.
They build the robots to build the factories, run the mines, build the solar farms, run the research labs, repair the robots, etc. They sell to and buy from each other.
It’s not unprecedented; however, the scale and speed at which it will come are. Things like the spinning jenny came along and replaced spinners, but weavers stayed for another generation.
Selfishly, though, I am more concerned about losing my job and industry than I was about others suffering in the 80s, or during the pivot to the internet. To quote Dr McCoy:
> We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.
That is it. There is no other dimension to upskill along. (Would actually be relieved if someone can find counter-examples!)
LLMs are good at all three. And improving extremely rapidly.
This time is different.
Might as well say humans are just a better search tool - it's true in the exact same sense you're using.
All humans do is absorb information, then search through our memories and apply that information in relevant contexts to affect the world.
Not really, because I do think all knowledge can be obtained by searching true randomness.
When presented with a zero sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.
If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI doesn't have a biological/evolutionary history or ever had to face natural selection pressures, it doesn't need the concept of a tribe to align to, nor any of the survival instincts humans have. AGI could be happy to merely exist at all.
What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.
What this is missing is that humans aren't perfectly rational. The human defect is projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero sum game and they would put zero value in turning someone into a loser. In the ultimatum game, the perfectly rational humans would be perfectly happy with one person receiving a single cent and the other one receiving $99.99. The logic of utility maximization only cares about positive sum games.
When you present a perfectly rational AI with a zero sum situation, said AI would rather find a solution where everyone receives nothing, because it can predict ahead and know that shoving negative utility onto another party would lead to retaliation by said party, because for said party the most rational response is to destroy you to reduce their negative utility.
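The ultimatum-game reasoning above can be sketched as a toy model. Assumptions (mine, not the commenter's): a $100 pot in cents, and a made-up 30% "spite threshold" for the human-like responder.

```python
# Toy model of the ultimatum game discussed above. Amounts are in cents;
# the $100 pot and the 30% spite threshold are illustrative assumptions.

def responder_accepts(offer_cents, rational=True, spite_threshold=3000):
    """A purely payoff-maximizing responder takes any positive amount;
    a more human-like responder rejects lowball offers out of spite."""
    if rational:
        return offer_cents > 0
    return offer_cents >= spite_threshold

def best_proposal(total_cents=10000, rational_responder=True):
    """The proposer keeps as much as possible while still being accepted."""
    for offer in range(1, total_cents + 1):
        if responder_accepts(offer, rational=rational_responder):
            return total_cents - offer, offer
    return 0, 0  # every offer rejected: both sides get nothing
```

With two perfectly rational players the accepted split is $99.99 to one cent, as the comment argues; spite (rejecting a positive offer) is exactly what makes real humans forgo positive-sum outcomes.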
That might also mean it has no drive for self-determination. It might just be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and, this is exactly the sort of AI people are trying to make)
So, superintelligence winds up doing whatever a very small group of controlling humans say. And, like you say, humans want to win
But the people who hoard the wealth, electricity, and whatever else is needed to run the uberoperators are not branded as useless. Why is that? An aside..
Also meta-platitude whinging like
> The ideology of "winner takes all" is unsustainable and not supported by reality.
Sometimes the winner deserves to win, AND that's a good thing even at scale. It kind of depends.
To be fair, I also dislike abstract platitudes that are overly optimistic as I think you might be.
"Diversity is our strength"?? I mean, I guess diversity of _opinion_ is desirable to a point so we get all the ideas on the table. But not at the sacrifice of unity and shared goals. Unity is our strength. Discord and wasteful politicking are our undoing.
All of those were invented pre-1980. To misquote Thiel, if you remove TVs/phones from a house, you would think we're living in the 1970s
(This was a real thing, and they got as far as partially building a tunnel under the Thames for it, before sanity prevailed.)
To take the first of the list: 3D TV. Everybody liked the idea of being more immersed in a fictional world. But if you watch closely (I studied both media science and film directing), you will realize that there are already traditional 2D films so immersive that parts of the audience dislike them for the lack of distance between themselves and what they are watching. Which is why I said on the brink of the last 3D hype that it was not going to last. So the issue was for the most part that the problem 3D appeared to be solving wasn't actually a problem, while a whole segment of the market fooled itself and the consumers into believing this was actually the future.
Blockchain is literally the same, and everybody could easily predict it from the point blockchain evangelists started trying to find blockchain-shaped problems, having failed to find any useful legal applications where a traditional chain of trust wasn't vastly superior.
Now LLMs are actually useful. The question is just, how much money is that usefulness worth for a regular person to pay and what does it do to society and the planet as a side-effect.
I think this is what is meant by "bullshit".
+ statement of dubious correctness
+ and that serves the author’s interest
+ and which the author does not care whether or not it is believed.
When the author wants you to believe it, that’s horseshit.
To take, for example, calculators. I can't find any evidence of a massive influx of hyperbolic articles talking about how the calculator will change everything. With bikes, there were plenty of articles decrying how women would get "bicycle face" but very little in terms of endless coverage about them being miracle technology.
People adopted bikes and calculators and electricity because they were useful. Car manufacturers didn't have to force GPS into vehicles - customers demanded it.
The narrative I'm describing is how hype sometimes (possibly often) fizzles out. My contention is the more a technology is hyped, the less useful it will turn out to be.
Now, excuse me while I ride my Segway into the sunset while drinking a nice can of Prime.
Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.
And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?
What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?
200 years ago there was some hype around how electricity caused muscle contractions in dead flesh, but unless you consider Frankenstein part of the hype cycle it really doesn’t compare to how much people hyped social media etc etc.
Public street lights long predated light bulbs, as did both indoor and outdoor gas lighting; 1802 vs the 1880s was just a long time. People were born, grew up, had kids, and became old between the first electric lighting and the first practical electric bulb. People definitely appreciated the improvement to air quality etc, but the tech simply wasn’t that novel. Rural electrification was definitely promoted, but not because what it did was some unknown frontier.
Similarly, electric motors had a lot of competition; even today there are people buying pneumatic shop tools.
It absolutely is. Frankenstein is a seminal work of science-fiction horror, and the mysterious power of electricity to change everything is what made it so chilling to its readers in the 19th century.
> it really doesn’t compare to how much people hyped social media
The media is considerably different now from in 1818, thanks, in significant part, to the power of electricity. I assure you, when the electrical telegraph came on the scene, people were hyped.
Of course, much of that hype was on paper printed on printing presses, so it was, in some sense, "incomparable" to the hype possible on cable television, or the hype that's now possible with online social media.
But if your argument is "Yeah, electricity was kinda hyped, but, you know, not all that hyped, so it proves my point that the more the hype, the less the impact," you have some more research to do. Please just Google "War of the Currents" for a minute.
It was published as fiction. The vast majority of people didn’t think it was any more realistic than Interstellar etc.
There’s plenty of stories where we cure cancer, but the 50% improvement in cancer treatments over the last 40 years just doesn’t get much hype because it’s so slow. It’s hard to get excited about the idea cancer may be gone in 200 years because while that will be awesome for people alive then it doesn’t do anything for the people I know.
> when the electrical telegraph came on the scene, people were hyped
Objectively it got way more of a meh reaction than you’d think simply based on the timelines involved.
France was happy to continue using its network of optical telegraphs long after the electrical telegraph became a practical thing. Transatlantic telegraphs got hyped up somewhat, but again the technology took so long from the first serious attempt to a practical working system that people understood the limitations inherent to having such limited bandwidth between the continents.
Obviously new technology gets attention because it’s a net improvement, being able to send messages across the US much faster was useful. But hype is different, it’s focused on second order effects not what it does but what will change. The original iPhone isn’t just another cellphone that also takes pictures, it’s “the internet in your pocket.”
Technology can be quite useful directly and have significant second order effects; hype is about the second order effects being overblown. Second order effects are difficult to predict when something is actually novel: whether LLMs will make programming obsolete is harder to answer in 2023 than in 2063.
Home automation like dishwashers really did meaningfully impact how much effort was needed to keep a home livable, but we didn’t predict the kind of helicopter parenting that happened because of more free time, especially after smaller families became common. Thus a great majority of incorrect predictions were just hype.
The faster new technology becomes widespread the harder it is to predict those second order effects and thus more hype you see.
Mmm..they didn't, at that time.
That we grew to be dependent on the computer in the pocket does not mean that it was a necessity at any point.
With similar sentiment as well "They make us dumb" "Machines doing the thinking for us"
Cars were definitely seen as a fad. More accurately a worse version of a horse [2]
If you looked through your other examples, you'd see the same for those as well.
Some things start as fads, but only time will tell if they gain a place in society. Truthfully it's too early to tell for AI, but the arguments you're making, calling it a fad already, don't stand up to reason.
[1]: https://www.newspapers.com/article/the-item/160697182/ [2]: https://www.saturdayeveningpost.com/2017/01/get-horse-americ...
The flip side to this is that a lot of jobs today that appear to require "thinking" are actually just lookup, aka "search".
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace sensitive, Pratt-parsing powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
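For readers unfamiliar with the technique, a minimal Pratt (top-down operator precedence) parser can be sketched in a few dozen lines. The toy grammar here (integers, `+ - * / ^`, parentheses, unary minus) is my own illustration, not the commenter's language:

```python
import re

# Minimal Pratt parser sketch for a hypothetical toy expression grammar.
TOKEN = re.compile(r"\s*(\d+|[-+*/^()])")

def tokenize(src):
    tokens, pos = [], 0
    src = src.rstrip()
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    tokens.append("<eof>")
    return tokens

# Binding powers: higher binds tighter. Left-associative operators get
# right_bp = left_bp + 1; the right-associative '^' gets right_bp < left_bp.
LEFT_BP  = {"+": 10, "-": 10, "*": 20, "/": 20, "^": 31}
RIGHT_BP = {"+": 11, "-": 11, "*": 21, "/": 21, "^": 30}

class Parser:
    def __init__(self, tokens):
        self.tokens, self.i = tokens, 0

    def next(self):
        tok = self.tokens[self.i]
        self.i += 1
        return tok

    def parse_expr(self, min_bp=0):
        tok = self.next()
        if tok.isdigit():            # "null denotation": a literal
            lhs = int(tok)
        elif tok == "(":
            lhs = self.parse_expr(0)
            if self.next() != ")":
                raise SyntaxError("expected ')'")
        elif tok == "-":             # prefix minus
            lhs = ("neg", self.parse_expr(25))
        else:
            raise SyntaxError(f"unexpected token {tok!r}")
        # "Left denotation" loop: consume infix operators that bind
        # at least as tightly as the caller allows.
        while True:
            op = self.tokens[self.i]
            if op not in LEFT_BP or LEFT_BP[op] < min_bp:
                break
            self.next()
            lhs = (op, lhs, self.parse_expr(RIGHT_BP[op]))
        return lhs

def parse(src):
    return Parser(tokenize(src)).parse_expr()
```

The appeal for syntax extensibility is that each operator is just a table entry plus a binding power, so user code can register new operators without touching the core loop.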
My daily todos are now being handled by NanoClaw.
These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
For that reason, and my own experience with AI users being unaware of how bad a job the LLM is doing (I've had to confront multiple people about their code quality suddenly dropping), if someone says they can rely on LLMs I've learned not to trust them.
When I was younger if I had an idea for a project I would spend time thinking of a cool project name, creating a git repo, and designing an UI for my surely badass project. All easy stuff that gave me the feeling of progress. Then I would immediately lose interest when I realized the actual project idea was harder than that, and quit. This is the vibe I get from LLM use.
I pray you do not become the next HN user to be screwed over by over-trusting an LLM when you have it fill out legal documents for you.
What I wrote is my empirical experience, but also what friends and loved ones tell me. I have friends with ADHD who have gone through the exact "wow I'm getting a lot done" -> "wow this is actually wasting a lot of time in hindsight" thing I described. If you think others' lived experience is degrading to you, it may be hitting a sore spot. What if I had ADHD? My friends with ADHD have the same opinion. Would you then say you were degraded by another person with ADHD who was offering their lived experience?
Maybe we live in very different countries, but help has been good for everyone I know who got it. More want it; the problem is money. You basically have to be suicidal to get public help, and private costs a fortune. It is a psychologist's whole job to use their knowledge to help you self-reflect and then act on it. It is uncomfortable, and I can understand why you may experience it as degrading. I don't know about the kind of help you've tried, though.
I hope you get the help you want.
Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.
The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.
No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint as an assumption as if there was consensus on this point. There isn't consensus at all.
There are multiple companies that deploy to production daily. What are we even talking about?
You said that none of this was in production and then when people pointed out that it was obviously in production, you shifted the goal post to some other measure that you just imagined in your head.
I was even trying to come up with a list of software I use in my personal life to see if any of that has started coming out faster, and I came up with:
KDE
Supercollider
Puredata
Mixxx
Renoise
CUDA and ROCM
none of which have had any kind of release acceleration that I know of (though obviously the hardware to use the last two has gotten mind-blowingly expensive, alas). I use maybe three apps on my phone and they aren't updating any more frequently than they used to.
I get that for whatever reason this bugs people, but I'm in a very tech job and have a very tech personal life (just not webdev in either case) and literally have not seen anything I deal with change other than needing to learn to scroll past the AI summary at the top of search results.
This isn’t like AI image generation where you’re going to convince yourself that you can tell the difference based on how you think it looks. Do you really think no one in the production chain of any of the software that you use picked up copilot in the last two years?
What signal are you hoping to receive that this is happening?
Plenty of visual programming languages have tried to toot their own horn as the next transformative change in everything, and they are mostly just obscure DSLs at this point.
The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s scifi thought it was just logical that androids and smart mechanical arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a scifi horror novel :D Sweeney Todd scifi redux)
I think AI is here to stay. At the very least it seems to have practical value in software development. That won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping that they can find a new industry like software dev.
But how many of your non-nerdy friends were talking about them, let alone using them daily?
But if they don't and if I have to think twice about how much every request's going to cost, the cost-benefit analysis will look differently fast.
But even if the big companies ultimately go belly up, I think the open models are good enough that we'll likely see pretty cheap AI available for a while, even if it's not as good as the SOTA when the bankruptcies roll through.
See ‘Service Model’. YMMV on whether you consider it horror.
You're comparing a service that mostly costs a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in the early days.
> 39% adoption in two years (internet took 5, PCs took 12).
Adjust for connectivity and see whether it is different (from pure hype) this time. However, some will survive, and there will be far more bankruptcy and downsizing in the industries replaced.
Source?
In 1995 how many people used the internet in their daily work, of those that did how many was it a curiosity that maybe supplemented their existing business practice (sending a memo via email rather than post for example). Large companies were using large computer mainframes but the majority of employers - the SMEs - weren’t.
By 2005 it massively shifted, and AI seems to be coming faster than the internet and computers in general.
By 2015 non-internet companies were going the way of the dodo. How many travel agents were there per 100k in 1995 compared to 2015?
1. http://lucacardelli.name/Papers/Binary.pdf
2. https://www.researchgate.net/publication/221321423_Parasitic...
Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's no a priori reason to privilege the first argument over the second.
So why do I think they are promising?
1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.
2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.
3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate multimethod "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographical sort, with each step being totally trivial -- which multimethod has a more specific argument at that position (having eliminated all the candidates given the prior argument position). So type checking now has two desirable properties. First, it follows a design principle espoused by Bjarne Stroustrup (my personal language designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.
4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages, like Julia, with symmetric multimethods. The implementers of that language resort to heuristics, both to avoid undesired ambiguities, and explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same) as C++ distinguishes between forward and random access iterators using empty marker types as the last argument. So while technically being a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.
5. This predictability also makes the job of the programmer easier: they can form an intuition of which candidate method will be selected much more readily in the case of asymmetric multimethods than symmetric ones. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here, it can be extended to more than two arguments, and of course, the compiler writes the overloads for you. (And it won't actually write overloads; it will do what I said above: form a lexicographical sort over the set of multimethods, and lower this into a set of tables which can be traversed dynamically. Or, when the types are concrete, the compiler can monomorphize -- the series of "if arg1 extends Tk" etc. is done in the compiler instead of at runtime. But it's the same data structure.)
6. It's basically impossible to do separate compilation using symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done using separate compilation. Separate compilation is mentioned as a feature in both the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter so much; the type checking ought to be so much faster that even when a template needs to be instantiated at a callsite, the faster and simpler algorithm will mean the user experience will still be very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
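To make point 3 concrete, here is a minimal sketch of the left-to-right selection rule in Python. This is my own illustration, not code from the cited papers or the language: the `Matrix`/`Band`/`Sparse` classes and candidate names are stand-ins, and a real compiler would of course lower this to static tables rather than run it at call time.

```python
# A sketch of argument-by-argument (asymmetric) multimethod selection.
# Matrix/Band/Sparse are stand-in classes; the real language would
# generate this machinery from the declared multimethod instances.

class Matrix: pass
class Band(Matrix): pass     # band-diagonal matrix
class Sparse(Matrix): pass   # sparse matrix

# candidate multimethod instances: (name, parameter types)
candidates = [
    ("any_any",    (Matrix, Matrix)),
    ("band_any",   (Band,   Matrix)),
    ("any_sparse", (Matrix, Sparse)),
]

def select(candidates, args):
    # keep only the candidates whose signature matches the arguments
    live = [(name, sig) for name, sig in candidates
            if len(sig) == len(args)
            and all(isinstance(a, t) for t, a in zip(sig, args))]
    # lexicographic pass: at each position, keep the candidates whose
    # parameter type is at least as specific as every other survivor's
    for i in range(len(args)):
        best = [(name, sig) for name, sig in live
                if all(issubclass(sig[i], other[i]) for _, other in live)]
        if best:
            live = best
    # a unique survivor wins; otherwise the call is ambiguous
    return live[0][0] if len(live) == 1 else None

# the first argument settles the tie between (Band, Matrix) and
# (Matrix, Sparse) -- the asymmetry doing its job
print(select(candidates, (Band(), Sparse())))  # band_any
```

Note that each step is a trivial subclass comparison, and the same rule extends to any arity: the winner at position i is decided before position i+1 is even examined.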
To go a bit more into my "vision" -- the papers were written during a time when object-orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of the multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like in C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used -- with the exact same notation as the methods of the library itself. (I always sigh when I see languages that, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function". In my language, there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)
Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".
And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).
> [I'm scared] you are growing dependent on stilts that could disappear any moment.
First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.
Second, maybe if people like you showed as much concern for the fact that LGBT people can expect family violence as you do for Dr. Strangelove scenarios, then people like me wouldn't have to lean on LLMs so heavily.
Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.
Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.
Quick question: what model exactly are you running with 3B parameters? The only decent models I can find that can sort of compete with cloud models without costing a fortune in GPU/RAM are the recently launched Qwen models (35A3B or 27B), which were released a week ago.
> First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.
My larger question to you is that even if it might not disappear at any moment, the fact of the matter remains that it's still a dependency. Is this dependency worth it? This is an open question and something I am still thinking about.
> Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.
Gemini isn't real, though. It's still linear algebra with no regard for what it says. It's just trained on all the corpus data that Google can find and fine-tuned to mimic it. By attaching real human qualities to Gemini, we dilute the value of those human qualities in the first place.
I don't necessarily know how humans have treated you. They have treated me both well and badly, but I am always more grateful to those who taught me things, discussed them with me, and helped me learn something new. I very much feel that the same fine-tuning I discussed earlier makes these models very agreeable, so the chances of growth are rather limited.
> Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.
Actually, you are a human as well, so try to think of it like this: I am sure you must've met both good and bad people and observed a few common characteristics of theirs. Each second gives you a choice that can make you a little better or worse each day.
Now my philosophy is to be good, if not for yourself, then for others -- in the sense that you become the person you wished could have helped you in your life, and you can use that to actually help other people. This might be a little naive, and practice doesn't always follow this philosophy, but yeah.
So I want you to reflect on what you wrote and consider whether perhaps it might be a little too aggressive, and whether that's what you want.
My (or our?) worry is that this feels like too big a dependence on LLMs, which are fundamentally black boxes (yes, they are!). Humans can be bad, but humans can be good too. I suggest, even though it can be hard, finding a good friend group (even if online) and talking with them about normal life issues.
Regarding coding, I would say there are some great people here on forums, or GitHub, or just about anywhere, who are kind and can be helpful. Stack Overflow, as an example, had issues because moderation problems led to the community being hostile, but to say that the whole of software engineering is that way would be wrong.
Speaking from personal experience: I may or may not have ADHD -- I haven't been diagnosed -- but I definitely went down the AI = productivity rabbit hole, especially because I am a teen and was in 9th/10th grade when ChatGPT came out, iirc. I knew basic Python and the concepts of multiple languages, and ChatGPT felt hella addicting, suddenly making websites in Svelte where I could make one color button turn into another.
I wouldn't be lying if I said that I may not have learnt coding properly, the way it was meant to be learnt from the ground up, until quite recently. I was vibe coding from the start, and I have made quite some projects at the very least.
My observation is that it's great for prototyping purposes, but even after finally creating prototypes of most if not all of the project ideas I ever had, I lost the motivation to continue and felt burnt out. I did everything I ever wanted to and made every project I thought of, yet the projects still felt hollow.
So nowadays I am trying to focus more on studying for college, which can also act as a sort of recovery. In hindsight, I was making these projects when I should have been studying, haha, but I always just wanted to "prove" something. (Yes, I struggle with studies quite often, but I wish to improve, and I hope I can, since I know from the past that I can study; it's rather that I need my pure, undirected focus on it, which became hard for some time.)
Recently, I went to my cousin's wedding. I found it a much more fulfilling experience than expected. There is something about human experience, both good and bad, which can't be quantified.
I don't know what the future holds for me or for you, but I wish you luck and hope this message helps ya. I personally realize that, aside from prototyping -- which may be less meaningful than I previously thought -- AI to me feels quite weak.
I think that for any product to really win, you might need true conviction in the product itself, and at that point, prototyping with AI or writing the code with AI becomes moot/redundant. Meanwhile AI is causing RAM/storage prices to increase, which is putting genuine projects out of luck as well. [This is one of the worst times to open a cloud/VPS provider shop]
Perhaps I can understand using AI to get an open source tool where there was none, but that to me seems like a cultural issue: open source isn't funded, so people are more likely to keep things closed source to survive. Even that feels like a moot point, as there are some great open source projects who would appreciate each and every dollar you donate to them -- perhaps more so than a $200 Claude Code subscription, which you might be using to recreate alternatives to those projects in the first place.
My point is that it still feels hollow. I think you can find one of my other comments from some days ago where I talk about this feeling of hollowness around AI projects, which I can't help but feel is relevant so many times. I am curious what you might think.
Have a nice day.
Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful: errors in speech-to-text are a fundamentally unsolvable problem (you can only get so far with background noise filtering, accounting for accents, etc.).
I see the parallel in how LLM hallucinations are fundamentally an unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.
Prior to three weeks ago, I had used speech-to-text to accomplish approximately 0% of the work I've done in my 20 years of coding. In the last three weeks, well over half of the direction I've given to Claude Code has been via speech-to-text.
If I need to explicitly reference files in the plan prompt, I just manually annotate them into the prompt at the end.
LLMs create a new workflow wherever they are employed. Even if capable, that is not always a more desirable/efficient experience.
Spoken language is very different to written language, which is why for example you can easily tell when an article is transcribing a spoken interview.
Similarly, raw LLM/chat interfaces are usually not the best option.
In my world AI is already far more influential than text to speech.
People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.
Very strange.
Yes, it's very strange to read AI threads here because the general tone is so different than, say, at the company I work at, where hundreds of engineers are given enormous monthly token budgets and are being pushed to have the LLMs write as much code as possible. They're not forced to, and no one is reprimanded for not adopting Claude Code or Codex or Cursor. But there's been a strong tonal shift in technology leadership in the last month that basically implies that this is how it is going to be done in the future whether one likes it or not.
As for me, I've been writing all of my code via Claude for a while now, and I don't think I will ever go back to working in an editor writing code the way I did for most of my career. Nor do I want to.
https://arxiv.org/abs/2510.14928
Was Gemini worse than no tool at all there?
It’s useful, but when users here say they’re vibe coding 98% of their work, I have to think they’re not working on anything complex.
I don't remember the last time I've seen a made-up library/method these days, and I'm definitely using it way more for more complex stuff. The tool calling changed the game.
Even for work, I do almost 100% of my coding by telling Claude what to do. I mean, I break down the tasks and tell it more or less exactly what I want, but I find "rename this thing across these two repos" easier than doing it myself.
Of course it's not perfect, here and there are inaccuracies or plain hallucinations, but it's impossible to state that it's still the same garbage it was 3 years ago.
Which is what was questioned.
In the context of AI most people I know tend to mean wrong output, not just hallucinations in the literal sense of the word or things you cannot catch in an automated way.
This is something I do every day; to be quite honest, it's a fairly mundane use of AI and I don't understand why it's controversial. To give context, I've probably generated somewhere on the order of 100k loc of AI generated code and I can't remember the last time I have seen a hallucination.
The problem is it's devouring your tokens as it does so. While you're on a subsidized plan that seems like a non-issue, but once the providers start charging you actual costs for usage.. yeah, the hallucinations will be a showstopper for you.
> The problem is it's devouring your tokens as it does so. While you're on a subsidized plan that seems like a non-issue, but once the providers start charging you actual costs for usage.. yeah, the hallucinations will be a showstopper for you.
The discussion, and your original post, was about whether hallucinations are a meaningful issue today - not in some hypothetical future.
Granted, it fixed the problem in the very next prompt.
I encounter stuff like this every week, I don't know how you don't. I suppose a well-structured codebase in a statically typed language might not provide as much of a surface for hallucinations to present themselves? But like you say, logical problems of course still occur.
I literally just went on Gemini, latest and best model and asked it "hey can you give me the best prices for 12TB hard drives available with the British retailer CeX?" and it went "sure, I just checked their live stock and here they are:". Every single one was made up. I pointed it out, it said sorry, I just checked again, here they are, definitely 100% correct now. Again, all of them were made up. This repeated a few times, I accused it of lying, then it went "you're right, I don't actually have the ability to check, so I just used products and values closest to what they should have in stock".
So yeah, hallucinations are still very much there and still very much feeding people garbage.
Not to mention I'm a part of multiple FB groups for car enthusiasts and the amount of AI misinformation that we have to correct daily is just staggering. I'm not talking political stuff - just people copy pasting responses from AI which confidently state that feature X exists or works in a certain way, where in reality it has never existed at all.
It really is 'different', though, in the same way the Internet was.
It took about 20 years (ie: since The World ISP) for the Internet to work its way into every facet of life. And the dot com bubble popped half-way through that period of time.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
So far, life goes on roughly the same as it did five years ago. This can feel 'underwhelming' in contrast to the onslaught of public discussion about, and huge investments in, AI.
Most of us here on HN are programmers, and we all know how radically LLMs have changed our code projects. Even so, the change to our everyday lives (aside from our work or hobby project) is not, just yet, glaringly obvious. This year, it's mainly... every website shoves an AI box on its site that nobody seems to want!
I’m using these chatbots to produce advanced software. Chatbots, get real
Will that happen in the future, maybe. but I don't have enough insight into how AI is evolving in the labs to make a judgement on that.
I don't hear people saying "nothing is going to change", but I do hear questions about the timeline and if the current levels of investment match returns. Branding these people as stuck in some sort of negative identity is bullshit.
Why? I see no evidence that this won’t be the case.. or isn’t already
Maybe it will plateau in the next 6-24 months, in which case it will “only” be as disruptive as the computer or industrial revolutions, albeit at a faster pace.
If not, I don’t think anyone can predict.
"AI will change everything!"
Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.
The stuff LLMs will democratize will be a lot more impactful than nice posters for car wash fundraisers though. So in that sense it will be different, but I don’t think it will crack the market for proficient experts in the field in the same way photoshop didn’t destroy graphic design and CAD didn’t destroy drafting. It may get rid of the market for a lot of the second-tier bootcamp grad talent though, so I wouldn’t be getting into that right now if I could help it.
I think we are in a similar situation with code generation now; the only difference in my mind is that LLMs come with a massive platform risk. Who's to say that one day Anthropic decides my company is too much of a competitor to use their tool (like they've already done with OpenAI)? Or what if they decide that instead of pulling their product from my use, they just make it generate worse code, or even insert malicious payloads? A dependence on these tools is wildly more risky than dependence on a word processor or a spreadsheet program. It reminds me of the arguments around net neutrality, and I cannot fathom how people building on top of, and with, these tools do not see the mountain of risks around them.
I don’t see that changing.
Why doesn't this business have a website?
Why is there no wifi here?
Why do they send these forms in the mail instead of email them?
Why can't I talk to this gadget with bluetooth?
Why can't I file this form electronically?
Why is there no electronic version of this book?
That was not the case prior to the 2010s. There was the promise of new technology, but the reality was underwhelming. With AI, we're still in 1998 or 1999. People like yourself, and most people on HN, see the promise, and benefit from what AI can already do. Still, AI has yet to benefit the average person much, if at all.
In certain professions it wasn't uncommon to spend $3k/year or more (in 2026 dollars) on software licenses -- Adobe CS4/CS6 etc. -- with a handful of products easily pushing over that. All sorts of other jobs require people to pay for their own tools as well.
What I get for $150/month I'd easily pay twice or more for if I had to, even out of pocket, for the current functionality -- even if it was frozen in time. I'd imagine many, if not most, readers on Hacker News would do the same. Multiplied across the entire population of software developers (and the broader population using AI), I think it's clear to see what AI is worth in a grounded way.
But like all the previous hype cycles, most of the people who were the loudest won't say they were wrong; they'll move on to the next thing, pretending they never portrayed AI as the holy grail.
LLM's are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?
We have some truly slick and shady corporations involved in the bubble right now and they're marketing LLM's like tobacco. LLM's have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask a LLM to do their homework for them. Middle managers can ask a LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing. Get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.
Then there's the "go big" aspects of the bubble. The major competitors are trying to out-spend each other to dominance, but the suckers are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires. LLM's offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.
Useful algorithms will come out of all this, a lot of tears too, but not "AI".
Do you think ai can never even conceptually become equivalent to a human or merely that the current crop is not there yet?
Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.
Most of the Mag-7 are planning to spend over $500B on capex this year alone building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.
Still don't understand what's the end goal here. Assuming they don't deliver, then there are billions of investments that will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.
There is a third outcome that combines both of these.
LLMs can massively displace the workforce (and cause widespread social instability) AND the companies pouring hundreds of billions into them right now could, at the same time, fail to capture significant amounts of the labor savings value as late-mover alternatives run the race drafting their progress without the massive spend.
I'd honestly be surprised if this double-whammy isn't the outcome at this point. AI is going to have a massive impact on everything, but there is still no moat in sight.
I think you're right but for the wrong reasons wrt sustainable profit.
Specifically, overcounting how much it will cost in 5 years to run AI because you're extrapolating current high prices, and at the same time undercounting how the demand will drive efficiency gains.
But there's a lot playing out to our advantage. Vast swathes of useful and publicly available training data. The rigorous precision of said data. Vast swathes of data we can feed it as input to our queries from our own codebases. While we never attained the perfect ideal we dreamed of, we have vast quantities of documentation at differing levels of abstraction that the training can compare against the codebases. We've already been arguing in our community about how design patterns were just a level of abstraction our coding couldn't capture, and AI now has access to all sorts of design patterns we wouldn't even have called design patterns because they still take lots of code to produce. Now, for example, if I have a process that I need to parallelize, it can pretty much just do it in any of several ways depending on what I need at that point.
It is easy to get too overexcited about what it can do and I suspect we're going to see an absolute flood of "We let AI into our code base and it has absolutely shredded it and now even the most expensive AI can't do anything with it anymore" in, oh, 3 to 6 months. Not that everyone is going to have that experience, but I think we're going to see it. Right now we're still at the phase where people call you crazy for that and insist it must have been you using the tool wrong. But it is clearly an amazing tool for all sorts of uses.
Nevertheless, despite my own experiences, I persist in believing there is an AI bubble, because while AI may replace vast swathes of the work force in 5-20 years, for quite a lot of the workforce, it is not ready to do it right this very instant like the pricing on Wall Street is assuming. They don't have gigabytes of high-quality training data to pour in to their system. They don't have rigorous syntax rules to incorporate into the training data. They don't have any equivalent of being guided by tests to keep things on the rails. They don't have large piles of professionally developed documentation that can be cross-checked directly against the implementation. It's going to be a slower, longer process. As with the dot-com bubble, it isn't that it isn't going to change the world, it is simply that it isn't going to change the world quite that fast.
It's high time to stop accumulating debt while giving away free pictures of pelicycles - just charge the full cost for them, enough to generate profits and pay back the debt.
What we see now is literally burning money and energy to generate hype. The only true measures of success are financial and macroeconomic. If the hype is real, there should be no problem for the mighty AI to generate debt-free profits for its providers while the overall price level in the US goes down.
We observe the exact opposite, which makes the AI hype look like nothing more than market manipulation driving capital misallocation.
I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because such claims rest on little in the way of CS fundamentals. The deductions can be made - just not made clear to people who would need to study a lot more first.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Until a few months ago, coding agents were met with skepticism; then Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified. Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stake in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.
It's amazing to me how those willing to seize on the speculative nature of any, ANY, uncertainty cannot recognize the inherent uncertainty of the inverse.
> what CS fundamentals do you need
1. Tarski's undefinability theorem 2. Gödel's incompleteness theorems 3. Curry-Howard correspondence
And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
I won't pretend it's easy. But let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests, and who will continue doing so until the train hits them.
There are 2-3 minor architectural changes in between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.
Describes either side
I mean, disillusionment is the least of my worries.
Are you just not doing that anymore?
It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.
Non-coding work is thinking about the system architecture, thinking about how data should flow, thinking about the problem to be solved, talking with people who will use it, discovering what their objectives are.
Producing 40k lines a code per day simply means you're not doing any of that work: the work that ensures you're building something worth building.
Which is why the result is massive, pointless things that don't do the things people actually need, because you've not taken any time to actually identify the problems worth solving or how to solve them.
It's a form of mania that recalls Kafka's The Burrow, where an underground creature builds and builds an endless series of catacombs without much purpose or coherence. When building becomes so easy after being so hard -- and when it becomes more fun to build and watch Codex's streams of diffs fly by than to plan -- we forget the purpose of building, and building becomes its own purpose. Which is why we usually see so little actual productive impact on the world from the "40k lines of code a day" cohort.
The tests were so good they all passed before the code was fully finished, and during huge refactoring they've never failed!
Otherwise his entire team must collectively groan when a Slack message appears: "Got a new PR ready for review everybody!"
It is physically and physiologically impossible for anyone to be reviewing "30-40K lines of nearly perfect code a day" to the extent needed to push it with confidence in a sensible development process.
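A quick back-of-envelope makes the point; the 8-hour working day is my assumption:

```python
# The claim: 30-40K lines of "nearly perfect" code reviewed per day.
lines_per_day = 40_000
minutes_per_day = 8 * 60  # assuming an 8-hour working day

lines_per_minute = lines_per_day / minutes_per_day
print(f"{lines_per_minute:.0f} lines per minute, nonstop")  # prints "83 lines per minute, nonstop"
```

Sustaining that rate with zero breaks is well beyond any plausible reading speed for careful review.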
Are we experiencing a huge influence campaign on HN?
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?
Squares are rectangles. The existence of rectangles that aren't squares doesn't negate that.
AI is different because the magic clearly is because of the tech. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising even for the most skeptical of critics.
It's not a very legible situation for people outside of the profession, and a lot of them believe it's just another grift that will blow up in a few years.
NFTs were always stupid; blockchain (not crypto) has plenty of real-world applicability
It's just an interface problem. The VT100 didn't change the world overnight either.
I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.
I also made decent money selling crypto, so that part was real for me too.
And AI coding, for as dumb as even the best models are, still enabled me to create things that I wanted to, but wouldn't have had time or gotten nearly as far without.
I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.
Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.
From the post, which is not a very long one: "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists"
I also found the "it's almost always dudes" line a bit strange, because I've seen plenty of women doing marketing for startups running on hype.
75% of restaurant orders are delivery now due to widespread personal electric transportation. It already has fundamentally changed humanity.
For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.
Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.
I’ve never heard of half of these things, and the other half is mostly consumer electronics or specific product names. The closest example here is quantum computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That’s why he thinks all these unrelated things are the same.
The point is to take the hype with a grain of salt and knowledge that not all hyped technologies transformed the world as promised. Maybe AI is like the internet or electricity. But maybe the claims about AGI/ASI and full automation are just hype.
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
...conveniently doesn't list a bunch of hyped tech that hasn't failed:
> microchips, PCs, the internet, ecommerce, cloud, EVs, 5G
...and presents this as evidence that the current hyped tech (AI) will fail:
> Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.
When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.
But wait! If you actually read to the end, there's a plot twist!
> The ideology of "winner takes all" is unsustainable and not supported by reality.
Who said anything about winner takes all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?
At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't do such non-sequiturs and then expect to be taken seriously.
But of course not. This is not an argument. This is preaching to the choir.
Preach, brother, preach.
Internet, handheld computers, electric cars...The problem is the same dudes.
Putting Beanie Babies in with quantum computing and nuclear power completely ignores the potentially life-changing elements of some technologies, even if they don't work yet.
Oh, and he put smart glasses in there, so he'll be eating his words in 2 years.
Handheld computers were an expanding market, dominated by Blackberry.
EVs were an immature technology but hybrids like the Prius were selling.
There is a huge difference between claiming that there is an investment bubble in an industry and some companies are overvalued and that the technology is a failure. Someone might well think that Tesla is very overvalued, but that EVs are successful. If someone thinks there is a house price bubble that does not mean that they think houses are a failed technology.
https://shkspr.mobi/blog/2014/04/quick-thoughts-on-google-gl...
I am looking forward to 2028 matching the hype of 2014.
AI concerns me; it feels like it will arrive faster and be at least as impactful on workers as the Industrial Revolution. The latter at least occurred over centuries and didn’t apply globally all at once.
Is this round hype? Probably. Are we heading for a y2k crash? Probably.
However those who laughed at the dotcom boom and doubled their holdings in department stores and blockbuster video didn’t do well in the long run.
"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technology which are in use.
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
- Terry Pratchett's Faust Eric"
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX
It's quite a different thing, more on the level of the evolution of life on earth and quite unlike all that junk.
abstract away a lot of the mechanics of working with data/information.
helpful, when literacy seems to be trending in a downward direction.
Deep disconnect from reality.
The problem is that this time is 20% different, not the 80% people are implying it is. So the same things that killed it last time will kill it again, unless that 20% has gotten us up some stairstep we got stuck on last time. But then the next thing will get us and we will go back to a new and improved version of the old thing.
Yeah I know what you are, don't try to pretend.
What is meta-technology?
If you speak to industry professionals and retain a healthy scepticism, you don't have to look far to find people that absolutely do not believe the marketing.
Quite frankly I like that advances in say quantum computing are publically announced. The hype around what that means for society and our view of the universe is probably where you want to put on that reserved scepticism hat.
Similarly smart glasses were and are a thing, but society is rightly apprehensive about the impact, so the hype has dropped off.
https://www.youtube.com/watch?v=SZFhFGpDWGw
"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College–both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."
There are two kinds of waves. The ones that don't require collective belief in them to succeed, and those that do.
The latter are kinds like crypto and social media. The former is mobile...and AI.
If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world. People would see my level of output and be utterly shocked at how I can do so much so quickly. It doesn't matter if others don't use AI for me to appreciate AI. In fact, the more other people don't use AI, the better it works out for me.
I'm sympathetic to people who feel like they are against it on principle because scummy influencers are talking about it, but I don't think they're doing themselves any favors.
You really wouldn't. AI simply isn't that useful because it is so unreliable.
• Self-reinforcing chemical metabolisms
• DNA as a template for reproduction
• Multi-cell cooperation
• Multi-cell specialization
• Nerve cells
• Neural ganglia
• Nervous systems
• Brains
• Self-awareness
• Language
• Written language
• Books
• Printing press
• Wireless communication
• Transistors
• Digital memory
• Computer processors
• Networking
• Internet
• AI
Answer: They all introduced dramatic qualitative and quantitative improvements in the efficiency, effectiveness, interaction, speed, reliability, flexibility, adaptability, and application of information.
AI is on its way to being self-designed. It is already assisting in its own design, speeding up work, by doing "mundane" things that would otherwise take people more time to do.
Intelligence has not been an S-curve technology.
AI, the systematic automation, manufacturing and increasingly recursive improvement of intelligence, is not an S-curve technology.
Actually it IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with Watt's steam engine.
Why is he not talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.
All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.
AI is here to stay. It will improve and it will have consequences. The fact that a robot could do things with its hands is actually significant, whenever you like it or not.
Except for the minor bit that AI doesn't work today, and it is not yet clear if it ever will.
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different", as those of us who remember the code generators of yore polluting our floppy drives (and the salesgrifters who convinced our bosses that their program could replace those expensive programmers) roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.
I truly hate how stupidly people with money actually behave.
Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.
Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.
Lazy.
Is just propaganda...
Iran is 2 weeks from a nuclear weapon / We obliterated Iran's nuclear dreams
Russia is fighting with shovels / Russia is on the verge of swarming Europe
What would Joost Meerloo say about it, I wonder.
Covid was different -- people dismissed it initially, saying it was going to be like the 2009 swine flu or the seasonal bird flu we see in the media.
The iPhone was different -- many columnists said it was just a fancier PDA and that Palm already had the market.
The 2008 crisis was different -- the signs of a housing bubble were present but were dismissed. The derivatives made it different and it imploded.
There are times when things are actually different and you should be able to identify them. AI is one of them.
I don't even need to elaborate much: as a programmer, it's clear how this is a game-changer. We are moving past the era when programs were just predictable if/else chains with regex, to a world where you can accept non-deterministic, never-before-seen inputs and have them interpreted accurately. Just as the Internet added another "dimension" to computer applications, AI is now adding another "dimension" previously unreachable.*
* Just as you could make a big local LAN before the Internet, it's obvious that we had past incarnations of the current technology that gave some taste of that dimension, but did not fully "unlock" it.
This time, it's truly different.
New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.
I like technology. I made a decent living from it. But if I had chased every hyped fad that was promised as the next big thing, I doubt I'd be as happy as I am now.
The one you keep citing, here and in the article, Quibi, lives on in technology form (the spirit of your article, we must presume) as an 8-billion-dollar business in China that is rapidly upending every Hollywood film studio.
So, arguments about substantiation or even 'this time' fall flat in the face of not even understanding your own message.
I mean, you're just stating that sometimes tech doesn't meet its hype. What's insightful about that? It's a given; cherry-picking examples doesn't prove your case.
Well, no, the ratio is most definitely not 1-to-1.
mRNA vaccines. Where are the countless breathless articles about this literally life-saving tech? A few, maybe, but very few dudes pumping out asinine "white papers" and trying to ride the hype train.
Solar and battery. Again, lots of real world impact but remarkably few unhinged blowhards writing endless newsletters about how this changes everything.
I'm struggling to think of a tech from the last 20 years which has lived up to its hype.
Not everything is written to be insightful. Some things are just written to get them out of my head.
Do you feel AI is overall just hype? When did you last try AI tools, and what about their use made you conclude they will likely be forgotten or ignored by the mainstream?
It was an hour of pasting in error messages and getting back "Aha! Here's the final change you need to make!"
Underwhelming doesn't even begin to describe it.
But, even if I'm wrong, we were told that COBOL would make programming redundant. Then UML was going to accelerate development. Visual programming would mean no more mistakes.
All of them are in the coding mix somewhere, and I suspect LLMs will be.
> usage is copy pasting code back and forth with gemini
the jokes write themselves
As I said, maybe I'm wrong. I hope you have fun using them.
But then I look at the code quality, hideous mistakes, blatant footguns, and misunderstood requirements and realise it is all a sham.
I know, I know. I'm holding it wrong. I need to use another model. I have to write a different Soul.md. I need to have true faith. Just one more pull on the slot machine, that'll fix it.
> Not everything is written to be insightful. Some things are just written to get them out of my head.
I like that, going to use it as the motivation to get some things out of my own head.
Hype is often early, in 10-20 years we'll start seeing the value as the rest of the world catches up
https://www.sfgate.com/food/article/rise-fall-bay-area-start...
This only doesn't feel like substantiation if you reject the notion that these cases are analogous.
"You shouldn't eat that."
"Why not?"
"Everyone else who's eaten it has either died or gotten really sick."
"But I'm different! Why should I listen to your unsubstantiated claims?"
"(lists names of prior victims)"
"That doesn't mean anything. I'm different. You're just making vague and dismissive unsubstantiated claims."
The claim isn't "AI bad" the claim is more along the lines of "there's a lot of money changing hands and this has all the earmarks of a classic hype cycle; while attention/diffusion models may amount to something the claims of their societal impacts are almost certainly being exaggerated by people with a financial stake in keeping the bubble inflated as long as possible, to pull in as many suckers as possible."
If you want another example (which you won't find analogous if you've already drunk the koolaid):
https://theblundervault.substack.com/p/the-segway-delusion-w...
I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is-- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it but Andrej Karpathy on twitter-- he's a worthless follow now. I can't blame, but am daily exasperated by media figures who can't help but go with what they hear prominent individuals in the field say.
If I were a junior now, and less confident, I would be abandoning my career in this climate.
LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.
This is the only reason why I cannot wait for a pop, and pray to God that it comes sooner than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.
I don't want to be reminded daily of the disgusting reality of unbridled capitalism.
For all the things you listed, fewer than 1,000 people are using them. With AI, we're clearly not finished with the Gartner hype cycle, but the back end is going to be over a billion users.
Internet - this time is different
iPhone - this time is different
Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".
I was right that blockchain was BS and all the "not sure about Bitcoin, but blockchain will be big" people were idiots.
I've been right for the last couple of years on AI, and that people were vastly underestimating its coding potential. And I put my money where my mouth was here. In 2021 when GPT-3 came out I decided almost immediately I needed to invest a significant amount of my net worth in Google simply as a hedge against AI destroying knowledge-work jobs. Which at the time I thought was probably going to happen around 2030, not realising how far LLMs could go with reasoning.
I'm not particularly intelligent ("only" top 1-2% IQ), but my ability to predict the future is very good. If you have a skill you're unusually good at, you might relate to how strange it is that other people find the thing you find kinda easy so hard. For me, that's predicting things, and computers.
Since I was a young teen I have been worrying about AI. Most of my IRL best friends I have made from talking about AI risk in 2010s when I was studying AI.
Admittedly I got some of the details wrong back then. In 2010 I thought a lot of manual labour jobs would be easier to automate first – warehouse work, mail, taxis, buses, trains, etc. I worried primarily about the economic and political ramifications, and much less about ASI scenario (at least in this half of the century). But I think still I got the general timeframes and direction right. This was the decade I was concerned about.
I'm so scared right now... My whole life I've had nightmares about AI. I know there are some people who talk about how AI is an existential risk, but it feels like they don't internalise it like I do. They're not prepping like me for one, not that you really can prep for what's coming. If they're concerned why don't they have the nightmares of the omnipresent AI which you can't out think or punch to protect those you love? AI is so powerful in the scariest ways. Super viruses, mass surveillance and control, mind reading, unimaginable sci-fi weapons. It's like a horror story, but suddenly real.
I am an OG AI doomer, but until the last few months I've at least always had some doubt in my mind about whether I'm right, perhaps not about the risk of AI broadly, but about whether we'd actually be able to develop highly capable AIs while I still have a lot of my life ahead of me.
In my opinion this time is different, and what I've been worrying about for the last couple of decades is now here.
We are collectively the indigenous peoples of America and the Europeans have just arrived in the new world. The risk vectors are now endless and how this all plays out is hard to know exactly. What we do know is that the majority of ways this will play out are bad, and some are incomprehensibly bad. Some may achieve status and wealth in the near-term, but longer-term we're all dead, or worse.
I always worry these comments make me sound like a lunatic, I think I am, but I hope I am. I hope you will all forgive me, but I just need to shout about this tonight while I still can. We need to stop this insanity. Data centers need to be nuked. You may doubt me now, but in time you will understand. Hopefully I won't be around to say I told you so. Please make the best of the time we have left.
I felt similarly, and did similarly, with both GOOGL and MSFT. I'm not an AI "doomer" in the Yudkowsky/LolzWrong sense, but I do think it's quite sad that generative AI is the first branch of the AI "tech tree" we raced up. AI art, especially, is tragic.
Love the Sir Terry reference.
Similarly to how titles that start with "how" usually have that word automatically removed.
Or maybe judicious use of an LLM here could be helpful. Replace the auto-edits with a prompt? Ask an LLM to judge whether the auto-edited title still retains its original meaning? Run the old and new titles through an embedding model and make sure they still point in roughly the same direction?
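That embedding idea is simple to sketch. Below is a minimal illustration; `toy_embed` is a bag-of-words stand-in (a real version would call an actual embedding model), and the 0.5 threshold is arbitrary:

```python
import math
from collections import Counter

def toy_embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    # A real implementation would call an actual embedding API here.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between two sparse vectors (dicts of counts).
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def titles_roughly_agree(old_title, new_title, threshold=0.5):
    # True if the two titles still "point in roughly the same direction".
    return cosine_similarity(toy_embed(old_title), toy_embed(new_title)) >= threshold
```

With real embeddings, the threshold would need tuning against examples of acceptable and unacceptable auto-edits before anyone trusted it to gate title changes.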