I look forward to the "personal computing" period, with small models distributed everywhere...
One could argue that this period was just a brief fluke. Personal computers really took off only in the 1990s, web 2.0 happened in the mid-2000s. Now, for the average person, 95%+ of screen time boils down to using the computer as a dumb terminal to access centralized services "in the cloud".
As I travel a ton, I can confidently tell you that this is still not true at all, and I'm kinda disappointed that the general rule of optimizing for bad reception died.
Yep, and people will look at you like you have two heads when you suggest that perhaps we should take this into account, because it adds both cost and complexity.
But I am sick to the gills of using software - be that on my laptop or my phone - that craps out constantly when I'm on the train, or in one of the many mobile reception black spots in the areas where I live and work, or because my rural broadband has decided to temporarily give up - all because the software wasn't built with unreliable connections in mind.
It's not that bleeding difficult to build an app that stores state locally and can sync with a remote service when connectivity is restored, but companies don't want to make the effort because it's perceived to be a niche issue that only affects a small number of people a small proportion of the time and therefore not worth the extra effort and complexity.
Whereas I'd argue that it affects a decent proportion of people on at least a semi-regular basis so is probably worth the investment.
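To make it concrete, here's a minimal sketch of the local-outbox pattern I mean (the endpoint and function names are made up for illustration, and a real app would also need conflict handling):

    # A minimal sketch of "store locally, sync when you can":
    # writes land in a local SQLite outbox, and a flush pushes them
    # to a hypothetical remote endpoint whenever the network is back.
    import json
    import sqlite3
    import urllib.request

    REMOTE_URL = "https://example.com/api/sync"  # hypothetical sync endpoint

    def init(db_path="app.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")
        return conn

    def save(conn, record):
        # Always write locally first; the UI never blocks on the network.
        conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(record),))
        conn.commit()

    def flush(conn):
        # Push queued records; on any network error, stop and retry later.
        rows = conn.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            req = urllib.request.Request(
                REMOTE_URL, data=payload.encode(),
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                return  # offline or flaky: keep the record for next time
            conn.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            conn.commit()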
It of course leads to a crappy user experience if they don't optimize for low bandwidth, but they don't seem to care about that. Have you ever checked out how useless your algorithmic Facebook feed is now? Tons of bandwidth, very little information.
It seems like their measure is that time on their website equals money in their pocket, and baffling you with BS is a great way to achieve that - until you never visit again in disgust and frustration.
Native apps need to deal with the variety in software environments (not to say that web apps are entirely insulated from this), across several mobile and desktop operating systems. In the face of that complexity, having to compile for both x86-64 and arm64 is at most a minor nuisance.
I used to work for a company building desktop tools that were distributed, depending on the tool, to anywhere from tens of thousands of users on the low end to hundreds of thousands on the high end. We had one tool that was nominally used by about a million people but, in actuality, the real number of active users each month was more like 300k.
I was at the company for 10 years and, on the tools I worked on, I can only remember one issue that we could not reproduce or figure out. There may have been others for other tools/teams, but the number would have been tiny because these things always got talked about.
In my case the guy with the issue - who'd been super-frustrated by it for a year or more - came up to our stand when we were at a conference in the US, introduced himself, and showed me the problem he was having. He then lent me his laptop overnight[0], and I ended up installing Wireshark to see why he was experiencing massive latency on every keystroke, and what might be going on with his network shares. In the end we managed to apply a fix to our code that sidestepped the issue for users with his situation (to this day, he's been the only person - as far as I'm aware - to report this specific problem).
Our tools all ran on Windows, but obviously there were multiple extant versions of both the desktop and server OS that they were run on, different versions of the .NET runtime, and at the time everyone had different AV, plus whatever other applications, services, and drivers they might have running. I won't say it was a picnic - we had a support/customer success team, after all - but the vast majority of problems weren't a function of software/OS configuration. These kinds of issues did come up, and they were a pain in the ass, but except in very rare cases - as I've described here - we were always able to find a fix or workaround.
Nowadays, with much better screensharing and remote control options, it would be way easier to deal with these sorts of problems than it was 15 - 20 years ago.
[0] Can't imagine too many organisations being happy with that in 2025.
One problem I've found in my current house is that the connection becomes flakier in heavy rain, presumably due to poor connections between the cabinet and houses. I live in Cardiff which for those unaware is one of Britain's rainiest cities. Fun times.
Another point: subjectively, the added privacy compared to, say, South Korean products is mostly a myth. It certainly doesn't apply if you are not a US citizen, and even then, keeping your fingers crossed that all the three-letter agencies and the device maker are not continuously over-analyzing every single data point about you is naive. What may be better is that the devices are harder to steal and take ownership of, but for that I would need to see a serious independent comparison, not the paid PR from which HN is not completely immune.
Space.
You don't want to wait 3-22 minutes for a ping from Mars.
There are FOSS alternatives for about everything for hobbyist and consumer use.
Making software that is polished and reliable and automatic enough that non computer people can use it is a lot harder than just making software. I’d say it’s usually many times harder.
If computers came with Debian, Firefox and LibreOffice preinstalled instead of W11, Edge and an Office 365 trial, the relative difficulty would be gone, I think.
Same thing with most IT departments only dealing with Windows in professional settings. If you even are allowed to use something different you are on your own.
This is such an HN comment, illustrating how little your average HN commenter knows of the world beyond their tech bubble. Internet everywhere - you might have something of a point. But "high speed internet access everywhere" sounds like "I haven't travelled much in my life".
Also, is percentage of screentime the relevant metric? We moved TV consumption to the PC, does that take away from PCs?
Many apps moved to the web but that's basically just streamed code to be run in a local VM. Is that a dumb terminal? It's not exactly local compute independent...
Nearly the entirety of computer use cases today doesn't involve running things on a 'personal computer' in any way.
In fact, these days everyone kind of agrees that even hosting a spreadsheet on your own computer is a bad idea. Cloud, where everything is backed up, is the way to go.
PC was never 'no web'. No one actually 'counted every screw in their garage' as the PC killer app. It was always the web.
The web really pushed adoption, much more than the personal computation machine did. It was the main use case for most folks.
The personal computer arguably begins with VisiCalc in 1979.
> Through the 1970s, personal computers had proven popular with electronics enthusiasts and hobbyists, however it was unclear why the general public might want to own one. This perception changed in 1979 with the release of VisiCalc from VisiCorp (originally Personal Software), which was the first spreadsheet application.
https://en.wikipedia.org/wiki/History_of_personal_computers#...
Mainstream use of the web really took off in the second half of the 1990s. Arbitrarily, let's say with the release of Windows 95. That's a quarter of a century you'd be blinking for.
That exists, too, with GeForce Now etc, which is why I said mostly.
This whole idea that you can connect lots of cheap low capacity boxes and drive down compute costs is already going away.
In time people will go back to thinking of compute as a variable of the time taken to finish processing. That's the paradigm in the cloud compute world - you are billed for the TIME you use the box. Eventually people will just want to use something bigger that gets things done faster, hence you don't have to rent it for long.
The killer apps in the 80s were spreadsheets and desktop publishing.
Would you classify eg gmail as 'content streaming'?
Web browsers aren't quite that useless with no internet connection; some sites do offer offline capabilities (for example Gmail). But even then, the vast majority of offline experiences exist to tide the user over until the network can be re-established, instead of truly offering something useful to do locally. Probably the only mainstream counter-examples would be games.
As far as how Gmail's existing offline mode works, I don't know.
(I have to use some weaselwording here, because Gmail has had decent spam detection since basically forever, and whether you call that AI or not depends on where we have shifted the goalposts at the moment.)
But yes I am looking forward to having my own LMS on my PC which only I have access to.
The mail server is the mail server even for Outlook.
Outlook gives you a way to look through email offline. The Gmail apps and even Gmail in Chrome have an offline mode that lets you look through email.
It's not easy to call it fully offline, nor a dumb terminal.
I was just probing the 'content _streaming_' term. As you demonstrate, you'd have to squint really hard to describe GMail as content streaming.
'Offline' vs 'content streaming' is a false dichotomy. There's more different types of products and services.
(Which reminds me a bit of crypto-folks calling everything software that's not in crypto "web2", as if working on stodgy backends in a bank or making Nintendo Switch games has anything to do with the web at all.)
Our personal devices are far from thin clients.
I think it depends on if you see the browser for content or as a runtime environment.
Maybe it depends on the application architecture...? I.e., a compute-heavy WASM SPA at one end vs a server-rendered website.
Or is it an objective measure?
The text content of a weather app is trivial compared to the UI.
Same with many web pages.
Desktop apps use local compute, but that's more a limitation of latency and network bandwidth than any fundamental need to keep things local.
Security and privacy also matter to some people. But not to most.
Turn off the internet on their iPad and see how many apps that people use still work.
The iPad is a high-performance computer, not just because Apple thinks that's fun, but out of necessity given its ambition: the applications people use on it require local storage and rather heavy local computation. The web browser standards, if nothing else, have pretty much guaranteed that the age of thin clients is over: a client needs to supply a significant amount of computational resources and storage to use the web generally. Not even Chromebooks will practically be anything less than rich clients.
Going back to the original topic (and source of the analogy), iOS hosts an on-device large language model.
Maybe a PC without a hard drive (PXE-booting the OS), but if it has storage and can install software, it's not dumb.
The people who only use phones and tablets, or only use laptops as dumb terminals, are not the people who were buying PCs in the 1980s and 1990s, or they were not serious users. They were mostly non-computer-users.
Non-computer-users have become casual consumer-level computer users because the tech went mainstream, but there's still a massive serious computer user market. I know many people with home labs or even small cloud installations in their basements, and there are about as many of them as there were serious PC users with top-end setups in the late 1980s.
You can and people do self-host stuff that big tech wants pushed into the cloud.
You can have a NAS, a private media player, and Home Assistant has been making waves in the home automation sphere. Turns out people don't like buying overpriced devices only to have to pay a $20 subscription, find out their devices don't talk to each other, upload footage from inside their homes to the cloud, and then get bricked once the company selling them goes under and turns off the servers.
The explanation of 'why' is not an argument. Big tech is not making it easy != it's impossible. Passive sufferers indeed.
Edit: got a website with an RSS feed somewhere maybe? I would like to follow more people with a point of view like yours.
Which might even be true, since cloud based software might offer conveniences that local substitutes don't.
However, this is not an inherent property of cloud software; it's just that some effort needs to go into a local alternative.
That's why I mentioned Home Assistant - a couple years ago, smart home stuff was all the rage, and not only was it expensive, the backend ran in the cloud, and you usually paid a subscription for it.
Nowadays, you can buy a local Home Assistant hub (or make one using a Pi) and have all your stuff only connect to a local server.
The same is true for routers, NAS, media sharing, and streaming to a TV, etc. You do need to get a bit technical, but you don't need to do anything you couldn't figure out by following a 20-minute YouTube video.
Not sure it could really work like that IRL, but I haven't put a ton of thought into it. It'd make our always-online devices make a little more sense.
I'm with GP - I imagine a future when capable AI models become small and cheap enough to run locally in all kinds of contexts.
Web2.0 discarded the protocol approach and turned your computer into a thin client that does little more than render webapps that require you to be permanently online.
There was also FidoNet with offline message readers.
People must have been pretty smart back then. They had to know to hang up the phone to check for new messages.
The thing we do need to be careful about is regulatory capture. We could very well end up with nothing but monolithic centralized systems simply because it's made illegal to distribute, use, and share open models. They hinted quite strongly that they wanted to do this with deepseek.
There may even be a case to be made that at some point in the future, small local models will outperform monoliths - if distributed training becomes cheap enough, or if we find an alternative to backprop that allows models to learn as they infer (like a more developed forward-forward or something like it), we may see models that do better simply because they aren't a large centralized organism behind a walled garden. I'll grant that this is a fairly Pollyanna take and represents the best possible outcome, but it's not outlandishly fantastic - and there is good reason to believe that any system based on a robust decentralized architecture would be more resilient to problems like platform enshittification and overdeveloped censorship.
At the end of the day, it's not important what the 'average' user is doing, so long as there are enough non-average users pushing the ball forward on the important stuff.
Most open source development happens on GitHub.
You'd think non-average developers would have noticed their code is now hosted by Microsoft, not the FSF. But perhaps not.
The AI end game is likely some kind of post-Cambrian, post-capitalist soup of evolving distributed compute.
But at the moment there's no conceivable way for local and/or distributed systems to have better performance and more intelligence.
Local computing has latency, bandwidth, and speed/memory limits, and general distributed computing isn't even a thing.
It only has to be good enough to do what we want. In the extreme, maybe inference becomes cheap enough that we ask “why do I have to wake up the laptop’s antenna?”
You could say the same about all self-hosted software, teams with billions of dollars to produce and host SaaS will always have an advantage over smaller, local operations.
There might be also local/global bias strategies. A tiny local model trained on your specific code/document base may be better aligned to match your specific needs than a galaxy scale model. If it only knows about one "User" class, the one in your codebase, it might be less prone to borrowing irrelevant ideas from fifty other systems.
We're already very, very close to "smart enough for most stuff". We just need that to also be "tuned for our specific wants and needs".
And AI just further normalizes the need for connectivity; cloud models are likely to improve faster than local models, for both technical and business reasons. They've got the premium-subscriptions model down. I shudder to think what happens when OpenAI begins hiring/subsuming-the-knowledge-of "revenue optimization analysts" from the AAA gaming world as a way to boost revenue.
But hey, at least you still need humans, at some level, if your paperclip optimizer is told to find ways to get humans to spend money on "a sense of pride and accomplishment." [0]
We do not live in a utopia.
[0] https://www.guinnessworldrecords.com/world-records/503152-mo... - https://www.reddit.com/r/StarWarsBattlefront/comments/7cff0b...
I had always hoped we'd do more locally on-device (and with native apps, not running 100 instances of chromium for various electron apps). But, it's hard to extract rent that way I suppose.
I access websites on a 64gb, 16 core device. I deploy them to a 16gb, 4 core server.
That's the part that pisses me off the most. They all claim it's for the IP68, but that's bullshit. There's plenty of devices with removable backs & batteries that are IP68.
My BlackBerry Bold 9xxx was 10mm thin; the iPhone 17 Pro Max is 8.75mm. You aren't going to notice the 1.25mm difference, and my BlackBerry had a user-replaceable battery - no tools required, just pop off the back cover.
The BlackBerry was also about 100 grams lighter.
The non-user removable batteries and unibody designs are purely for planned obsolescence, nothing else.
Or even physical things like mattresses, according to discussions around the recent AWS issues.
Because someone else can sell the goose and take your market.
Apple is best aligned to be the disruptor. But I wouldn’t underestimate the Chinese government dumping top-tier open-source models on the internet to take our tech companies down a notch or ten.
If Apple does finally come up with a fully on-device AI model that is actually useful, what makes you think they won't gate it behind a $20/mo subscription like they do for everything else?
Non sequitur.
If a market is being ripped off by subscriptions, there is opportunity in selling the asset. Vice versa: if the asset sellers are ripping off the market, there is opportunity to turn it into a subscription. Business models tend to oscillate between these two for a variety of reasons. Nothing there suggests one mode is infinitely yielding.
> If Apple does finally come up with a fully on-device AI model that is actually useful, what makes you think they won't gate it behind a $20/mo subscription like they do for everything else?
If they can, someone else can, too. They can make plenty of money selling it straight.
Only in theory. Nothing beats getting paid forever.
> Business models tend to oscillate between these two for a variety of reasons
They do? AFAICT everything devolves into subscriptions/rent since it maximizes profit. It's the only logical outcome.
> If they can, someone else can, too.
And that's why companies love those monopolies. So, no... others can't straight up compete against a monopoly.
Is this disruptor Apple in the room with us right now?
Apple's second biggest money source is services. You know, subscriptions. And that source keeps growing: https://sixcolors.com/post/2025/10/charts-apple-caps-off-bes...
It's also that same Apple that fights tooth and nail every single attempt to let people have the goose or even the promise of a goose. E.g. by saying that it's entitled to a cut even if a transaction didn't happen through Apple.
It's very risky play, and if it doesn't work it leaves China in a much worse place than before, so ideally you don't make the play unless you're already facing some big downside, sort of as a "hail Mary" move. At this point I'm sure they're assuming Trump is glad handing them while preparing for military action, they might even view invasion of Taiwan as defensive if they think military action could be imminent anyhow.
And you know we'd be potting their transport ships, et cetera, from a distance the whole time, all to terrific fanfare. The Taiwan Strait would become the new training ground for naval drones, with the targets being almost exclusively Chinese.
Taiwan fields strong air defenses backed up by American long-range fortifications.
The threat is covert decapitation. A series of terrorist attacks carried out to sow confusion while the attack launches.
Nevertheless, unless China pulls off a Kabul, they’d still be subject to constant cross-Strait harassment.
Data General and Unisys did not create PCs - small disrupters did that. These startups were happy to sell eggs.
Assuming consumers even bother to set up a coop in their living room...
Like the web, which worked out great?
Our Internet is largely centralized platforms. Built on technology controlled by trillion dollar titans.
Google somehow got the lion's share of browser usage and is now dictating the direction of web tech, including the removal of adblockers. The URL bar defaults to Google search, where the top results are paid ads.
Your typical everyday person uses their default, locked down iPhone or Android to consume Google or Apple platform products. They then communicate with their friends over Meta platforms, Reddit, or Discord.
The decentralized web could never outrun money. It's difficult to out-engineer hundreds of thousands of the most talented, most highly paid engineers that are working to create these silos.
Fr tho, no ads - I'm not making money off them, I've got no invite code for you, I'm a human - I just don't get it. I've probably told 500 people about Brave, I don't know any that ever tried it.
I don't ever know what to say. You're not wrong, as long as you never try to do something else.
As someone who has been using Brave since it was first announced and very tightly coupled to the BAT crypto token, I must say it is much less effective nowadays.
I often still see a load of ads and also regularly have to turn off the shields for some sites.
I never have to turn off shields - I can count on one hand the number of times I've had to do that.
Maybe I have something additional installed I don't know.
Or rather, they'd block Brave.
Selling eggs is better how?
1. Most people don't have machines that can run even midsized local models well
2. The local models aren't nearly as good as the frontier models for a lot of use cases
3. There are technical hurdles to running local models that will block 99% of people. Even if the steps are: download LM Studio and download a model
Maybe local models will get so good that they cover 99% of normal user use cases and it'll be like using your phone/computer to edit a photo. But you'll still need something to make it automatic enough that regular people use it by default.
That said, anyone reading this is almost certainly technical enough to run a local model. I would highly recommend trying some. Very neat to know it's entirely run from your machine and seeing what it can do. LM Studio is the most brainless way to dip your toes in.
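For anyone curious what "dipping your toes in" can look like once a model is loaded: LM Studio can expose a local OpenAI-compatible server (by default on localhost:1234 - verify on your own install), and you can hit it with nothing but the Python standard library. A rough sketch, with the model name as a placeholder:

    # Rough sketch: chat with whatever model is loaded in LM Studio's
    # local server (OpenAI-compatible endpoint; port/path may differ on
    # your install). Standard library only.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps({
            "model": "local-model",  # placeholder; the loaded model is used
            "messages": [{"role": "user", "content": "Explain RAII in one sentence."}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])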
I'm curious what counts as a midsize model; 4B, 8B, or something larger/smaller?
What models would you recommend? I have 12GB of VRAM, so anything larger than 8B might be really slow, but I am not sure.
Large: Requires >128GB VRAM
Medium: 32-128GB VRAM
Small: 16GB VRAM
Micro: Runs on a microcontroller or GPUs with just 4GB of VRAM
There's really nothing worthwhile for general use cases that runs in under 16GB (from my testing) except a grammar-checking model that I can't remember the name of at the moment.
gpt-oss:20b runs on 16GB of VRAM and it's actually quite good (for coding, at least)! Especially with Python.
Prediction: The day that your average gaming PC comes with 128GB of VRAM is the day developers will stop bothering with cloud-based AI services. gpt-oss:120b is nearly as good as gpt5 and we're still at the beginning of the AI revolution.
LLMs are the internal combustion engine, and chatbot UIs are at the "horseless carriage" phase.
My personal theory is that even if models stopped making major advancements, we would find cheaper and more useful ways to use them. In the end, our current implementations will look like the automobile pictured below.
[0] https://group.mercedes-benz.com/company/tradition/company-hi...
[0] https://play.google.com/store/apps/details?id=com.google.ai....
Like 50% of internet users are already interacting with one of these daily.
You usually only change your habit when something is substantially better.
I don't know how free versions are going to be smaller, run on commodity hardware, take up trivial space and ram etc, AND be substantially better
If you are using an Apple product chances are you are already using self-hosted models for things like writing tools and don't even know it.
No, you usually only change your habit when the tools you are already using are changed without consulting you, and the statistics are then used to lie.
I think the misconception is that things cannot be overpriced for reasons other than inefficiency.
Just a bunch of billionaires jockeying for not being poor.
The period when you couldn't use Linux as your main OS because your organization asked for .doc files?
No thanks.
Because if there’s one thing worse than governments having nuclear weapons, it’s everyone having them.
It would be chaos. And with physical drones and robots coming, it would be even worse. Think "shitcoins and memecoins", but unlike those, you don't just lose the money you put in, and you can't opt out. They'd affect everyone, and you could never escape the chaos ever again. They'd be posting around the whole Internet (including here, YouTube deepfakes, extortion, annoyance, constantly trying to rewrite history, getting published, reputational destruction at scale, etc.), with constant armies of bots fighting. A dark forest.
And if AI can pay for its own propagation via decentralized hosting and inference, then the chance of a runaway advanced persistent threat compounds. It just takes a few bad apples, or even practical jokers, to unleash crazy stuff. And it will never be shut down, just build and build like some kind of Kessler syndrome. And I'm talking about with just CURRENT AI agent and drone technology.
*ahem* It's gonna be like every other tool/societal paradigm shift like the smartphone before this, and planes/trains/cars/ships/factories/electricity/oil/steam/iron/bronze etc. before that:
• It'll coalesce into the hands of a few corporations.
• Idiots in governments won't know what the fuck to do with it.
• Lazy/loud civvies will get lazier/louder through it.
• There'll be some pockets of individual creativity and freedom, like open source projects, that will take varying amounts of time to catch on in popularity or fade away to obscurity.
• One or two killer apps that seem obvious but nobody thought of, will come out of nowhere from some nobody.
• Some groups will be quietly working away using it to enable the next shift, whether they know it or not.
• Aliens will land turning everything upside down. (I didn't say when)
Do you think Sam Altman, Jeff Bezos, and Mark Zuckerberg are all wrong saying that we’re in a bubble? Do they “lack imagination?”
Also? What do I need imagination for, isn’t that what AI does now?
Even beyond that, the Soviet Union had just collapsed. The US had a balanced budget and 10X less debt. Globalization was just getting started. China was nothing compared to what it is today. It was the absolute most stable time of my life. Basically the opposite of 2025.
"This time things are different because AI blah blah blah".
When you boil it down, all bubbles are based around the idea of using your imagination to picture how "this time it is different".
The people who don't think this is a bubble are not seeing that their imagination is a bug and not a feature here.
Currently, investment into AI exceeds the dot-com bubble by a factor of 17. Even in the dot-com era, the early internet was already changing media and commerce in fundamental ways. November is the three-year anniversary of ChatGPT. How much economic value are they actually creating? How many people are purchasing AI-generated goods? How much are people paying for AI-provided services? The value created here would have to exceed what the internet was generating in 2000 by a factor of 17 (which seems excessive to me) to even reach parity with the dot-com bubble.
"But think where it'll be in 5 years"—sure, and let's extrapolate that based on where it is now compared to where it was 3 years ago. New models present diminishing returns. 3.5 was groudbreaking; 4 was a big step forward; 5 is incremental. I won't deny that LLMs are useful, and they are certainly much more productized now than they were 3 years ago. But the magnitude of our collective investment in AI requires that a huge watershed moment be just around the corner, and that makes no sense. The watershed moment was 3 years ago. The first LLMs created a huge amount of potential. Now we're realizing those gains, and we're seeing some real value, but things are also tapering off.
Surely we will have another big breakthrough some day—a further era of AI which brings us closer to something like AGI—but there's just no reason to assume AGI will crop up in 2027, and nothing less than that can produce the ROI that such enormous valuations will eventually, inexorably, demand.
https://en.wikipedia.org/wiki/California_gold_rush
Not to mention thousands of native inhabitants getting killed or enslaved:
The bet is not that there will be this one seminal moment of AGI where all the investment will make sense. The bet is that it has already showed up if you look for specific things, and will continue to do so. I wouldn't bet against the idea that LLMs will introduce themselves to all jobs, one at a time. Reddit moderators, for example, will meet AGI (as far as they know, their entire world being moderating) sooner than, say, I don't know, a Radiologist.
The universe of people getting paid to make CRUD apps is over. Many here will be introduced to AGI faster and sooner. Then it could be phone customer support representatives. It could show up for the face-to-face worker who is now replaced by a screen that can talk to customers (which already arrived yesterday, it's here). It'll appear erratic and not cohesive, unless you zoom out and see the contagion.
---
Rome needed to recognize that the Barbarian hordes had arrived. Pay attention to all the places the invasion has landed. You can pretend like the Vandals are not in your town for a little bit, sure, but eventually they will be knocking on many doors (most likely all doors). We're in a time period of RADICAL transformation. There is no half-assing this conviction. Practicality will not serve us here.
That is exactly what you need in order to make AI useful. Even a baby needs to cry to signal its needs to parents, which are like ASI to it. AI working on a task lacks in 3 domains: start, middle and finish.
AI cannot create its own needs, they belong to the context where it is used. After we set AI to work, it cannot predict the outcomes of its actions unless they pass through your context and return as feedback. In the end, all benefits accumulate in the same context. Not to mention costs and risks - they belong to the context.
The AI is a generalist; context is exactly what it lacks. And context is distributed across people, teams, companies. Context is non-fungible. You can't eat so that I get satiated. Context is what drives AI. And testing is the core contextual activity when using AI.
You're talking about TAs. I know TAs. Their jobs have not disappeared. They are not using AI to grade papers.
> Are you really going to hire that developer because you need extra manpower to stand up test coverage?
Yes. Unsupervised AI agents cannot currently replace developers. "Oh we'll get someone to supervise it"—yes, that person's job title is "developer" and they will be doing largely the same job they'd have done 5 years ago.
> The universe of people getting paid to make CRUD apps is over.
Tell that to all the people who get paid to make CRUD apps. Frankly, Airtable has done more to disrupt CRUD apps than AI ever did.
> Rome needed to recognize that the Barbarian hordes had arrived.
IDK what to tell you. All these jobs are still around. You're just fantasizing.
Not all developer jobs will disappear, but there will most certainly be fewer available. Any new grad can tell you how hard it is to find a software engineering job nowadays.
Certainly we cannot just let an AI spin and build software unattended. But what used to take days can now be done in minutes.
Probably a lot. I remember my mom recently showing me an AI-generated book she bought. And pretty much immediately refunded it. Not because it was AI, but because the content was trash.
I agree that AI is overhyped but so was the early web. It was projected to do a lot of things ”soon”, but was not really doing that much 4 years in. I don’t think the newspapers or commerce were really worried about it. The transformation of the business landscape took hold after the crash.
He figured there was a credit bubble like that around the time of the dot-com bubble and now, but the calculation is purely based on interest rates, and the money can go into any assets - property, stocks, crypto, etc. It's not AI specific.
He explains it here https://youtu.be/uz2EqmqNNlE
The Wicksell spread seems to have come from Wicksell's proposed 'natural rate of interest', detailed in his 1898 book:
https://en.wikipedia.org/wiki/Knut_Wicksell#Interest_and_Pri...
It's precisely why these companies are investing so much; robots combined with AI will be creating that value.
Will they? Within what timeframe? Because a bubble economy can't be told to "just hang on a few more years" forever. LLMs are normal technology; they will not suddenly become something they are not. There's no indication that general intelligence is right on the horizon.
> There's no indication that general intelligence is right on the horizon.
You don't need general intelligence for all the tasks; if a robot can do some of those tasks with limited intelligence cheaper than a human, that is all corporations care about.
To make a car analogy; the current LLMs are not the early cars, but the most refined horse drawn carriages. No matter how much money is poured into them, you won’t find the future there.
I think this is one of the greatest fallacies surrounding LLMs. This one, and the other one - scaling compute!! The models are plenty fine, what they need is not better models, or more compute, they need better data, or better feedback to keep iterating until they reach the solution.
Take AlphaZero for example: it was a simple convolutional network, not great compared to LLMs, small relative to recent models, and yet it beat the best of us at our own game. Why? Because it had unlimited environment access to play games against other variants of itself.
Same for the whole Alpha* family, AlphaStar, AlphaTensor, AlphaCode, AlphaGeometry and so on, trained with copious amounts of interactive feedback could reach top human level or surpass humans in specific domains.
What AI needs is feedback, environments, tools, real world interaction that exposes the limitations in the model and provides immediate help to overcome them. Not unlike human engineers and scientists - take their labs and experiments away and they can't discover shit.
It's also called the ideation-validation loop. AI can ideate, it needs validation from outside. That is why I insist the models are not the bottleneck.
The problem with language is that there is no known correct answer. Everything is vague, ambiguous and open-ended. How would we even implement feedback for that?
So yes, we do need new models.
This is likely true but not for the reasons you think about. This was arguably true 10 years ago too. A human brain uses 100 watts per day approx and unlike most models out there, the brain is ALWAYS in training mode. It has about 2 petabytes of storage.
In terms of raw capabilities, we have been there for a very long time.
The real challenge is finding the point where we can build something that is AGI-level with the stuff we have. Right now we might have the compute and data needed for AGI, but we might lack the tools needed to build a system that efficient. It's like a little dog trying to enter a fenced house: the topologically closest path between the dog and the house might not be accessible to the dog given its current capabilities (short legs, inability to jump high or push through the fence standing in between), so a topologically longer path might actually be the quickest way to reach the house.
In case it's not obvious, AGI is the house, we are the little dog and the fence represent current challenges to build AGI.
The human brain is not a 20-watt computer ("100 watts per day" is not right) that learns from scratch on 2 petabytes of data. State manipulations performed in the brain can be more efficient than what we do in silicon. More importantly, its internal workings are the result of billions of years of evolution, and continue to change over the course of our lives. The learning a human does over its lifetime is assisted greatly by the reality of the physical body and the ability to interact with the real world to the extent that our body allows. Even then, we do not learn from scratch. We go through a curriculum that has been refined over millennia, building on knowledge and skills that were cultivated by our ancestors.
An upper bound of compute needed to develop AGI that we can take from the human brain is not 20 watts and 2 petabytes of data, it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.
You have great points there and I agree. The only issue I take is with your remark above. Surely, by your own definition, this is not true. Evolution by natural selection is not a deterministic process, so 4 billion years is just one of many possible periods of time needed, and not necessarily the longest or the shortest.
Also, re "The human brain is not a 20-watt computer ("100 watts per day" is not right)", I was merely saying that there exist an intelligence that consumes 20 watts per day. So it is possible to run an intelligence on that much energy per day. This and the compute bit do not refer to the training costs but to the running costs after all, it will be useless to hit AGI if we do not have enough energy or compute to run it for longer than half a millisecond or the means to increase the running time.
Obviously, the path to design and train AGI is going to take much more than that just like the human brain did but given that the path to the emergence of the human brain wasn't the most efficient given the inherent randomness in evolution natural selection there is no need to pretend that all the circumstances around the development of the human brain apply to us as our process isn't random at all nor is it parallel at a global scale.
That's why I say that is an upper bound - we know that it _has_ happened under those circumstances, so the minimum time needed is not more than that. If we reran the simulation it could indeed very well be much faster.
I agree that 20 watts can be enough to support intelligence and if we can figure out how to get there, it will take us much less time than a billion years. I also think that on the compute side for developing the AGI we should count all the PhD brains churning away at it right now :)
I think we're probably still far from the full potential of LLMs, but I don't see any obstacles to developing and switching to something better.
We had plenty of options for better technologies both available and in planning, 56k modems were just the cost effective/lowest common denominator of their era.
It's not nearly as clear that we have some sort of proven, workable ideas for where to go beyond LLMs.
That's simply not true. Modems were basically the same tech in the dsl era, and using light instead of electricity is a very gradual refinement.
> we're probably still far from the full potential of LLMs
Then how come the returns are so extremely diminishing?
> I don't see any obstacles to developing and switching to something better.
The obstacle is that it needs to be invented. There was nothing stopping newton from discovering relativity either. We simply have no idea what the road forward even looks like.
How do we know that?
That's not how I remember it (but I was just a kid so I might be misremembering?)
As I remember (and from what I gather from media of the era), the late 80s/early 90s were hyper-optimistic about tech. So much so that I distinctly remember a (German?) TV show when I was a kid where they had what amounts to modern smartphones, and we all assumed that was right around the corner. If anything, it took too damn long.
Were adults outside my household not as optimistic about tech progress?
If you’re unfamiliar, the phone connectivity situation in the 80s and 90s was messy and piecemeal. AT&T had been broken up in 1982 (see https://www.historyfactory.com/insights/this-month-in-busine...), and most people had a local phone provider and AT&T was the default long-distance provider. MCI and Sprint were becoming real competition for AT&T at the time of these commercials.
Anyway, in 1993 AT&T was still the crusty old monopoly on most people’s minds, and the idea that they were going to be the company to bring any of these ideas to the market was laughable. So the commercials were basically an image play. The only thing most people bought from AT&T was long distance service, and the main threat was customers leaving for MCI and Sprint. The ads memorable for sure, but I don’t think they blew anyone’s mind or made anyone stay with AT&T.
AT&T and the baby bells were widely loathed (man I hated Ameritech…), so the idea they would extend their tentacles in this way was the main thing I reacted to. The technology seemed straightforwardly likely with Dennard scaling in full swing.
I thought it would be banks that owned the customer relationship, not telcos or Apple (or non-existent Google), but the tech was just… assume miniaturization’s plateau isn’t coming for a few decades.
Still pretty iconic/memorable, though!
https://bsky.app/profile/ruv.is/post/3liyszqszds22
Note that this is the state TV broadcasting this in their main news program. The most popular daily show in Iceland.
While this article presents both sides of the debate, I believe only one of them is real (the people hyping up the technology). There are people like me who are pessimistic about the technology, but we are not in any position of power, and our opinion on the matter is basically side noise. I think a much more common belief (among people with any say in the future of this technology) is that the technology is not yet at a point which warrants all this investment. There were people who said that about the internet in 1999, and they were proven 100% correct in the months that followed.
I pay for chatgpt plus and github copilot.
I'm curious what kind of problem your "brain cant wrap around", but the AI could.
One of the most common use cases is that I can't figure out why my SQL statement is erroring or doesn't work the way it should. I throw it into ChatGPT and it usually solves it instantly. In the past it was pretty common for me to spend a day stuck on a gnarly problem. Most developers have. Now I'd say that's extremely rare. Either an LLM will solve it outright quickly or I get enough clues from an LLM to solve it efficiently.
I've tried using LLMs for SQL and it fails at exactly that: complexity. Sure, it'll get the basic queries right, but throw in anything that's not standard everyday SQL and it'll confidently give you solutions that are not great.
If you don't know SQL enough to figure out these issues in the first place, you don't know if the solutions the LLM provides are actually good or not. That's a real bad place to be in.
You are using the term “hard problem” to mean something like solving P = NP. But in reality as soon as you step outside of your area of expertise most problems will be hard for you. I will give you some examples of things you might find to be hard problems (without knowing your background):
- what is the correct way to frame a door into a structural exterior wall of a house with 10 foot ceilings that minimized heat transfer and is code compliant.
- what is the correct torque spec and sequence for a Briggs and Stratton single cylinder 500 cc motor.
- how to correctly identify a vintage Stanley hand plane (there were nearly two dozen generations of them, some with a dozen different types), and how to compare them and assess their value.
- how to repair a cracked piece of structural plastic. This one was really interesting for me because I came up with about 5 approaches and tried two of them before asking an LLM and it quickly explained to me why none of the solutions I came up with would work with that specific type of plastic (HDPE is not something you can glue with most types of resins or epoxies and it turns out plastic welding is the main and best solution). What it came up with was more cost efficient, easier, and quicker than anything I thought up.
- explaining why mixing felt, rust, and CA glue caused an exothermic reaction.
- find obscure local programs designed to financially help first time home buyers and analyze their eligibility criteria.
In all cases I was able to verify the solutions. In all cases I was not an expert on the subject and in all cases for me these problems presented serious difficulty so you might colloquially refer to them as hard problems.
In this case, the original author stated that AI only good for rewriting emails. I showed a much harder problem that AI is able to help me with. So clearly, my problem can be reasonably described as “hard” relative to rewriting emails.
So the craft is lost. Making that optimised query or simplifying the solution space.
No one will ask "should it be relational even?" if the LLM can spit out sql then move on to next problem.
Anyway, I'm sure people have asked if we should be programming in C rather than Assembly to preserve the craft.
ChatGPT is currently the best solar calculator on the publicly accessible internet and it's not even close. It'll give you the internal rate of return, it'll ask all the relevant questions, find you all the discounts you can take in taxes and incentives, determine whether you should pay the additional permitting and inspection cost for net metering or just go local usage with batteries, size the batteries for you, and find some candidate electricians to do the actual installation once you acquire the equipment.
Edit: My guess is that it'd cost several thousand dollars to hire someone to do this for you, and it'll save you probably in the $10k-$30k range on the final outcomes, depending on the size of system.
So it's literally the same as googling "what's the ballpark solar installation cost for X in Y area". Unbelievable - and people pay $20+ per month for this.
e.g., if you had a heart condition, you can't just poll three LLMs and be "reasonably sure" you've properly diagnosed the ailment.
They probably meant that they could read (and trace) the logic in Python for correctness.
I would recommend spending that "couple thousand" for quote(s). It's a second opinion from someone who hopefully has high volume in your local market. And your downside could be the entire system plus remediation, fines, etc.
To be clear, I'm not opposed to experimenting, but I wouldn't rely on this. Appreciate your comment for the discussion.
Examples or it didn't happen.
No it won’t replace my job this year or the next, but what Sonnet 4.5 and GPT 5 can do compared to e.g. Gemini Flash 2.5 is incredible. They for sure have their limits and do hallucinate quite a bit once the context they are holding gets messy enough but with careful guidance and context resets you can get some very serious work done with them.
I will give you an example of what it can’t do and what it can: I am working on a complicated financial library in Python that requires understanding nuanced parts of tax law. Best in class LLM cannot correctly write the library code because the core algorithm is just not intuitive. But it can:
1. Update all invocations of the library when I add non-optional parameters that in most cases have static values. This includes updating over 100 lengthy automated tests.
2. Refactor the library to be more streamlined and robust to use. In my case I was using dataclasses as the base interface into and out of it and it helped me split one set of classes into three: input, intermediate, and output while fully preserving functionality. This was a pattern it suggested after a changing requirement made the original interface not make nearly as much sense.
3. Point me to where the root cause of failing unit tests was after I changed the code.
4. Suggest and implement a suite of new automated tests (though its performance tests were useless enough for me to toss out in the end).
5. Create a mock external API for me to use based on available documentation from a vendor so I could work against something while the vendor contract is being negotiated.
6. Create comprehensive documentation on library use with examples of edge cases based on code and comments in the code. Also generate solid docstrings for every function and method where I didn’t have one.
7. Research thorny edge cases and compare my solutions to commercial ones.
8. Act as a rubber ducky when I had to make architectural decisions to help me choose the best option.
It did all of the above without errors or hallucinations. And it’s not that I am incapable of doing any of it, but it would have taken me longer and would have tested my patience when it comes to most of it. Manipulating boilerplate or documenting the semantic meaning between a dozen new parameters that control edge case behavior only relevant to very specific situations is not my favorite thing to do but an LLM does a great job of it.
I do wish LLMs were better than they are because for as much as the above worked well for me, I have also seen it do some really dumb stuff. But they already are way too good compared to what they should be able to do. Here is a short list of other things I had tried with them that isn’t code related that has worked incredibly well:
- explaining pop culture phenomena. For example, I had never understood why Dr Who fans take a goofy, campy show aimed, in my opinion, at 12 year olds as seriously as if it were War and Peace. An LLM let me ask all the dumb questions I had about it in a way that explained it well.
- have a theological discussion on the problem of good and evil as well as the underpinnings of Christian and Judaic mythology.
- analyze in depth my music tastes in rock and roll and help fill in the gaps in terms of its evolution. It actually helped me identify why I like the music I like despite my tastes spanning a ton of genres, and specifically when it comes to rock, created one of the best and most well curated playlists I had ever seen. This is high praise for me since I pride myself on creating really good thematic playlists.
- help answer my questions about woodworking and vintage tool identification and restoration. This stuff would have taken ages to research on forums and the answers would still be filled with purism and biased opinions. The LLM was able to cut through the bullshit with some clever prompting (asking it to act as two competing master craftsmen).
- act as a writing critic. I occasionally like to write essays on random subjects. I would never trust an LLM to write an original essay for me but I do trust it to tell me when I am using repetitive language, when pacing and transitions are off, and crucially how to improve my writing style to take it from B level college student to what I consider to be close to professional writer in a variety of styles.
Again I want to emphasize that I am still very much on the side of there being a marketing and investment bubble and that what LLMs can do being way overhyped. But at the same time over the last few months I have been able to do all of the above just out of curiosity (the first coding example aside). These are things I would have never had the time or energy to get into otherwise.
With no disrespect I think you are about 6-12 months behind SOTA here, the majority of recent advances have come from long running task horizons. I would recommend to you try some kind of IDE integration or CLI tool, it feels a bit unnatural at first but once you adapt your style a bit, it is transformational. A lot of context sticking issues get solved on their own.
One thing that struck me: models are all trained on data with a cutoff 1-2 years ago. I think the training cutoff for Sonnet 4.5 is around May 2024. So I can only imagine what is being trained and tested currently. And also these models are just so far ahead of things like Qwen and Llama for the types of semi-complex non-coding tasks I have tried (like interpreting my calendar events) that it isn't even close.
Because some notable people dismissed things that wound up having a profound effect on the world, it does not mean that everything dismissed will have a profound effect.
We could just as easily be "peak Laserdisc" as "dial-up internet".
There's another presumably unintended aspect of the comparison that seems worth considering. The Internet in 2025 is certainly vastly more successful and impactful than the Internet in the mid-90s. But dial-up itself as a technology for accessing the Internet was as much of a dead-end as Laserdisc was for watching movies at home.
Whether or not AI has a similar trajectory as the Internet is separate from the question of whether the current implementation has an actual future. It seems reasonable to me that in the future we're enjoying the benefits of AI while laughing thinking back to the 2025 approach of just throwing more GPUs at the problem in the same way we look back now and get a chuckle out of the idea of "shotgun modems" as the future.
1. the opening premise comparing AI to dial-up internet; basically everyone knew the internet would be revolutionary long before 1995. Being able to talk to people halfway across the world on a BBS? Sending a message to your family on the other side of the country and them receiving it instantly? Yeah, it was pretty obvious this was transformative. The Krugman quote is an extreme, notable outlier, and it gets thrown out around literally every new technology, from blockchain to VR headsets to 3DTVs, so just like, don't use it please.
2. the closing thesis of
> Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them. They won’t call themselves a software engineer.
The idea that restaurant owners will be writing inventory software might make sense if the only challenge of creating custom inventory software, or any custom software, was writing the code... but it isn't. Software projects don't fail because people didn't write enough code.
I was only able to do this because I had some prior programming experience but I would imagine that if AI coding tools get a bit better they would enable a larger cohort of people to build a personal tool like I did.
That sounds pretty similar to long-distance phone calls? (which I'm sure was transformative in its own way, but not on nearly the same scale as the internet)
Do we actually know how transformative the general population of 1995 thought the internet would or wouldn't be?
As soon as the internet arrived, a bit late for us (I'd say 1999 maybe) due to the Minitel's "good enough" nature, it just became instantly obvious; everyone wanted it. The general population was raving mad to get an email address. I never heard anyone criticize the internet like I criticize the fake "AI" stuff now.
I have a suspicion this is LLM text, sounds corny. There are dozens open source solutions, just look one up.
The key variable for me in this house of cards is how long folks will wait before they need to see their money again, and whether these companies will go in the right direction long enough given these valuations to get to AGI. Not guaranteed and in the meantime society will need to play ball (also not a guarantee)
It's falling into the trap of assuming we're going to get to the science fiction abilities of AI with the current software architectures, and within a few years, as long as enough money is thrown at the problem.
All I can say for certain is that all the previous financial instruments that have been jumped on to drive economic growth have eventually crashed. The dot com bubble, credit instruments leading to the global financial crisis, the crypto boom, the current housing markets.
The current investments around AI that we're all agog at are just another large-scale instrument for wealth generation. It's not about the technology, just as VR and BioTech weren't about the technology.
That isn't to say the technology outcomes aren't useful and amazing; they are just independent of the money. Yes, there are trillions (a number so large I can't quite comprehend it, to be honest) being funneled into AI. No, that doesn't mean we will get incomprehensible advancements out the other end.
AGI isn't happening this round folks. Can hallucinations even be solved this round? Trillions of dollars to stop computers lying to us. Most people where I work don't even realise hallucinations are a thing. How about a Trillion dollars so Karen or John stop dismissing different viewpoints because a chat bot says something contradictory, and actually listen? Now that would be worth a Trillion dollars.
Imagine a world where people could listen to others outside of their bubble. Instead they're being given tools that reinforce the bubble.
The real parallel is Canal Mania — Britain’s late-18th-century frenzy to dig waterways everywhere. Investors thought canals were the future of transport. They were, but only briefly.
Today’s AI runs on GPUs — chips built for rendering video games, not thinking machines. Adapting them for AI is about as sensible as adapting a boat to travel across land. Sure, it moves — but not quickly, not cheaply, and certainly not far.
It works for now, but the economics are brutal. Each new model devours exponentially more power, silicon, and capital. It just doesn't scale.
The real revolution will come with new hardware built for the job (hardware that hasn't been invented yet), thousands of times faster and more efficient. When that happens, today's GPU farms will look like quaint relics of an awkward, transitional age: grand, expensive, and obsolete almost overnight.
A GPU is fundamentally just a chip for matrix operations, and that's good for graphics but also for "thinking machines" as we currently have them. I don't think it's like a boat traveling on land at all.
This happens to be useful both for graphics (the same "program" running on a huge number of pixels/vertices) and for neural networks (the same neural operations running on a huge number of inputs/activations).
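To make that concrete, here is a minimal sketch (mine, not the commenter's) of why one hardware primitive serves both workloads: each is a single matrix multiply applied to a large batch. NumPy stands in for the GPU here, and the shapes and sizes are arbitrary.

    # Same primitive, two workloads: one matrix op broadcast over a big batch.
    import numpy as np

    # Graphics: transform a million 3D vertices by one 3x3 matrix.
    vertices = np.random.rand(1_000_000, 3).astype(np.float32)
    transform = np.random.rand(3, 3).astype(np.float32)
    transformed = vertices @ transform.T                 # same op on every vertex

    # Neural net: push a million activations through one dense layer.
    activations = np.random.rand(1_000_000, 3).astype(np.float32)
    weights = np.random.rand(128, 3).astype(np.float32)
    layer_out = np.maximum(activations @ weights.T, 0)   # matmul + ReLU, same op on every row

    print(transformed.shape, layer_out.shape)            # (1000000, 3) (1000000, 128)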
Think 3D printers versus injection molds: you prototype with flexibility, then mass-produce with purpose-built tooling. We've seen this pattern before too. CPUs didn't vanish when GPUs arrived for graphics. The canal analogy assumes wholesale replacement. Reality is likely more boring: specialization emerges and flexibility survives.
The Claw allows a garbage truck to be crewed by one man where it would have needed two or three before, and to collect garbage much faster than when the bins were emptied by hand. We don't know what the economics of such automation of (physical) garbage collection portend in the long term, but what we do know is that sanitation workers are being put out of work. "Just upskill," you might say, but until Claw-equipped trucks started appearing on the streets there was no need to upskill, and now that they're here the displaced sanitation workers may be in jeopardy of being unable to afford to feed their families, let alone find and train in some new marketable skill.
So no, we're in The Claw era of AI, when business finds a new way to funge labor with capital, devaluing certain kinds of labor to zero with no way out for those who traded in such labor. The long-term implications of this development are unclear, but the short-term ones are: more money for the owner class, and some people out on their ass without a safety net, because this is Goddamn America and we don't brook that sort of commie nonsense here.
The waste collection companies in my area don't use them because it's rural and the bins aren't standardized. The side loaders don't work for all use cases of garbage trucks.
[0] https://en.wikipedia.org/wiki/Garbage_truck
>In 1969, the city of Scottsdale, Arizona introduced the world's first automated side loader. The new truck could collect 300 gallon containers in 30 second cycles, without the driver exiting the cab
I don't expect that to cease in my lifetime.
For a bit, I thought science and industry were finally starting to see the problem with our quality degradation and tech regression. Instead, the current hype cycle is all about settling for even crappier quality and lower reliability.
If you make the context small enough, we're back at /api/create /api/read /api/update /api/delete; or, if you're old-school, a basic function
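A minimal sketch of that collapse, with purely illustrative names: strip away enough context and a lot of line-of-business software really is just four operations over a dictionary.

    # Create/read/update/delete over a dict; names are hypothetical and illustrative.
    inventory = {}

    def create(item_id, data):
        inventory[item_id] = data

    def read(item_id):
        return inventory.get(item_id)

    def update(item_id, data):
        inventory[item_id] = {**inventory.get(item_id, {}), **data}

    def delete(item_id):
        inventory.pop(item_id, None)

    create("flour", {"qty": 12, "unit": "kg"})
    update("flour", {"qty": 10})
    print(read("flour"))   # {'qty': 10, 'unit': 'kg'}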
So when people say something about the future, they are looking into the past to draw projections or similar trends, but they may be missing changes in the full context. The factors of demand and automation considered here might be too few to understand the implications. What about the political, social, and economic landscape? These systems are not so insulated that they can be studied using just a few factors.
The situation is far from similar now. Now there's an app for everything and you must use all of them to function, which is both great and horrible.
From my experience, the current generation of AI is unreliable and so cannot be trusted. It makes non-obvious mistakes and often sends you off on tangents, which consumes energy and leads to confusion.
It's an opinion I've built up over time from using AI extensively. I would have expected my opinion to improve after 3 years, but it hasn't.
When the railroad bubble popped we had railroads. Metal and sticks, and probably more importantly, rights-of-way.
If this is a bubble, and it pops, basically all the money will have been spent on Nvidia GPUs that depreciate to 0 over 4 years. All this GPU spending will need to be done again, every 4 years.
Hopefully we at least get some nuclear power plants out of this.
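To put the four-year refresh point above in rough numbers, here is a back-of-the-envelope sketch; the capex figure is purely illustrative, not a sourced estimate.

    # Straight-line depreciation to zero means the spend recurs; it isn't a one-off.
    gpu_capex = 300e9            # illustrative: $300B of accelerators (assumption, not a sourced figure)
    useful_life_years = 4

    annual_depreciation = gpu_capex / useful_life_years
    print(f"${annual_depreciation/1e9:.0f}B/year just to stand still")   # $75B/year

    # Contrast with a railroad right-of-way or buried fiber, where the asset keeps
    # working for decades and the recurring cost is maintenance, not replacement.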
I'm still a fan of the railroad comparisons though for a few additional reasons:
1. The environmental impact of the railroad buildout was almost incomprehensibly large (though back in the 1800s people weren't really thinking about that at all.)
2. A lot of people lost their shirts investing in railroads! There were several bubbly crashes. A huge amount of money was thrown away.
3. There was plenty of wasted effort too. It was common for competing railroads to build out rails that served the same route within miles of each other. One of them might go bust and that infrastructure would be wasted.
It takes China 5 years now, but they've been ramping up for more than 20 years.
Heck, if nothing else, all the new capacity being created today may translate to ~zero-cost storage, CPU/GPU compute, and networking for startups in the future if the bubble bursts, and that itself may lead to a new software revolution. Just think of how many good ideas are held back today because deploying them at scale is too expensive.
Note that these are just power purchase agreements. It's not nothing, but it's a long ways away from building nuclear.
I agree the depreciation schedule always seems like a real risk to the financial assumptions these companies/investors make, but a question I've wondered about: will there be an unexpected opportunity when all these "useless" GPUs are put out to pasture? It seems like saying a factory will be useless because nobody wants to buy an IBM mainframe anymore, when an innovative company could repurpose a non-zero part of that infrastructure for another use case.
I think we may not upgrade every 4 years, but instead upgrade when the AI models are not meeting our needs AND we have the funding & political will to do the upgrade.
Perhaps the singularity is just a sigmoid with the top of the curve being the level of capex the economy can withstand.
Trains are closer to $50,000-$100,000 per mile per year.
If there's no money for the work it's a prioritization decision.
The author didn't mention them.
AI companies robbed so much data from the Internet, free and without permission, sacrificing the interests of website owners. It's not sustainable. It's impossible for AI to go far that way.
Software is similar to cars: the individual components need to be properly procured and put together, the result is very complex, and trust will be important. Will you trust that you, as a restaurant owner, vibe-coded your payment stack properly, or will you just drop in the three lines to integrate with Stripe? I think most non-tech business owners will do the latter.
The part about Jevons' paradox is interesting though.
What is clear, is that we have strapped a rocket to our asses, fueled with cash and speculation. The rocket is going so fast we don't know where we're going to land, or if we'll land softly, or in a very large crater. The past few decades have examples of craters. Where there are potential profits, there are people who don't mind crashing the economy to get them.
I don't understand why we're allowing this rocket to begin with. Why do we need to be moving this quickly and dangerously? Why do we need to spend trillions of dollars overnight? Why do we need to invest half the fucking stock market on this brand new technology as fast as we can? Why can't we develop it in a way that isn't insanely fast and dangerous? Or are we incapable of decisions not based on greed and FOMO?
They earn so much from oil and are so keenly aware this will stop that they'd rather spend a trillion on a failure than keep that cash rotting away with no future investment.
No project, no country, can swallow the Saudi oil money like Sam Altman can. So, they're building enormous data centers with custom nuclear plants and call that Stargate to syphon that dumb money in. It's the whole business model of Softbank: find a founder whose hubris is as big as Saudi stupidity.
Maybe it's my bubble, but so far I haven't heard anyone say that. What kind of jobs would those be, given that both forms of work, physical and knowledge, will be automatable sooner or later?
That claim just reads like he's concocted two sides for his position to be the middle ground between. I wrote essays like that in high school, but I try to be better than that now.
Because we all know how essential the internet is nowadays.
But in the case of AI, that argument is much harder to make. The cost of compute hardware is astronomical relative to the pace of improvements. In other words, a million dollars of compute today will be technically obsolete (or surpassed on a performance-per-watt basis) much faster than the fiber optic cables laid by Global Crossing.
And the AI data centers specialized for Nvidia hardware today may not necessarily work with the Nvidia (or other) hardware five years from now—at least not without major, costly retrofits.
Arguably, any long-term power generation capacity put down for data centers of today would benefit data centers of tomorrow, but I'm not sure much such investment is really being made. There's talk of this and that project, but my hunch and impression is that much of it will end up being small-scale local power generation from gas turbines and the like, which is harmful for the local environment and would be quickly dismantled if the data center builders or operators hit the skids. In other words, if the bubble bursts I can't imagine who would be first in line to buy a half-built AI data center.
This leads me to believe this bubble has generated much less useful value to benefit us in future than the TMT bubble. The inference capacity we build today is too expensive and ages too fast. So the fall will be that much more painful for the hyperscalers.
That is the real dial-up thinking.
Couldn't AI like be their custom inventory software?
Codex and Claude Code should not even exist.
Absolutely not. It's inherently software with a non-zero probability of error in every operation. You'd have a similar experience asking an intern to remember your inventory.
Like, I enjoy Copilot as a research tool, right, but at the same time, ANYTHING that involves delving into our chat history is often wrong. I own three vehicles, for example, and it cannot for its very life remember their year, make, and model. They're there, but they're constantly getting switched around in the buffer. And once I started posing questions about friends' vehicles, that only got worse.
Really. Tool use is a big deal for humans, and it's just as big a deal for machines.
This your first paradigm shift? :-P
I'm not saying it is useless tech, but no it's not my first paradigm shift, and that's why I can see the difference.
The only argument you (and Toucan, who's been around longer) seem to muster is that the systems aren't perfect. They occasionally say stupid things, need extra handholding, babble nonsense and write buggy code, commit blatant plagiarism, fall into fallacious reasoning traps, and can be fooled with simple tricks... unlike people, presumably.
I replied because you answered "you're doing it wrong" to a question about its failures. It seems you dismiss concerns about the smaller errors or failures without realizing the point being made. If it's "smart" enough to take international math medals and beat grandmasters at Go, but can't truly understand problems and anticipate needs or issues, then to me it's not a genuine intelligence, and in its current form it never will be.
It's not that they are not perfect, it's that they have no concept of reality, and it's evident in their failures. Beyond this point I am not interested in trying to convince you.
I’m not sure that’s a certainty.
A single image generally took nothing like a minute. Most people had moved to 28.8K modems, which would deliver a large image in an acceptable 10-20 seconds. Mind you, full-screen resolution was typically 800x600 and color was an 8-bit palette… so much less data to move.
Moreover, thanks to “progressive jpeg”, you got to see the full picture in blocky form within a second or two.
And of course, with pages less busy and tracking cookies still a thing of the future, you could get enough of a news site up to start reading in less time than it takes today.
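Rough arithmetic behind those figures (my own back-of-the-envelope; the JPEG size and the bits-per-byte overhead are assumptions, not measurements):

    # 10 bits per byte approximates start/stop bits and protocol overhead.
    modem_bps = 28_800
    effective_bytes_per_sec = modem_bps / 10       # ~2.9 KB/s

    raw_bytes = 800 * 600 * 1                      # 800x600 at an 8-bit palette ~= 469 KB uncompressed
    jpeg_bytes = 40_000                            # a typical "large" JPEG of the era, maybe 30-60 KB (assumption)

    print(raw_bytes / effective_bytes_per_sec)     # ~167 s if sent raw
    print(jpeg_bytes / effective_bytes_per_sec)    # ~14 s compressed: right in the 10-20 s range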
One final irk: it's a little overdone to claim that "For the first time in history, you can exchange letters with someone across the world in seconds". Telex had been around for decades, and faxes, taking 10-20 seconds per page, were already commonplace.
Does that comparison with the fiber infra from the dotcom era really hold up? Even when those companies went broke, the fiber was still perfectly fine a decade later. In contrast, all those datacenters will be useless when the technology has advanced by just a few years.
Nobody is going to be interested in those machines 10 years from now, no matter if the bubble bursts or not. Data centers are like fresh produce. They are only good for a short period of time and useless soon after. They are being constantly replaced.
Power gen = yes. Reasoning is trivial due to long useful asset life.
For compute, why would it be any different than fiber in the AI bubble pop case? Less useful, but not useless. Are you misremembering the fiber overbuild era? The fiber investments were impaired and sold off for pennies on the dollar. The value here wouldn't go to zero, either.
I mean, sort of, but the fiber optics in the ground have been upgraded to several orders of magnitude beyond their original capacity just by replacing the transceivers on either end. And the fiber itself has lasted, and will continue to last, for decades.
Neither of those properties is true of the current datacenter/GPU boom. The datacenter buildings may last a few decades but the computers and GPUs inside will not and they cannot be easily amplified in their value as the fiber in the ground was.
>We’re in the 1950s equivalent of the internet boom — dial-up modems exist, but YouTube doesn’t.
Mass production of telephone line modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada.
Shortly afterwards in 1959, the technology in the SAGE modems was made available commercially as the Bell 101, which provided 110 bit/s speeds. Bell called this and several other early modems "datasets".
> 1. Economic strain (investment as a share of GDP)
> 2. Industry strain (capex to revenue ratios)
> 3. Revenue growth trajectories (doubling time)
> 4. Valuation heat (price-to-earnings multiples)
> 5. Funding quality (the resilience of capital sources)
> His analysis shows that AI remains in a demand-led boom rather than a bubble, but if two of the five gauges head into red, we will be in bubble territory.
This seems like a more quantitative approach than most of the "the sky is falling", "bubble time!", "circular money!" etc. analyses commonly found on HN and in the news. Are there other worthwhile macro-economic indicators to look at?
It's fascinating how challenging it is to meaningfully compare recent events to prior economic cycles such as the Y2K tech bubble. It seems like it should be easy, but AFAICT it barely even rhymes.
Stock market capitalisation as a percentage of GDP, AKA the Buffett indicator.
https://www.longtermtrends.net/market-cap-to-gdp-the-buffett...
Good luck, folks.
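For anyone unfamiliar, the indicator itself is just one ratio: total stock market capitalisation over GDP. A tiny sketch with round, made-up numbers follows; check the linked chart for actual readings.

    # Buffett indicator = total stock market cap / GDP. Inputs below are illustrative only.
    def buffett_indicator(total_market_cap, gdp):
        return total_market_cap / gdp

    print(buffett_indicator(60e12, 30e12))          # 2.0 -> market cap at 200% of GDP
    print(buffett_indicator(15e12, 10e12))          # 1.5 -> 150%, for comparison

    # The ".75" adjustment mentioned further down would scale valuations before the ratio:
    print(buffett_indicator(60e12 * 0.75, 30e12))   # 1.5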
I'm sure there are other factors that make this metric not great for comparisons with other time periods, e.g.:
- rates
- accounting differences
If that bothers you, just multiply valuations by .75
Doesn't make much difference, even without doing the same adjustment for previous eras.
Buffett indicator survives this argument. He’s a smart guy.
We're at the end of Moore's Law, it's pretty reasonable to assume. 3nm M5 chips mean there are, what, a few hundred silicon atoms per transistor? We're an order of magnitude away from 0.2 nm, which is the diameter of a single silicon atom.
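A quick check of those figures, taking the numbers in the comment at face value ("3nm" is a marketing node name rather than a measured feature size, so real atom counts are larger than this):

    feature_nm = 3.0
    si_atom_diameter_nm = 0.2        # roughly, as stated above

    atoms_across = feature_nm / si_atom_diameter_nm
    print(atoms_across)              # ~15 atoms across one dimension
    print(atoms_across ** 2)         # ~225 in a 3nm x 3nm cross-section: "a few hundred", as claimed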
My point is, 30 years have passed since dial up. That’s a lot of time to have exponentially increasing returns.
There’s a lot of implicit assumption that “it’s just possible” to have a Moore’s Law for the very concept of intelligence. I think that’s kinda silly.
>The internet, and the state of the art of computing in general has been driven by one thing and one thing alone: Moore’s Law.
You're wrong here... the one thing driving the internet and state-of-the-art computing is money. Period. It wouldn't matter if Moore never existed and his law was never a thing; money would still be driving technology to improve.
You're kind of separating yin from yang and pretending that one begot the other. The reason so much money flooded into chip fabs is that compute is one of the few technologies (the only technology?) with recursive self-improvement properties. Smaller chip fabs lead to more compute, which enables smaller chip fabs through research modeling. Sure, and it's all because humans want to do business faster. But TSMC literally made chips the business and proved out the pure-play foundry model.
> Even if Moore's Law was never a thing
Then arguably in that universe, we would have eventually hit a ceiling, which is precisely the point I'm trying to make against the article: it's a little silly to assume there's an infinite frontier of exponential improvement available just because that was the prior trend.
> Moore's Law has very little to do with the physical size of a single transistor
I mean, it has everything to do with the physical size of a single transistor, precisely because of that recursive self-improvement phenomenon. In a universe where Moore's law doesn't exist, in 2025 we wouldn't be on 3nm production dies, and compute scale would have capped off decades ago. Or perhaps a lot of other weird physical things would be different, like macroscopic quantum phenomena, or an entire universe that is one sentient blob made from the chemical composition of cheeto dust.
But all of these advancements in processing power are driven by money, not by some made-up "law" that sounds nice on paper but has little to do with the real world. Sorry but "Moore's law" isn't really a "law" in any way like the laws of physics.
My whole fucking point is that neither are the AI scaling laws.
Please stop talking to me.
Your original comment was downvoted quite a bit. Because you're wrong about this statement, and it sticks out more than anything else you wrote.
>Please stop talking to me.
Likewise.
That's what my parents thought about computers and the internet, wondering what it's actually good for beyond burning $9000 in phone bills to Zerg rush Protoss noobs.
And all the other things computers+internet could do, they could already do through other more reliable (at the time) ways.
But then it turned out that simply making mundane tasks just a little bit faster, and reducing the need to interact with strangers by just that little bit, created a new step on the staircase, a new baseline, with which to reach and do other grander things more easily.
What is the AI version of that? Maybe code generation. Maybe.
Being able to plan a trip from a single sentence would be one killer app for many people:
"I'm free next week. I'd like to go to A, B, or C for a couple days. What's a cheap flight and a room within this budget near X area?"
and if it could go and also make a booking through your accounts that would be amazing.
But right now even Google's Gemini is an utterly useless dumbass if asked to search Google Flights or Airbnb.
I mean, if LLMs could just be a natural-language wrapper around existing tools, that'd be amazing in itself. But corporivalry has made that a stillborn dream.
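For what it's worth, the "natural-language wrapper around existing tools" idea is not exotic; a sketch of the shape it could take is below. Every name here is hypothetical: call_llm() stands in for whichever model you use, and search_flights() for a real travel API that would need keys, auth, and error handling.

    # Sketch only: the model turns free text into a structured query; ordinary
    # deterministic tools do the actual searching and booking.
    import json

    def call_llm(prompt: str) -> str:
        """Placeholder for a real model call; assumed to return JSON matching the schema in the prompt."""
        raise NotImplementedError

    def search_flights(origin: str, dest: str, date: str, max_price: float) -> list:
        raise NotImplementedError   # placeholder for an existing flight-search tool

    def plan_trip(user_request: str):
        schema_prompt = (
            "Extract a JSON object with keys origin, destinations (list), "
            "date, max_price from this request:\n" + user_request
        )
        query = json.loads(call_llm(schema_prompt))

        options = []
        for dest in query["destinations"]:
            options += search_flights(query["origin"], dest, query["date"], query["max_price"])
        return options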
What is this about? Weird thing to say.
I'm no expert but I can't help feeling there's lots of things they could be doing vastly better in this regard - presumably there is lots to do and they will get around to it.
And not just that, they come out with an iPhone that has _no_ camera as an attempt to really distance themselves from all the negative press tech (software and internet in particular) has at the moment.
The Apple engineers, with their top level unfettered access to the best Apple AI - they'll convince shareholders to fund it forever, even if normal people never catch on.