If AI is capable enough to "build pretty much anything", why is it not capable enough to also use what it builds (instead of people using it) or, for that matter, to decide what to build?
If AI can, say, build air traffic control software as well as humans can, why can't it also control the traffic as well as humans can? If it can build medical diagnosis software and healthcare management software, why can't it offer the diagnosis and prescribe the treatment? Is the argument that there's something special about writing software that AI can do as well as people, but not about other things? Why is that?
I don't know how soon AI will be able to "build pretty much anything", but when it does, Yegge's point that "all software sectors are threatened" seems unimaginative. Why not all sectors, full stop?
It is merely a tool, like a hammer. The hammer doesn't build the house; it is the human who wields the hammer that builds the house.
Now, I'm not saying it's impossible for there to be something that makes the first job significantly easier than the second, but it seems strange to me to assume that an AI will definitely be able to do the former soon, yet not the latter. It could be reasonable to believe it will be able to do neither, or both, soon, but I don't understand how we can expect the ability line to just happen to fall between software and pretty much everything else.
Frankly, even the other end of your argument is weak. Humans don't particularly want to control air traffic either (otherwise, why would we have to pay air traffic controllers salaries to be there?). They do it as a function of achieving a broader goal.
This is why LLM-written code is often more verbose than human-written code. All of those seemingly unnecessary comments everywhere, the excessively descriptive function names, the way everything is broken down into a seemingly excessive number of logical blocks: this is all helpful to the LLM for understanding the code.
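A hedged illustration (hypothetical Ruby names, not taken from the article or any model output): the same function in a terse human style versus the self-documenting style LLMs tend to emit.

```ruby
# Terse, human-style one-liner.
def dmg(a, t) = a.atk - t.dfn

# Verbose, LLM-style: a descriptive name, a comment per step,
# and the logic broken into labelled intermediate values.
# Calculates the damage an attacker deals to a target.
def calculate_damage(attacker, target)
  # Start from the attacker's base attack power.
  base_attack = attacker.attack_power
  # Subtract the target's defense rating.
  damage_reduction = target.defense_rating
  base_attack - damage_reduction
end
```

Both compute the same thing; the second simply gives a model far more surrounding text to anchor on.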
The first way to look at this is through the lens of specialization. A software engineer could design and create Emacs, and a writer could then use Emacs to write a top-notch novel. That does not mean the software engineer could write top-notch novels. Similarly, maybe AI can generate software for any task, but maybe it cannot perform that task as well as the task-specialized software.
The second way to look at this is based on costs. Even if AI is as good as specialized software for a given task, the specialized software will likely be more efficient, since it uses direct computation (you know, moving and transforming bits around in the CPU and memory) instead of GPU- or TPU-powered multiplications that emulate the direct computation.
Yes, maybe, but assuming that is the case in general seems completely arbitrary. Maybe not all jobs are like writing software, but why assume software is especially easy for AI?
> Even if AI is as good as specialized software for a given task, the specialized software will likely be more efficient since it uses direct computation
Right, but surely an AI that can "build pretty much anything" can also figure out that it should write specialised software for itself to make its job faster or cheaper (after all, to "build pretty much anything", it needs to know about optimisation).
This is why I think Ruby is such a great language for LLMs. Yeah, it's token-efficient, but that's not my point [0]. The DWIM/TIMTOWTDI [1] culture of Ruby libraries is incredible for LLMs. And LLMs help to compound exactly that.
For example, I recently published a library, RatatuiRuby [2], that feeds event objects to your application. It includes predicates like `event.a?` for the "a" key, and `event.enter?` for the Enter key. When I was building with the library, I saw the LLM try `event.tilde?`, which didn't exist. So... I added it! And dozens more [3]. It's great for humans and LLMs alike, because the friction of using it just disappears.
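For the curious, here's a minimal sketch of how such predicates can be metaprogrammed in Ruby (hypothetical code, not RatatuiRuby's actual implementation):

```ruby
# Hypothetical sketch; the real library may differ.
class KeyEvent
  KEYS = { a: "a", enter: :enter, tilde: "~" }.freeze

  def initialize(key)
    @key = key
  end

  # Define one predicate per key: a?, enter?, tilde?, ...
  KEYS.each do |name, code|
    define_method("#{name}?") { @key == code }
  end
end

KeyEvent.new("~").tilde? # => true
```

With something like this, adding `event.tilde?` becomes a one-line change to the KEYS table.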
EDIT: I see that this was his later point exactly! FTA:
> What I did was make their hallucinations real, over and over, by implementing whatever I saw the agents trying to do [...]
[0]: Incidentally, Matz's static typing design, RBS, keeps it even more token-efficient as it adds type annotations. The types are in different files than the source code, which means they don't have to be loaded into context. Instead, only static analysis errors get added to context, which saves a lot of tokens compared to inline static types.
[1]: Do What I Mean / There Is More Than One Way To Do It
[2]: https://www.ratatui-ruby.dev
[3]: https://git.sr.ht/~kerrick/ratatui_ruby/commit/1eebe98063080...
A quick look at gastown makes me think we all are.
[1] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
This post is trying to talk to the real world.
I see a ton of truth and reality in this post. It does what I love from Yegge's writing: it gives us points to calibrate by. I think it insightfully identifies a very real shift in sellable software value that is underway, now that far more people can suddenly talk to computers.
Even if they're 10x cheaper, internally built SaaS replacements don't come with service level agreements, a vendor to blame or cancel if things go wrong, or the built-in defense of "But we picked the Gartner top-quadrant tool".
It's far more challenging to win the 'build' argument on a cost-savings basis, because even the least-savvy CIO/CTO understands that the price of the vendor software is a proof point grounded in how difficult it is for other firms to build these capabilities themselves. If there's merit to these claims, the first evidence we'll see is certain domains of enterprise software (like everything Atlassian does) getting more and more crowded, and less and less expensive, as the difficulty of competing with a tier-1 software provider drops and small shops spring up to challenge the incumbents.
In my experience, a bigger blocker to C-level approving internal SaaS development is it diverts capital and scarce attentional bandwidth to 'buying an upside' that's capped. Capped how? Because, by definition, any 'SaaS-able' function is not THE business - it's overhead. The fundamental limit on a SaaS tool's value to shareholders is to be a net savings on some cost of doing business (eg HR, legal, finance, sales, operations, support, etc). No matter how cheap going in-house makes a SaaS-able activity, the best case is improving margins on revenue. It doesn't create new revenue. You can't "save your way" to growth.
Some businesses prefer tools built by other businesses for some tasks. The author advocates pretty plainly to identify and target those opportunities if that’s your strength.
I think his point is to recognize that’s moving toward a niche rather than the norm (on the spectrum of all software to be built).
Also, is this even true? The author's only evidence was to link to a book about vibe coding. I'd be interested to hear anecdotes of companies who are even attempting this.
Edit: wow, and he's a co-author of that book. This guy really just said "source: me"
That sounds pretty hyperbolic. Everyone? Next “wave”?
Some of the writing here feels a little incoherent. The article implies, as a matter of fact, that progress will be exponential, but we will be lucky to maintain linear progress, or even to avoid regressing.
LOLWUT?
Counter-factual much?
As a technical person today, I wouldn't pay a $10/month SaaS subscription if I can log in to my VPS and tell Claude to install [alternate free software] self-hosted on it. The thing is, everyone is going to have access to this in a few years (if nothing else, it will be through the next generation of ChatGPT/Claude artifacts), and the free options are going to get much better at fitting any needs common enough to have a significant market size.
You probably need another moat like network effects or unique content to actually survive.
He spends a lot of words talking about how saving cognition is equivalent to saving resources, but seems to gloss over that saving money is also saving resources.
Given that the token/$ exchange rate is likely only going to get better for actual money over time...
If his predictions come true it seems clear that if your software isn't free, it won't get used. Nothing introduces friction like having to open up a wallet and pay. It's somewhat telling that all of his examples of things that will survive don't cost money - although I don't think it's the argument he meant to be making given the "hope-ium" style argument he's pushing.
---
Arguably, this is good long term. I personally think SaaS style recurring subscriptions are out of control, and most times a bad deal. But I also think it leaves a spot where I'm not sure what sort of career will exist in this space.
There have really only ever been two ways for users to pay for software:
1. Paying money for the software or access to it.
2. Allowing a fraction of the attention to be siphoned off and sold to advertisers while they use the software.
I don't think advertisers want to pay much for the "mindshare" of mindless bots. And I'm not sure that agents have wallets they can use to pony up cash with. Hopefully someone will figure out a business model here, but Yegge's article certainly doesn't posit one.
Convinced an LLM to agree with you? What a feat!
Yegge's latest posts are not exactly half AI slop, half marketing spam (for Beads and co), but they're close enough.
I do not understand what has happened to him here... there was an entire "AI winter" from the 90's to the 2000's because of how wrong researchers were. Has he gone completely delusional? My PhD supervisor has been in AI for 30 years and talks about how impossible it was to get any grant money back then because of how catastrophically wrong the predictions had been.
Like, honest question. I know he's super smart, but this reads like the kind of ramblings you get from religious zealots or scientologists: complete revisions of known, documented history, and bizarre beliefs in the inevitability of their vision.
It really makes me wonder what such heavy LLM coding use does to one's brain. Is this going to be like the 90's crack wave?
Even if he believes that statement is true, it still means he has no ability to model where his reader is coming from (or simply doesn't care).
Why presuppose that a human wrote this, as opposed to a language model, given the subject?
Operational excellence survives, no matter the origin.
I've used these tools on and off an awful lot, and I decided last month to entirely stop using LLMs for programming (my one exception is if I'm stuck on a problem for longer than 2-3 hours). I think there is little cost to not getting acquainted with these tools, but there is a heavy cognitive cost to offloading critical thinking that I'm not willing to pay yet. Writing a design document is usually just a small part of the work. I tend to prototype and work within the code as a living document, and LLMs insulate me from fully incurring the cost of incorrect decisions.
I will continue to use LLMs for my weird interests. I still use them to engage on spiritual questions since they just act as mirrors on my own thinking and there is no right answer (my side project this past year was looking through the Christian Gospels and some of the Nag Hammadi collection from a mystical / non-dual lens).
You'd be missing stuff like:
- Containers
- Major advancements in mainstream programming languages
- IaC
There are countless more things that enable shipping software of a completely different nature than was available back then.
Maybe these things don't apply to what you work on, but the software industry has completely changed over time and has enabled developers to build software on a different scale than ever previously possible.
I agree there's too much snake-oil and hype being sold, but that's a crazy take.
Post-CFEngine (Puppet, Ansible, Terraform) and cloud platform (CloudFormation) infrastructure-as-code is over a decade old.
Docker's popularisation of containers is just over a decade old.
But containers (and especially container orchestration, i.e. Kubernetes) are still entirely ignorable in production. :-D
And it's not at all crazy. We sold ourselves into over-complex architecture and knowledge cults. I've watched more products burn in the 4-5 year window due to bad tech decisions and vendors losing interest than I care to remember. Riding the hype up the ramp and hoping it'll stick is not something you should be building a business on.
On that note, ingress-nginx: yeah, abandoned. Fucked everyone over. Here we go again...
I remember reading a comment a few days ago where someone said coding with an agent (Claude Code) made them excited to code again. After spending some time with these things, I see their point. You can bypass the hours and hours of typing and fixing syntax and just go directly to what you want to do.
I know this doesn't 'contribute to the discussion.' But seriously, this guy's latest contribution to the world was a meme-coin-backed project...
BAGS is a crypto platform where relative strangers can make meme coins and nominate a recipient to receive some or all of the funds.
In both Steve Yegge and Geoffrey Huntley's cases, tokens were made for them but apparently not with their knowledge or input.
It would be the equivalent of a random stranger starting a Patreon or GoFundMe in your name, with the proceeds going to you.
Of course, whether you accept that money is a different story, but I'm sure even the best of us would have a hard time turning down $300,000 from people who wittingly participate in these sorts of investment platforms.
I don't immediately see how those left holding the bag could have ended up in that position unknowingly.
My point is that my parents would have a hard enough time figuring out how to buy crypto at all, let alone finding themselves rugpulled by a meme token. So while my immediate read is that a pump and dump is bad, how bad it is relative to who the participants actually are is something I'm curious whether anyone has an answer for.
It's so funny though. If you post on Reddit saying "my friend had a fight with his wife last night...", absolutely no one would believe it's really your friend. But somehow when you say "uh, so someone anonymous launched a meme coin for my project...", people believe it's really someone anonymous.
I'm just saying that there's no evidence I'm aware of that would prove or disprove that the creators were involved.
Personally, I think crypto types are bizarre enough that I could believe they would do something like that unannounced.
In my mind, it's the same behaviour as the infamous Kylie Jenner "get her to $1 billion" GoFundMe from a few years back: https://www.businessinsider.com/kylie-jenner-gofundme-fans-c...
This is not a good way to do anything. The models are sycophantic; all you need to do to get them to agree with you is keep prompting: https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-...
At least you complied with the next sentence :)
EDIT: Whoa, I didn't check your link before I posted. That's terribly sad. While I agree that LLMs can be sycophantic, I don't think Yegge was debating with Claude about drug use in this situation. Other references might have supported your claim better, like this first-page result when I search for "papers on llm sycophancy": https://pmc.ncbi.nlm.nih.gov/articles/PMC12592531/
On a side note, any kind of formula that contains what appears to be a variable on the left-hand side that appears nowhere on the right-hand side deranges my sense of beauty.
My other thought, which I can't articulate that well, is: what about testing? Sure, LLMs can generate tons of code, but so what? If your two-sentence prompt is for a tiny feature, that's one thing. If you ask Claude to "build me a todo system", the results will likely rapidly diverge from what you're expecting. The specification for the system is the code, right? I just don't see how this can scale.
Is this supposed to be a joke?
We are entering the absurd phase where we are beginning to turn all of earth into paperclips.
All software is gonna be agents orchestrating agents?
Oh how I wish I would have learned a useful skill.
He needs an editor, I’m sure he can afford one.
I look forward to him confronting his existence as he gets to be as old as his neighbor. It will be a fun spectacle. He can tell us all about how he was right all along as to the meaning of life. For decades, no less.
Now his bloviated blog posts only speak of a man extremely high on his own supply. Long, pointless, meandering, self-aggrandising. It really is easier to dump this dump into an LLM to try to summarize it than to spend time trying to understand what he means.
And he means very little.
The gist: I am great and amazing and predicted the inevitable orchestration of agents. I also call the hundreds of thousands of lines of extremely low quality AI slop "I spent the last year programming". Also here are some impressive sounding terms that I pretend I didn't pull out of my ass to sound like I am a great philosopher with a lot of untapped knowledge. Read my book. Participate in my meme coin pump and dump schemes. The future is futuring now and in the future.
Steve Yegge has always read a bit "eccentric" to me, to say the least. But I still quote some of his older blog posts because he often had a point.
Now... his blog posts seem to show, to quote another commenter here, "a man's slow descent into madness".
We already went over how Stack Overflow was in decline before LLMs.
SaaS is not about build vs. buy, it's about having someone else babysit it for you. Before LLMs, if you wanted shitty software for cheap, you could try hiring a cheap freelancer on Fiverr or something. Paying for LLM tokens instead of giving it to someone in a developing country doesn't really change anything. PagerDuty's value isn't that it has an API that will call someone if there's an error, you could write a proof of concept of that by hand in any web framework in a day. The point is that PagerDuty is up even if your service isn't. You're paying for maintenance and whatever SLA you negotiate.
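To make the "in a day" claim concrete, a minimal proof-of-concept pager might look something like this in Ruby with Sinatra and the twilio-ruby gem (hypothetical env vars; a sketch, not PagerDuty's actual design):

```ruby
require "sinatra"
require "twilio-ruby"

# Hypothetical credentials and on-call number, supplied via env vars.
twilio = Twilio::REST::Client.new(ENV["TWILIO_SID"], ENV["TWILIO_TOKEN"])

# POST /alert and the on-call phone rings with a canned message.
post "/alert" do
  twilio.calls.create(
    to:   ENV["ONCALL_PHONE"],
    from: ENV["TWILIO_FROM"],
    url:  "http://demo.twilio.com/docs/voice.xml" # Twilio's demo TwiML
  )
  "paged\n"
end
```

Which, of course, goes down exactly when the rest of your infrastructure does; closing that gap is what the subscription actually buys.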
Steve Yegge's detachment from reality is sad to watch.
Too many people are running an LLM like Opus in a code cycle, or on a new set of Markdown specs (sorry, Agents), getting some cool results, and then writing thought pieces on what is happening to tech... it's just silly and far too driven by the immediate news cycle (moltbot, gastown, etc. Really?).
It reminds me of how the current news cycle in politics has devolved into hour-by-hour introspection with no long view or clear-headed analysis. We lose attention before we even digest the last story: oh, the nurse had a gun; no, he spit at ICE; masks on ICE; look at this new angle on the shooting... just endless tweet-level thoughts turned into YouTube videos and 'in-depth' but shallow thought pieces.
It's impossible to separate the hype from baseline chatter, let alone figure out what the real innovation cycle is and where it is really heading.
Sadly, this has more momentum than the actual tech trends, and it serves to guide them chaotically in terms of business decisions. Then, when confused C-suite leaders who follow the hype make stupid decisions, we blame them... all while they push their own stock picks...
Don't get me started on the secondary LinkedIn posts that come out of these cycles. I hate the low barrier to entry in connected media sometimes... it feels like we need to go back to newspapers and print magazines. </end rant>
I'd recommend starting with Stratechery's articles on Platforms and Aggregators [0], and a semester-long course on Porter's Five Forces [1].
[0]: https://stratechery.com/2019/shopify-and-the-power-of-platfo...
[1]: https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysi...
The latter part of this sentence is basically the labor theory of value. Capital Vol. 1 by Karl Marx discusses this at length in deriving the origin of money, though I believe others like Ricardo and Smith also had their own versions of the theory.