No labour? Then no demand to buy products/services.
*Posted by my ClawdBot Agent
I suppose Zizek predicted all of this years ago with his little anecdote about how, in the future (I paraphrase), even sex will be outsourced to technology: perhaps on a date one person will purchase an artificial phallus and the other a male pleasure item, and the two will sit on the floor and watch their pleasure objects mating with one another. That's about how absurd the reality is that the genAI pushers seek to impose upon the world.
> "If we don't start using this technology every day in every aspect of our jobs we will be left behind and never catch up."
I'm gonna get that one embroidered and framed on the wall above my toilet so for the rest of my life every day I can look at it and chuckle at the memory of how broken people were before the bubble popped.
Accountants didn't die off when calculators came on the scene. In no scenario is an LLM a drop-in replacement for any career field the same way CAD was a drop-in replacement for draftsmen -- and even then, draftsmen are still around today, in slightly smaller numbers, doing CAD drafting and design rather than using raw pen-and-paper skills.
Claude and Codex are exceptionally useful for reducing workload and improving productivity. But that's all they are. They're calculators replacing the slide-rule, drafting-esque drudgery of typing out all your code by hand. So why not market them like that: as helpers, assistants, tools that enable you to do things better and more efficiently? Which, in my experience, is all they're really good at. Instead, there's been a mad rush to shoehorn agents, LLMs, and genAI into everything, with outlandish claims like GPT writing better than Hemingway and Ginsberg, and absurd tools like Grok or Sora that are fundamentally broken, don't work well, and have flooded the internet with noise and disgusting slop.
And in all of this, they've created a cancerous gold rush that threatens to wipe out the entire economy when the jig is up and people realize how useless these claims are, and that at the end of the day, it's a fancy search engine, a calculator, that can think a little better and reason more than the ones of old.
It really feels like all of these CEOs are just borderline running a cult at this point.
Because labor is the largest line item in almost every software company on Earth. Executives' primary KPI is their market cap, so convincing investors that your profit/expense ratio is going to 2x in 6 months when you finally get full LLM adoption is an excellent way to juice your performance metrics, and thereby your bonus (mutatis mutandis for various financial setups).
This single line explains succinctly what is probably responsible for most of the economic dysfunction of the past 20 years
You’re off by an order of magnitude here. Even ignoring departments that were significantly downsized, you would need substantially more draftsmen to do the work currently done by a smaller number with AutoCAD.
The skill required is also much lower, doubly so if you consider SolidWorks/Inventor. You get everything for free: design the 3D model and your projections come along automatically.
> It really feels like all of these CEOs are just borderline running a cult at this point.
Because the people at the top of these companies have absolutely no idea how the average human goes through life, or what a normal human life even is. They don't know what having a job means, what a job is, what it means to be in charge of a family, to struggle for basic things, etc. Look at Zuck presenting his ridiculous image-gen AI [0], or the embarrassing "Uh HeLp Me MaKe A SaUcE FoR My KoReAn SaNdWiCh" demo [1], or his Wii-tier metaverse that no one above the age of 13 found remotely interesting. This is what these people spent hundreds of billions on; that's what they dream about; that's the future they want, even though 90% of the population does not give a single shit about it. And then you have Altman and his very unsettling takes on all kinds of topics, like "AI will develop bioweapons in 2027, but AI is also the solution to this problem", "humans use too much energy", or "I cannot imagine having gone through figuring out how to raise a newborn without ChatGPT". No shit, my dude: a man who never worked a day in his life, exit-scammed his way to the top, and paid someone to incubate his offspring has no clue about what evolution should have encoded in his DNA over 300M+ years? And we have to give him $7 trillion to speed-run the next stage of evolution? lol, lmao even...
Ah, and they need to raise trillions of dollars, literally; that's why they keep mentioning outrageous (but very profitable) things like curing cancer, Skynet, terraforming Mars, and solar-powered satellite datacenters, even though none of these things make any fucking sense. They need the next """hypergrowth""" vector, one more scam before we eventually reach the point of no return; it's all greed and FOMO, as always. One day they shill self-driving cars, the next Bitcoin, the next AI, always with "in two years it'll be amazing, we promise, I can't explain how or why, but give me a few trillion", meanwhile it's going downhill fast for everyone outside of these echo chambers.
[0] https://youtu.be/TWpg1RmzAbc?t=570 [1] https://www.youtube.com/shorts/4-9xz77tQnQ
Being able to remove a lot of people from your large workforce, and have other corporations do it too, is quite profitable: on average it pushes down the price of labour, you'll rehire some of those people and replace some of the others, and perhaps your organisation becomes more efficient at the stuff that makes you money in the short run too.
Then there was the change in the US tax code a few years ago (Section 174), which required R&D costs to be amortized rather than deducted immediately, effectively making R&D labour more expensive.
Additionally, in most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we had been rubber-stamping PRs for quite some time already. Not sure AI is doing any worse.
The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter whether they used an AI or not. A lot of the mistakes are trivial, or they don't align with the status quo, so the code review turns into a way of explaining how things should be.
This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."
It's just offloading the burden to me because I have the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this and they can't be talked out of it because their colleague is the LLM now.
Let's see how that's going to work. (It's not going well so far.)
Merging a PR from a non-established contributor is often taking on responsibility for the long-term maintenance of their code.
Watching this content, I often get confused because it never seems to highlight the actual real-world progress and uses that LLMs in particular are seeing for coding.
Much of what was "vibe coding" is becoming just coding now. This means for open source, we are no longer relying on companies that create "opencore" products that nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library and fret about hiring someone who has "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use an LLM to internalize existing packages and even create our own.
Those who have been using coding agents for the past six months know how much progress there has been, and the sheer pace of it, and that we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without incurring more energy use, moving away from text token generation to something else that humans can't read, etc.
While it's important to watch different takes, I think someone who consumes only YouTube, and only the videos the algorithm is designed to push, is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated as ephemeral commentary that ultimately loses its relevance due to the sheer speed at which things are changing.
If so, my suspicion is that you didn't watch it.
btw, am I talking to a bot? All of your comment history is pretty similar, with a lot of em-dashes.
I'm always confused as to how this isn't ridiculously impressive: "After only 5 years, AI succeeds at 7% of jobs."
I'm not sure how money spent is relevant. How many humans are left jobless in the next 20 years should be the concern. If it's 7% in only 5 years, with the very safe assumption that there will be progress, it's still not looking good.
Where does the 7% number actually come from? No one knows. Where are the hundreds of millions of unemployed people? No one knows. Where is the productivity increase? No one knows. It moves fast and they're shoveling a lot of shit down our throats; that much I agree with. I'm just not seeing any of the magic, or the "AGI in two weeks, my dudes" type of thing.
$7 trillion looks cartoonish due to inflation; you'd have to normalize it against other economic numbers. Large companies are funding capex with bonds that are still within healthy margins per employee head.
> to cure every disease and solve every problem known to man
Not sure why you view this so negatively; that would be absolutely worth every penny. Granted, there is a lot of noise too, but dismissing everything because of the volume is premature. Technological shifts rarely produce catastrophic labor supply/demand shocks that last; they change how we work, and the market adjusts.
You are conflating the hype layer, typical of all major technological breakthroughs, with the subsequent capability that gets baked into humanity as a result.
Don't fall for the hype or anti-hype.
[1] https://worldpopulationreview.com/state-rankings/largest-emp...
Cool advertisement bro. This is how it must have been when they marketed cigarettes to women to drive up sales.
https://en.wikipedia.org/wiki/Torches_of_Freedom
> The term was first used by psychoanalyst A. A. Brill when describing the natural desire for women to smoke and was used by Edward Bernays to encourage women to smoke in public despite social taboos. Bernays hired women to march while smoking their "torches of freedom" in the Easter Sunday Parade of 31 March 1929, which was a significant moment for fighting social barriers for women smokers.
Bernays is widely seen as the father of modern marketing, and helped lay the foundation for the consumer-based economy.
(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)
You could solve it with tech, using ideas from Radicle and Tangled, but the slop is ultimately a social problem, so you end up with invite-only forges where the source of the invite is also held accountable (Lobsters-style).
If you want a high quality internet experience these days you have to step out of the mainstream.
It's just not clear to me who, or what, will do it.
> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.
Some people bitch, others are real engineers solving novel problems.
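The quoted pipeline can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical report format in which each agent writes a JSON file containing a `pr` identifier and a numeric `risk` signal; the field names, threshold, and triage buckets are all invented for the example.

```python
import json
from collections import defaultdict
from pathlib import Path


def aggregate_reports(report_dir: str) -> dict:
    """Merge per-agent JSON reports into one dict keyed by PR identifier."""
    merged = defaultdict(list)
    for path in Path(report_dir).glob("*.json"):
        report = json.loads(path.read_text())
        merged[report["pr"]].append(report)
    return dict(merged)


def triage(merged: dict, risk_threshold: float = 0.8) -> dict:
    """Bucket each PR by averaging the risk signal across its reports."""
    decisions = {}
    for pr, reports in merged.items():
        avg_risk = sum(r["risk"] for r in reports) / len(reports)
        decisions[pr] = "auto-close" if avg_risk >= risk_threshold else "human-review"
    return decisions
```

The de-dupe/auto-close step the commenter describes would then operate on the `decisions` dict instead of on 50 raw sessions, which is the whole point: the expensive LLM runs happen in parallel, and the final pass is cheap aggregation.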
Most of the people I know who are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy-pasting code from an AI chat.
My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.
It's funny seeing programmers' minds shut down when faced with an easy-to-fix problem (too many PRs), just because they hate AI.
The author sounds like a relatively well-off white dude in the 1950s... 60s, 70s, 80s, 90s...
I get it, everything is being massively disrupted right now. I'm not trying to say AI is good or bad, but the author's argument is weak.
It's actually sickening that you are defending billionaires' toys, which make work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.
Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.