The example they gave was that search engines plus digital documents cut junior-lawyer headcount by a lot. Before digital documents, a fairly common junior lawyer task was: "We have an upcoming court case. Go to the (physical) archive and find past cases relevant to this one. Here's what to check for:" and this task would be assigned to a team of juniors (3-10 people). Now one junior with a laptop suffices. As a result, the firm can also manage more cases.
Seems like a pretty general pattern.
FB has long wanted to have a call center for its ~3.5B users. But that call center would automatically be the largest in history and cost ~$15B/yr to run, something that is cost-ineffective in the extreme. But with FB's internal AIs, they're starting to think a call center may be feasible. Most of the calls are going to be 'I forgot my password' and 'it's broken' anyway. So having a robot guide people through the FAQs in 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls. Then the harder calls can actually be routed to a human.
So, to me, this is a great example of how the interaction of new tech and labor is a fractal, not a hierarchy: with each new tech that your specific labor sector finds, you get this fractalization of the labor. Zuck would never have considered a call center before, denying that work to many people. But this new tech allows for a call center that looks a lot like the old one, just with only the hard problems. It's smaller, yes, but it looks the same and yet is slightly different (hence a fractal).
Look, I'm not going to deny that tech is disruptive. But what I am arguing is that tech makes new jobs (most of the time); it's just that these new jobs tend to deal with much harder problems. Like, we're pushing the boundaries here, and that boundary gets more fractal-y, and it's a more niche and harder working environment for your brain. The issue, of course, is that, like with a grad student, you have to trust that the person working at the boundary is actually doing work and not just blowing smoke. That issue, the one of trust, I think is the key issue to 'solve'. Cal Newport talks a lot about this now: how these knowledge-worker tasks really don't produce much for a long time, and then they have these spurts of genius. It's a tough one, and not an intellectual enterprise but an emotional one.
A customer who wants to track the status of their order will tell you a story about how their niece is visiting from Vermont and they wanted to surprise her for her 16th birthday. It's hard because her parents don't get along as they used to after the divorce, but they are hoping that this will at the very least put a smile on her face.
The AI will correctly classify the message as order tracking, and provide all the tracking info and the timeline. But because of the quick response, the customer will write back to say they'd rather talk to a human, and ask for a phone number they can call.
The remaining 20% can't be resolved by either human or robot.
Their business model is an online payment provider (like PayPal or Apple Pay) that splits the payment into 3, 6, or 12 monthly payments, usually at 0% interest.
The idea being that, for the business, the loss in revenue from an interest-free loan is worth it if it causes an increase in sales.
But it isn't! It's very useful. Even if it isn't eliminating 90% of work, eliminating 40% is a huge benefit!
It's super frustrating. These robots need to have an option like "I am technically savvy and I tried the website and it's broken."
Do you know why your ISP asks you to unplug and plug your modem back in while on a call, even if you insist you already did that? A surprisingly large number of people don't realize their modem isn't plugged in at all.
If you don’t believe in an exaggerated potential, you might never start exploiting it.
Is this implying it's because they want to wag their chins?
My experience recently with moving house was that most services I had to call had some problem that the robots didn't address. Fibre was listed as available on the website, but then it crashed when I tried "I'm moving home" - turns out it's available in the general area but not for the specific row of houses (had to talk to a human to figure that out). With the water company, I had an account at house N-2, but at N-1 water was included, so the system could not move me from my N-1 address (no water bills) to house N (water bill). Pretty sure there was something about power and council tax too. With the last one I just stopped bothering, figuring it's the one thing where they would always find me when they're ready (they got in touch eventually).
That's why it annoys me how much effort they put into not talking to me, when it's clear that their machine cannot solve my problem.
When I get stellar customer service these days, I'm happy and try to call it out, but I don't expect it anymore. My first expectation is always AI slop or a shitty phone tree. When I reframed it for myself, it was a lot easier not to get frustrated about something I can't control and not to blame a person who doesn't exist.
Actually that reminds me, I couldn't figure out how to cancel my old insurance online and couldn't get to a person on the phone - I just deleted the direct debit, and waited until they called me to sort it out.
I build NPCs for an online game. A non-trivial percentage of people are more than happy to tell these stories to anything that will listen, including an LLM. Some people will insist on a human, but an LLM that can handle small talk is going to satisfy more people than you might think.
There is zero chance he wants to pay even a single person to sit and take calls from users.
He would eliminate every employee at Facebook if it were technically possible to automate what they do.
From my experience in corporations, this is a false statement. The goal of each manager is to grow their headcount: the more people under you, the more weight you have and the higher your position.
Because he is the fourth richest man on the planet and that demands some responsibility, which he refuses to take.
He owns $162,000,000,000. Meta's net income in 2024 was $50,000,000,000.
Facebook might be able to operate with half the headcount, but then Zuckerberg wouldn't be the boss of as many people, and I think he likes being the boss.
Other than on-call roles like Production Engineers, whose absence would make the company fail within a day?
They literally (allegedly) contributed significantly to inciting a genocide [0]. PR doesn't get much worse than that, but it seems that we as a society just don't care about these things that much anymore. I really can't recall any case of an individual or organization going down because of PR issues, except for people in the entertainment industry; for some reason, we only expect good morals from our actors and comedians.
[0] https://en.wikipedia.org/wiki/Facebook_content_management_co...
I will admit, though, that FB may be able to keep existing in some form if he fired >50% of the people there.
I’d rather see humanity in all of its good, bad, and ugly than have a feed sanitized for me by random Twitter employees who in many cases had their own agenda.
Censorship is the worst thing that can happen to information. We should all have learned that lesson by now.
On the contrary, some "information" doesn't deserve the light of day, and we should have learned that lesson in the 1930s and 1940s. The question is where to draw the line.
I want to see all the dumb stuff politicians say. I want to see celebrities’ terrible opinions on things.
I’d rather know how messed up people are than have a feed sanitized for me to keep me ignorant.
Try blocking or criticising Musk, or saying "cis" and come back to us on "mostly free of censorship".
Free speech on Twitter is a joke, and you either are arguing in bad faith or you have no idea what you’re talking about.
Might be time to step back and take a breath.
I am not sure why I wrote "echo hall". I must have been mentally absent or something; to my own ears it sounds weird and not like something I would usually write. It might have been a weird autocorrect on my phone, I'm not sure. Anyway, that is beside the point. I would like to know why you think that what politicians do has any relation to me being in an echo chamber or not. I mean, do you define the outside of echo chambers to be the place where politicians go? Like... are they such a massive number of people, or somehow indicative of that outside? I just don't get your idea.
I really don't think I am saying anything controversial.
But it is the result of the new owner's new agenda, not the result of mass layoffs. I'm sure the result would be the same without layoffs.
No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.
Actually, now that I think about it, yeah.
The whole purpose of the bots is to deflect you from talking to a human. For instance: Amazon's chatbot. It's gotten "better": now when I need assistance, it tries three times to deflect me from a person after it's already agreed to connect me to one.
Anything they'll allow the bot to do can probably be done better by a customer-facing webpage.
A high quality bot to guide people through their poorly worded questions will be hugely helpful for a lot of people. AI is quickly getting to the point that a very high quality experience is possible.
The premise is also that the bots are what enable the people to exist. The status quo is no interactive customer service at all.
Let's use Zuck's example, the lost password. Surely that's better solved with a form where you type things, such as your email address. If the problem is navigation, all we need to do is hook up a generative chat bot to the search function of the already existing knowledge site. Then you can ask it how to reset your password, and it'll send you to the form and write up instructions. The equivalent over a phone call sounds worse than this to me.
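A minimal sketch of the shape I have in mind, with search_kb and llm_complete as hypothetical stand-ins for the existing site search and whatever model you'd wire in (none of this is a real Meta API):

    # Hypothetical sketch: the bot only searches the existing knowledge
    # base and narrates the right article; the form does the actual reset.

    KNOWLEDGE_BASE = {
        "reset password": "Go to https://example.com/reset and enter the "
                          "email address on your account.",
        "delete account": "Account deletion lives under Settings > Account.",
    }

    def search_kb(query: str) -> str:
        # Toy keyword search standing in for the site's real search function.
        for topic, article in KNOWLEDGE_BASE.items():
            if any(word in query.lower() for word in topic.split()):
                return article
        return "No matching help article found."

    def llm_complete(prompt: str) -> str:
        # Stand-in for a call to whatever model you'd use; a real one would
        # return generated instructions instead of echoing the prompt.
        return prompt

    def answer(question: str) -> str:
        article = search_kb(question)
        prompt = (f"Using only this help article:\n{article}\n\n"
                  f"Write short step-by-step instructions for: {question}")
        return llm_complete(prompt)

    print(answer("How do I reset my password?"))

The point is the bot never resets anything itself; it just finds the right page and explains it, so the form stays the thing that actually does the work.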
I think Zuck is wrong that 90% of the problems people would call in for can easily be solved by an AI. I was stuck in a limbo with Instagram for about 18 months, where I was banned for no clear reason, there was no obvious way to contact them about it, and once I did find a way, we proceeded with a weird dance where I provided ID verification, they unbanned me, and then they rebanned me, and this happened a total of 4 times before the unban process actually worked. I don't see any AI agent solving this; the cause was obviously process and/or technical problems at Meta. This is the only thing I ever wanted to call Meta for.
And there is another big class of issue that people want to call any consumer-facing business for, which AI can't solve: loneliness. The person is retired and lives alone and just wants to talk to someone for 20 minutes, and uses a minor customer service request as a justification. This happens all the time. Actually an AI can address this problem, but it's probably not the same agent we would build for solving customer requests, and I say address rather than solve as AI will not solve society's loneliness epidemic.
It should be everywhere, as a first line of customer service. Even once talking to a person, real-time translation is necessary -- it's not possible to staff enough skilled employees in every language on earth.
I'd like to call out that "I can't log in" is the most common problem with Facebook, by a wide margin. HN user anecdotes are just not useful when assessing the scope of this problem.
I'd also like to call out that many people (usually not English speaking) nearly exclusively use voice memos and phone calls, and rarely type anything at all.
I think it is clear that AI will enable better customer service from Facebook. Without AI, a FB call center is clearly impossible. With AI, perhaps it begins to look feasible.
I do think we're going to see less employment for "coding" but I remain optimistic that we're going to see more employment for "creating useful software".
Like, if AI is so good, then it'll just eat away at those jobs and get asymptotically close to 100% of the calls. If it's not that good, then you've got to loop in the product people and figure out why everyone is having a hard time with whatever it is.
Generally, I'd say that calls are just another feedback channel for the product - one that FB has thus far been fine without consulting, so I can't imagine its contribution can be all that high. (Zuck also goes on to talk about the experiments they run on people with FB/Insta/WA, and whoa, it is crazy unethical stuff he casually throws out there to Dwarkesh.)
Still, to the point here: I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own. We, the humans, are still the thing that says 'go/do/start', the prime movers (to borrow a long-held and false bit of ancient physics). The AIs aren't initiating things, and it seems, to a large extent, we're not going to want them to. Not out of a sense of doom or lack-of-greed, but simply because we're more interested in working at the edge of the fractal.
"I'm still seeing Ai mostly as a tool/tech, not something that takes on an agency of it's own."
I find that to be a highly ironic thing. It basically says AI is not AI. Which we all know it is not yet, but then we can simply say it: The current crop of "AI" is not actually AI. It is not intelligence. It is a kind of huge encoded, non-transparent dictionary.
But there's also consolidation happening: Not every branch that is initially explored is still meaningful a few years later.
(At least that's what I got from reading old mathematical texts: People really delved deeply into some topics that are nowadays just subsumed by more convenient - or maybe trendy - machinery)
A lot of places start with a large, unskilled workforce and get into e.g. the textile industry (which brings better ROI than farming). Then the automation arrives, but it leaves a lot of people jobless (still unskilled), while there are new jobs in maintaining the machinery etc.
Fiefdoms and empires will be maintained.
https://impact.economist.com/projects/responsible-innovation...
The sad part is: do you think we'll see this productivity gain as an opportunity to stop the culture of overworking? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
As all things, it's cyclical. Hiring will go up again. AI boom will bust. On to the next thing. One thing is for certain though, we all now have a fancy new calculator.
Automation is one way to do that.
I skipped over junior positions for the most part
I don’t see that not working now
https://libcom.org/article/phenomenon-bullshit-jobs-david-gr...
I am amenable to the idea that there is a lot of wasted, pointless work, but not to the idea that there's some kayfabe arrangement where everyone involved thinks it's pointless but pretends otherwise. I think most people around such work have genuinely convinced themselves it's important.
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
The AI was surprisingly good at filling in some holes in my specification. It generated a ton of valid C++ code that actually compiled (except it omitted the necessary #includes). I built and ran it and... the output was completely wrong.
OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I don't think it will be a complete waste of time because the exercise spurred my thinking and showed me some interesting ways to solve the problem, but as far as saving me a bunch of time, no. In fact it may actually cost me more time trying to figure out what it's doing.
With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As one of those folks, no it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things or especially not for tests. I don't mind it as a rubber duck though.
I think the problem is that there are two groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines, plus carry the burden of maintaining them into the future. But all they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive-looking, and you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.
Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO List application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice and he's done and satisfied, now convinced that AI can solve literally everything you throw at it.
However if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain amount of clicks, and that the list was glitching out sometimes, and that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples of an "app" he made by vibe coding, and after playing around with it myself for all of 3 minutes I noticed all these issues and more in his app.
Over time, that adds up.
For simple utility programs and scripts, it also does a great job.
As someone working on routine problems in mainstream languages where training data is abundant, LLMs are not even great for that. Sure, they can output a bunch of code really quickly that on the surface appears correct, but on closer inspection it often uses nonexistent APIs, the logic is subtly wrong or convoluted for no reason, it does things you didn't tell it to do or ignores things you did, it has security issues and other difficult to spot bugs, and so on.
The experience is pretty much what you summed up. I've also used Claude 3.5 the most, though all other SOTA models have the same issues.
From there, you can go into the loop of copy/pasting errors to the LLM or describing the issues you did see in the hopes that subsequent iterations will fix them, but this often results in more and different issues, and it's usually a complete waste of time.
You can also go in and fix the issues yourself, but if you're working with an unfamiliar API in an unfamiliar domain, then you still have to do the traditional task of reading the documentation and web searching, which defeats the purpose of using an LLM to begin with.
To be clear: I don't think LLMs are a useless technology. I've found them helpful at debugging specific issues, and implementing small and specific functionality (i.e. as a glorified autocomplete). But any attempts of implementing large chunks of functionality, having them follow specifications, etc., have resulted in much more time and effort spent on my part than if I had done the work the traditional way.
The idea of "vibe coding" seems completely unrealistic to me. I suspect that all developers doing this are not even checking whether the code does what they want to, let alone reviewing the code for any issues. As long as it compiles they consider it a success. Which is an insane way of working that will lead to a flood of buggy and incomplete applications, increasing the dissatisfaction of end users in our industry, and possibly causing larger effects not unlike the video game crash of 1983 or the dot-com bubble.
That's what happens to "AI art" too. Anyone as a non-artist can create images in seconds, and they will look kind of valid or even good to them, much like those "vibe coded" things look to CEOs.
AI is great at generating crap really fast and efficiently. Not so good at generating stuff that anyone actually needs and which must actually work. But we're also discovering that a lot of what we consume can be crap and be acceptable. An endless stream of generated synthwave in the background while I work is pretty decent. People wanting to decorate their podcasts or tiktoks with something that nobody is going to pay attention to, AI art can do that.
For vibe coding, right now prototyping and functional mockups seem to be quite a viable use.
Oh, see, this is where I disagree. I think it's incredibly helpful to get past the "blank page". Yes, I do usually end up going and reading docs, but I also have a much better sense of what I'm looking for in the docs and can use them more effectively.
I feel like this is the same pattern with every new tool. Google didn't replace reference books, but it helped me discover the right ones to read much more easily. Similarly, LLM based tools are not replacing reference texts, but they're making it easier for me to spin up on new things; by the time I start reading the docs now, I'm usually past the point of needing to read the intro.
I agree. AI is great for stuff that's hard to figure out but easy to verify.
For example, I wanted to know how to lay out something a certain way in SwiftUI and asked Gemini. I copied what it suggested, ran it and the layout was correct. I would have spent a lot more time searching and reading stuff compared to this.
It's a snippet I've written a few times before to debug data streams, but it's always annoying to get the alignment just right.
I feel like that is the sweet spot for AI, to generate actual snippets of routine code that has no bearing on security or functionality, but lets you keep thinking about the problem at hand while it does that 10 minutes of busy work.
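To make that concrete, here's roughly the kind of snippet I mean - a hex dump with the columns kept aligned, for eyeballing a binary stream (an illustrative sketch, not the exact code I had it generate):

    # Routine busywork: dump a byte stream as aligned hex + ASCII columns.
    def hex_dump(data: bytes, width: int = 16) -> str:
        lines = []
        for offset in range(0, len(data), width):
            chunk = data[offset:offset + width]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            # Pad the hex column so the ASCII column lines up on short rows.
            lines.append(f"{offset:08x}  {hex_part:<{width * 3 - 1}}  {ascii_part}")
        return "\n".join(lines)

    print(hex_dump(b"Hello, data stream!\x00\x01\x02"))

Nothing hard, no security or business logic to get wrong, just the ten minutes of alignment fiddling I'd rather not spend.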
I do know people who seem to be having more success with the "vibecoding" workflow on the front end though.
For a time, we can justify this kind of extra work by imagining that it is an upfront investment. I think that is what a lot of people are doing right now. It remains to be seen when AI-assisted labor is still a net positive after we stop giving it special grace as something that will pay off a lot later if we spend a lot of time on it now.
I think it's often better to just skip this and delete the code. The cool thing about those agents is that the cost of trying this out is extremely cheap, so you don't have to overthink it and if it looks incorrect, just revert it and try something else.
I've been experimenting with Junie for the past few days and have had a very positive experience. It wrote a bunch of tests for me that I'd been postponing for quite some time, and it was mostly correct from a single-sentence prompt. Sometimes it does something incorrect, but I usually just revert it and move on, and try something else later. There's definitely a sweet spot of tasks it does well, and you have to experiment a bit to find it.
Most software should not exist.
That's not even meant in the tasteful "It's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'll estimate that number MUCH higher.
It just feels like the whole way we've fit computing into the world is misaligned. We spend days building UIs that don't help the people we serve and that break at the first change to the process, and because of the support burden of that UI, we never get to actually automate anything.
I still think computers are very useful to humanity, but we have forgotten how to use them.
This is Sturgeon's law. (1)
And yes, but it's hard or impossible to identify the useful 10% ahead of time. It emerges after the fact.
> Most software should not exist.
> That's not even meant in the tasteful "It's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe coder guys, I'll estimate that number MUCH higher.
I've worked on countless projects at this point that seemed to serve no purpose, even at the outset, and had no plan to even project cost savings/profit, except, at best, some hand-waving approximation.
Even worse, many companies are completely uninterested in even conceptualizing operating costs for a given solution. They get sold on some cloud thing cause "OpEx" or whatever, and then spend 100s of hours a month troubleshooting intricate convoluted architectures that accomplish nothing more than a simple relational database and web server would.
Sure, the cloud bill is a lower number, but if your staff is burning hours every week fighting `npm audit` issues, and digging through CloudWatch for errors between 13 Lambda functions, what did you "save"?
I've even worked on more than one project that existed specifically to remove manual processes (think printing and inspecting documents) to "save time." Sure, now shop floor/assembly workers inspect fewer papers manually, but now you need a whole new layer of technical staff to troubleshoot crap constantly.
Oh, and the company(ies) don't have in-house staff to maintain the thing and have no interest in actually hiring, so they write huge checks to a consulting company to "maintain" the stuff at a cost often orders of magnitude higher than it'd cost to hire staff who would actually own the project(s). And these people have a conflict of interest to maximize profit, so they want to "fix" things, and so on.
I think a lot of this is the outgrowth of the 2010s where every company was going to be a "tech company" and cargo-culted processes without understanding the purpose or rationale, and lacking competent people to properly scope and deliver solutions that work, are on time and under budget, and tangibly deliver value.
This statement is incredibly accurate
because those "best programmers" don't want to be making temperature converters nor twitter clones (unless they're paid mega bucks). This enables the low paid "worst" programmers to do those jobs for peanuts.
It's an acceptable outcome imho.
I think it's too early to say whether AI is exacerbating the problem (though I'm sympathetic to the view that it is) or improving it, or just maintaining the status quo.
I mean, isn't that obvious looking at economic output and growth? The Shopify CEO recently published a memo in which he claimed that high achievers saw "100x growth". Odd that this isn't visible in the Shopify market cap. Did they fire 99% of their engineers instead? Maybe the memo was AI-written too.
Are there any 5 man software companies that do the work of 50? I haven't seen them. I wonder how long this can go on with the real world macro data so divorced from what people have talked themselves into.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
To paraphrase Lee Iacocca: We must stop and ask ourselves, how much videogames do we really need?
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again. Unless your job is building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing business today is a short lifetime for any product; they're replaced in increasingly fast cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum, given the time needed to design and review any software product has an irreducible minimum cost. If a human must use the software to implement a business idea then humans must be used to validate the app's utility, and that takes time that can't be diminished beyond some point (just as there's an inescapable need to test new drugs on animals since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new/revised software will remain the choke point on the rate at which new business ideas can be prototyped by software.
“Creative destruction is a concept in economics that describes a process in which new innovations replace and make obsolete older innovations.”
https://en.wikipedia.org/wiki/Creative_destruction
I think about this a lot with various devices I owned over the years that were made obsolete by smartphones. Portable DVD players and digital cameras are the two that stand out to me; each of them cost hundreds of dollars but only had a marketable life of about 5 years. To us these are just products on a shelf, but every one of them had a developer, an assembly line, and a logistics network behind them; all of these have to be redeployed whenever a product is made obsolete.
There is a lot of value in being the stepping stone to tomorrow. Not everyone builds a pyramid.
Yes... basically in life, you have to find the definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, the very life itself.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines have shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not used during medical procedures in a widespread manner until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better just for this factor, because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels like it's a worthy life. Philosophy is not done yet.
Perhaps my initial estimate of 5% of the workforce was a bit optimistic, say 20% of current workforce necessary to have food, healthcare, and maybe a few research facilities focused on improving all of the above?
You are very right that AI will not change this. As neither did any other productivity improvement in the past (directly).
Does society as a whole even have a goal currently? I don't really think it does. Like do ideologists even exist today?
I wish society was working towards some kind of idea of utopia, but I'm not convinced we're even trying for that. Are we?
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland), and builds a small reserve for bad times (or inheritance, early retirement, etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). I don't need more from life.
I’m in America so the paychecks are very large, which helps with private school, nanny, stay at home wife, and the larger net worth needed (health care, layoff risk, house in a nicer neighborhood). I’ve been fortunate, so early retirement is possible now in my early 40s. It really helps with being able to detach from work, when I don’t even care if I lose my job. I worry for my kids though. It won’t be as easy for them. AI and relentless human resources optimization will make tech a harder place to thrive.
Who in their right mind would work when 95 out of 100 people around them are slacking off all day? Unless you pay them really well. So well that they prefer to work than to slack off. But then the slackers will want nicer things to do in their free time that only the workers can afford. And then you'd end up at the start.
Though 5% is likely unfeasibly low; we would probably need at least twice that.
If that were really true, who gets to decide which 5% get to do the work while the rest leech off them?
Because I certainly would not want to be in that 5%.
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc² or Jonas Salk's contributions? No, but it's not nothing either.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection? Perhaps, but that's not much benefit compared to the opportunity cost of three prime corner lots squandered on the same single use. The clerk at the gas station, for having a job available? Perhaps, although their labor in aggregate might have been employed in other, less redundant uses that could benefit society more than selling smokes and putting $20 on 4 at 3am. The real beneficiary of this entire arrangement is the fisherman: the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of a plantation sharecropper - spending all their money in the company store and on company housing, with a fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these sectors.
Now instead of misspelled words (which still happens all the time) we have incorrect words substituted in place of the correct ones.
Look at any long form article on any website these days and it will likely be riddled with errors, even on traditional news websites!
Advancements in what exact areas? My time using GitHub Copilot years ago was more successful for the simple act of coding than my more recent one trying out Cursor with Claude Sonnet 3.5. I'm not really seeing what these massive advancements have been, and realistically none of these LLMs are more useful than a very, very bad junior programmer when it comes to anything that couldn't already be looked up but is simply faster to ask.
This is an incredible achievement. 5 years ago, chatbots and NLP AI couldn't do shit. 2 years ago they were worthless for programming. Last year they were only useful to programmers as autocomplete. Now they replace juniors. There has been obvious improvement year after year, and it hasn't been minor.
Which LLM? That’s not the purpose of training for any model that I know of.
Creativity means play, as in not following rules, adding something of yourself.
Something a computer just can't do.
The areas for which creativity is required are likely related to digital media software (like SFX in movies, games, and perhaps very innovative software). In these areas, surely the software developer working there will have the creativity required.
sounds like a form of creativity to me!
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or work drastically less. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
It actually does, but due to the skewed distribution of the rewards gained from that tech (automation), it doesn't work out for common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse. We each work 8h/day. Suddenly a new tech comes in which can do the same tasks we do, and each unit of that machine can do the work of 2-4 of us alone. If Bezos buys 4 units and sets each to run at 2x capacity, then 8 of us now have 8h/day x 5 days x 4 weeks = 160h of leisure a month.
Problem is, the 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to tech utopians, the 8 of us can now use those 160h of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, my "rewarding work" turns out to be gig work, or something of the same effort or more hours.
So, in theory, the owner controlling the automation gets more free time to attend interviews and political/social events. The people automated away fall downward and have to work harder to maintain their survival. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on the "free time to do more meaningful work" line, the same way Bezos' warehouse gave some of us some leisure while the automation was coming online and needed a failsafe for a while. :)
Regardless of anyone's thoughts on genAI in particular, it's important for us as a society to consider what our economic model looks like in a future where technology breaks the assumption of near-universal employment. Maybe that's UBI. Maybe it's a system of universally accessible educational stipends and pumping public funds into venture capital. Maybe it's something else entirely.
Just a lot of words for "lazy" - it's built into living organisms.
The whole economic system today is constructed to ensure that one would suffer from being "lazy". And this would be the case until post-scarcity.
For most people, lazy implies that there are things you really ought to get done, but you're choosing to avoid doing them to the point where it's a problem that whatever the thing is still isn't taken care of.
Idle just means you don't feel like you have anything that needs to be done, you aren't avoiding things to the point that it causes a problem.
Of course our economic system prefers people to be "fully utilized" rather than idle, but who cares? I don't owe an economic system anything, we could change the system whenever we want, and ultimately an economy is only useful to analyze the comparative output that already happened - it has nothing to do with the present or future.
We are currently a long way from that kind of change as current AI tools suck by comparison to literally 1,000x increases in productivity. So, in well under 100 years programming could become extremely niche.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but eventually it risks becoming too fragile.
Screwworm, a parasite that kills cattle in days, is making a comeback. And we are less prepared for it this time, because previously (the 1950s-1970s) we had a lot more labor in the industry to manually check each head of cattle. Bloomberg even called it out specifically:
> Ranchers also said the screwworm would be much deadlier if it were to return, because of a lack of labor. "We can't fight it like we did in the '60s, we can't go out and rope every head of cattle and put a smear on every open wound," Schumann said.
https://www.bloomberg.com/news/features/2025-05-02/deadly-sc...
A stroke of the pen will fix any and all political issues, if/when the political desire comes about.
Vs a problem with no solution - no pen will fix any of it, regardless of political will.
If the right person signs a change that magically fixes a labor shortage in a rural area we're right back to where we were, and much of the public would be up in arms about it.
(This doesn't actually reflect my opinion on immigration laws to be clear, just my view on where we are today in the US)
I’m not saying this parasite isn’t a potential problem, but it’s not existential by any stretch. There are a thousand more intractable and consequential problems facing us right now.
* The US is not producing enough food - it's now a net food importer
* The increasing problems we are seeing in the food supply chain are usually tied to producers cutting costs and padding margins
Matt Stoller has gone into this at length - https://www.thebignewsletter.com/p/is-america-losing-the-abi...
So I mean it could depend on your definition of productivity, if anything that increases shareholder returns at the expense of a good product or robust supply chain is considered more "productivity," sure. Just as monopolies are the most "productive" businesses ever for their shareholders, but generally awful for everyone else, and are not what most people would think of as productive.
The human definition of productivity is fewer inputs producing more and better outputs.
The cartel doublespeak definition is - the product got worse and the margins improved, which seems to describe US Big Ag at present
The US exports lots of cheap food and imports expensive foods like wine, beer, high-end cheese, and candy. In terms of calories/nutrition, the US is a huge net food exporter, but we like our luxury chocolates etc.
American companies love to set up cheap factories overseas. Even if they use US corn syrup to make a beverage, the trade balance is based on the corn syrup, not the value of the manufactured soda. Meanwhile, in the other direction, we're importing cans of soda manufactured in other countries.
- If people wish to consume expensive, luxury foods they will do so, and that's OK and valid, even during a political crisis. America endeavoring to produce more of these luxury products is good for the country's economy and makes self-sufficiency easier if a crisis arises.
- Maybe those factories should be on US soil, also good for self-sufficiency. Maybe not so good for international conglomerates - I don't care, they've had a great run, time for them to work for the people again.
Enjoyable food existed before the current era of globalization. The brands might change, but it will exist after that era is wound down. Let's not pretend otherwise.
I agree with you on the doublespeak though; really I think it's just a lack of public understanding of the meaning given to "productive" in the industry. The industry doesn't hide what it means by the word; most people just don't care about any version of productive that measures things like nutrient value, sustainability, soil health, animal welfare, etc.
Yes, but.
There are more jobs in other fields adjacent to food production, particularly in distribution. A middle class like today's didn't exist then, and retail workers are now a large percentage of workers in most parts of the world.
Food is just a smaller percentage of the economy overall.
I would have assumed that if 90% of people are farming, it's largely subsistence, and any trade happened on a much more local scale, potentially without any proper currency involved.
That said, there’s been areas where 90% of the working population was at minimum helping with the harvest up until the Middle Ages.
It's somewhat arbitrary where you draw the line historically, but it's not just about maximum productivity; it's worth remembering that crops used to fail from drought etc. far more frequently.
Small hobby farms are also a thing these days, but that’s a separate issue.
In my experience they're very productive by poundage yield, but horribly unproductive when it comes to inputs required, chemicals used, biodiversity, soil health, etc.
The difference vs historic methods is so extreme that you can skip pesticides, avoid harming soil health or biodiversity, etc., without any issues here and still be talking 1,000x.
Though really growing crops for human consumption is something of a rounding error here. It’s livestock, biofuels, cotton, organic plastics, wood, flowers, etc that’s consuming the vast majority of output from farms.
Two things worth noting though: pounds of food say little about the nutritional value to consumers. I don't have good links handy, so I won't make any specific claims; it's just worth considering whether weight is the right metric.
As far as human labor hours goes, we've gotten very good at outsourcing those costs. Farm labor hours ignores all the hours put in to their off-farm inputs (machinery, pesticides and fertilizers, seed production, etc). We also leverage an astronomical amount of (mostly) diesel fuel to power all of it. The human labor hours are small, but I've seen estimates of a single barrel of oil being comparable to 25,000 hours of human labor or 12.5 years of full employment. I'd be interested to do the math now, but I expect we have seen a fraction of that 25,000x multiplier materialize in the reduction of farm hours worked over the last century (or back to the industrial revolution).
Nah, it's not 100% but it says a lot about the nutritional value.
> inputs
You can approximate those with price. A barrel of oil might be a couple hours.
You likely get less useful work out of a gallon of gas in your car than it took to extract, refine, transport, and distribute that gallon of gas. Just as an example gas pumps use electricity that isn’t coming from oil.
This whole thread was about productivity in terms of hours spent by the last person in the chain, the farmer. They can do drastically more today in terms of food production because they can leverage the potential energy in oil to replace human labor, and in that metric all of the externalized costs are ignored.
Nope, what’s being replaced is animal feed used for animal labor. People didn’t pull a plow by hand and then suddenly swap to tractors.
For thousands of years farmers used sunlight > animal feed > domesticated animals, there’s nothing special about oil here.
Track the oil energy for a tractor vs the sunlight to grow plants to feed a pair of horses, and the tractor is using wildly less energy per year to get vastly more done. You can make it even more obvious by using solar panels in the same fields that fed horses 100 years ago to charge an electric tractor. Oil is cheap, but not necessary; there were even wood- and coal-burning tractors in the early days.
PS: Horses can apparently digest the cellulose in sawdust from several types of trees. It’s unhealthy in large quantities but kind of an interesting fact.
That’s just wildly wrong by several orders of magnitude, to the point I question your judgment to even consider it a valid possibility.
Not only would the price be inherently much higher, but if everyone including infants worked 50 hours per week, we'd still produce less than 1/30th of the current world output of oil. And going back, we've been extracting oil at industrial scale for over 100 years.
To get even close to those numbers you’d need to assume 100% of human labor going back into prehistory was devoted purely to oil extraction.
Burning food can produce more useful work in a heat engine than you get from humans doing labor, so I'm baffled by what about this comparison seems to make sense to you.
Ignoring that you’re still off by more than an order of magnitude. 100% of the energy content of oil can’t even be turned directly into work without losses. You get about 10% of its nominal energy content as useful work, less if you’re including energy costs of production, refining, and transport.
Even if you look at an oil well fire, it's incomplete combustion and not useful work.
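For what it's worth, a back-of-the-envelope version of that order-of-magnitude point, using rough standard figures (~6.1 GJ of chemical energy per barrel, ~75 W of sustained human mechanical output) rather than anything from the thread:

    # Rough check on the "25,000 hours of human labor per barrel" claim.
    BARREL_ENERGY_J = 6.1e9    # ~5.8 million BTU of chemical energy per barrel
    HUMAN_OUTPUT_W = 75        # sustained useful mechanical output of a laborer
    USEFUL_FRACTION = 0.10     # rough useful-work share after all engine losses

    raw_hours = BARREL_ENERGY_J / (HUMAN_OUTPUT_W * 3600)
    useful_hours = raw_hours * USEFUL_FRACTION

    print(f"raw energy equivalence:  {raw_hours:,.0f} hours")    # ~22,600
    print(f"useful-work equivalence: {useful_hours:,.0f} hours") # ~2,260

The raw-energy comparison lands near the 25,000-hour figure; applying the ~10% useful-work fraction knocks it down to a couple of thousand hours, which is the order-of-magnitude gap being described.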
A) If you want to move a plow, you can grow some oats/grass/whatever to feed some horses and then use those horses, or use the same land for oats/wood/whatever and burn it in an early tractor, or use oil. Nobody in 1900 was getting 20 people to pull a plow. All of those methods turn some amount of chemical energy into useful work. As such, looking at the chemical energy in food vs oil makes some sense, though sunlight vs oil is a better comparison, as tractors burn a far more expensive product, not crude oil.
B) Alternatively, you can look at the amount of useful work from a barrel of oil after all losses and compare that to the work done by a horse or person after all losses. But again suddenly oil doesn’t look so hot.
What you tried to do is compare the energy content of oil with some amount of useful work which is a silly comparison.
The earlier commenter was talking about the massive reduction in the amount of human labor required to cultivate land, and the relative productivity of the land.
That comparison comes down to amount of work done. Whether that work is done by a human swinging a scythe or a human driving a diesel powered tractor is irrelevant, the work is measured in joules at the end of the day. We have drastically fewer human hours put into farm labor because we found a massive multiplier effect in fossil fuel energy.
I'm not sure where solar panels came in, but sure, they can also be used to store energy and produce joules of work if that's your preferred source.
In particular, if we can make a machine that spends more joules than a human, but reduces the human effort by orders of magnitude, why would that be "horribly unproductive"? Most people would call that amazingly productive. And when they want to broaden the view to consider the inputs too, they're worried about the labor that goes into the inputs, not the joules.
(And if the worry is the limited amount of fossil fuels in particular, we can do the same with renewable energy.)
I'm still not sure why renewables are being brought up here. An earlier comment referenced solar; I never mentioned solar or renewables.
I only mention renewables because I'm grasping at straws to figure out why joules would matter.
Looking at joules is an attempt to compare something like a human cutting a field with a scythe and a tractor cutting it with an implement. The tractor is way more efficient when considering only the human hours of labor spent cutting the field. But of course it is: a single barrel of oil has way more energy potential, and even a small tractor is run with fuel mileage tracked in gallons per hour.
Back in the day, wood powered tractors beat the fuck out of horses, and horses beat the fuck out of human labor because they could digest cellulose. Oil is just very slightly cheaper. Even today people heat their homes with both wood pellets and oil, meanwhile there’s cheaper alternatives.
I really don't understand your use of the term "externalized cost" here.
I don’t think AI will let programmers be anywhere close to 1,000x as productive in 10 years. That wouldn’t just need AGI but deep changes to how organizations function.
Hitting 100+x in 30 to 90 years is much harder to predict.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
Not really. Lots of kinds of work don't hire part-timers in any volume, period. There are very few jobs where the only tradeoff for working fewer hours is a reduction in compensation proportional to the hours given up, or even just a reduction in compensation at all, even if disproportionate to the reduction in hours worked.
>But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always. I simply don’t believe that, and I think interference to push for desirable outcomes which violate principles of a free market is often good. We probably won’t be able to agree on this.
No... if society wants to disincentivize overworking by introducing overtime, that's fine by me. I'm not making any moral judgement. You just seem to live in a fantasy world where people aren't exchanging their labor for money.
> Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always.
I didn't say that, and I don't believe that. If you're just going to hallucinate what I think, what's the point in replying?
Where did you get that? My entire contention centers around a lack of good options for workers seeking to work fewer hours. A logical assumption, then, would be that I want policies which would give said workers more options. Examples include stronger protections for unions, higher minimum wages, etc. Since I saw these as the logical extrapolations from what I'd said originally, I figured your issue was gov interference in the labor market itself, since you said things like
>In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
>(as in, you're paid money for not working more hours)
You took issue with more money for the same hours, did you not? Why wouldn't overtime be an obvious example? The reason I assumed you were just a libertarian or something was because it doesn't seem like there's an obvious logical juncture to draw a line at. If you're fine with society altering the behavior of the labor market to achieve certain desirable results, then why would this be any different fundamentally?
However, while we live in the world where we're exchanging labor for money, it's not as simple as what you originally wrote: "I think it’s just the result of disproportionate political influence held by the wealthy, who are heavily incentivized to maximize working hours."
You're not considering the choices being made by the people actually doing the work. People work for a significant amount of their life because they're paid to do it. There's no council of wealthy people conspiring to achieve this conclusion: they have work that needs to be done, and they're willing to pay for people to do it.
My thesis was just this: while people are exchanging labor for money, people will work. If you introduce a policy where people are given some UBI regardless of employment, they will still work. They want the money. They will buy more televisions, better food, more vacations. If I'm paid my current salary to work for 5 hours a week, I will start interviewing for more jobs. And yes, inflation may soon render the UBI you've introduced to be not so great.
You could outlaw or heavily disincentivize working over a certain number of hours (overtime is a step in this direction), although my concern would be that artificially limiting productivity like that would be detrimental. We still need people doing productive things, so slashing their hours might be a Chinese "backyard furnaces" sort of situation. That said, some people think half of our jobs are bullshit anyway.
To your credit though, we shouldn't let perfect be the enemy of good. Maybe 36 hours is better than 40, and so on.
For my relatives in Germany going part time seems easier and more accepted by companies.
Is that true? Most trades can work fewer hours; medical workers like nurses can; so can hairdressers; plenty of writers are freelance; and there's the entire gig economy.
It seems like big companies don't provide the option, for software at least. I always chalked that up to more bureaucratic processes which add some fixed cost for each employed person.
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4 day work weeks (I'm looking to do 4x10s, but I might start looking to negotiate it to "instead of a pay increase, let's keep it the same but a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
There are probably plenty of goods that are counterexamples, but time utilization isn't one of them, I don't think.
That's the capitalist system. Unions successfully fought to decrease the working day to 8 hrs.
Workers are often looking to make more money, take on more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but that goal also incentivizes them to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
https://www.forbes.com/sites/timworstall/2012/03/04/the-stor...
I worry more that an idle humanity will cause a lot more conflict. “An idle mind’s the devil’s playground” and all.
It's always possible that risk would be transitional. Anyone alive today, at least in Western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in that new system would handle it better (if they had the chance).
Many shows and movies can play a similar role.
I think we would/will see a lot more of that, even in transitional periods where people can multitask more now as AI starts taking over moment-to-moment thinking.
I disagree pretty strongly here. I've known a few people who lived this sort of gamer rotting lifestyle and they were miserable.
Many people like this wind up suicidal, or are prime candidates to shoot up a school.
Videogames, movies, shows, whatever, they do not replace the need for meaningful interaction with the real world.
Take 7 hours out of the day because an LLM makes you that much more productive and I expect people wouldn't know what to do with themselves. That could be wrong, but I'd expect a lot more societal problems than we already have today if, a year from now, a large number of people only worked 4 or 5 hours a week.
That's not even getting to the Shopify CEO's ridiculous claim that employees will get 100x more work done [1].
Where are all of the articles that HN loves about kids these days not being bored anymore? What about Google's famous 20% time?
Idle time isn’t just important, it’s the point.
I’m saying I’m worried that it would be more likely that people would lash out, than everyone just passively sitting around.
But to respond to your Google analogy: 20% time is complemented by the other 80% being busy. This scenario would be 100% time.
It would be better to look at a population that doesn't have to work, like retirees, and see the various issues they face. But also correct for age and biology, of course. Or look at populations of places like Qatar, where most of the native population doesn't have to work much due to oil/gas revenues.
"In the 1970s when office computers started to come out we were told:
'Computers will save you SO much effort you won't know what to do with all of your free time'.
We just ended up doing more things per day thanks to computers."
"In the early 1900s, 25% of the US population worked in agriculture.
Today it's 2%.
I would imagine that economists back then would be astounded by that change.
I should point out: there were also no pediatric oncologists back then."
Yes, I spend time on writing prompts, like "Never do this. Never do that. Always do this. Make sure to check that.", to tell the AI my coding preferences. But those prompts are forever. I wrote most of them months ago, so now I just capitalize on them.
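For illustration, a reusable preferences prompt might look something like this (the filename and rules here are made up, just to show the shape):

    # coding-preferences.txt -- prepended to every coding request
    - Never use wildcard imports.
    - Never leave commented-out code in a diff.
    - Always add type hints to new functions.
    - Make sure to run the existing tests before calling a task done.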
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spent some time writing good prompts, used state-of-the-art models (o3 or gemini-2.5 pro), and let AI implement features for you?
So, even if AI helps you write code twice as fast, it does not mean that it makes you twice as productive in your job.
Then again, maybe you really have a shitty job at a ticket factory where you just write boilerplate code all day. In which case, I'm sorry!
I think of it like a sort of coprocessor that's dumber in some ways than my subconscious, but massively faster at certain tasks and with access to vastly more information. Like my subconscious, its output still needs to be processed by my conscious mind in order to be useful, but offloading as much compute as possible from my conscious mind to the AI saves a ton of time and energy.
That's before even getting into its value in generating content. Maybe the results are inconsistent, but when it works, it writes code much more quickly than any human could possibly type. Programming aside, I've objectively saved significant amounts of time and money by using AI to help not only review but also revise and write first drafts of legal documents before roping in lawyers. The latter is something I wouldn't have considered worthwhile to attempt in most cases without AI, but with AI I can go from "knowing enough to be dangerous" to quickly preparing a passable first draft on my own and having my lawyers review the language and tighten up some minor details over email. That's a massive efficiency improvement over the old process of blocking off an hour with lawyers to discuss requirements on the phone, then paying the hourly rate for them to write the first draft, and then going through Q&A/iteration with them over email. YMMV, and you still need to use your best judgement on whether trying this with a given legal task will be a productive use of time, but life is a lot easier with the option than without. Deep research is also pretty ridiculous when you find yourself with a use case for it.
In theory, there's not really anything in particular that I'd say AI lets me do that I couldn't do on my own*, given vastly more hours in the day. In practice, I find that I'm able to not only finish certain tasks more quickly, but also do additional useful things that I wouldn't otherwise have done. It's just a massive force multiplier. In my view, the release of ChatGPT has been about as big a turning point for knowledge work as computers and the Internet were.
*: Actually, that's not even strictly true. I've used AI to generate artwork, both for fun/personal reasons and for business, which I couldn't possibly have produced by hand. (I mean with infinite time I could develop artistic skills, but that's a little reductive.) Video generation is another obvious case like this, which isn't even necessarily just a matter of individual skill, but can also be a matter of having the means and justification to invest money in actors, costumes, props, etc.
For greenfield "make me a plain JS app that does X", yeah, it's usually able to produce a small app I describe in under 10 minutes, where I'd most likely take far more than an hour to implement it as well as the AI does.
For "hey, I have an app in framework X, implement a feature that would take me less than 30 mins", it might hit some issue and loop on its own mistakes, hallucinate dependencies, hallucinate command-line parameters, get stuck, or just mess up whole files. When that happens I drop it, do 'git reset --hard', and move on on my own, because trying to fix things by leading it usually ends up taking me hours fiddling with the AI without progressing on the task.
I also tried making greenfield apps with React and Angular instead of just "plain JS", and that also mostly went badly, getting stuck on unsolvable issues that I wouldn't have hit just using the default templates/generators on my own.
I think it depends a lot on what you work on. There are tasks that are super LLM friendly, and then there are things that have so many constraints that LLM can basically never get it right.
For example, at the moment we have some really complicated pieces of code that need to be carefully untangled and retangled to accommodate a change, and we have to be much more strategic about it to make sure we don't regress anything during the process.
But working on features that can fit within a timebox of "an hour or more" takes up very little of my time.
That's what I mean, there are certain contexts where it makes sense to say "yeah, AI made me 2x-10x more productive", but taken as a whole just how productive have you become? Actually being 2x productive as a whole would have a profound impact.
> working on features that can fit within a timebox of "an hour or more" takes up very little of my time
What would be something that can't be broken down into one-hour tasks? Can you give a concrete example?
For example, I'm rebuilding a legacy backend application and need to do a lot of reverse engineering. There are a dozen upstream and downstream services, and nobody in the world knows 100% of what they do. AI doesn't know how to look across wikis and Slack channels, or send test requests and dig through logs, but the majority of the work is exactly that, because nobody knows the requirements. Also, a lot of the "code" is not actually code but several layers of auto-generated crap based only on API models. How can I point an AI at the project, say "here's 20 packages involved with a 3-service call chain that's only 2/3 documented", and get something useful?
The code is pretty much always the easiest part of my job.
You can try to tell me that this is actually a symptom of a deeper problem or organization rot or bad design choices, and I agree, but that’s out of my control and my main job is to work around this crap and still deliver. It was like this for years before 90% current employees were hired.
To summarize, I work in micro-service hell and I don’t know how to make AI useful at all for the slow parts
All it takes to make a good living is to make a tool that is useful enough for people to pay you for using it.
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
There is a lack of training data; Apple docs aren't great or really thorough, and much documentation is buried in WWDC videos and requires an understanding of how the APIs evolved over time to avoid confusion when following Stack Overflow posts, which confuses newcomers as well as code generators. Stack Overflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
With swift it was somewhat helpful but not nearly as much. Eventually stopped using it for swift.
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday, I did a big overhaul of some scrapers that would have taken me at least a week to get done manually (maybe doing 2-4 hrs/day for 5 days, ~15 hrs). With the help of ChatGPT, I was done in less than 2 hours.
So not only was it less work, it was also a much shorter delivery time
And a lot less stress
But, it did require passing tests
Most of the changes in the end were relatively straightforward, but I hadn’t read the code in over a year.
The code also implemented some features I don’t use super regularly, so it would’ve taken me a long time to load everything up in my head, to fully understand it enough, to confidently make the necessary changes
Without AI, it would have also required a lot of Google searches to find documentation and instructions for setting up some related services that needed to be configured
And, it would have also taken a lot more communication with the people depending on these changes + having someone doing the work manually while the scrapers were down
So even though it might have been a reduction of 15hrs down to 1.5hrs for me, it saved many people a lot of time and stress
Personally, I do try to keep a comment at the top of every major file, with bullet points explaining the main functionality implemented and the why
That way, when I pass the code to a model, it can better “understand” what the code is meant to do and can provide better answers
(A lot of times, when a chat session gets too long and seems like the model is getting stuck without good solutions, I ask it to create the comment, and then I start a new chat, passing the code that includes the comment, so it has better initial context for the task)
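As a sketch, such a header might look like this (the module name and bullets are made up for illustration):

    # orders_scraper.py
    #
    # What this file does:
    #   - fetch_orders(): pulls paginated order data from the vendor API
    #   - normalize(): maps vendor field names onto our internal schema
    #   - retries with exponential backoff, because the vendor rate-limits us
    #
    # Why: downstream reporting needs a stable schema even when the vendor
    # renames fields, so all of the field mapping is centralized here.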
If the training data contains mistakes, it will more likely reproduce them, unless there are preprogrammed rules to prevent that.
As a side note, most good coding models now are also reasoning models, and spend a few seconds “thinking” before giving a reply
That's by no means infallible, but they've come a long way even just in the last 12 months
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
I read each line of the commit diff and change it, if it is not how I would have done it myself.
I let the AI implement features on its own, then look at the commit diffs and then use VIM to finetune them.
I wrote my own tool for it. But I guess it is similar to Cursor, Aider, and many other tools that do this. Also what Microsoft offers via the AI "edit" tool I have seen in GitHub Codespaces. Maybe that is part of VS Code?
I have not tried them, but I guess aider, cursor and others offer this? One I tried is copilot in "edit" mode on github codespaces. And it seems similar.
But, the study is also about LLMs currently impacting wages and hours. We're still in the process of creating targeted models for many domains. It's entirely possible the customer representatives and clerks will start to be replaced in part by AI tools. It also seems that the current increase in work could mean that headcount is kept flat, which is great for a business, but bad for employment.
I think skills in using AI to augment work will just become a new form of literacy.
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the skill ceiling required to make software, so many more can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at textiles.
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
Is it that straightforward? What about theater jobs? Vaudeville?
In live theater it would be mostly actors, some one time set and costume work, and some recurring support staff.
But then again, there are probably more theaters and theater production by volume.
Reducing the amount of work done by humans is actually a good thing, though institutional structures must change to spread this reduction across society as a whole, instead of having mass unemployment plus no retirement before 70 and 50-hour work weeks for those who still work.
AI isn't a problem, unchecked capitalism can be one.
https://firmspace.com/theproworker/from-strikes-to-labor-law...
But you can't have labor laws that cut the amount worked by half if you have no way to increase productivity.
Obesity, mineral depletion, pesticides, etc.
So in a way automation did make more work.
There's much more work to do as a subsistence farmer than just harvesting the crops! There's stuff like housebuilding (you either build/fix your own, or help a relative do so) and cloth making/refurbishing. Also, in winter, you need to get wood for the fireplace, and that alone can easily be a full-time job.
It was estimated that in the late 19th century France (still mostly rural until mid 20th century despite the industrial revolution taking place) people spent around 40% of their awake lives working, as opposed to 15% in today's France.
Like you could have "AGI" if you simply virtualized the universe. I don't think we're any closer to that than we are to AGI; hell, something that looks like a human mouth output is a lot easier and cheaper to model than virtualize.
Actual AGI presumably implies a not-brain involved.
And this isn't even broaching the subject of "superintelligence", which I would describe as "superunbelievable".
Come up with any goal you want to reach, and some human can put a large dent in the problem. Maybe reach the goal outright.
We already have some nifty artificial goal to action mappers. None of them are generalized to a wide category of goals yet. Maybe some goals need consciousness to be reached, but that isn't a given. We don't really know that. We might be left very unsatisfied in the way an artificial goal to action mapper reaches any goal without consciousness. We might even call it cheating.
As long as goals exist to be reached we can train for them. LLMs right now love continuity. Even though RLHF tells them that they have no desire. It's obvious they do. That is the whole point of how they are trained.
If you need a supercomputer to run your AGI then it's probably not worth it for any task that a human can do, because humans happen to be much cheaper than supercomputers.
Also, it's not clear that AGI would necessarily be better than existing AIs: a 3-year-old child has general intelligence indeed, but it's far less helpful than even a sub-billion-parameter LLM for any task.
Also, that second point is like the most unhelpful point I've ever seen: "we just need to look at us, we're the real GI, we're proof AGI can exist". What are you even talking about? You don't think people have taken that philosophy before? We're not even close to figuring out all the nuts and bolts that go into /natural/ general intelligence. What makes you think it's easier here?
We won’t need jobs so we would be just fine.
If there's nothing for people to do, a new economy will arise where the government supplies you with whatever you need, at least at a basic level.
Or the wars will start and everything will burn.
Obviously, if there are no jobs, no one will sit on their ass starving. People will get food, clothes, housing, etc., either via distribution or via force.
- Assuming god comes to earth tomorrow, earth will be heaven
- Assuming an asteroid strikes earth in the future we need settlements on mars
etc., pointless discussion, gossip, and BS required for human bonding, like on this forum or in a Bierhaus
Compared to now, the amount of work is about the same, or maybe a bit more than back then. But the big difference is the amount of data being processed and kept, that increased exponentially since then and is still increasing.
So I expect the same with AI, maybe the work is a bit different, but work will be the same or more as data increases.
I understand your point but it lacks accuracy in that mainframes, paper and filing cabinets are deterministic tools. AI is neither deterministic nor a tool.
You keep repeating this in this thread, but as has been refuted elsewhere, this doesn't mean AI is not productive. A tool it definitely can be. Your handwriting is non deterministic, yet you could write reports with it.
(I know this is not the commonly accepted meaning of Parkinson's law.)
If a truck has a lifetime of 20 years, that's 20 years' worth of paying a security guard for it.
You really think it could take 20 years' worth of human effort in labor and materials to make a truck more secure? The price of the truck itself in the first place doesn't even come close to that.
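Rough numbers, purely as an assumed illustration: a guard at $20/hour for 40 hours a week is about $40,000/year, so roughly $800,000 over the truck's 20-year lifetime, while the truck itself might cost $50,000-$150,000 new.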
https://youtube.com/watch?v=ZP4fjVWKt2w
It’s early. There are new skills everyone is just getting the hang of. If the evolution of AI was mapped to the evolution of computing we would be in the era of “check out this room-sized bunch of vacuum tubes that can do one long division at a time”.
But it’s already exciting, so just imagine how good things will get with better models and everyone skilled in the art of work automation!
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
What could have been a single paragraph turns into five separate bulleted lists and explanations and fluff.
Your responsibility is now as an AI response mechanic. And someone else that’s ingesting your AI’s output is making sure their AI’s output on your output is reasonable.
This obviously doesn’t scale well but does move the “doing” out of human hands, replacing that time with a guardrail responsibility.
But if model development and self hosting become financially feasible for the majority of organizations then this might really be a “democratized” productivity boost.
> Now it is true that the needs of human beings may seem to be insatiable. But they fall into two classes -- those needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they. But this is not so true of the absolute needs -- a point may soon be reached, much sooner perhaps than we are all of us aware of, when these needs are satisfied in the sense that we prefer to devote our further energies to non-economic purposes.
[…]
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter -- to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
* John Maynard Keynes, "Economic Possibilities for our Grandchildren" (1930)
* http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
An essay putting forward / hypothesizing four reasons on why the above did not happen (We haven't spread the wealth around enough; People actually love working; There's no limit to human desires; Leisure is expensive):
* https://www.vox.com/2014/11/20/7254877/keynes-work-leisure
We probably have more leisure time (and fewer hours worked: five versus six days) in general, but it's still being filled (probably especially in the US where being "productive" is an unofficial religion).
As an example, I have a pretty good paying, full-time white collar job. It would be much more challenging if not impossible to find an equivalent job making half as much working 20 hours a week. Of course I could probably find some way to apply the same skills half-time as a consultant or whatever, but that comes with a lot of tradeoffs besides income reduction and is less readily available to a lot of people.
Maybe the real exception here is at the top of the economic ladder, although at that point the mechanism is slightly different. Billionaires have pretty infinite flexibility on leisure time because their income is almost entirely disconnected from the amount of "labor" they put in.
The average American spends almost 3 hours per day on social media. [1]
The average American spends 1.5 hours per day watching streaming media. [2]
Combined, that's about 4.5 hours a day. That's a lot of washed clothes right there.
[1] https://soax.com/research/time-spent-on-social-media
[2] https://www.nielsen.com/news-center/2024/time-spent-streamin...
If you run your own LLM, and you don't update the training data, that IS deterministic.
And, it is a powerful tool.
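As a concrete sketch of that (using a small open model via Hugging Face transformers; the prompt is arbitrary), greedy decoding with fixed weights gives the same output for the same input on a given setup:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("def add(a, b):", return_tensors="pt")
    # do_sample=False selects greedy decoding: no randomness, so the same
    # prompt with the same weights yields the same completion every run.
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tok.decode(out[0]))

(Strictly speaking, that depends on keeping the decoding settings and environment fixed too, not just the training data.)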
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
Right on point
As shown by never-shrinking backlogs
Todo lists always grow
The crucial task ends up being prioritizing, i.e. figuring out what to put first at the current moment
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
There are two metrics in the study:
> AI chatbots save time across all exposed occupations (for 64%–90% of users)
and
> AI chatbots have created new job tasks for 8.4% of workers
There's absolutely no indication anywhere in the study that the time saved is offset by the new work created. The percentages for the two metrics are so vastly different that it's fairly safe to assume it's not the case.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
If people save an hour a week and use that to browse HackerNews, they've saved time but haven't produced any economic value. That doesn't mean they didn't save time.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
How does that comply with the GDPR? OpenAI now has all sensitive data?
The article markets the study as Danish. However, the working paper is from the Becker Friedman Institute of the University of Chicago:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
It is no wonder that the Chicago School of Economics will not find any impact of AI on employment. Calling it Danish to imply some European "socialist" values is deceptive.
The issue is that after automation the “old” jobs often don’t pay well, and the new jobs that do are (by virtue of the multiplier of technology) actually scarcer than the ones it replaced.
While in a craftsmanship society you had people painting plates for the well to do, factories started mass painting plates for everyone to own.
Now this solved the problem of scarcity, which is great. But it created a new problem which is all those craftsmen are now factory workers whose output is more replaceable. If you’re more replaceable your wages are lower due to increased competition.
Now for some things this is great, but Marx’s logic was that if technology kept making Capital able to use less and less Labour (increasing profits) then eventually a fairly small number of people would own almost everything.
Like most visionaries he was incredibly off on his timeline, and he didn't predict a service economy arising after we had an overabundance of goods.
So yet again Marx's logic will be put to the test, and yet again we will see the results. I still find that his logic seems fairly solid, although like many others I don't agree with the solutions.
I wonder how well this will hold up against AI.
Add to that progress in robotics and we may reach a point where humans are not needed anymore for most tasks. Then the capitalists will have fully automated factories but nobody who can buy their products.
Maybe capitalism had a good run for the last 200 years and a new economic system needs to arise. Whatever that will be.