But for an individual cobbler, you basically got fired at one job and hired at another. This may come as a surprise to those who view work as simply an abstract concept that produces value units, but people actually have preferences about how they spend their time. If you're a cobbler, you might enjoy your little workshop, slicing off the edge of leather around the heel, hammering in the pegs, sitting at your workbench.
The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
You might not want to quit that job and get a different job running a shoe assembly line in a factory. Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer. But the boss isn't saying that. He's saying "all of the cobblers at the other companies are doing this too, so where are you gonna go?".
Of course, AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, the CEO has basically given them a giant benefits cut with zero compensation in return.
I don’t actually think it’ll be a productivity boost for the way I work. Code has never been the difficult part, but I’ll definitely have to show I’ve included AI in my workflow to be left alone.
Oh well…
We are probably on a similar trajectory.
I wouldn't analogize the adoption of AI tools to a transition from individual craftspeople to an assembly line, which is a top-down total reorganization of the company (akin to the transition of a factory from steam power to electricity, as a sibling commenter noted [0]). As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization. Continuing your analogy, it's more akin to letting craftspeople bring whatever tools they want to work, whether those be hand tools or power tools. If the power tools are any good, most will naturally opt for them because they make the job easier.
>The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
That's certainly a part of it, but I also think workers enjoy and strive to be productive. Why else would they naturally adopt things like compilers, IDEs, and frameworks? Many workers enjoyed the respective intellectual puzzles of hand-optimizing assembly, or memorizing esoteric key combinations in their tricked-out text editors, or implementing everything from scratch, yet nonetheless jumped at the opportunity to adopt modern tooling because it increased how much they could accomplish.
I'm sorry, but did you forget what page this comment thread is attached to? It's literally about corporate communication from CEOs reorganizing their companies around AI and mandating that employees use it.
> That's certainly a part of it, but I also think workers enjoy and strive to be productive.
Agreed! Feeling productive and getting stuff done is also one of the joys of work and part of the compensation package. You're right that to the degree that AI lets you get more done, it can make the job more rewarding.
For some people, that's a clear net win. They feel good about being more productive, and they maybe never particularly enjoyed the programming part anyway and are happy to delegate that to AI.
For other people, it's not a net win. The job is being replaced with a different job that they enjoy less. Maybe they're getting more done, but they're having so little fun doing it that it's a worse job.
That’s exactly my point. The fact that management is trying to top-down force adoption of something that operates at the individual level and whose adoption is thus inherently a bottom-up decision says it all. Individual workers naturally pick up tools that make them more productive and don’t need to be forced to use them from the top-down. We never saw CEOs issue memos “reorganizing” the company around IDEs or software frameworks and mandate that the employees use them because employees naturally saw their productivity gains and adopted them organically. It seems the same is not true for AI.
All the tools that improved productivity for software devs (Docker, K8s/ECS/autoscaling, telemetry providers) took a very long time for management to recognize as valuable, and in some places there was a lot of resistance. In some places where I worked, asking for an IntelliJ license would make your manager look at you like you were asking "hey, can I bang your wife?".
https://writingball.blogspot.com/2020/02/the-infamous-apple-...
In most companies, you can't just pick up random new tools (especially ones that send data to third parties). The telling part is giving internal safety to use these tools.
This is simply not true. As a counter example consider debuggers. They are a big productivity boost, but it requires the user to change their development practice and learn a new tool. This makes adoption very hard. AI has a similar issue of being a new tool with a learning curve.
I would have just thought that people using them would quickly outpace the people that weren't and the people falling behind would adapt or die.
I could believe it. Especially if there are big licensing costs for the debuggers.
>the people falling behind would adapt or die.
It is better to educate people, make them more efficient, and avoid having them die. Having employees die is expensive for the company.
If anything, the problem is that management wants to automate poorly. The employees are asked to "figure it out", and if they give feedback that it's probably not the best option, that feedback is rejected.
AI is a broad category of tools, some of which are highly useful to some people - but mandating wide adoption is going to waste a lot of people's time on inefficient tools.
Companies are just groups of employees - and if the companies are failing to provide a clear rationale to increase productivity those companies will fail.
Any company issuing such an edict early on would have bankrupted themselves. And by the time it became practical, no such edict was needed.
[1] There was a remote universe where I could see myself working for Shopify; now that company is sitting somewhere between Wipro and Accenture in my ranking.
It might be that these companies don't care about actual performance, or it might be that they are too cheap/poorly run to reward/incentivize actual performance gains, but either way... the fault is on leadership.
There are good books on this: e.g. https://www.amazon.ca/Next-Generation-Performance-Management...
A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9,700 created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause and will have to be fixed in the future.
edit: typo
People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.
It's deeply fascinating psychologically and I'm not sure where this ends.
I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.
> The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.
I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.
Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.
Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.
What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.
- If all your peers are doing it and you do it and it doesn't work, it's not your fault, because all your peers were doing it too. "Who could have known? Everyone was doing it."
- If all your peers _aren't_ doing it and you do it and it doesn't work, it's your fault alone, and your board and shareholders crucify you. "You idiot! What were you thinking? You should have just played it safe with our existing revenue streams."
And the one for what's happening with RTO, AI, etc.:

- If all your peers are doing it and you _don't do it_ and it _works_, your board crucifies you for missing a plainly obvious sea change to the upside. "You idiot! How did you miss this? Everyone else was doing it!"
Non-founder/mercenary C-suites are incentivized by shareholders and boards to be fundamentally conservative. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold, resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines. Imagine a CEO going to their board today and saying, "we're going to sit out potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this; you're fired".
I am not as negative on AI as the rest of the group here, though. I think AI-first companies will outpace companies that never start to build the AI muscle. From my perspective, these memos mostly seem reasonable.
I mean.. recent FBI files of certain emails would imply.. probably, yes.
https://www.semafor.com/article/04/27/2025/the-group-chats-t...
This is a great line - evocative, funny, and a bit of wordplay.
I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.
Apparently Anthropic has been in there for 6 months helping them with some back office streamlining and the outcome of that so far has been.. a press release announcing that they are working on it!
A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate.
I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take.
It may turn into another cottage industry like big data / cloud / whatever adoption, where "forward deployed / customer success engineers" are co-located by the 1000s for years at a time in order to move the needle.
Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)
Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is.
I was a huge AI skeptic but since Jan 2025, I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years so I've seen both sides and there is nothing that is going to stop it.
But the evangelist insistence that it literally cannot be a net negative in any context/workflow is just exhausting to read and is a massive turn-off. As is the refusal to accept that others may simply not benefit the same way from that different work style.
Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole I wouldn't be that surprised.
There are times where, after knocking request after request out of the park, I spend hours wrangling some dumb failures or run into spaghetti code from the last "successful" session that massively slows down new development or requires painful refactoring, and I start to question whether this is a sustainable, true net multiplier in the long term. Plus the constant time investment of learning and maintaining new tools/rules/hooks/etc should be counted too.
But, I enjoy the work style personally so stick with it.
I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices.
> I was a huge AI skeptic but since Jan 2025,
> I'm in my 50s and have been programming for 30 years
> there is nothing that is going to stop it.
I need to turn this into one of those checklists, like the anti-spam one, and just paste it every time we get the same 5 or 6 clichés.
2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))
3. there is some subset of ai detractors that refuse to use the tools for whatever reason
the metrics pushed by 1) rarely account for 2) and don't really serve 3)
i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo
I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot?
if you can, use cc or codex through your ide instead, oai and anthropic train on their own harnesses, you get better performance
If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.
For some reason the last decade or so we have confused activity with productivity.
(and words/claims with company value - but that's another topic)
I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.
Demanding everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other color of drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).
Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.
Throughout the year, they must coordinate the drill training mandated by the Head of Drilling and enforce attendance for the people in their department.
And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.
Drilling cannot fail, it can only be failed.
Enforced use means one of two things:
1. The tool sucks, so few will use it unless forced.
2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).
I have friends who are finance industry CTOs, and they have described it to me in realtime as CEO FOMO they need to manage ..
Remember tech is sort of an odd duck in how open people are about things and the amount of cross pollination. Many industries are far more secretive and so whatever people are hearing about competitors AI usage is 4th hand hearsay telephone game.
edit: noteworthy someone sent yet another firmwide email about AI today which was just linking to some twitter thread by a VC AI booster thinkbro
This simply isn’t how economics works. There is always additional demand, especially in the software space. Every other productivity-boosting technology has resulted in an increase in jobs, not a decrease.
Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write and do the job correctly and cost ~$0.00 compute, but I was curious to see if the AI could do it. Nope.
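For what it's worth, a rename like that really is a one-liner with standard tools. A minimal sketch, assuming GNU sed (for `-i` and `\b` word boundaries); the `old_field`/`new_field` names and the `.go` extension are made up for illustration, and the demo codebase is created inline so the commands are runnable as-is:

```shell
# Demo setup: a tiny fake codebase using the field in two files.
tmp=$(mktemp -d)
printf 'type Order struct { old_field int }\n' > "$tmp/a.go"
printf 'total := o.old_field * 2\n'            > "$tmp/b.go"

# The actual rename: list every file mentioning the field, then
# replace whole-word occurrences in place (GNU sed).
grep -rl 'old_field' --include='*.go' "$tmp" \
  | xargs sed -i 's/\bold_field\b/new_field/g'

# Verify nothing was missed; grep exits nonzero when there are no hits.
if grep -rn 'old_field' --include='*.go' "$tmp"; then
  echo "missed some occurrences"
else
  echo "rename complete"
fi
```

The `\b` boundaries keep the substitution from touching longer identifiers that happen to contain the field name, which is the usual failure mode of a naive search-and-replace.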
Trillions of dollars for this? Sigh... try again next week, I guess.
^ If someone says that, they are definitely "holding it wrong", yes. If they used it more they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built.
Using Sonnet 4, or even just not knowing which model they are using, is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is perhaps not uncommon, but I think a minority opinion among people who are using the tools effectively. Also, a rename falls squarely in the realm of operations that will reliably work, in my experience.
This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.
Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.
But I feel you, part of me wants to quit too, but can't afford that yet.
I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.
Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.
If you think anything that is happening with the amount of money and bullshit enveloping this LLM disaster makes sense, you should put the keyboard down for a while.
Then concludes his email with:
> I have asked Shelly to free up time on my calendar next week so people can have conversations with me about our future.
I assume Shelly is an AI, and not human headcount the CEO is wasting on menial admin tasks??
>The misconceptions about Klarna and AI adoption baffle me sometimes.
>Yes, we removed close to 1,500 micro SaaS services and some large. Not to save on licenses, but to give AI the cleanest possible context.
If you remove all your services...
[Company that's getting disrupted by AI: Fiverr, Duolingo]: rush to adopt internal AI to cut costs before they get undercut by competition
[Company that's orthogonal: Box, Ramp, HFT]: build internal tools to boost productivity, maintain 'ai-first' image to keep talent
[Company whose business model is AI]: time to go all in
Relevant article from two days ago https://www.latent.space/p/adversarial-reasoning
happy to be corrected but im not aware of any direct improvements llms bring to ultra low latency market making, time to first token is just too high (not including coding agents)
from talking to some friends in the space theres some meaningful improvements in tooling especially in discretionary trading that operate on longer time horizons where agents can actually help w research and sentiment analysis
What are the trenches in businesses in 2030, purely ownership over physical assets and energy?
"X trackers and content blocked
Your Firefox settings blocked this content from tracking you across sites or being used for ads."
Screenshots don't track me so they would be ok.
That may be all the publicly-posted ones, but I'm skeptical. They have 11.
There were a lot more internal memos.
Also notice how the stocks of almost all of the companies that have announced AI-first initiatives, except Meta, are at best flat or down by more than 20% YTD.
What does that tell you?
And yes, people did resist IDEs (“I’m best with my Emacs” - no, you weren’t), people resisted the “sufficiently smart compiler”, and so on. What happened was that they were replaced by the sheer growth in the industry providing new people who didn’t have these constraints.