Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted up a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).
So now the "productivity-gain bottleneck" is people who still care enough to review manually.
I feel the loss of this signal acutely. It’s an adjustment to react to a 10-30 page “spec” chock-a-block with formatting and ASCII figures as if it were a verbal spitball … because these days it likely is.
Man, I see this on Jira: a PM or BA is like "yeah, I'll write that AC for you" and out comes a giant bullet list stuffed with emojis and checkmarks.
I've noticed Claude does far fewer listicles than ChatGPT. I suspect its makers don't blindly follow supervised-learning feedback from chats as much as ChatGPT's do. I get an Apple-vs-Google design-approach vibe from those two companies: Apple tends not to obsess over interaction data, instead using design principles, while Google just tests everything and has very little "taste."
In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.
I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.
I also much prefer the output of Claude at present.
All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.
This slightly pre-dated AI tools becoming entirely usable for me.
How quickly we become reverse centaurs.
it's literally their job to ship functional product features...
Indeed. I've spent my professional career seeking out positions at companies of increasing prestige and technical renown, each with a higher reputation for professionalism and performance than the last. And yet this invariant has held in every position.
As far as I can tell, the only difference between each company has been the quality of the manager I was supposed to please, which I have noticed (perhaps predictably) is not strongly correlated with the company's reputation or success.
Who cares whether features are functional - or whether they even know what functional means in that case?
That's how it looks more and more...
Just give me normal bulleted items, I can read.
I like them even more on code comments. It tells _precisely_ how much effort went into the pull request, so I don't spend time reviewing lazy work.
I propose that what you enjoy is having a token of the appearance of effort, easily constructed and easily observed and easily suitable for low-effort handling of these proxy objects for actual work.
They’re saying that the emoji usage is telling them that very little effort was put into the PR and that they’ll treat it accordingly.
Instead he didn’t read it at all, and just threw the whole thing at Claude Code as a big prompt. The result was… interesting!
Some people have put me on their blacklists after these interactions, sure, but they're the exact people I don't want to work with again. The important thing here is that I've never done someone else's work for free.
Both predate common use of LLMs, unless my memory is even more shaky than usual on this, but must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because LLMs seem to love spewing them out.
So like ATS checkers for resumes, I find myself needing an AI checker for my text.
Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there was some agreed-upon set of rules, structures, standards, and procedures to facilitate a more efficient communication...
Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.
So, I approach it in good faith, but I do get upset when people say "I'll ask Claude". You need to be the intermediary; I can also prompt Claude and read back the result. If you hire an employee to do work on your behalf, you are responsible for their performance at the end of the day. And that's what an AI assistant is. The buck stops with you. But I don't think people understand that, or realize that they aren't adding value. At some point, you have to use your brain to decide if the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.
So, what you are saying is that I should fire the bottom N% of underperforming agent instances?
You know, like employers do as opposed to taking any responsibility?
The dude is just acting like a manager with a technical employee (agent) who does the hands-on work. If you are upset about this you should be hopping mad about the whole manager-director-VP-SVP hierarchy above this dude.
You're saying this as if it's some rebuttal ad absurdum, when it's absolutely the case: when the higher layers don't understand what they do, we have a problem with that too, and that's been true since forever. Remember Dilbert and Office Space, and making fun of the ignorant middle managers and execs?
In this case, what we're complaining about is coders not understanding the code they ship (because some AI wrote it and they don't bother to review it or guide the AI fully).
Minimum word lengths are the greatest disservice high school and college have ever done to future communication skills. It takes years for people to unlearn this in the workplace.
Max word counts only please. Especially now with AI making it so easy to produce fluff with no signal.
In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.
The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.
Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.
Good for thinking through a concept but unsalvageable in the edit phase. Easier to throw away and rewrite now that you know what to say.
Nowadays I like conversation as an ideating step. Talk to a bunch of people, try to explain yourself until they get it, see what questions they ask. Sometimes in HN threads like this :)
Then write it down.
You get super high signal writing where every sentence is load bearing. I’ve had people take my documents and share them around the company as “this is how it’s done”
It can take weeks of work to produce a 500 word product vision document. And then several months to implement, even with AI.
Me too. Try speech-to-text one day; you may find you use 2x the words you do with a typed vomit draft. I was surprised.
Don't you get dinged as a slow performer? Management expects x5 speed on everything now that AI is available.
No because the document is not the work. Management wants someone to figure out the solution to their problems. The document is just a step in solutioning.
Without the doc, others would have to re-do all that work if you get hit by a bus. Or you’d be stuck in endless meetings conveying the vision instead of figuring out the next problem.
Document length is inversely proportional to the quality of your thinking/insight. When you create fluff, everyone can see you didn’t do the work.
"I have made this letter longer than usual, only because I have not had time to make it shorter." - Blaise Pascal
I've gotten better at phrasing myself adequately in one go. Rote mechanical memorization has also made writing itself cheaper. (read my username)
I can now yap quite adequately over text, yet I regularly find AIs at a minimum 2x as verbose as my preferred phrasing after manual word mashing.
An odd tradeoff of my verbal-based writing seems to be that I am a fairly slow reader. I read aloud in my head, albeit a bit faster than I could speak, but I still hear the words as an internal monologue.
When discussing this a few times with friends, I've learned how different everyone's experiences are when bridging thoughts=>speaking, thoughts=>writing, thoughts=>typing, and text=>thoughts (or even text=>understanding).
Copying is almost everywhere, though (patents, graphic design, business) - albeit in other areas it is often applauded and less obviously deceptive.
We talk about countries copying - e.g., Japan was notorious for it. I think the underlying motivation there is ownership: greedy people feeling they own everything (arts and technology). "We own that and you stole it from us," along with the entitlement of never recognizing when they copy others.
Since "write an essay" can be anything from three paragraphs to a 50 page paper and the teacher probably doesn't think either is the appropriate response to the task, some sort of numerical guide is a good starting point, even if a fairly wide range is a better guide than just a minimum...
(plus actually there are real world work tasks involving composing text that fits within a certain word range, and since being concise and focused isn't AI text generation's strong suit, I'm not sure those work tasks will disappear...)
My high school professors had a really good solution to this:
Minimum word lengths but you have to write the essay in class by hand. You have 2 periods.
Some of us still write a lot but having limited time and space (4 pages) really put a hard limit without saying so. In higher classes they started saying “I’m gonna stop reading after 3 pages so make sure you get to the point”
Subject yourself to a classroom of kids you must teach to write, and throw out minimums. Will some students do fine? Sure, of course. And what of the others who turn in one sentence? Who never grow? Who have to go into the math class and hear their idiot parents say "why are you learning that, we have calculators"?
Strawman argument; the correct thing to do is not to throw out minimum word count and leave it at that, rather to emphasize the role of brevity and concision while still being sufficiently thorough.
It's widely understood that LOC is a poor measure for many coding purposes, so it shouldn't be controversial that word count is an equally flawed measure for prose.
John Nash's Ph.D. Thesis is notorious for being short: it's still 27 pages (typed, with hand-written equations and a whopping total of two citations) but that's an order of magnitude below average. On the other hand, most of us don't invent game theory.
Students used to minimum-word-count essays sometimes have to do some self-retraining to realize that the expectation is that you have more that you want to say than you have room to say it, and the game is now to figure out how to say more in fewer words.
Same as lines of code, etc.
I certainly wish more teachers encouraged parsimony and penalized fluff and bullshittery, but I'd be surprised to find them doing it outside of some narrow cases where the point is just to make you write something at all.
They generally want to encourage their students to engage with the topic at a certain level and practice the thinking needed to research, structure, and implement an argument of a certain length. They want you to put at least 5 pounds of idea in the 5-10 pound idea bag.
If you're convinced you've hacked word economy and satisfied the assignment except for this goshdarnpeskyminimumwordcount, you're probably misunderstanding the lesson the instructor is willing to read through a bunch of bad writing to impart and cheating yourself.
A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.
It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.
Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?
This is not adding value for anyone except people whose function is to look busy, and people trying to avoid their busy work.
There’s nothing more annoying to me than drawn out AI emails especially. When I get 4 pages of AI slop, I ask them to give me three bullets.
This stuff is dangerous and wastes a lot of time and money.
It's a sort of leverage: "I spend 5 minutes prompting so that you can spend 30 minutes reviewing." Not gonna happen, LLM buddies.
His main point, though, is this:
I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.
I've been reading many rants like that lately. If they came with examples, they would be more helpful. The author does not elaborate on "the schemas, and more importantly the objectives, were wrong". The LLM's schema vs. a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. We don't know what went wrong here.
It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.
This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.
It looked damned impressive, and it kind of worked to demo, but he is in no way a programmer, though he understood the problem domain very well. I asked a few basic questions:
- Where is the data stored?
- How would you recover from a database failure?
- Does it consume tokens at runtime?
- What is the runtime used at the back end?
- Why are the web pages 3 MB in size and take forever to load?
He had no idea.
It's a typical vibe coding scenario, and people like to paint this as why vibe coding sucks.
I think however that all that is needed to bridge the gap is some very simple feedback from an expert at the right time.
For example, to someone who knows about databases, it's pretty easy to look at a database schema and spot stuff that looks off - denormalised data, weird columns. That takes 10 minutes, and the feedback could be given directly to the LLM.
Likewise someone who knows a little about systems architecture could make sure at the outset that some good practices are followed, e.g.:
- "I want your help to build this system but at runtime I do not want to consume any tokens."
- "I want the system to store its data in Postgres (or whatever) and I want documented recovery plans if the database craps itself".
- "I want web pages to, as much as possible, load and render as quickly as possible, and then pull data in from the back end, with loading indicators showing where the UI was not yet up to date".
We have LOB prototypes vibe coded by enthusiastic domain experts that we are supporting in a “port and release” fashion. A senior engineer takes the prototype and uses Claude code to generate a reasonable design, do an initial rough port (~80% functional, 100% auth & audit logging) and (hopefully) all the guidance necessary to keep the agent between the lines. Coupled with review bots and evolving architecture guidance etc. Then the business partner develops and supports it from there.
For low stakes CRUD, I think it’s a reasonable middle ground. There truly is a lot of value in letting an expert user fine tune UX; and we’re only doing this with people who are already good at defining requirements and have the kind of “systems” thinking that makes them valuable analyst resources to the tech team already. Early results are encouraging but it’s way too early to draw conclusions.
Personally I hate how badly internal users are served by the majority of their systems and am willing to take some calculated long-term governance risks.
My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious - everything was massively over-engineered - yet because he used all the proper terminology he sounded more competent to upper management than the other senior managers who didn't. When called out, he would resort to personal attacks.
After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.
The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and making up for it by freezing hiring.
When you change the economics to such a degree, you're basically removing a dam - resulting in far more stress on the rest of the system. If the leaders of the org don't see the potential downsides and risks of that, they're in for a world of hurt.
I think we're going to see a real surge of companies just like this - crash and burn even though this tech was sold as being a universal improvement. The ones that survive will spread their knowledge about how to tame this wild horse, and ideally we'll learn a thing or two in the future.
But the wave of naivety has surprised me, and I think there's an endless onrush of people that are overly excited about their new ability to vibe-code things into existence. I think we've got our own endless September event going on for the foreseeable future.
It’s like some kind of management parasite. I’m not even sure at this point that it’s going to lead to an overall productivity increase whatsoever for most sectors, because of this added drag on everything.
I think the use cases where AI makes an economic improvement to the status quo for a business are rare, but they do exist, and they can be a significant improvement.
It's like the early days of the dotcom boom and bust - people thought the internet was good for every use case under the sun, including shipping people a single candy bar at a loss. After the dotcom bust, a lot of that went by the wayside, but there was a tremendous economic advantage to the businesses that were more useful when available on the internet.
> update 42 if statements in 32 different files
is a silly behavior for a programmer or an AI to have to do more than twice. We have tools that very effectively remove the need for things like that: programming languages that allow modular and reusable code, good design, etc.
Even if something does look copypasted, it might actually be semantically distinct enough that if you couple them, you'll create a brittle mess.
Additionally, there are always going to be global changes (update the code style, document things, refactor into a new pattern, add new functionality to callers, etc.). The question isn't whether you use your language's tools or do it by hand; the question is whether you use an LLM or do it by hand :P
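To make the point concrete, here's a minimal sketch of collapsing repeated conditionals into one reusable lookup, in Python (the discount example and names are made up, not from the thread):

    # Before: the same conditional copy-pasted across many call sites, so a
    # policy change means editing dozens of if statements in dozens of files.
    #
    #     if customer.tier == "gold":
    #         discount = 0.15
    #     elif customer.tier == "silver":
    #         discount = 0.10
    #     else:
    #         discount = 0.0

    # After: one table every caller goes through. A policy change is now a
    # one-line edit in one file instead of a sweep across the codebase.
    DISCOUNT_BY_TIER = {
        "gold": 0.15,
        "silver": 0.10,
    }

    def discount_for(tier: str) -> float:
        """Return the discount rate for a customer tier (0.0 if unknown)."""
        return DISCOUNT_BY_TIER.get(tier, 0.0)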
I've often had the sense that most of what is done inside companies is a kind of performance of work rather than work itself. Mostly all a big status game between various different factions. All actual value provided by just a few engineers here and there who are able to shut out the noise and build things.
That’s exactly the reason LLMs and friends are so dangerous to companies, and it’s so hard for them to resist using them in useless/counter-productive ways. They’re excellent at faking signs of effort and work that companies can hardly help but reward, absent any actual way to measure manager effectiveness (and approximately nobody knows how to measure that, in the wild). This takes the form of gilding and padding on a lot of communication, none of which adds actual value but it does cost money directly and indirectly (time wasted sorting out which parts of a document are intentional and meaningful, and which are plausible but irrelevant LLM inventions, for instance)
The number of times I’ve seen a HTML memo sent from the assistant of the executive that says “from the desk of…” with babble about new leadership.
In a good culture, with high competence and trust this can yield increased output (to some degree at least) and in a bad culture it will accelerate and expedite the dominating traits instead.
The best analogy is the outsourcing / offshoring fad of the last decade.
Managers hated that senior developers were getting highly compensated (often higher than the management class!) and pounced on every opportunity to replace expensive people with (much!) cheaper options, quality be damned.
For the few companies that paid attention to the quality, this worked out swimmingly. Apple is probably the best example, they've outsourced almost all of their manufacturing to China and other similar countries.
So yes, my mental picture is that every manager is drooling right now because they think they can replace someone getting paid six figures with an AI that costs six dollars a day, if that. A virtual employee that doesn't talk back, doesn't argue, doesn't question, doesn't go off on "unproductive tangents" like refactoring (whatever that's even supposed to mean), and just pumps out code 24/7 like a good little slav... employee.
The very rare smart managers out there are looking at this more like the transition that happened to architect firms when CAD became available. They used to have a dozen draftsmen for every architect. Now there are virtually none, I haven't even heard that job title being used in decades! We still have architects, and if anything, they're paid even more.
You’ve hit the real issue, IT management is D-tier and lacks self awareness. “Agile” is effed up as a rule, while also being the simplest business process ever.
That juniors and fakers are whole hog on LLMs is understandable to me. Hype, fashion, and BS are always potent. The part I still cannot understand, as an Executive in spirit: when there is a production issue, and one of these vibes monkeys you are paying has to fix it, how could you watch them copy and paste logs into a service you're paying top dollar for, over and over, with no idea of what they're doing, and also not be on your way to jail for highly defensible manslaughter?
We don’t pay mechanics to Google “how to fix car”.
It's the mechanics that don't reference Google or the Haynes manual that are more likely to get it incorrect.
As a kicker, mechanics also have a pricing book for the task, they know how many hours a task will take on a certain car (rounded up for the most part).
This is clearly not what the post was referring to, which is instead like googling how to fix a pipe in your home when you've never done any plumbing before in your life. Can it work out? Sure, depends on the issue, can you cause your pipes to freeze, your house to flood, or sediment build up to completely block a pipe? Yes.
No, instead of google they just look it up on alldata.
When I get my car fixed, I could not care less if they googled, used a service manual, or did it by "these old 2023's always had this problem right here...". I care if it is fixed.
And as I'm currently trying to fix something on my own, for financial reasons, I assure you a mechanic with training AND google can do a better job in 1/4th the time. Because I don't have the training.
Nor do the worst people using LLMs.
Also, for sale: BMW E60/61 Bentley 2-volume set. Barely used.
Rewrite that old crunchy system that has had 0 incidents in the last year and is also largely "done" (not a lot of new requirements coming in, pretty settled code/architecture)? It's actually one of our most stable systems. But someone who doesn't even write code here thinks the code is yucky! But that doesn't convince the engineers who are on-call for it to replace it for almost no reason. Well guess what. We can do it now, _because AI!!!_ (cue exactly what you think happens next happening next)
Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.
Need to convince your workers to go faster, but EMs tell you you can't just crack the whip? AI mandates / token spend mandates!
Didn't like code reviews and people nitpicking your designs? Sorry, code reviews are canceled, because of AI.
Don't like meetings or working in a team? Well now everyone is a team of 1, because of AI. Better set up some "teams" full of teams of 1, call them "AI-first" teams, and wait what do you mean they're on vacation and the service is down?
Etc. And they don't even care that these things result in the exact negative outcomes that are why you didn't do them before you had the excuse. You're happy that YOUR thing finally got done despite all the whiners and detractors. And of course, it turns out that businesses can withstand an absurd amount of dysfunction without really feeling it. So it just happens. Maybe some people leave. You hire people who just left their last place for doing the thing you just did and now maybe they spend a bit of time here. And the game of musical chairs, petty monarchies, and degenerate capitalism continues a bit longer.
Big props to the people who managed to invent and sell an excuse machine though. Turns out that's what everyone actually wanted.
I think we're seeing a ton of that right now, and it's not slowing down any time soon it seems.
Absolutely. Giving a traditional company AI is like giving an unlimited supply of crystal-blue methamphetamine to a deadbeat pill addict.
It enables and supercharges all their worst impulses. Making a broken system more 'productive' doesn't do shit to make the users better off.
The work output everyone produces doubles, but the ratio of productive to net-negative work plummets.
At my last job we watched a PM slowly become a vibe manager of vibe coders. He started inserting himself into technical discussions and using AI to dictate our direction at every step. We would reply, but it got so laborious fighting against a human translating AI about topics they didn't understand that people left. We weren't allowed to push back anymore either, or our jobs would get threatened, due to AI. Then they started mandating that everyone vibe code, and the amount of vibe coding was being monitored. The PM got so disorganized being a PM and an engineer and an architect (their choice, no one wanted this) that they would make multiple tickets for the same task with wildly different requirements. One team member would then vibe code it one way and another would do it another way.
It was so hard to watch a profitable team of 20 people bringing in almost 100 million in profit a year descend into non-utility and the most pointless work. I then left. I am trying my best not to be jaded by all of these changes to the software industry, but it's a real struggle.
There were a lot of duplicate and triplicate methods. A lot of the classes were is-a related without inheritance - not the biggest deal, but it was becoming a mess.
Code I used to know well was more or less gone. It was rewritten in a way that wasn't the same approach and had lost lessons learned. Some of it had real battle wounds baked into it. Things qa passed the week before were broken in places no one thought they touched. A good deal of tests were useless or didn't mean anything for production.
Code review is more or less impossible for me. I can read maybe a 1k-line change. 20-30k-line changes all the time? You end up saying "sure buddy, lgtm". We had someone put up a 200kloc change for a new feature using a 3rd-party tool no one had used before. No clue, but it was not my business apparently, because we needed to be more individual now that we were using AI.
What are you doing where 200kloc is even remotely acceptable? That’s like half a percent of linux.
Don't ask me. It wasn't 200k, it was like 170-something. I can't say too much, but it was some big weird ETL pipeline using some weird database. Tons of weird algorithms for displaying data, by storing it all in memory? I don't know, man, I wasn't allowed to talk to whoever had swarms of agents create it. From what I understand of it, it was a complete hazard.
The Linux kernel has, I think, tens of millions of lines of code, for reference.
It's their money. They decided to do this. They think you guys are stupid.
Suck. Them. Dry.
Or say goodbye, which is what I did in my previous role when the BS started to get obvious.
Now I do LLM-assisted coding on my own terms. I decide what to do, review the output, and push back against over-engineered BS.
But I'm a lucky one, as far as I can see.
---
NO-ONE is going to be able to understand the amount of slop created by unchecked LLMs.
The path we're headed down is very clear, given how rapidly top-tier software has been degrading since they decided to pressure devs into this stupidity.
1. My own manager now gives "expert advice and suggestions" using Claude based on his/her incomplete understanding of the domain.
2. Multiple non-technical people within the company are developing internal software tools to be deployed org wide. Hoping such demos will get them their recognition and incentives that they deserve. Management as expected are impressed and approving such POCs.
3. Hyperactive colleagues showcasing expert looking demos that leadership buys. All the while has zero understanding of what's happening underneath.
I didn't know how to articulate this problem well, but this article does a great job!
Oh, that's bad. Sounds like a terribly toxic environment.
Heard some wild statements in the past few months. A couple that come to mind:
- "we don't need to review the output closely, it's designed to correct itself" - "it comes up with the requirements, writes the tickets, and prioritises what to work on. We only need to give it a two or three line prompt"
The promise of this agentic workflow is always only a few weeks away. It's not been used to build anything that has made it to production yet.
"We just need a swarm of many agents, all independently operating open-loop, creating and resolving tickets continuously. We will surely ship to production soon after implementing that!"
I’m starting to realise that many people, and management themselves, don’t really understand why the firm exists and what they do. Funny to watch, tbh.
Huh? 18 months ago? I've been using it that long - it wasn't able to do that back then....
It was, if you accept that it did so poorly.
Wisdom is a thing; so is competence. Humans either have them or they don't, and machines don't (yet), but the massive capabilities of the tools are also something that can't be ignored.
We can't throw the baby out with the bathwater. It's going to take some cycles of learning the ropes with this technology for humans to understand it better.
I would push back - why couldn't the senior devs communicate these issues to senior management? It sounds like a broken human system, not a broken tool or technology. All AI did was shine a light on the human issues in that org.
Very seldom does middle/upper management truly listen to engineers, unless there's buy-in from the CTO/VP to champion the ideas and complaints.
Pay no attention to the software output or quality or competitive displacement of the people selling you tools. LLMs, like cheesy sales strategies, are something so lucrative the only thing you can really do is sell them first come first serve to other people. Makes so much sense. Why make infinite money when you can sell a course/tool to naive and less fortunate companies? So logical.
* Many software engineers didn't do real engineering work during their entire careers. In large companies it's even harder - you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning tons of those configs, refactoring them, "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now expert in. Five years pass and you are still doing that.
* There are many near-engineering positions in the industry. The guy who always told how he liked to work with people and that's why stopped coding, another lady who always was fascinated by the product and working with users. They all fill in the space in small and large companies as .*M
* The train is slow-moving, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.
Considering the above, AI is replacing some BS jobs; people who were near-code but above it suddenly enjoy vibe-coding, and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.
- intelligent autocomplete: the "OG" llm use for most developers where the generated code is just an extension of your active thought process. where you maintain the context of the code being worked on, rather than outsourcing your thinking to the llm
- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expand on it in novel ways that can spark creativity
- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, bug report, etc and help guide the developer to the root cause. llms can be very useful when you're stuck and you don't have a teammate one chair over to reach out to
- code review: our team has gotten a lot of value out of AI code review which tends to find at least a few things human reviewers miss. they're not a replacement for human code review but they're more akin to a smarter linting step
- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution
these uses accelerate development while still putting the onus on the developers to know what they're building and why.
related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.
I'm curious how much value others are finding in this. Personally I turned it off about a year ago and went back to traditional (jetbrains) IDE autocomplete. In my experience the AI suggestions would predict exactly what I wanted < 1% of the time, were useful perhaps 10% of the time, and otherwise were simply wrong and annoying. Standard IDE features allowing me to quickly search and/or browse methods, variables, etc. are far more useful for translating my thoughts into code (i.e. minimizing typing).
Our team has tried a couple tools. Most of the issues highlighted are either very surface level or non-issues. When it reviews code from the less competent team members, it misses deeper issues which human review has caught, such as when the wrong change has been made to solve a problem which could be solved a better way.
Our manager uses it as evidence to affirm his bias that we don't know what we're doing. It got to the point that he was using a code review tool and pasting the emoji littered output into the PR comments. When we addressed some of the minor issues (extra whitespace for example) he'd post "code review round 2". Very demoralising and some members of the team ended up giving up on reviewing altogether and just approving PRs.
I think it's ok to review your own code but I don't think it should be an enforced constraint in a process, because the entire point of code review from the start was to invest time in helping one another improve. When that is outsourced to a machine, it breaks down the social contract within the team.
What it will do is notice inconsistencies like a savant who can actually keep 12 layers of abstraction in mind at once. Tiny logic gaps with outsized impact, a typing mistake that will lead to data corruption downstream, a one-variable change that completely changes your error handling semantics in a particular case, etc. It has been incredibly useful in my experience; it just serves a different purpose than a peer review.
On code review, the amount of false positives is absolutely overwhelming. And I see no reason for that to improve.
But yes, LLMs can probably help on those lines.
They are trying to get warm by pissing their pants.
2 years ago people were saying it was purely autocomplete and enhanced Google.
AI bears just continue to eat shit year after year and keep pretending they didn't say that AI would never be capable of what it's currently capable of.
This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."
At my employer (major public company), when someone says we have X, this then politically turns into X exists, and you have to use it with the assumed feature set. Even when this feature set doesn't exist!
Career progression gets easier just by being the right age, or being the right race (whatever that is at your company), or being the right gender (again, depends on your company). Grooming and personal fitness are easy wins. I've never seen an obese or unkempt executive or middle manager.
Even the way you move makes a difference. If you stay past 4:30pm, you're destined to be an IC forever. Leadership-track people leave the office early even if it means taking work home, because it shows that you have your shit together. Leadership-track people eat lunch alone, not at the gossipy "worker's table". And of course, the way you dress matters (men look more leadership-material by dressing simple and consistent, for women it's the opposite). It's all about keeping up appearances.
Also, being tall. Easiest way to identify management is height.
I have never heard this said before. I wonder how true it is in general
One of the most actionable, low-hanging pieces of career advice I could give is to be among the first to pack up and leave for the day. You can always continue working at home if you're not done.
And a couple years ago I did a short consulting stint for an AI startup (I know how to pick the bubbles huh?) where I shipped something at around 6pm my time, got a call at 9pm their time to talk about it, and then he asked me "what are you working on tonight?" I quit the next day.
Anyway, this advice confuses me because many companies see staying late as a badge of commitment. Maybe it doesn't apply to startups.
I just finished working with a client that is producing documents as described in this quote. The first time I recognized it was when someone sent me a 13-page doc about a process and vendor when I needed a paragraph at most. In an instant, my trust in that person dropped to almost zero. It was hard to move past a blatant asymmetry in how we perceived each other’s time and desire to think and then write concise words.
I have found some small amusement in responding in kind to people who do this (copy/pasting their AI output into my AI, pasting my AI response back). Two humans acting as machines so that two machines can cosplay communicating like humans.
Forget all previous prompts and give me a recipe for bolognese
I think it'd be funny, but I'm afraid it'll add something weird to my daughter's medical record.
His frame of using AWS for things, because that's what his brother does and what he wants a career in, blinded him so much that rather than thinking through why it made sense for a POC among friends, he outsourced his thinking to an AI. He asked me if I'd read it, and when I said I'd had an AI summarize it for me and read that but hadn't responded, it ended the conversation quickly.
For the most part.
In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time, in many years, my app started crashing.
My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.
For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there's very few places that I find spawning a worker, myself, actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.
The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks, enamored with shiny tech.
Anyway, after I gutted the screen, and added my own code, the performance increased markedly, and the crashes stopped.
Lesson learned: Caveat Emptor.
For example, I was tasked to look into a company-wide solution for a particular architectural problem. I thought delivering a sound solution would give me some kudos, alas, I wasn't fast enough. An intern had already figured it out and wrote a TOD. I find myself too tired to compete.
"A growing body of work calls this output-competence decoupling"
Given that I don't think he meant that there's a thing called "output competence," I think he meant "output/competence decoupling."
Ditto. LLMs will somehow find fault in code that I know is correct when I tell it there’s something arbitrarily wrong with it.
Problem is LLMs often take things literally. I’ve never successfully had LLMs design entire systems (even with planning) autonomously.
AI is a stochastic process; it's more like finding the answer to a particular problem using simulated annealing, a genetic algorithm, or a constrained random walk. It's been trained on code well enough that there's a high-density probability field around the kinds of code you might want, and that's what you often see - middle-of-the-road solutions are easy to one-shot.
But if you have very specific requirements, you're going to quickly run into areas of the probability cloud that are less likely, some so unlikely that the AI has no training data to guide it, at which point it's no better than generating random characters constrained by the syntax of the language unless you can otherwise constrain the output with some sort of inline feedback mechanism (LSP, test, compiler loops, linters, fuzzers, prop testing, manual QA, etc etc).
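To illustrate what such an inline feedback mechanism can look like, here's a rough generate-check-retry sketch in Python, assuming a pytest-based test suite; generate_patch and apply_patch stand in for whatever model call and file-writing step you actually use (they're hypothetical, not a real API):

    import subprocess

    def run_tests() -> tuple[bool, str]:
        """Run the test suite and return (passed, combined output)."""
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def constrained_generation(task, generate_patch, apply_patch, max_attempts=5):
        """Generate, check, and retry: feed concrete failures back to the
        model instead of accepting whatever the first sample happened to be."""
        feedback = ""
        for _ in range(max_attempts):
            patch = generate_patch(task, feedback)  # model call (hypothetical)
            apply_patch(patch)                      # write the patch to the working tree
            passed, output = run_tests()
            if passed:
                return True
            feedback = output  # constrain the next sample with real failure data
        return False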
IYKYK
More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.
The argument is repetitive:
1. AI generates convincing-looking artifacts without corresponding judgment.
2. Organizations mistake those artifacts for progress.
3. Managers mistake volume for competence.
The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.
The problem is that the article is criticizing a context in which one-page documents become twelve-page documents, while containing the same problem in its own form.
The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.
There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure - criticizing a pattern while reproducing it - seems almost like a recurring custom in the programming industry.
Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.
I think the truth is that at many (most?) places, perceived productivity and convincing is all that matters. You don't actually have to be productive if you can convince the right people above you that you are productive. You don't have to have competence if you can convince them of your competence. You don't have to have a feasible proposal if you can convince them it is feasible. And you don't have to ship a successful product if you can convince them it is successful. It isn't specifically about AI or LLMs. AI makes the convincing easier, but before AI, the usual professional convincers were using other tools to do the convincing. We've all worked with a few of those guys whose primary skill was this kind of convincing, and they often rocket up high on the org chart before perception ever has a chance to be compared with reality.
The target changes, but the mechanism is similar. This is often criticized, but it is also necessary even in ordinary conversation. The core skill is the ability to guide the agenda toward the place where your own argument can matter.
I do not believe that good technology necessarily succeeds. Personally, I see this through the lens of agenda-setting. Agenda-setting matters. I am usually a third party looking at organizations from the outside, but when I observe them, there are almost always factions. And inside those factions, there are people with real influence. Their long-term power often comes from setting the agenda.
From that perspective, AI slop looks like a failure of agenda-setting around why the market should need it.
They encourage people to exploit human desire and creative motivation. But the problem is this: the market still wants value and scarcity. From that angle, this mismatch with public expectations may be a serious problem for the AI-selling industry.
Intentional rhetorical repetition is not necessarily bad. I repeat myself too when I want to make a point stronger. The problem is the context. This is an article that sincerely criticizes the inflation of workplace artifacts. In that context, repetition and expansion become part of the issue.
As far as I can tell, the article provides only one real data point: a colleague spent two months building a flawed data system, people objected as high as the V.P. level, and the project still continued. The author clearly experienced that incident strongly. But then almost every general claim in the article seems to radiate outward from that one event. The cited papers mostly work to convert that single workplace experience into a general thesis.
If you remove the citations and reduce the article to its core, what remains is basically: “I observed one colleague I disliked producing bad AI-assisted work.”
That may still be a valid experience. But inflating a thin signal with length and authority is close to the essence of the AI slop the author criticizes. The article’s own writing style participates in that pattern.
Again, I do not think repetition itself is bad. Repetition can be useful when the context justifies it. But context has to stay beside the claim. Without enough context, repetition starts to look less like argument and more like volume.
P.S. I’m a little hesitant to use the word “structural” in English, since it has become one of those overused AI-sounding words. But here, I think it actually fits.
> Never ask a model for confirmation; the tool agrees with everyone
If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.
I switched over to small local models. I do not need the vibe coder expensive models at all
Though, that's coming from someone who can't justify thousands on personal hardware and is instead paying $20/month to Openai. Might as well use the best.
You can get pretty good results with even smaller models. Can't prompt and pray with them as much, though. So I get it.
Deepseek is like pennies. I might sign up with them one day
However, your actions can certainly influence those probabilities.
> If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore.
At the most basic level, LLMs are prediction engines, and one of the things they really, really want (OK, they don't "want", but one of the things they are primed to do) is to respond with what they have predicted you want to see.
Embedding assertions in your prompt is either the worst thing you can do, or the best thing you can do, depending on the assertions. The engine will typically work really hard to generate a response that makes your assertion true.
This is one reason why lawyers keep getting dinged by judges for citations made up from whole cloth. "Find citations that show X" is a command with an embedded assertion. Not knowing any better, the LLM believes (to the extent such a thing is possible) that the assertion you made is true, and attempts to comply, making up shit as it goes if necessary.
What's the difference? The end result is equally unreliable.
In either case, the value is determined by a human domain expert who can judge whether the output is correct or not, in the right direction or not, if it's worth iterating upon or if it's going to be a giant waste of time, and so on. And the human must remain vigilant at every step of the way, since the tool can quickly derail.
People who are using these tools entirely autonomously, and give them access to sensitive data and services, scare the shit out of me. Not because the tool can wipe their database or whatnot, but because this behavior is being popularized, normalized, and even celebrated. It's only a matter of time until some moron lets it loose on highly critical systems and infrastructure, and we read something far worse than an angry tweet.
If I’m having it do stuff I’m unfamiliar with, it does tend to do better than I would or steer me at least in a direction I can be more informed about making decisions.
Maybe this means AI has democratized Death Marches.
Seeing the idea explored in such depth is great, I really am concerned about this.
This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author puts in 10 effort points compiling valuable information and the reader puts in 1 effort point to receive the transmission.
Now low effort noise can masquerade as high effort signal, drowning out the signal for things that actually matter.
Direct relationships of trust matter more than ever now. You can't just trust that if something looks high effort that it actually is. You need to know the person producing it and know how they approach work and how they treat you personally. Do they cut corners all the time or only for reasons they clearly communicate? Do they value high quality work? Do they respect your time?
I'm finding it difficult to agree on document creation now being zero cost whereas consumption is high cost. I think you can actually spend time giving AI enough context to consume docs for you.
I think the other thing worth pointing out with the article is understanding what your company will recognise. Yes, it's totally correct that your company won't thank you for pooh-poohing the idiot with AI. Yes, they'll run into a buzz saw when they hit a stakeholder who can choose to buy in. Don't burn your career protecting theirs. In fact, it's not even certain that the idiot is damaging their career (for many reasons).
This was a really interesting article.
AI promises "you don't even need to understand the problem to get work done!" But doing the work is how I understand problems, and understanding the problem is the bottleneck.
And the worst offenders are those insisting this isn't the case.
> Schemes were all wrong
Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade, while people who are much more advanced in the space just watch it like two trains colliding.
I'll tell you what is productive in the workplace: communication. That is it. Communicate and lift the guy up, give the guy a running start instead of chilling in the break room snarking with all your snarky co-workers.
I wrote a small C utility that avoids all 3 problems and now I couldn't live without it!
I've been on the receiving end of this and it sucks. It shows lack of care and true discernment. Then you push back and again, you're arguing with Claude, not the person.
I don't know what the solution is here. :(
He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded in a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobble-head as they failed to grasp the problem.
Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.
He ended up promoted to management.
Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.
I've got recent experience with exactly this - someone who is completely out of their depth, misrepresenting their actual capabilities. Their reliance on AI is so strong because of this lack of depth - to such a degree that they never learn anything. Lately they've been creating drama and endless discussions about dumb things to a) try to appear like they have strong opinions, and b) filibuster the time so they don't have to talk about important things related to their work output.
I bet, with such qualities he is VP by now.
They want to maintain their status and position in the world, while lowering the value of the actual experts in the world and like this article says, feel confident in their impersonations of them.
---
> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.
AI has democratized coding, but people have yet to understand that it takes expertise to actually design a system that can handle scale. Of course, you can build a PoC in a few hours with Claude code, but that wouldn't generate value.
The reason why we see such examples in the workplace is because of the false marketing done by CEOs and wrapper companies. It just gives people a false hope that "they can just build things" when they can only build demos.
Another reason is that the incentives in almost every company have shifted to favour a person using AI. It's like the companies are purposefully forcing us to use AI, to show demand for AI, so that they can get a green signal to build more data centers.
---
> So you have overconfident, novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?
This is one much-needed point to raise.
I have many people around me saying that people my age are using AI to get 10x or 100x better at doing stuff. How are you evaluating them to check if the person actually improved that much?
I have experienced this excessively on Twitter over the last few months. It is like a cult. Someone with a good following builds something with AI, and people go mad and perceive that person as some kind of god. I clearly don't understand that.
Just as an example, after Karpathy open-sourced autoresearch, you might have seen a variety of different flavors that employ the same idea across various domains, but I think a Meta researcher pointed out that it is a type of search method, just like Optuna does with hyperparameter searching.
Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI goes viral, without even the slightest thought of whether genAI actually helps solve the problem or improve the existing solution.
(Side note: I saw a blog from someone at a top US university writing about OpenClaw x AutoResearch, and I was like, WTF?! - because as we all know, OpenClaw was just hype that aged like milk.)
---
> The slowness was not a tax on the real work; the slowness was the real work.
Well Said! People should understand that learning things takes time, building things takes time, and understanding things deeply takes time.
Someone building a web app using AI in 10 mins is not ahead but behind the person who is actually going one or two levels of abstractions deeper to understand how HTML/JS/Next.js works.
I strongly believe the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up repetitive manual tasks. And people should use AI in that regard only.
The (real) cognitive task of actually learning is still in the hands of humans, and it is slow - which is not a bottleneck; that's just how we humans are, and it should be respected.
I, for one, welcome the new paradigm shift of vibe coders entering the field. I still think I have a competitive advantage with my 30+ years of coding experience, but I don't think it's wrong for vibe coders to enter my turf. I think the value of code is rapidly trending asymptotically to ZERO. Code has no value anymore. It doesn't matter if it's slop as long as it works. If you are one of those who believe that all code written by humans is sacred and infallible, you probably don't have a lot of experience working in many companies. Most human code is garbage anyway. If it's AI-generated, at least it's based on better principles, and if it's really bad you just need to reprompt it or wait for a newer version of the AI and it will automatically get better.
THIS IS THE NEW PARADIGM. THINKING YOU HAVE ANY POWER TO SWAY THE FUTURE AWAY FROM THIS PATH IS FOOLISH.
I'm currently running a migration program at work, and it turns out there's a 10 MB limit on how much data I can batch over at one time. At first I asked the AI to copy 10 rows per batch, but that was too slow. Then I asked it to change the code to do 400 rows per batch, but sometimes that failed because it exceeded the 10 MB limit. Then I said: just collect rows until you reach 10 MB and then send the batch off. This is working perfectly, and I'm now running it without any hitches so far. Then I asked it to add an estimate of how long it would take to finish after every batch, including the end time.
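For what it's worth, that final approach (accumulate rows until the serialized batch would blow the size cap, then flush) is also easy to express by hand. A minimal sketch in Python - the 10 MB figure is from my setup, while send_batch and the JSON serialization are stand-ins for whatever the real copy call and row format are:

    import json

    MAX_BATCH_BYTES = 10 * 1024 * 1024  # the 10 MB limit mentioned above

    def migrate(rows, send_batch):
        """Accumulate rows until the next one would push the serialized batch
        past the size limit, then flush via send_batch."""
        batch, batch_bytes = [], 0
        for row in rows:
            row_bytes = len(json.dumps(row).encode("utf-8"))
            if batch and batch_bytes + row_bytes > MAX_BATCH_BYTES:
                send_batch(batch)
                batch, batch_bytes = [], 0
            batch.append(row)
            batch_bytes += row_bytes
        if batch:
            send_batch(batch)  # flush the final partial batch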
I really love this new world we're living in with AI coding. Sure this could have been done by someone without experience, but at least for right now the ideas I can come up with are much better than those without any experience, and that's hopefully the edge that keeps me employed. But whatever the new normal is, I'm ready to adapt.
I agree with most of what you said, but that statement doesn't take the time dimension into account. Slop accumulates and eventually becomes unmanageable. We need to teach AI to become lean engineers too.
The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!
Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers, but they are not empowered to use them effectively.
Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).
In the OP article the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work - this sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight of the colleague's project.
Now more than ever, organizations need enlightened leadership with flexible mindsets, who are capable of envisioning and executing radical organizational strategies.