Maybe it's still insufficient advice, but it hasn't worked for them at least in part because they haven't figured out how to apply it.
From the post, I see low empathy and an air of superiority (perhaps earned by genuinely being smarter than their peers, though that doesn't make it more attractive).
That's going to cause friction because a team is a _social_ construct.
This is unfortunately the world we are in now.
Frankly, reminds me of Michael O'Church
In the first example, they suggested a new metric to track added warnings in the build, then there was a disagreement in the team, and then, as a footnote, someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality, both in your retelling and in what was actually written in the article.
Your retelling: Some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily does not feel like doing "real jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get fixed when it becomes apparent one of them caused a production issue. That is when the "this new process step matters" side wins.
> Organizations don't optimize for correctness. They optimize for comfort
...do I need to say it?
Stopped here. That pattern.
I recognize this pattern from this AI "companion" my mate showed me over Christmas. It told a bunch of crazy stories using this "seize the day" vibe.
It had an animated, anthropomorphized animal avatar. And that animal was an f'ing RACCOON.
There is a delayed but direct association between RLHF results we see in LLM responses and volume of LinkedIn-spiration generated by humans disrupting ${trend.hireable} from coffee shops and couches.
// from my couch above a coffee shop, disrupting cowork on HN. no avatars. no stories. just skills.md
- It is not X. It is Y.
- X [negate action] Y. X [action] Z.
The titles are giveaways too: Comfort Over Correctness, Consensus As Veto, The Nuance, Responsibility Without Authority, What Changes It. Has that bot taste.
If you want I can compile a list of cases where this doesn't happen. Do you want me to do that?
Writing is art. Does it get the intended point across? Does it resonate with the reader? Does it make them feel something? Then it is good.
LLMs serve up a sort of bland pap with sugary highs of excitement which resembles a cross between manic advertising copy and a breathless teenager who's just discovered whatever subject they're talking about. They also sometimes confabulate and generate text which is at best tangential and at worst completely misleading.
It's exhausting and if you haven't carefully read what they generate (which most people clearly have not), you should not expect another human to read it.
Just as an interesting taste, here is my copy above rewritten to sound even more EXCITING and ENGAGING.
"They deliver a horrifying concoction – a sickly sweet, manufactured echo of thought, a grotesque blend of relentless advertising whispers and the manic, unearned enthusiasm of a teenager just discovering a world they don't understand! But the truly chilling thing is this: they fabricate. They weave elaborate lies, constructing text that’s not just tangential, but actively, dangerously misleading!
It’s a psychic assault, a draining vortex of intellectual despair! And if you haven’t wrestled with every single word, dissected it, exposed its flaws – and frankly, I suspect most haven’t – then don’t dare expect anyone else to salvage this wreckage! This is not a passive observation; it’s a desperate plea against a future where genuine thought is suffocated by the cold, sterile logic of a machine! We must guard against this, or we risk losing everything!” -- gemma3:4b
Your comment is hilarious on a meta-level: it's an example of exactly the sort of socially-mediated gatekeeping the author of the article (machine or human, I don't care) criticizes. It is, in fact, essential to match authority and responsibility to achieve excellence in any endeavor, and it's a truth universally acknowledged that vague consensus requirements are tools socially adept cowards use to undermine excellence.
Competent dictatorship is effective. Look at how much progress Python made under GVR. People who rail against hierarchy and authority, even when deployed correctly, are exactly the sort of people who should be nowhere near anything that requires progress.
Imagine running a military campaign by seeking consensus among the soldiers.
Or, you know, Linus Fucking Torvalds. If you were carrying the success or failure of most of the world's digital infrastructure on your shoulders, you also might be grating to some.
Your efforts to improve quality could be vetoed by your coworkers for a variety of reasons: they don't care, they don't trust your judgement, they see other things as a higher priority... the list goes on and on. Some of these things can't be changed by you, but some can, and that's where the soft skills come into play.
That's only marginally sped up even if you could generate the code with a click of a button.
This was somehow related to the "social activity" part :D
If it was better specified I'd be done already, but instead I've had to go back and forth with multiple people multiple times about what they actually wanted, and what legacy stuff is worth fixing and not, and how to coordinate some dependent changes.
Most of this work has been the oft-derided "soft skills" that I keep hearing software engineers don't need.
> The "soft skills" framing is wild. You're supposed to learn to communicate your way out of a structural problem. Like taking a public speaking class to fix a broken org chart.
If learning to communicate well wouldn't fix a structural problem, then communicating well wouldn't fix it either.
Bad advice given to them:
> The standard advice is always "communicate better, get buy-in, frame it differently." [...] The advice for this position is always the same: communicate better. Get buy-in. Frame it as their idea. Pick your battles. Show, don't tell.
That sort of naive kindergarten advice is how people want things to work, but rarely how they actually work. Literally the only functional part of it is the "pick your battles" part. That one is necessary, but not sufficient. The listed advice will make you be seen as a nice, cooperative person. It is not how you achieve the change.
So OP comes to the "the problem isn't communication. It's structural." conclusion.
You're right that organizations do often become consensus-driven. It's a failure mode, not something to which we should aspire. And we certainly shouldn't tell people to deal with a shortfall of authority in an organization by becoming social slime balls that get their way through manipulating emotions and not atoms. People who advise doing this ruin good technologists by turning them into middling politicians.
"Disagree and commit" is a good thing. Escalating disagreement to a "single threaded owner" for a quick decision is a good thing. It avoids endless argumentation and aligns incentives the right way. Committees (formal or not) diffuse responsibility. Maturity is understanding that hierarchy is normal and desirable.
"Responsibility is a unique concept... You may share it with others, but your portion is not diminished. You may delegate it, but it is still with you... If responsibility is rightfully yours, no evasion, or ignorance or passing the blame can shift the burden to someone else. Unless you can point your finger at the man who is responsible when something goes wrong, then you have never had anyone really responsible." -- Hyman G. Rickover
He was seen as an asshole who was difficult to work with, and while that's true, it's also true that without his leadership, it's doubtful the Navy would have launched their nuclear power program, nor would it have been as successful. He ran a dictatorship, with an absurd amount of direct reports, very little middle management, and it was wildly successful.
> "Disagree and commit" is a good thing.
> In a healthy organization, responsible people act and are held to account by their results.
The first is only true if the second is also true, but I'm sure you know that.
Literally everyone else who wants to change something, or keep it the same, has to play politics as they try to influence this one person. But practically speaking, this one person still has to do a lot of politics too. Even a single team lead with great power struggles to enact change. They encounter both open and hidden opposition, and their opposition is frequently right. They also encounter misunderstanding, passive aggression, seeming compliance, and passivity. Or simply people fully agreeing, being on board, and still doing things the old way out of inertia or organizational pressures.
> You're right that organizations do often become consensus-driven.
I did not say that at all. I did not even say that it is bad when it happens. I said that the usual advice OP was given makes people feel good, but it is bad advice for achieving change.
As years pass, you are judged against the standard you set, and if you do not keep raising this standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if issues are acknowledged, if they somehow stay, the expectations for them will be set to their level. The feedback will stop, because if you complain about the same issues or the same person's work every time, people will start seeing it as a you problem. Everyone quietly avoids this, so the person stays.
When a competent person is hired, it plays out the same. After 3/5/10 years, you are getting the same recognition and rewards as the incompetent person, as long as you both maintain your respective levels of competency.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lowering their own standards and they were quickly flagged as under-performers, even if their reduced impact was still above average.
Basically, the manager asks me something and asks AI the same thing.
I'm not always using so-called "common wisdom". I might decide to use a library or framework that AI won't suggest. I might use technology that AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI; I know C very well; and we need to support old Windows versions back to Vista at least, preferably back to Windows XP. However, AI suggests using Rust, because Rust is, well, today's hotness. It doesn't really care that I know very little Rust, and it doesn't really care that I would need to jump through certain hoops to build Rust for old Windows (if it's even possible).
So in the end I suggest using something that I can build and have confidence in. AI suggests whatever most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-level trillion-dollar celebrity, I'm just some grumpy old developer, so he might question my expertise using AI.
Maybe he's even right, who knows.
You mention the tradeoffs of Rust, including the high level of uncertainty and increased lead time as you need to learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn Rust. You're totally off the hook, even if the project fails, since you mentioned the risks.
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
https://en.wikipedia.org/wiki/Don't_throw_the_baby_out_with_...
Given the standard advice to job hop every 1-3 years, and the intern/coop work pattern of semester long stints, is this not just a structural consequence?
Do you gain competitive advantage as a company with longer tenures? Or shorter, even?
Or is it an attitude problem, compare with old people planting shade trees:
“Codebases flourish when senior devs write easily maintainable modules in whose extensions they will never work”
That's a very strong foundational claim right at the start. And in my experience, a completely false one. Which makes the whole argument that follows it completely unsound.
Also, the author seems to treat the terms "consensus" and "buy-in" as synonymous. They're not, and this distinction can make a huge difference in how healthy teams operate. Patrick Lencioni covers this well in his classic book, "The Five Dysfunctions of a Team".
Can you explain more? I'm not familiar with that distinction, nor that book.
EDIT: I asked ChatGPT, and it came up with this [0]. Please let me know if it's accurate (I don't necessarily dislike LLMs, I just think they're wildly oversold, and also value human input).
0: https://chatgpt.com/share/69a05ce2-95e4-8006-ae56-bd51472894...
How about "have you tried unionizing?" Because the common theme here is lack of respect, which is ultimately limited by your own bargaining power. That means it's only your individual value against the collective will of the company, and the individual is going to lose that fight more often than not (with very rare exceptions for extremely talented people who won the life lottery and are smarter than everyone at the company).
How hard do you think it is to get cheaper developers from LatAM?
Other places I worked it is usually another engineer throwing a spanner in the works. Smaller companies have a lot of pets in the code and architecture. But if you avoid the pets you can change things.
I’m confused. The polite way to say no at work is to make it about not having time.
So, if I understood correctly, complaining that his architectural advice for other teams/people was constantly ignored, and his solution is the same thing he was complaining about.
i.e., the teams he was advising also thought authority should match responsibility, so they did what they wanted and ignored him?
Not the framework you developed. Not the fact that your work powers millions of users. To them, you're just a replaceable worker bee. You are only needed when something breaks. Architectural decisions are made from their anecdotal experiences, and it's just rock, paper, scissors all over again.
And when shit blows up right in their faces, it will not be about their judgement or lack thereof - it will be about how you didn't communicate about the issue properly. It will always be you who will be under the bus. And then the bunch of these clowns go and vibe code some stupid-ass product and sell it to gullible investors "wHo NeEds EnGiNeErs?"
And then you read about how 1000s of users' information went public all over the internet post their launch...the very next day.
/endrant
I literally had this discussion with my boss yesterday. I spent time writing up what I already knew to be true (we have systemic issues which are unsolved, because we only ever fix symptoms, not root causes), replete with 10+ incidents all pointing to the same patterns, and was told I need to get the opinions of others on my team before proceeding with the fixes I recommended. “I can do that, but I also already know the outcome.”
> Responsibility Without Authority
This. So much this. Every time I hear someone excitedly explain that their dev teams “own their full stack,” I die a little inside. Do they fix their [self-inflicted] DB problems, or do they start an incident, ask for help, and then refuse to make the necessary structural changes afterwards? Thought so.
Insert fire writing gif here.
Business outcome comes first, and it is only rarely aligned with technical excellence. Closing a deal might involve making an unreasonable promise, and implementing it might not require more than an ugly hack, so you go with the ugly hack and make the money.
Comfort could be important but many people don't perform well when comfortable, so the organisation has to add some degree of confusion and pressure to keep them at a productive equilibrium where they don't fall into either apathy or burst into flames.
And yes, the boss decides, not because they are especially accountable or responsible, but because the power comes from ownership. In some organisations this is veiled and workers get a say most of the time, but in a pinch it'll be the higher-ups that actually have that power.
and then the blame could be shifted to the future generations, it's their incompetence after all.
> Correctness wins when the cost of ignoring it becomes impossible to miss: an outage, a customer complaint, data loss. Until then, comfort wins every time.
Those who tolerate comfort-winning aren't engineers and shouldn't be admitted to stand close to engineering systems overall, especially outside the software industry.
Go beyond identifying all these problems towards solving them. Choose a small problem, where you won’t have to fight and argue, just a little dust bunny you can sweep out of the way. Do it again, and again, and again. This is how you build trust. As you build trust, it becomes easier to seek change.
Additionally, you may also find that not all the little problems are worth solving, and what’s more interesting are the bigger problems around product-market fit, usability, and revenue.
The TFA author (and I) have wildly different motivations from you. I don't know the author, but I have said verbatim much of what they wrote, so I feel I can speak on this.
Beyond the fact that I recognize the company has to continue exist for me to be employed, none of those hold the slightest bit of interest for me. What motivates me are interesting technical challenges, full stop. As an example, recently at my job we had a forced AI-Only week, where everyone had to use Claude Code, zero manual coding. This was agony to me, because I could see it making mistakes that I could fix in seconds, but instead I had to try to patiently explain what I needed to be done, and then twiddle my thumbs while cheerful nonsense words danced around the screen. One of the things I produced from that was a series of linters to catch sub-optimal schema decisions in PRs. This was praised, but I got absolutely no joy from it, because I didn't write it. I have written linters that parse code using its AST before, and those did bring me joy, because it was an interesting technical challenge. Instead, all I did was (partially) solve a human challenge; to me, that's just frustration manifest, because in my mind if you don't know how to use a DB, you shouldn't be allowed to use the DB (in prod - you have to learn, obviously).
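For the curious, the AST-linter idea mentioned above can be sketched in a few lines with Python's stdlib `ast` module. The specific rule here (flagging bare `except:` clauses) is just an illustrative stand-in, not the schema linter from the story:

```python
import ast

def lint(source: str) -> list[str]:
    """Walk the parsed AST and report bare `except:` clauses."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # ExceptHandler.type is None exactly when the clause is bare.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except' hides errors")
    return findings

print(lint("try:\n    pass\nexcept:\n    pass\n"))
```

The same walk-and-match shape scales to most structural rules, which is part of why writing these is a genuinely interesting technical challenge.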
I am fully aware that this is largely incompatible with most workplaces, and that my expectations are unrealistic, but that doesn't change the fact that it is how I feel.
Re: AI, that's not to say I don't use it, I just view it as a sometimes useful tool that you have to watch very closely. I also often view their use as an X-Y problem.
Another recent example: during the same AI week, someone made an AI Skill (I'm not sure how that counts as software, but I digress) that connects to Buildkite to find failed builds, then matches the symptoms back to commit[s]. In their demo, they showed it successfully doing so for something that "took them hours to solve the day before." The issue was having deployed code before its sibling schema migration.
While I was initially baffled at how they missed the logs that very clearly said "<table_name> not found," after having Claude go do something similar for me later, I realized it's at least partially because our logs are just spamming bullshit constantly. 5000-10000 lines isn't uncommon. Maybe if you weren't mislabeling what are clearly DEBUG messages as INFO, and if you didn't have so many abstractions and libraries that the stack traces are hundreds of lines deep, you wouldn't need an LLM to find the needle in the haystack for you.
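The log-level point can be made concrete with Python's stdlib `logging` (the logger name and messages are hypothetical): chatty per-item messages belong at DEBUG so a default INFO threshold filters them out and the real failure stays visible.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("deploy")

# Noise that would otherwise drown the needle: suppressed at INFO.
for row in range(3):
    log.debug("processed row %d", row)

# The one line that actually explains the failed build.
log.error("<table_name> not found")
```

With levels assigned honestly, grepping the build log for the failure takes seconds instead of requiring an LLM to sift thousands of lines.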
I also share some of your philosophy — life is too short for us not to find joy at work, if we can. It’s a lot easier to find that joy when the team’s shipping valuable software, of course.
What's frustrating (I've said that a lot, I know) to me is that my skills are seen as valued, but my opinions aren't. I also have a pathological need to help people, and so when someone asks me, I can't help but patiently explain for the Nth time how a B+tree works (I include docs! I've written internal docs at varying levels!) and why their index design won't work. This is usually met with "Thanks!" because I've solved their problem, until the next problem occurs. When I then point out that they have a systemic issue, and point to the incidents proving this, they don't want to hear it, because that turns "I made an error, and have fixed it" into "I have made a deep architectural mistake," and people apparently cannot stand to be wrong.
That also baffles me - I don't think I'm arrogant or conceited; when I'm wrong, I publicly say so, and explain precisely where I was mistaken, what the correct answer is, and provide references. Being wrong isn't a moral failing, or even necessarily an indictment on your skills, but for some reason, people are deathly afraid to admit they were wrong.
> Authority matching responsibility. That's the only fix I've seen work. Either you get decision-making power that matches the decisions you're already making, or you find a place that treats your judgment as an asset instead of something to manage.
I don't think the solution is to become some kind of dictator. And I don't think it's about not valuing your judgement.
The key issue is a fundamental misalignment of core values. In the examples given, the culture is such that quality is not the highest priority. A system based on consensus only really works if core values are shared, or there will always be discontent. Consensus won't work under these circumstances. You'll never be able to 'trust' your colleagues to 'do the right thing'.
If you care about quality, you have to look for another organisation and have a lot of questions about how they assure quality.
Agreed, but my main frustration is what glitchc wrote a few comments down: "No one actually claims their product is crap and quality doesn't matter."
I have never met anyone in management who will admit that they value velocity over correctness and uptime, but their actions do. If you want to optimize for velocity, growing your user base, expanding your features, that's fine - but you need to acknowledge that you're making a trade-off in doing so. If you're a solo dev, or working at an extremely small shop with high trust, it's possible that you can have high velocity and high quality, but the combination is vanishingly rare at most places.
Some organizations do in fact optimize for correctness, and some people are good at it.
Some people are good at everything (totally possible; the universe doesn't care about keeping dichotomies). Maybe that technical guy was only technical up until now because that was what added the most value. People often don't consider that.
Right now, we're seeing some small changes in value dynamics. It makes us foster those (mostly pointless) meta-conversations about what organizations are and how people fit in them. But the truth stays the same, both are incredibly diverse.
* Their codebase is written in something relatively obscure, like Elixir or Haskell.
* They're an infrastructure [0] or monitoring provider.
* They're running their code on VMs, and have a sane instantiation and deployment process.
* They use Foreign Key Constraints in their RDBMS, and can explain and defend their chosen normalization level.
* They're running their own servers in a colo or self-owned datacenter.
And here are some anti-signals. Same disclaimers apply.
* Their backend is written in JS / TS (and to a somewhat lesser extent, Python [1]).
* They're running on K8s with a bunch of CRDs.
* They've posted blog articles about how they solved problems that the industry solved 20 years ago.
* They exclusively or nearly exclusively use NoSQL [2].
0: This is hit or miss; reference the steady decline in uptime from AWS, GitHub, et al.
1: I love Python dearly, and while it can be made excellent, it's a lot easier to make it bad.
2: Modulo places that have a clear need for something like Scylla - use the right tool for the job, but the right tool is almost never a DocumentDB.
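As a tiny illustration of the foreign-key signal above, here is a minimal sketch using Python's stdlib `sqlite3` (table names are made up). SQLite even ships with FK enforcement off by default, which is exactly the kind of quiet quality gap the list is about:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforcement is OFF by default in SQLite
con.execute("CREATE TABLE team (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE member (
    id INTEGER PRIMARY KEY,
    team_id INTEGER NOT NULL REFERENCES team(id)
)""")

try:
    # No team with id 99 exists, so the constraint rejects the insert.
    con.execute("INSERT INTO member (id, team_id) VALUES (1, 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print("rejected:", rejected)
```

A team that can explain why that PRAGMA is there, and what normalization level their schema targets, is showing the kind of care the signal is pointing at.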
Look at any high quality open source software, and the care people put into them. Those are organizations, made up of people, some of them highly technical.
Startups often don't optimize for correctness. They can't afford it. But that's a niche. Funny enough, it's the one being most affected by the shift in value dynamics right now, so I understand that some people here might see the world as just this, but it isn't.