Processes, tools, and diligence seem the most apparent path. Perhaps we'll rehash the 50-year-old debate over professionalization while AI vibe coding is barking at the door, because what could possibly go wrong with even less experience doing the same thing and expecting a different result?
If you want to do that on your own time, that's fine - but the purpose of a job is economic. Of course you should write software of some reasonable quality, but optimizations have diminishing economic returns. Eventually, the returns are lower than the cost (in time, money, etc) of further optimizing, and this break-even point is usually at a lower level of quality than engineers would like it to be. Leadership and engineering managers know this and behave accordingly.
One can be skeptical of the implied claim that leadership/management knows what it is doing beyond delivering at the (arbitrarily) set time. One definition of quality is satisfying a need entirely, at the lowest cost, in the shortest time, but more often than not the last term gets 90% of the attention.
Do they? I’ve been fighting against the tide for years until I understood that all of this quality-this and quality-that doesn’t matter. Sure, it sucks to be on the receiving end of buggy software, but that’s where you vote with your money. At work? Finish the task with the least amount of resources and move on.
I've been doing this for decades, and it's never a problem. Either velocity tanks, in which case there's a short period where the company invests in improving it, or people leave.
It’s just really hard to overstate how much damage a bunch of crappy code can do, even with the best of intentions. I must say I strongly disagree that this is “never a problem”.
But I'm curious about how one prevents this dysfunctional culture.
At a big company “you” don’t lose anything. You only lose if you’re a fool trying to fix dysfunctional culture when you’re not even close to C level.
which is now claude code...
I think it’s like switching CEOs when the company goes out of startup mode into grownup company. What got you here won’t keep you here.
If you're working at a company who disregards safety and security good luck getting them to care about clean code and efficiency.
Making a bug fix in Jira or on a web page, there's less to lose.
The whole ballgame is making sure you have no low quality people on your team.
The quality of your team is more-or-less a pre-existing background variable. The question is whether a team of comparable quality takes longer to produce quality software than hacked-together software, and the answer appears to be "yes". The only way out of this is if optimizing more for code quality *actually helps you recruit better engineers*.
I can put a little data to that question, at least. I run a recruiting company that does interviews, so we have data both on preferences and on apparent skill level.
I went and split our data set by whether an engineer indicated that emphasis on code quality was important to them. Our data suggests that you can probably make slightly better hires (in a raw technical-ability sense) by appealing to that candidate pool:
- Candidates who emphasized quality were slightly (non-significantly) more likely to pass our interview and,
- Candidates who emphasized quality were slightly (non-significantly) less likely to have gotten a job already
The effect is pretty small, though, and I doubt it outweighs the additional cost.
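For the curious, the usual check behind a claim like "slightly (non-significantly) more likely to pass" is a two-proportion z-test. A minimal sketch with made-up counts (the actual recruiting data isn't shown here):

```python
import math

# Hypothetical counts, NOT the actual data from the comment above:
# interview passes for candidates who did / didn't emphasize quality.
quality_pass, quality_total = 62, 100
other_pass, other_total = 55, 100

p1 = quality_pass / quality_total
p2 = other_pass / other_total
p_pool = (quality_pass + other_pass) / (quality_total + other_total)

# Two-proportion z-test: standard error under the pooled null hypothesis.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / quality_total + 1 / other_total))
z = (p1 - p2) / se

# Two-sided p-value from the normal CDF (math.erf is in the stdlib).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

With these invented numbers, a 62% vs 55% split comes out around p ≈ 0.32: a visible difference in the raw rates that is nowhere near significance at samples of this size, which is exactly the "small effect, probably not worth the cost" situation described.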
Key word is ‘can’. And it takes far more time and money to assemble a “quality” team.
I've watched many businesses appreciate the benefits of software quality (happy customers, few incidents, fast feature turnaround) without ascribing it to anything in particular.
Then, when it went away, they chalked up the problems to something else, throwing fixes at it which didn't work.
At no point in time did they accurately perceive what they had or what they lost, even at the point of bankruptcy.
Part of the problem is that the absence of bugs, incidents and delays just feels normal and part of the problem is most people are really bad at detecting second order effects and applying second order fixes. E.g. they think "another developer will fix it" or "devs just need to dedicate more time to manual QA".
Conversely, because it's so hard to see I think it can make a really good competitive moat.
I’ve converted people by building better systems than they’ve seen before. Some balk, but better than half end up getting it and pitching in.
Ouch. It seems that when a manager sinks a team's velocity by adding a bad developer, the reaction that follows is always to add more bad developers until the velocity recovers.
And then when they can't herd all the bad developers around, the obvious next step is to finish destroying everything again by imposing some strict process.
If that were always the case we could bask in the joy that the problem sorted itself out, but alas, there's a lot of crap that keeps on going.
This traditionally happens not only with software companies, but with other kinds of companies too.
Some people are just not quality people.
At this conference there's a presentation encouraging "You should finish your software."
If that's all people did that would be 10x better right there.
I don't think we'll reach this promised land™ until incentives re-align. Treating software as an assembly line was obviously The Wrong Thing judging by the results. The problem is: how can we ever move to a model that rewards quality, perhaps similar to (book) authors and royalties?
Owner-operator SaaS is about as close as you can get but limits you to web and web-adjacent.
Get a couple of shredded guys and gals to show off how fit they are so everyone feels guilty about snacking past 8PM.
Sell another batch of “how to do pushups” followed by “how to do pushups vol.2” with “pushup pro this time even better”.
In the end, normal people are not getting paid for getting shredded; they get paid for doing their stuff.
I just constantly feel like I am not a proper dev because I mostly skip unit tests - but on the other hand, over the last 15 years I built a couple of systems that worked and brought in value.
(The answer btw: Because nobody would be able to explain to a jury/judge that 80% or whatever is enough)
Everybody who worked with the 2005 Toyota Camry ETCS would have known what was up when it killed a few people, for example. Nobody can work on spaghetti code of that magnitude and not realize that something is off.
Boeing employees who tried to blow the whistle were similarly ignored or silenced while a few died in mysterious circumstances.
Obviously, this assumes you write enterprise grade code. YMMV
But still cottage industry of "clean code" is pushing me into self doubts and shame.
I’m not saying that you yourself have this attitude - but the “tests are for suckers, I just ship” crowd really grinds my gears because to me it says “ha! Why do you care about getting things right?”
Totally get where you’re coming from though, sometimes the expected behavior is trivial to verify or (in the case of GUIs) can be very difficult and not worth the effort to implement.
You just contribute to BS scare tactics of people selling “clean code”.
However, you should want to build quality software because building quality things is fulfilling. Unfortunately certain systems have made the worship of money the end all be all of human experience.
The QE engineers and the development engineers were in entirely separate branches of the org chart. They had different incentive structures. The interface documentation was the source of truth.
The release cadence was slow. QE had absolute authority to stop a release. QE wrote more code than development engineers did with their tests and test automation.
They did TDD for a long time, they wrote Clean Code™, they organised meetups, sponsored and went to conferences, they paid 8th Light consultants to come teach (this was actually worth it!) and sent people to Agile workshops and certificates.
At first, I was like "wow, I am in heaven".
About a year later, I noticed so much repetition and waste of time in the processes.
Code was at a point where we had a "usecase" that calls a "repository" that fetches a list of "ItemNetworkResponse" which then gets mapped into "Item" using "ItemNetworkResponseToItemMapper" and tests were written for every possible thing and path.
They had enterprise clients, were charging them nicely, paying developers nicely, and pocketed extra money due to "safety buffers" added by engineers, managers and sales people alike, basically doubling the length of any project for "safety".
The company kept to their "high dev standards" which meant spending way more time, and thus costing way more, than generic cookie-cutter agencies would cost for the same project.
This was great until every client wanted to save money.
The company shut down last year.
ThoughtWorks and companies like them do work, but they're heavily reliant on heavy-duty sales. Delivery at high quality is necessary but not sufficient.
Software development and quality assurance should be tightly integrated and should work together on ensuring a good product. Passing builds over a wall of documentation is a recipe for disasters, not good quality software.
In 2025 I think the only thing that makes sense is having SDETs embedded in development teams.
lol, fire business analysts and let tech writers do their job. Sounds like some kind of VC black company.
It seems to be socially associated with the Handmade Hero and Jon Blow Jai crowd, which is not so much concerned that their software might be buggy as that it might be lame. They're more concerned about user experience and efficiency than they are about correctness.
This is not at _all_ my interpretation of Casey and JBlow's views. How did you arrive at this conclusion?
> They're more concerned about user experience and efficiency than they are about correctness.
They're definitely very concerned about efficiency, but user experience? Are you referring to DevX? They definitely don't prize any kind of UX above correctness.
And stability is important, but not critical - and the main way they want to achieve it is that errors should be very obvious so that they can be caught easily in manual testing. So C++ style UB is not great, since you may not always catch it, but crashing on reading a null pointer is great, since you'll easily see it during testing. Also, performance concerns trump correctness - paying a performance cost to get some safety (e.g. using array bounds access enforcement) is lazy design, why would you write out of bounds accesses in the first place?
I think that's an overall good summary of the crowd's attitude. They think that mainstream programming environments err too far in the direction of keeping your software from being buggy, charging programmers a heavy cost for it. Undoubtedly for videogames they are correct.
Jai in particular does support array bounds checking, but you can turn it on or off as a compilation option: https://jai.community/t/metaprogramming-build-options/151
IMHO this group's canonical lament was expressed by Mike Acton in his "Data-Oriented Design and C++" talk, where he asks: "...Then why does it take Word 2 seconds to start up?!"[0]. See also Muratori's bug reports which seem similar[1].
I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real time systems). To listen to Blow, who is actually developing a new programming language, it seems his issue with C++ is mostly about how it slows down his development speed, that is -- C++ compilers aren't fast enough, not the "correctness" of his software [2].
Blow has framed these same performance problems as problems of software "quality", but this term seems to share the same misunderstanding as "correctness", and therefore looks to me like another equivocation.
Software quality, to me, is dependent on the domain. Blow et al. never discuss this fact. Their argument is more like: what if all programmers were like John Carmack and Michael Abrash? Instead of recognizing that software is an economic activity and that certain marginal performance gains are often left on the table, because most programmers can't be John Carmack or Michael Abrash all the time.
[0]: https://www.youtube.com/watch?v=rX0ItVEVjHc [1]: https://github.com/microsoft/terminal/issues/10362 [2]: https://www.youtube.com/watch?v=ZkdpLSXUXHY
At least for Casey, his case is less that everyone should be Carmack or Abrash and more that programmers, through poor design choices, often prematurely pessimise their code when they don't need to.
I think this is fair enough, since Casey, unlike Blow, does offer some practical advice.
The argument made there is that "software quality" in the uncle bob sense, or in your domain version, is not necessarily wrong but at the very least subjective, and should not be used to guide software development.
Instead, we can state that the software we build today does the same job it did decades ago while requiring much vaster resources, which is objectively problematic. This is a factual statement about the current state of software engineering.
The theory that follows from this is that there is a decadence in how we approach software engineering, a laziness or carelessness. This is absolutely judgemental, but it's also clearly defended, based not on gut feel but on these observations about team sizes/hardware usage vs actual product features.
Their background in videogames makes them obvious advocates for the opposite, as the gaming industry has always taken performance very seriously; it is core to the user experience and marketability of games.
In short, it is not about "oh it takes 2 seconds to startup word ergo most programmers suck and should pray to stand in the shadow of john carmack", it is about a perceived explosion in complexity both in terms of number of developers & in terms of allocated hardware, without an accompanying explosion in actual end user software complexity.
The more I think about this, the more I have come to agree with this sentiment. Even though the bravado around the arguments can sometimes feel judgemental, at its core we all understand that nobody needs 600mb of npm packages to build a webapp.
Do we want software to be more complex? Can you explain what you mean here? The explosion from my POV seems to be related to simply more software.
> at its core we all understand that nobody needs 600mb of npm packages to build a webapp.
Perhaps, but isn't this a different argument/different problem?
If the argument is that these software packages are bloat, which can be detrimental to performance (which BTW is a bank shot as you describe it here), we all understand we don't need npm at all to build a webapp. However, it might make it easier? Isn't easy really important in some domains?
Again -- software engineering is an economic activity. If Word startup speed was important then more engineering resources would be expended to solve that problem.
>> I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real time systems).
The thing is we agree that performance problems are real problems. The problem is imagining that they are the same problem for every programmer in every domain. A high speed trading firm or a game dev studio simply has different constraints than Microsoft re: Word or a web dev.
"Why does this software not behave like my (better) software?" is a good question. Unfortunately I think Blow et al. only give this question a shallow examination. Maybe one doesn't treat the engineering of a thermostat the same way one treats a creative enterprise like a game? Maybe the economic/intellectual/personal rewards are not similar?
But between the sparse website, invite-only and anonymous organizers, it just feels like it's emphasizing the reactionary vibes around the Handmade/casey/jblow sphere. Like they don't want a bunch of blue-haired antifa web developers to show up and ruin everything.
Glad to see they got Sweden's own Eskil Steenberg though. Tuning in for that at least.
https://handmade.network/blog/p/8989-separating_from_handmad...
https://handmadecities.com/news/splitting-from-handmade-netw...
There's a reason web developers, and the ecosystem/community around them, are the butt of many jokes. I don't think it's at all surprising that the injection of identity politics into the software industry has had a negative effect on quality.
If it had any effect, it would be negligible compared to offshoring and weak incentives.
That's a pretty broad claim. This conference could be in response to a perceived negative effect on quality, but claiming that as a fact seems hard to back up to me
It's a clever political tactic because a 50-year-old white male middle manager at Microsoft trying to become a board member of an open source foundation would face a lot more hostility than a 20-something girl who pushes all of the diversity buttons.
It mirrors the rather successful marketing strategies for a string of movies including the Ghostbusters remake and Barbie, among others, i.e. "There's a certain kind of person who doesn't like our latest corporate offering...". Who wants to be that person?
Yep, preemptively destroying the reputation of whoever opposes you is a common and ancient tactic of bullies from all levels, from school patios to fascist governments.
Look at the people who are pushing for politicizing software development, and you'll see they are always getting money out of the deal.
This reads like "Oh some people are meeting, so this must actually be about ME".
You write this like this is a bad thing.
I just came to a conference to learn some cool new tech, but instead got lectured about my transphobia, told that my database is systemic discrimination and that my HDD being named „slave“ means I burn crosses in my free time, even though I have zero family relations to anything American.
I mean this screams fun right from the get go.
Anyway, I’ll watch the twitch stream from across the pond.
I would expect this conf to expand on those types of concepts and strategies.
Why would they need to do that? Is that even a goal or something that this conference is addressing at all?
My question is how far does it go - are the gains going to peter out, or does it keep going or even accelerate? Seems like one of the latter two thus far.
I would guess the same way humans do.
Put brain in creative mode, bang out something that works
Put brain in rules compliance mode and tidy everything up.
Then send for code review.
I feel like this comes about because it's the optimal strategy for doing robust one-shot "point fixes", but it comes at the cost of long-term codebase health.
I have noticed this bias towards lots of duplication eventually creates a kind of "ai code soup" that you can only really "fix" or keep working on with AI from that point on.
With the right guidance and hints you can get it to refactor and generalise - and it does it well - but the default style definitely trends to "slop" in my experience so far.
All I found is a Twitch tagline that reads "Software is getting worse. We're here to make it better."
I am going to keep saying this, if your main tagline/ethos is broken by your website you have failed.
* On mobile the topics are hidden unless you scroll over them. You also can't read several of the topics without scrolling sideways as you read.
* The background is very distracting and disrupts readability.
* None of your speakers have links to their socials/what they are known for.
* > Who are the organizers? Sam, Sander and Charlie.
  * Ah yes, my favourite people... At least hyperlink their socials.
Personal Quality Coding practices have been around for as long as software has been a thing. Way back when, Watts Humphrey, Steve McConnell, and Steve Maguire wrote books on how to maximize personal Quality. Many of their techniques still hold true, today.
But as long as there are bad people managers and short-sighted execs, you'll have shit quality; regardless of who does the work.
The overwhelming majority of companies have no interest in even approaching the idea of what that would mean.
"we need 7 years experience with Mulesoft and Kubernetes."
sure, yeah, whatever.
Not even a section of where and when to find the talks offline.
I sometimes wonder if there could be an optimal number of microservices. As far as I know, no one has connected issue data to the number of microservices before. Maybe there's an optimal number, like "8", which leads to a lower number of bugs and faster resolution times.
If you ask Amazon then the more the merrier, because the number of microservices is effectively a multiplier on the bill.
Really that’s the core of it
I don't see how anyone can be "for" quality and not talk about how quality can be assessed. Where are the talks about that?
However, I would be interested in establishing a union for technologists across the nation. Drive quality from the bottom up, form local chapters, collectively bargain.
Quality is a measurement. That’s how it works in hardware land, anyway. Product defects - and, crucially, their associated cost to the company - are quantified.
Quality is not some abstract, feel good concept like “developer experience”. It’s a real, hard number of how much money the company loses to product defects.
Almost every professional software developer I’ve ever met is completely and vehemently opposed to any part of their workflow being quantified. It’s dismissed as “micromanagement” and “bean counting”.
Bruh. You can’t talk about quality with any seriousness while simultaneously refusing metrics. Those two points are antithetical to one another.
1. It is partly because the typical metrics used for software development in big corporations (e.g., test coverage, cyclomatic complexity, etc.) are such snake oil. They are constantly misused and/or misinterpreted by management, and because of that they cause developers a lot of frustration.
2. Some developers see their craft as a form of art, or at least an activity for "expressing themselves" in an almost literary way. You can laugh at this, but I think it is a very humane way of thinking. We want to feel a deeper meaning and purpose in what we do. Antirez of Redis fame has expressed something like this. [0]
3. Many of these programmers are working with games and graphics and they have a very distinct metric: FPS.
2. With respect: that’s a bit of an exceptionalist mindset. There’s nothing precious about software’s value to a business. It’s a vehicle to make money. That’s not to say craft isn’t important - it is, and it has tangible impacts to work. The point I’m making is that: my boss would laugh me out of the room if I told him “You can’t measure the quality of my electronics designs or my delivery process; it’s art.”
3. I’ve never heard of FPS but I’m very interested in learning more. Thanks for sharing the link.
Edit: oh ok duh yeah of course you could measure the frame rate of your graphics stack and get a metric for code quality. D’oh. Whoops. XD
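For what it's worth, in games the usual refinement is frame-time percentiles rather than average FPS, because a handful of slow frames reads as stutter even when the mean looks fine. A small sketch with hypothetical frame times:

```python
def percentile(samples, pct):
    # Nearest-rank percentile; fine for an illustration like this.
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1))))
    return ordered[index]

# Hypothetical capture: 95 smooth frames at 60 FPS, 5 hitches at 20 FPS.
frame_times_ms = [16.7] * 95 + [50.0] * 5

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
p99_ms = percentile(frame_times_ms, 99)

# The average still looks like ~54 FPS, but the 99th-percentile frame
# time of 50 ms (20 FPS) is what players actually feel.
```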
No it isn't, as in it literally isn't. Quantification is the process of judging something numerically, objectively and in measurement. Qualification is just the opposite, judging something by its nature, essence or kind.
Software quality, like all kinds of quality, is always a subjective and experiential feature. Just like, when someone says, this piece of furniture is a high quality, handmade chair, in all likelihood they haven't performed a numerical analysis of the properties of the chair, they're expressing a subjective, direct sentiment.
The handmade movement in software, was exactly about this, putting focus on the personal, lived judgement of experienced practitioners as opposed to trying to quantify software by some objective metric, that's why individual people feature so heavily in it.
Yes, it is. It is a well known field in hardware development, and generally treated as a sub field of manufacturing engineering. It deals with things like testing, sampling, statistics of yield, and process improvement. If you’ve ever done a DFMEA, an 8D report, a Five Whys review, a sampling quality analysis, or a process map, you’ve used tools produced by this discipline.
That’s what I’m trying to tell you and everyone else reading this.
Software, as a profession, collectively talks about quality with all of the rigor of joint passing English majors sharing their favorite sections of Zen and the Art of Motorcycle Maintenance.
Quality has a meaning and a definition and a field of study attached to it. Semiconductors and large scale consumer product manufacturing wouldn’t exist as we know it without this.
Yet people throw around the term "engineers" with reckless abandon for seemingly anyone that wrote Javascript once in their lives. It all strikes me as very silly.
Yes and I gave you that definition in the first part of my response. That someone in the semiconductor industry made a poor and colloquial choice of words when he confused qualitative and quantitative processes, (the hardware industry deals with the latter), is not evidence to the contrary.
When people talk about software, they're using the terms appropriately. We can objectively talk about the quantities attached to a piece of software - number of dependencies, size, startup time, what have you - but two people will never necessarily agree on the quality of software. Your high-quality software might be junk to me, because that is at its root a subjective judgement. There is not a lot of qualitative or subjective judgement in the world of elementary hardware (it either works or it doesn't); there is a lot of it in end-user software.
It is very difficult to make a bad piece of hardware that does very well on a number of metrics, it's very easy to make a shoddy piece of software that performs well on an infinite number of metrics, because nobody has a subjective experience with a transistor but they do with a piece of software. That is why you should use the terms correctly and not apply processes from one domain to the other.
Quality is not a "real, hard number" because such a thing would depend entirely on how you collect the data, what you count as data, and how you interpret the data. All of this is brimming with controversy, as you might know if you had read more than zero books about qualitative research, epistemology, the philosophy, history, or practice of science. I say "might" because of course, the number of books one reads is no measure of wisdom. It is one indicator of an interest to learn, though.
It would be nice if you had learned, in your years on Earth, that you can't talk about quality with any seriousness while simultaneously refusing to accept that quality is about people, relationships, and feelings. It's about risks and interpretations of risk.
Now, here is the part where I agree with you: quality is assessed, not measured. But that assessment is based on evidence, and one kind of evidence is stuff that can be usefully measured.
While there is no such thing as a "qualitometer," we should not be automatically opposed to measuring things that may help us and not hurt us.