Back in 2000 I knew every server and network switch in our office, and eventually our self-hosted server room with a SAN and a whopping 3TB of storage before I left. Now I just submit a YAML file to AWS.
Code is heading the same way. I treat Claude/Codex as junior developers: I specify the architecture carefully, verify it after it's written, and test the code the AI writes for functionality and scalability against the requirements. But I haven't looked at the actual code for the project I'm working on.
I've even had code I wrote myself a year ago, forgotten what I did, and just asked Codex questions about it.
- code I can't help modelling in my head (business-critical, novel, experimental, or introduces new patterns)

I feel like there are actually one or two more shades in between.
Sometimes I think something belongs in the second category, but then it turns out it’s really more like the first. And sometimes something is second-category, but for the sake of getting things done, it makes more sense to treat it like the first.
If vibe coding keeps evolving, this is probably the path it needs to explore. I just wonder what we’ll end up discovering along the way.
I want to say, I've lived (briefly) through the time when folks felt that if you didn't understand memory management, or even assembly-level operations, you weren't going to be able to make code great.
High-level languages, obviously, are a counter-argument demonstrating that you don't necessarily need to understand all the details to deliver a differentiated experience.
Personally, I can get pretty far with a high-level mental model plus a deeper model of the key high-throughput areas in the system. Most individuals aren't optimizing a system; they're building on top of a core innovation.
At the core you need to understand the system.
Code is one language that describes it, but there are others, and arguably, in a lot of cases, a nice visual language goes much further for our minds to operate on.
Thinking clearly is just as relevant or encumbering as it always was.
Answer: no. Just harder.
It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code any more, but still communicating with the LLM in terms that come from software development experience.
I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.
But I think there will also still be value in being able to code even in real life. If you have to fix a bug in a software product, you might be able to fix it with more precise focus than an LLM would, if you know where to look and what to do, resulting in potentially less re-testing.
Personally, I balk at the idea of taking responsibility for shipping a real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I would be so much more confident shipping something whose workings I understood.
I don't know. Maybe in time that notion will fade. As some are quick to point out, well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially well-guided by good prompts and well-tested, may well be high quality, but I still lack trust in it.
Why? You can't validate the LLM outputs properly, and you end up committing bugs and maybe even blatantly non-functional code.
My company is pressuring juniors to use LLM when coding, and I'm finding none of them fully understand the LLM outputs because they don't have enough engineering experience to find code smells, bugs, regressions, and antipatterns.
In particular, none of them have developed strong unit testing skills, and they let the LLM mock everything because they don't know any better, when they should generally only mock external API dependencies. Sometimes the LLM will even mock integration tests, which to me is rarely a good idea.
So the tests that are supposed to validate the code are completely worthless.
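To make the over-mocking point concrete, here's a minimal sketch with a hypothetical `checkout` function (the names and numbers are illustrative, not from any real codebase): mocking the logic under test produces a test that passes no matter what, while mocking only the external API boundary actually exercises the code.

```python
from unittest.mock import MagicMock

def apply_discount(price, percent):
    """Real business logic we want the tests to exercise."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def checkout(price, percent, billing_client):
    """Applies the discount, then charges via an external billing API."""
    total = apply_discount(price, percent)
    billing_client.charge(total)  # the only true external dependency
    return total

# Anti-pattern: mock everything, including the code under test.
# This "test" passes regardless of what checkout actually does,
# so it validates nothing.
mocked_checkout = MagicMock(return_value=90.0)
assert mocked_checkout(100, 10, None) == 90.0  # proves nothing

# Better: run the real logic; stub only the external API boundary.
billing = MagicMock()
assert checkout(100, 10, billing) == 90.0   # real discount math ran
billing.charge.assert_called_once_with(90.0)  # boundary was hit correctly
```

The second style would catch a broken `apply_discount`; the first never can, which is exactly why letting an LLM mock everything yields worthless tests.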
It has led to multiple customer impacting issues, and we spend more time mopping the slop than we do engineering as tenured engineers.
No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.
If he's been vibe-coding all this and feeling impressed with himself, he's smelling his own farts. The performance thus far has been unscientific, tone-deaf, and piss-poor.
Maybe vibe-coding is not for him.
I realize that director-level managers may not get this because they've always lived and worked in the domain of "vibes", but that doesn't mean it's not true.