Now, imagine a scenario for a typical SWE today, or in the not-so-distant future: the agents build your software, you're simply a gate-keeper/prompt engineer, all tests pass, you're doing a production deployment at 12am and something happens but your agents are down. At that point, what do you do if you haven't built or even deployed the system yourself? You're basically L1 support at that point, pretty useless and clueless when it comes to fully understanding and supporting the application.
The loss of competency seems pretty obvious, but it's good to have data. What's also interesting to me is that the AI-assisted group accomplished the task a bit faster, but the difference wasn't statistically significant. That seems to align with other findings that AI can make you 'feel' like you're working faster, even though that perception isn't always matched by reality. So you're trading learning and eroding competency for a productivity boost that isn't always there.
My hypothesis is that the AI users gained less in coding skill, but improved in spec/requirement writing skills.
But there’s no data, so it’s just my speculation. Intuitively, I think AI is shifting entry level programmers to focus on expressing requirements clearly, which may not be all that bad of a thing.
Well, yeah. You were still (presumably) debugging the code you did write in the higher level language.
The linked article makes it very clear that the largest decline was in problem solving (debugging). The juniors starting with AI today are most definitely not going to do that problem-solving on their own.
One of my advantages(?) when it comes to using AI is that I've been the "debugger of last resort" for other people's code for over 20 years now. I've found and fixed compiler code generation bugs that were breaking application code. I'm used to working in teams and to delegating lots of code creation to teammates.
And frankly, I've reached a point where I don't want to be an expert in the JavaScript ORM of the month. It will fall out of fashion in 2 years anyway. And if it suddenly breaks in old code, I'll learn what I need to fix it. In the meantime, I need to know enough to code review it, and to thoroughly understand any potential security issues. That's it. Similarly, I just had Claude convert a bunch of Rust projects from anyhow to miette, and I definitely couldn't pass a quiz on miette. I'm OK with this.
I still develop deep expertise in brand new stuff, but I do so strategically. Does it offer a lot of leverage? Will people still be using it on greenfield projects next year? Then I'm going to learn it.
So at the current state of tech, Claude basically allows me to spend my learning strategically. I know the basics cold, and I learn the new stuff that matters.
And I must admit my appetite for learning new technologies has lessened dramatically in the past decade; to be fair, it gets to a point where most new ideas are just rehashes of older ones. When you know half a dozen programming languages or web frameworks, the next one takes you a couple of hours to get comfortable with.
If you’ve forgotten your Win32 reverse engineering skills I’m guessing you haven’t done much of that in a long time.
That said, it’s hard to truly forget something once you’ve learned it. If you had to start doing it again today, you’d learn it much faster this time than the first.
For what it’s worth—it’s not entirely clear that this is true: https://en.wikipedia.org/wiki/Hyperthymesia
The human brain seemingly has the capability to remember (virtually?) infinite amounts of information. It’s just that most of us… don’t.
Ok, so my statement is essentially correct.
Most of us can not keep infinite information in our brain.
I'm a bit younger (33) but you'd be surprised how fast it comes back. I hadn't touched x86 assembly for probably 10 years at one point. Then someone asked a question in a modding community for an ancient game and after spending a few hours it mostly came back to me.
I'm sure if you had to reverse engineer some win32 applications, it'd come back quickly.
These last few months, however, I've had to spend a lot of time debugging via disassembly for my work. It felt really slow at first, but then it came back to me and now it's really natural again.
That's a skill unto itself, and I mean the general stuff doesn't fade, or at least comes back quickly. But there's a lot of the tail end that's just difficult to recall because it's obscure.
How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.
People naturally try to use what they've learned but sometimes end up making things more complicated than they really needed to be. It's a regular problem even excluding the people intentionally over-complicating things for their resume to get higher paying jobs.
One take-away for us from that viewpoint was that knowledge in fact is more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.
Another point is that when you use consultants, you get lines of code, whereas the consultancy company ends up with the knowledge!
... And so on.
So, I wholeheartedly agree that programming is learning!
Isn't this the opposite of how large tech companies operate? They can churn developers in/out very quickly, hire-to-fire, etc... but the code base lives on. There is little incentive to keep institutional knowledge. The incentives are PRs pushed and value landed.
Isn't large amounts of required institutional knowledge typically a problem?
We had domain specialists with decades of experience and knowledge, and we looked at our developers as the "glue" between domain knowledge and computation (modelling, planning and optimization software).
You can try to make this glue have little knowledge, or lots of knowledge. We chose the latter and it worked well for us.
But I was only in that one company, so I can't really tell.
I could have sworn I was meant to be shipping all this time...
A common example here is learning a language. Say you learn French or Spanish throughout your school years or on Duolingo. But unless you're lucky enough to be amazing with language skills, if you don't actually use it, you will hit a wall eventually. And similarly, if you stop using a language that you already know, it will slowly degrade over time.
I don't necessarily think that writing more code means you become a better coder. I automate nearly all my tests with AI and a large chunk of bugfixing as well. I will regularly ask AI to propose an architecture or introduce a new pattern if I don't have a goal in my mind. But in those last 2 examples, I will always redesign the entire approach to be what I consider a better, cleaner interface. I don't recall AI ever getting that right, but I must admit I asked AI in the first place cos I didn't know where to start.
If I had to summarize, I would say to let AI handle the implementation, but not API design/architecture. But at the same time, you can only get good at those by knowing what doesn't work and trying to find a better solution.
I can iterate on entire approaches in the same amount of time it would have taken to explore a single concept before.
But AI is an amplifier of human intent: I want a code base that's maintainable, scalable, etc., and that's different from YOLO vibe coding. Vibe engineering, maybe.
How exactly? Do you tell the agent "please write a test for this" or do you also feed it some form of spec to describe what the tested thing is expected to do? And do these tests ever fail?
Asking because the first option essentially just sets the bugs in stone.
Wouldn't it make sense to do it the other way around? You write the test, let the AI generate the code? The test essentially represents the spec, and if the AI produces something which passes all your tests but is still not what you want, then you have a test hole.
I care more about the code than the tests. Tests are verification of my work. And yes, there is a risk of AI "navigating around" bugs, but I found that a lot of the time AI will actually spot a bug and suggest a fix. I also review each line to look for improvements.
Edit: to answer your question, I will typically ask it to test a specific test case or a few test cases. Very rarely will I ask it to "add tests everywhere". Yes, these tests frequently fail, and the agent will fix them on the 2nd+ iteration after it runs the tests.
One more thing to add is that a lot of the time the agent will add a "dummy" test. I don't really accept those for coverage's sake.
A follow-up:
> I care more about the code than the tests.
Why is that? Your (product) code has tests. Your test (code) doesn't. So I often find that I need to pay at least as much attention to my tests to ensure quality.
I find tests easier to write. Your function(s) may be a hundred lines long, but the test is usually setup, run, assert.
I don't have much experience beyond writing unit/integration tests, but individual test cases seem to be simpler than the code they test (linear, no branches).
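To make that concrete, here's roughly the shape I mean, with a made-up formatDuration helper and a Jest-style test; none of these names come from the thread, it's just an illustration:

```typescript
// Hypothetical function under test: formats elapsed seconds as "H:MM:SS".
function formatDuration(totalSeconds: number): string {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${hours}:${pad(minutes)}:${pad(seconds)}`;
}

// The test itself is linear: setup, run, assert. No branches to reason about,
// which is why it's cheaper to review than the code it covers.
test("formats an elapsed time just over an hour", () => {
  const elapsedSeconds = 3723;                  // setup: 1h 2m 3s
  const label = formatDuration(elapsedSeconds); // run
  expect(label).toBe("1:02:03");                // assert
});
```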
Sometimes I wonder if people who make statements like this have ever actually casually browsed Twitter or reddit or even attempted a "large" application themselves with SOTA models.
An example: I vibecoded myself a Toggl Track clone yesterday - it works amazingly but if I had to rewrite e.g. the PDF generation code by myself I wouldn't have a clue!
1. AI help produced a solution only 2m faster, and
2. AI help reduced retention of skill by 17%
I think being intentional about learning while using AI to be productive is where the sweet spot is, at least for folks earlier in their career. I touch on that in my post here as well: https://www.shayon.dev/post/2026/19/software-engineering-whe...
Personally, I’ve never been learning software development concepts faster—but that’s because I’ve been offloading actual development to other people for years.
[1] https://martinfowler.com/articles/llm-learning-loop.html
This similarly indicates that reliance on LLM correlates with degraded performance in critical problem-solving, coding and debugging skills. On the bright side, using LLMs as a supplementary learning aid (e.g. clarifying doubts) showed no negative impact on critical skills.
This is why I'm skeptical of people excited about "AI native" junior employees coming in and revamping the workplace. I haven't yet seen any evidence that AI can be effectively harnessed without some domain expertise, and I'm seeing mounting evidence that relying too much on it hinders building that expertise.
I think those who wish to become experts in a domain would willingly eschew using AI in their chosen discipline until they've "built the muscles."
I'm wondering if we could have the best of IDE/editor features like LSPs and LLMs working together. With an LSP, syntax errors are a solved problem, and if the language is statically typed I often find myself just checking the type signatures of library methods, which is simpler to me than asking an LLM. But I would love to have LLMs fixing your syntax and, with types available or not, giving suggestions on how to best use the libraries given the current context.
Cursor tab does that to some extent, but it's not foolproof and it still feels too "statistical".
I'd love to have something deeply integrated with LSPs and IDE features. For example, VSCode alone has the ability to suggest imports; Cursor tries to complete them statistically but often suggests the wrong import path. I'd like to have the two working together.
Another example is renaming identifiers with F2: it's reliable and predictable, and I can't say the same when asking an agent to do it. On the other hand, if the change isn't predictable, e.g. a migration where a 1-to-1 rename isn't enough and the tool needs to find a pattern, LLMs are just great. So I'd love to have an F2 feature augmented with LLM capabilities.
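To make that last point concrete, here's a rough sketch of what an LLM-augmented F2 could look like inside a VSCode extension. The executeDocumentRenameProvider command and workspace.applyEdit are real VSCode APIs; askLLM is a made-up placeholder for whatever model call you'd actually use:

```typescript
import * as vscode from "vscode";

// Hypothetical placeholder for a model call; wire this up to a real client.
async function askLLM(prompt: string): Promise<string> {
  throw new Error("askLLM is a stand-in, connect it to a real model. Prompt: " + prompt);
}

// 1-to-1 renames go through the editor's own rename provider (what F2 uses):
// exact, predictable, and reference-aware.
async function renameSymbol(uri: vscode.Uri, pos: vscode.Position, newName: string): Promise<void> {
  const edit = await vscode.commands.executeCommand<vscode.WorkspaceEdit>(
    "vscode.executeDocumentRenameProvider", uri, pos, newName
  );
  if (edit) {
    await vscode.workspace.applyEdit(edit);
  }
}

// When a plain rename isn't enough (e.g. a naming-convention migration),
// ask the model only for the new name, then still apply it through the
// rename provider so every reference stays consistent.
async function migrateIdentifier(uri: vscode.Uri, pos: vscode.Position, oldName: string): Promise<void> {
  const newName = await askLLM(
    `We are migrating identifiers like "${oldName}" to the new naming convention. ` +
    `Reply with only the migrated identifier.`
  );
  await renameSymbol(uri, pos, newName.trim());
}
```

The division of labour is the point: the model only infers what the new name should be, and the LSP machinery still does the mechanical, reference-aware edit.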
It reduces the context switching between coding and referencing docs quite a bit.
> Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.
This might be cynically taken as cope, but it matches my own experience. A poor analogy until I find a better one: I don't do arithmetic in my head anymore; it's enough for me to know that 12038 x 912 is in the neighborhood of 10M, and if the calculator gives me an answer much different from that, then I know something went wrong. In the same way, I'm not writing many for loops by hand anymore, but I know how the code works at a high level and how I want to change it.
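(For what it's worth, that mental check is just rounding: 12038 x 912 ≈ 12,000 x 900 = 10,800,000, and the exact product is 10,978,656, so any calculator answer that's off by an order of magnitude is immediately suspect.)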
(We're building Brokk to nudge users in this direction and not a magic "Claude take the wheel" button; link in bio.)
I am not saying you should be struggling performatively, like a person proud in 2026 that they are still using Vim for large projects (good for you, eh), but sometimes you need to embrace the discomfort.
I remember a small competition where people performed a well-defined "share this content with others" routine to showcase how OS A was way more intuitive than OS B. There was also an OS C, which was way slower than A & B. Then someone came along using OS C and topped the chart with a sizeable time difference.
The point is, sometimes mastery pays back so much that, while there are theoretically better ways to do something, the time you save from that mastery is reason enough not to leave the tool you're using.
I also have a couple of "odd" tools that I use and love, which would cause confused looks from many people. Yet, I'm fast and happy with them.
These large projects are almost always in Java, C#, and co., where the verbosity of the language makes an IDE a necessity. Otherwise, it would be a struggle to identify which module to import or which prefix or suffix (Manager, Service, Abstract, Factory, DTO, …) to add to the concept name.
Submission about the arXiv pre-print: https://news.ycombinator.com/item?id=46821360
I can start to see the dangers of AI now, whereas before it was more imaginary sci-fi stuff I couldn't pin down. On the other hand, a dystopian sci-fi world full of smart everything seems more possible now that code can be whipped up so easily, which perhaps means the ability for your smart monocle to find and hack things in everyday life is also way more likely if the world around you is saturated with quick and insecure code.