1. The problem case is very well specified
2. There's a verification harness it can work with
3. You don't care about long-term maintainability or security
Producing the things that satisfy these three conditions is 90% of the job.
Consider what goes into setting up automated verification: how do you write unit tests when the units aren't built yet? You need to understand and design them. That's the entire premise behind TDD. You design the code through writing the tests first.
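A minimal sketch of that test-first step (the function name and behaviour here are hypothetical, purely for illustration): the test is written before the unit exists and pins down its contract; the implementation only comes afterwards, shaped by what the test demands.

```python
import unittest

def normalise_email(address: str) -> str:
    # Written *after* the test below defined the behaviour.
    return address.strip().lower()

class TestNormaliseEmail(unittest.TestCase):
    # This test existed first; running it against a stub fails,
    # which is the point: the test is the design artifact.
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalise_email("  Alice@Example.COM "),
                         "alice@example.com")

    def test_already_normalised_is_unchanged(self):
        self.assertEqual(normalise_email("bob@example.com"),
                         "bob@example.com")

if __name__ == "__main__":
    unittest.main()
```

The design work is in deciding what `normalise_email` should promise; the implementation is almost incidental, which is exactly the part AI can't do for you.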
I don't see AI improving in this area. Encoding business invariants into high-value automated verification isn't itself a verifiable task. Neither are code quality, security, or knowing when something is under-specified.
For production work specifically, I think the future may look like developers doing mostly specification, verification, and review, with the AI doing the plumbing. I'm not actually convinced this is a long-term win, as there are a lot of trade-offs, but we'll see.
The only major difference to past experiences of new tools is that AI appears to have a wide range of likely-looking uses (and even more _marketed_ uses), and only recently have specific use-cases/patterns started to emerge with any stability. Many of the likely-looking uses turn out to be minor or no improvement (in a good number of cases, actually worse), which cumulatively _change_ the workflow but don't _improve_ it. Then there's a few specific areas where it helps (sometimes enormously).
To be more concrete:
1. AI helps with being more specification-driven (AI UX people have inadvertently replaced the word "specification" with "plan"). Think upfront, do research, plan the design, then get AI to scaffold the code, then spend lots of time cleaning up and filling it out into a full production-worthy implementation.
2. AI can (on average) help with writing anything which is easily scaffolded from existing 'stuff': boilerplate code; adding an extra piece of infrastructure following established patterns; writing commit messages for small commits whose intention is clear from the code, and similarly for PRs.
3. AI is useful as a search and diagnostic tool. Impenetrable or just long error message? AI can summarise that and pull out the useful specifics (e.g. the target line of code and the actual likely interpretations of the message). And Google search has become so poor for specific searches that I rely more and more on Claude for finding (verifiable-by-me) answers.
- Has it changed my workflow? Yes.
- Has it replaced me? Not even close.
- Has it displaced some of the work I used to do? Yes. I now spend more time on architecture and debugging than on writing code. The debugging workload has gone up because of the convincing-but-wrong code AI often generates, which takes a while to pull apart and fix, and because the code sometimes just doesn't match a production-worthy architecture despite extensive planning: too much training on open source, which is made up of crummy code (by volume, not by popularity).
Sadly, (1) above has also meant that some of the joy of "diving into a problem and scrubbing around in the code to figure out what's going on" has been lost. Instead, just ask AI to "delve into it". For many people, this has removed a part of the process they found tedious. They just wanted to get to a solution. For some people, this has removed a part of the problem-solving challenge that was good fun. Professionally, it's a shift, and it's still hit or miss as to whether it's overall more productive or not. For hobby projects, it's a choice whether to start or continue using AI or not.
Parting thought: AI has been pretty great for web tech stuff. I can see why so many engineers (particularly in Silicon Valley) think it's going to rule the world. But outside of web tech (e.g. computer architecture), it's pretty pants. It's junior-engineer-quality/reliability on stuff it's had huge amounts of training on (web tech, infrastructure, fantasy art, etc.) but useless at things it's got much less coverage of (computer architecture, technical diagrams, 3D spatial reasoning, etc.). This is a comment on where LLMs like Claude and ChatGPT and others are at today. It is not a comment on the future potential, nor on what can be achieved using other forms of AI or combined forms of AI.
This is a personal viewpoint and experience, not on behalf of any current or former employer.
That'll change in a few years' time. The industry can't sustain this level of obsession forever (for one thing, venture capitalists will move on to the next big thing, as per the very definition of their business model).
For now, it's a case of make hay while the sun shines.