I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.
To genuinely "talk to stakeholders" requires being part of their social world. To be part of their social world you have to have had a social past - to have been a vulnerable child, to have experienced frustration and joy. Efforts to decouple human development from human cognition betray a fundamental misunderstanding.
The irony here is that while the article does a fine job of pointing out how people may have made incorrect judgment calls based on what ultimately comes down to personal experience, that observation is itself grounded in personal experience.
An LLM can look these up and still get them wrong, or it can get them right but still pick the wrong conventions to use. More importantly, LLM code assistants will not always be performing lookups: you cannot assume the same IDE and tool configuration for everyone. You cannot even assume that everyone is using an IDE with an embedded chatbot.
Moreover, if I know a key term or phrase (which covers most cases), I can look it up in Google or IDE search, which is also faster than asking an LLM.
EDIT: to be clear, I’m still writing code. I can do many small tasks and fixes by hand faster than I can describe them to an LLM and check or fix its output. I also figure out how to structure a project partly by writing code. Many small fixes and structure-by-experimentation probably aren’t ideal software development, and maybe soon I’ll figure out LLMs (or they’ll improve) such that I end up writing better code faster with them. But right now I believe LLMs struggle with good APIs and especially with modularity, because the only largely-LLM projects I’ve seen are small and get abandoned and/or fall apart when the developer tries to extend them.
My question now is: given that there are only a limited number of types of system, why not have templates capturing the know-how for most of these systems? An LLM could just fill in the blanks and have a working system in no time for most use cases (a rough sketch of what I mean is below).
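To illustrate the template-plus-blanks idea, here is a minimal sketch using Python's string.Template: a fixed, known-good skeleton for one common system type (a CRUD-style create handler), where only the domain-specific blanks are left for an LLM to fill in. The Flask-style handler body and the entity, class, and field names are hypothetical examples, not anyone's actual template.

    # Sketch only: a reusable skeleton for a "create record" endpoint.
    # The handler text assumes a Flask-like app with `request`, `db`, and
    # `jsonify` available; those names are illustrative assumptions.
    from string import Template

    CRUD_HANDLER_TEMPLATE = Template('''\
    @app.route("/${entity}", methods=["POST"])
    def create_${entity}():
        payload = request.get_json()
        record = ${entity_class}(**{k: payload[k] for k in ${fields}})
        db.session.add(record)
        db.session.commit()
        return jsonify(record.to_dict()), 201
    ''')

    # The "blanks" an LLM (or a human) would fill in for a specific use case.
    generated = CRUD_HANDLER_TEMPLATE.substitute(
        entity="invoice",
        entity_class="Invoice",
        fields='["customer_id", "amount", "due_date"]',
    )
    print(generated)

The point is that the structural know-how lives in the template, and the model only supplies the use-case-specific details.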
The only thing I can think of that will happen is that we will have new creative systems for use cases we have never even thought about. I doubt AI will take over. Human creativity has no bounds, so we will see an explosion of new problems that only humans can solve, not a capitulation to AI.
College is for growing individuals who can handle the complexities required in a field. That is the real value.
You don't do CS or SE so you can get out of college with knowledge of the latest hype; you get out of college armed with the tools that let you learn and handle whatever the latest hype is for decades to come.
This field especially moves way too fast for anything to still be current by the time you graduate. That's why you focus on the fundamentals and problem solving, and in a few courses here and there you get some exposure to different fields (data, machine learning, etc.).
For transparency in future incidents, I now expect that post-mortems like this one [0] will read along the lines of: "An AI code generator was used, it passed all the tests, we checked everything, and we still got this error."
There is still one fundamental lesson in [0]: English as a 'programming language' cannot be formally verified, and a probabilistic AI generator can still produce perfect-looking code that causes an incident.
This time the engineers will have no understanding of the AI-generated code itself.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...