I mostly don’t think that is possible though because there’s too much ambiguity in natural language. So the answer is probably when AI is close enough to AGI that I can treat it like an actual trusted senior engineer that I’m delegating to.
There’s ambiguity in the x86 specification too, such that you can execute a single instruction and get different results on Intel vs AMD hardware. See the rcpss instruction, for example.
I get that LLMs are categorically different, and they’re absolutely not as reliable as compilers are, but compilers are also not as reliable as they seem. And even less predictable, IMO.
Yes, as a better autocomplete.
People are allergic to articles and documentation generated or processed by LLMs.
You're switching from an active role to a passive one, meaning your skill will suffer over time. There is a huge difference between doing the thing yourself and thinking you know what the tool is doing. It's also harder to review bad generated code because of how polished it looks; with code written by humans, the difference is much more obvious.
Code assistants seem to work great for boilerplate, but wouldn't it be better to get rid of the need for the boilerplate in the first place?