Trust builds on trust. Most sensible people aren’t going to invest significant time learning something they’ve already found to be untrustworthy.
Most people also don’t adopt bleeding-edge tools into their production workflows. Production should be stable; nobody wants it changing every few months. With AI tooling, it does change every few months.
When I’m on a call with senior management because something has seemingly gone wrong and I’m asked how something works, they want answers from the person who wrote it. There would be very little patience for someone typing to an AI on the side and reading back LLM output.
Ultimately, we are still responsible for understanding and knowing the code we ship. Saying we should trust the agents, forgo code reviews, and ship shows a lack of responsibility and accountability for what is being sent to production, especially with very new tools that have already created major trust issues among many developers.
Trust is hard to gain and easy to lose. My early use of AI tools broke the trust that articles like this say I should have. The trust in the tools broke, and the trust in the people pushing the tools broke with it. It is now the job of the tool makers, and of those promoting them, to rebuild that trust. Pushing the tools before they are actually ready only damages it further. At this point, AI tools have a long and hard road to earning my trust, and that is their own doing. There has been constant over-promising and under-delivering, fueled by people with FOMO trying to push their FOMO onto everyone else.