I'm clearly on the Lopopolo side of the Zechner-Lopopolo Continuum. We've reached a point where I trust SOTA AI to write better code and perform better reviews than I can in many situations - given proper, thorough prompting. It's far from hands-off, which is why I much prefer the term "agentic engineering" over "vibe coding". The `/review` command is my new best friend: I run it after every major change an agent makes, just as I used to ask "are you sure?" after every important response from a chatbot.
This is the topic of almost every conversation I'm having with friends who are programmers right now. The compiler analogy (we basically never read compiler output) is the thing I keep thinking about.
I spent the past week covering, speaking with, and debating many of the top AI Engineers at the AIE Europe conference in London. One question kept coming up, and I dubbed it the Z/L Continuum: should AI Engineers even look at code anymore?
I think the intended target of the software matters: Zechner needs to build a small, tasteful core agent that others can extend and depend on, whereas Lopopolo needs to ship a 1M-LOC enterprise app with a tiny team of five.
The conclusions about code kinda fall out of those constraints.