I don't even have time to test all the programs I'm creating, let alone review the LOCs. I build stuff on a whim and it goes into my "did it work?" pile. I've also got a "looks OK, I'll deploy it sometime" list.
Embrace it. Enjoy it. Ship solutions to problems, not lines of code.
Certainly mine isn't. But I've still generated hundreds of thousands of lines of code.
But no one will ever read them. And solid engineering defines the interfaces between them. So we specify the ins and outs and let the rest take its course.
At least this part I am still specifying. It doesn't get to choose its own technologies. It generally includes the architecture in the plan that I review.
I've been coding since the 8-bit days.
With the added benefit that I can specify, "let's try using this stack this time." I don't have to spend two months learning it just to get to an MVP.
I made a video game to wish a friend happy birthday. I made a couple of websites for job applications. I can make a landing page for a friend's idea, and the longest part is buying the domain name. I had a conversation with a friend about finding more ideas to work on, since LLMs have freed up so much of my time, and there are even LLMs sourcing ideas to help execute on that.
Every time a friend has a "it would be cool" idea, I can trivially throw something together to do it.
Really, my biggest optimization has been giving the AI as many tools as possible to do the testing itself, since testing the work is the real bottleneck. Dockerize everything, so all the error logs are in one place and it can reset at will. Have it set up fixtures, so that if it deletes the database (which has happened), it can just re-create it.
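The fixture idea is roughly this: keep the schema and seed data in code, so a single call rebuilds a known-good database from scratch. A minimal sketch, assuming SQLite and plain Python (the `users` table and seed rows are hypothetical, just for illustration):

```python
import sqlite3

# Hypothetical schema and seed data kept in code, so the DB is disposable.
SCHEMA = """
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
"""

SEED_ROWS = [(1, "alice"), (2, "bob")]

def reset_database(path: str = "app.db") -> sqlite3.Connection:
    """Drop everything and rebuild from schema + seed data, so an agent
    that wipes the database can always get back to a known state."""
    conn = sqlite3.connect(path)
    conn.executescript("DROP TABLE IF EXISTS users;")
    conn.executescript(SCHEMA)
    conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", SEED_ROWS)
    conn.commit()
    return conn

# In-memory DB for the example; a real setup would point at the app's DB file.
conn = reset_database(":memory:")
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # → 2
```

Because the reset is a single idempotent function, the agent can be told to call it whenever tests leave the database in a bad state, instead of asking a human to restore a backup.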
Are you doing all of this in Cursor, or in something like Claude Code?