Edit: for those who don't frequent HN or reddit every day: https://old.reddit.com/r/google_antigravity/comments/1p82or6...
> https://indianexpress.com/article/technology/tech-news-techn...
dang, please replace the link.
I think if all you care about is the outcome, then sure, you might enjoy AI coding more.
If you enjoy the problem-solving process (and care about quality), then doing it by hand is way, way more enjoyable.
(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)
Argue about the value of video games all you like; I would still place them above slot machines any day.
I do care about the outcome, which is why the thought of using AI to generate it makes me want to gouge my eyes out.
In my view, using AI means not caring about the outcome, because AI produces garbage. In order to be happy with garbage, you have to not care.
Having a private office instead of an open floor plan, for instance.
Or not working in the JIRA two-week-sprint format.
Or not having to work with offshore teams that push the burden of quality control onto you.
My point is, I bet the Google CEO (and basically every other software CEO) doesn't actually care whether software development is enjoyable or not.
The enjoyment factor is real. The iteration speed with Claude Code is insane. But the model's suggestions still need guardrails.
For security-focused apps especially, you can't just accept what the LLM generates. We spent weeks ensuring passwords never touch the LLM context - that's not something a vibe-coded solution catches by default.
The productivity gains are real, but so is the need for human oversight on the security-critical parts.
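As a rough illustration of the "passwords never touch the LLM context" idea: one common approach is to scrub known-sensitive fields from any payload before it is interpolated into a prompt. This is a minimal sketch under assumed conditions; the key names and the `redact` helper are hypothetical, not the commenter's actual implementation.

```python
# Hypothetical sketch: strip likely secrets from a payload before any part
# of it reaches an LLM prompt. The set of sensitive key names is an
# assumption; a real system would also scrub free text and logs.
SENSITIVE_KEYS = {"password", "passwd", "secret", "token", "api_key"}

def redact(payload: dict) -> dict:
    """Return a copy of payload with sensitive values masked."""
    clean = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"          # never forward the real value
        elif isinstance(value, dict):
            clean[key] = redact(value)         # recurse into nested objects
        else:
            clean[key] = value
    return clean

# Usage: sanitize before building the prompt, not after.
request = {"user": "alice", "password": "hunter2", "auth": {"token": "abc123"}}
safe = redact(request)
prompt = f"Summarize this request for the audit log: {safe}"
```

The point is ordering: redaction has to happen before prompt construction, because anything that enters the context window may be echoed back in a completion.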