I still think that non-programmers are going to have a tough time with vibe coding. Knowing the nuances and nomenclature of the language you're targeting, and programming design principles in general, helps a lot in actually getting AI to build something useful.
A simple example is knowing to tell the AI that a window should be 'modal' or that null values should default to xyz.
Yesterday I was inside one of the tools that a "just build it" prompt created, and I asked it to use a NewType pattern for some of the internals.
It wasn't until I was in bed that I thought: why? If I'm never reading that code and the agent doesn't benefit from it, why am I dragging my cognitive baggage into the codebase?
Would a future lay vibe coder care what a NewType pattern is? Why it helps? Who it helps?
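For anyone unfamiliar, the NewType pattern just means wrapping a primitive in its own distinct type so the compiler can tell otherwise-identical values apart. A minimal Rust sketch, with made-up names purely for illustration:

    // NewType pattern: wrap raw primitives in distinct types so the
    // compiler refuses to let you mix them up.
    struct UserId(u64);
    struct OrderId(u64);

    fn fetch_order(user: &UserId, order: &OrderId) {
        // Swapping the arguments would be a compile error,
        // even though both are just a u64 underneath.
        println!("fetching order {} for user {}", order.0, user.0);
    }

    fn main() {
        fetch_order(&UserId(42), &OrderId(7));
    }

That safety mostly pays off for a human reading and refactoring the code, which is exactly why I'm not sure it matters here.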
I think the pedagogy of programming will change so that effective prompting will be more accessible.
It's a massive accelerator for my dumb and small hobby projects. If they take too long I tend to give up and do something else.
Recently I designed and 3D printed a case for a Raspberry Pi, some encoders and buttons, and a touchscreen, just to control a 500 EUR audio effects pedal (an Eventide H9).
The official Android app was one of the worst apps I've ever used. They even blocked paste in the login screen...
Few people have this FX box, and even fewer would need my custom controller for it; I built it for an audience of one. But thanks to LLMs it was not that big of a deal. It allowed me to concentrate on what was fun.
https://cdn.discordapp.com/attachments/1461079634354639132/1...
https://cdn.discordapp.com/attachments/1461079634354639132/1...
I recently, for fun, gave an LLM an image of a DJ mixer and record player to recreate as a 3D model. It somewhat worked, but it's very basic.
I've always been unhappy with the way tasking/todo apps (don't) work for me. I just started building a TUI in Zig (with the help of Codex) to manage my daily tasks. And since I'm building it just for me, the scope is mine to determine too.
How brutal will the enshittification phase of these products be?
Will the 10x cost or whatever be something that future employers will have to pay, or will it be a more visible impact for all of us? Assuming no AGI scenario here, and that the investments will have to be paid back with further subscription services like today.
I really hope Open Source (Open Weights) keeps up with the development, and that a continuation of Moore's Law (the bastardized performance-per-€ version) makes local models increasingly accessible.
I have had similar experiences to the author, and I’ve found that just working with a single agent in Antigravity (on the Gemini Pro subscription) is adequate. The extra perceived speed and power of multiple agents and/or Claude Code really wasn’t matched by the output.
A single Gemini agent (or sometimes switching to Claude Opus, which Google inexplicably provides a generous amount of for free via AG) gives me incremental results so fast that I spend most of my time thinking about what I want (answering unplanned product questions or deciding how to handle edge cases).
In fact, sometimes I just get exhausted with so much decision making. However, that’s what it takes to build something useful; we just aren’t accustomed to iterating so fast!
This is the future.
I don't think we'll ever manually write code again. It's just so much faster.
NFTs and crypto were also the future.
> I don't think we'll ever manually write code again. It's just so much faster.
More work for the people who like to fix tech debt.
I find this the least convincing argument ever. It's only a gotcha if you assume all/most of the people excited about one were excited about the other. Personally I never met a real person who gave a shit about crypto, much less NFTs. But AI interest is everywhere, with it roughly 50/50 in my life between people who are uneasy with it and people who use it regularly.
I don't disagree about monumental amounts of tech debt and risk being created. It's my hope for my own job and skills being relevant going into the future. I do like playing with it, understanding it as a tool. But it is just a tool, not a machine god, and regularly fallible.
I was a doubter. This will literally work 100x faster than you. It can one-shot 1kLOC across dozens of files in mere minutes and understand the context.
You'll need to pay back a lot of those performance gains in reviewing the code, but the overall delta is a 2x speedup at minimum. I'd say it's closer to 4x. You can get a week's worth of work done in a day.
A human context switches too much and cannot physically keep up with these models. We're at the chess take off moment. We're still good at reviewing and steering.
I feel like the expectations for a "Show HN" project are too high for passing around a silly little toy that I had the robot throw together. Product Hunt is for things that are actual products/businesses. So maybe you throw it in a targeted subreddit for a niche interest group?
Seems like there should be a marketplace for silly little side-projects, but I'm not sure how you keep it from getting overrun.
I'm debating whether to share the one I'm working on at all. I made it for myself so maybe it should stay that way.
Not because I think I'll actually use any of them, but because they could inspire me to do something different in my silly little side projects.
The goal isn't "product release", it's elementary school "show and tell".
I’ve often thought about standing up a subreddit specifically for side projects but with the proviso of:
- No sign up
- No ads
- No subscription/payments of any kind
Open-source is welcome but optional.
No login or signup is required so it's very easy to try out and quite fun to play with, which probably helped. I think the time people are willing to invest in something before getting some sort of reward is approaching sub-second territory.
This is a really good summary that puts into words how I've experienced AI. I'm not really sure how this can be monetized, though.
I'm not going to burn $200-1k per day on agents to do some side projects that have been on the back burner. The only reason I'm doing it now is the heavily subsidized or freely available models all over the place.
Maybe it's just the specific language being used here, but I really hate talking to these things. They inject way too much personality into things, especially Claude, and are still too sycophantic and could lead you down a wrong path. I'd much rather just give them instructions.
It consumed lots of tokens, required more setup, and at the end of the day had pretty much the same output as if I had used a single agent.
Talking with other knowledgeable humans works just as well for the first thing, but suitable other humans are not as readily available all the time as an LLM, and suitably-chosen LLMs do a pretty good job of engaging whatever part of my brain or personality it is that is stimulated through conversation to think inventively.
For the second thing, LLMs can just answer most of the questions I ask, but I don't trust their answers for reasons that we all know very well, so instead I ask them to point me at technical sources as well, and that often gets me information more quickly than I would have by just starting from a relatively uninformed google search (though Google is getting better at doing the same job, too).