We are likely going to get better at judging this new kind of communication and media, but we need much more experience with it before we can do that properly.
It will be annoying for quite a while, as it was with social media, until we find the places that are still worth our time and attention. But I am hopeful that we will be able to do that.
Until then I am going to work on my AI side project every evening until I deem it ready and bug-free. It already works well enough for my own purposes (which I built it for), and my requirements were heavily influenced by my work process. Without AI, I would never have been able to finish such a project, even working on it full time for a year.
Six months ago no-one would post a "Show HN" for a static personal website they had built for themselves - it wouldn't be novel or interesting enough to earn any attention.
That bar just went through the ROOF. I expect it will take a few more months for the new norms to settle in, at which point hopefully we'll start seeing absurdly cool side projects that would not have been feasible beforehand.
I suspect a month of AI accelerated work is still enough to make the front page. I don’t see the competition as steeper. I bet it’s about the same per unit time.
What I mean by that is: after reading through a brief description of a project, or a conceptual overview, they are no better than noise at predicting whether it will be worthwhile to try out, or rewarding to learn about, or have a discussion about, or start using day-to-day.
Things on the front page used to be high-quality software, research papers, etc. Now it is entirely driven by marketing operations and social circles. There is no differential amplification of quality.
I don't know what the solution is, but I imagine it involves some kind of weighted voting, though that would be a step toward a complicated engagement algorithm and away from the elegant, predictable exponential decay that HN is known for.
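For reference, the commonly cited approximation of HN's ranking just divides points by an age penalty; a weighted scheme would only change the numerator. A rough sketch in Python (the weight function is hypothetical, not anything HN actually does):

    # Commonly cited approximation of HN's ranking: votes decay with age.
    GRAVITY = 1.8

    def rank_score(points: int, age_hours: float) -> float:
        return (points - 1) / (age_hours + 2) ** GRAVITY

    # Hypothetical weighted variant: each vote counts according to some
    # per-voter weight (e.g. reputation) instead of counting as 1.
    def weighted_score(vote_weights: list[float], age_hours: float) -> float:
        return sum(vote_weights) / (age_hours + 2) ** GRAVITY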
You could criticize a Michelin inspector the same way. The poor bastards have to actually taste the dish and can't decide merit based on menu descriptions alone.
Output is becoming decoupled from the personal characteristics we used to consider tightly linked to it.
There is no guarantee that this will reform under rules that make sense in the old order.
It is embarrassing to see grown engineers unable to cope with the obvious.
Interviews by celebrities predicting AI will revolutionize the economy: 2837191001747
Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
- searching: better
- photo edit/enhance/filter: easier and more accessible
- text summarization: better
- quick scripts/tools: faster
- brainstorming/iterating ideas: faster
- generating list of names: faster
- rephrasing text: better
- researching topics: faster
- stackoverflow: I'm finally free. won't be missed by me
- coding: debatable, but for me LLMs made possible projects that weren't feasible before due to scope or lack of expertise
The AI mode does at least attempt to list its sources, but it's extra hoops to jump through.
I don't think you can really get any sort of a signal on this?
Nobody is all that sensitive to the number of features that get shipped in any project, and nobody really perceives how many people or how much time was needed to ship anything. As a user, unless that means a 5x difference in the price of some service, you don't really see or care about any of that, and even if there were savings on the part of any developer/company, they'd probably just pocket the difference. Similarly, if there's a product or service that exists thanks to vibe coding and wouldn't have existed otherwise, you probably don't know that particular detail.
Even when fuckups and bugs do happen, there's also no signal whether it's explicitly due to AI (or whether people are scapegoating it), or just management pushing features nobody wants and enshittifying products and entire industries for their own gain.
Well, maybe StackOverflow is a bit easier to host now: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
I would not know if they have gotten better or worse cause I don’t use them anymore.
I love computing, and programming. If anything I'm better able to appreciate that now that I no longer care if my work has any impact.
Also, write another JavaScript framework if it seems easier to create one than to take the time to learn an existing one.
I have zero interest in seeing something that Claude emitted that the author could never in a million years have written themselves.
It's baffling to me that these people think anyone cares about the output of their Claude prompts.
I apply the same logic to software.
I think we're going to implement this unless we hear strong reasons not to. The idea has already come up a lot, so there's demand for it, and it seems clear why.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://news.ycombinator.com/item?id=47077840
https://news.ycombinator.com/item?id=47050590
https://news.ycombinator.com/item?id=47077555
https://news.ycombinator.com/item?id=47061571
https://news.ycombinator.com/item?id=47058187
https://news.ycombinator.com/item?id=47052452
--- original comment ---
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (423 comments; subthread https://news.ycombinator.com/item?id=47050421 is about what to do about it)
AI makes you boring - https://news.ycombinator.com/item?id=47076966 - Feb 2026 (367 comments)
When I use Codex to do vibe coding stuff, I don't usually have one big prompt; I have it do small things piecemeal and iterate with it later. Maybe I'm using it wrong, but it tends to be more "conversational", and I think that would be harder to share, especially considering I'll do things over dozens of sessions.
I suppose I could keep an archive of every session I've ever opened with Codex and share that, but thus far I haven't really done that.
Granted, I don't really share my stuff with "Show HN".
How would that be feasible for a project of any complexity whatsoever?
> Authors should be asked to indicate categories of AI use (e.g., literature discovery, data analysis, code generation, language editing), not narrate workflows or share prompts. This standardization reduces ambiguity, minimizes burden, and creates consistent signals for editors without inviting overinterpretation. Crucially, such declarations should be routine and neutral, not framed as exceptional or suspicious.
I think that sharing at least some of the prompts is a reasonable thing to do or require. I log every prompt I make to an LLM. Still, I think this is a discussion worth having.
[1] https://scholarlykitchen.sspnet.org/2026/02/03/why-authors-a...
If I have a vibe-coded project with 175k lines of Python, there would be genuinely thousands and thousands of prompts to hundreds of agents, some fed into one another.
What's the worth of digging through that? What do you learn? How would you know that I shared all of them?
How many do you have in the log total?
A few caveats: I'm not a heavy LLM user (this is probably what you're getting at) and the following is a low estimate. Often, I'll save the URL only for the first prompt and just put all subsequent prompts under that one URL.
Anyhow, running a simple grep command suggests that I have at least 82 prompts saved.
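The check itself is nothing fancy. A rough Python equivalent of that grep, assuming each saved prompt sits on a line starting with "prompt:" in a directory of plain-text notes (the marker and layout are illustrative guesses, not my exact setup):

    import re
    from pathlib import Path

    # Count lines that look like logged prompts across all note files.
    # Assumes one prompt per line, marked with a leading "prompt:".
    total = sum(
        len(re.findall(r"^prompt:", p.read_text(), flags=re.M))
        for p in Path("notes").rglob("*.txt")
    )
    print(total)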
In my view, it would be better to organize saved prompts by project. This system was not set up with prompt disclosure in mind, so getting prompts for any particular project would be annoying. The point is more to keep track of what I'm thinking of at a point in time.
Right now, I don't think there are tools to properly "share the prompts" at the scale you mentioned in your other comment, but I think we will have those tools in the future. This is a real and tractable problem.
> Whats the worth of digging through that? What do you learn? How would you know that I shared all of them?
The same questions could be asked for the source code of any large scale project. The answers to the first two are going to depend on the project. I've learned quite a bit from looking at source code, personally, and I'm sure I could learn a lot from looking at prompts. As for the third question, there's no guarantee.
This is one (1) conversation: https://chatgpt.com/share/69991d7e-87fc-8002-8c0e-2b38ed6673...
It has 9 "prompts". On just the issue of path rewriting, that's probably one of a dozen conversations, NOT INCLUDING prompts fed into an LLM that existed solely to strip the spaces and newlines caused by copying things out of a TUI.
It's ok for things to be different than they used to be. It's ok for "prompts" to have been a meaningful unit of analysis 2 years ago but pointless today.
You might as well ask for a record of the conversations between two engineers while code was being written. That's what the chat is. I have a pre-pre-alpha project which already has potentially hundreds of "prompts"--really turns in continuing conversations. Some of them are with one kind of embedded agent, some with another, and some with agents on the web that have no project access.
Sometimes I would have conversations about plans that I then drop. Do I include those if no code came out of them, but my perspective changed, or the agent's context changed so that later work was possible?
I don't mean to be dismissive, but maybe you don't have the necessary perspective to understand what you're asking for.
I disagree. Thinking about this more, I can give an example from my time working as a patent examiner at the USPTO. We were required to include detailed search logs, which were primarily autogenerated using the USPTO's internal search tools. Basically, every query I made was listed; often this was hundreds of queries for a particular application. You could also add manual entries.

Looking at other examiners' search logs was absolutely useful for learning good queries, and I believe primary examiners checked the search logs to evaluate the quality of the search before posting office actions (primary examiners had to review the work of junior examiners like myself). With the right tools, this is useful and not burdensome, I think. Like prompts, this doesn't include the full story (the search results are obviously important too but are excluded from the logs), but that doesn't stop the search logs from being useful.
> You might as well ask for a record of the conversations between two engineers while code was being written.
No, that's not typically logged, so requiring it would be very burdensome. LLM prompts and responses, if not already logged automatically, can easily be logged automatically.
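A minimal sketch of what that logging could look like; call_model and the JSONL path are placeholders, not any particular vendor's API:

    import json, time

    LOG_PATH = "prompt_log.jsonl"  # hypothetical log location

    def call_model(prompt: str) -> str:
        # Stand-in for a real client call; swap in your provider's SDK.
        return "(model output)"

    def logged_call(prompt: str) -> str:
        response = call_model(prompt)
        # Append one JSON object per line: timestamp, prompt, response.
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps({"ts": time.time(),
                                "prompt": prompt,
                                "response": response}) + "\n")
        return response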
But other distribution strategies exist. You just have to be smarter about finding and getting in front of your core audience.
Solid solutions are being overshadowed by AI-slop alternatives assembled in a few months with no long-term vision; the results look great superficially, but under the bonnet they're inefficient, expensive, and closed, they lack flexibility, and the experience degrades over time. All the essential stuff that people don't think about initially is what's missing.
It feels like the logical conclusion of the peak attention economy: the media fully saturated with junk, where junk beats quality through sheer volume.
People may lack ideas for interesting projects to work on in the first place, so we need to think about how to help people come up with useful and interesting projects to work on.
Related to that: people may need to develop the skill of conceiving more "complex" ideas, and we may need to think about how to make that possible in the era of AI. Even if an AI agent can take care of the technical side of things, it still takes a certain complexity of thought to dream up more complicated, useful, or interesting projects. There may be a need for a kind of training of the mind before one can ask for an "automobile" rather than a "faster horse", to borrow the analogy; historically, that conception was often reached by tinkering with intermediate devices like the bicycle. An AI can "one-shot" something interesting, but what that thing is is limited by the imagination of the user, which may in turn be limited by technical inability. In other words, the user may need to develop more technical ability in order to dream up more interesting things to create, even if that ability is just a more skilled usage of AI tools.
There needs to be some way to filter out the noise. That's not a new issue, and a lot of these questions and complaints feel very "meta": you could just ask AI how to make side projects more interesting or useful, or how to create good filters in the age of AI. In sports there are hierarchical levels of competition; likewise here you might have forums that are more closed off to newcomers if you want to "gatekeep" things. Projects would compete in "local pools" of attention, and when one "wins" there, a "qualified authority / leader" submits it to a higher level, and so on. AI itself suggests using qualified curators to create newsfeeds and act as filters for the "slop".
Productizing anything is hard, and writing code with AI reliably, securely, and at scale is basically impossible unless you're already an expert in what you're trying to do. For example, I'm working on a project now, and it's kind of endearing watching my AI buddy run into every single pothole I ran into when I first started working with Tauri or Rust.
Unless you know what you're doing (and why you're doing it), AI suggestions are in the best case benign, and in the worst case architectural disasters. Very rarely, I'm like "hm that might be a good idea."
I think AI-aided development will raise the bar for products and make expert engineers something like 10x more valuable. Personally, I'm elated that I don't have to write my 4000th React boilerplate or Rust logging helper anymore.
And the real, actual hard work (as in: coming up with new algorithms for novel problems, breaking problems down so others can understand them, splitting up code/modules in a way that makes sense for another person, etc.) will likely never be doable by AI.
Just because you can't separate the signal from the noise with an easy check doesn't mean these people can't get the joy of side projects. It's especially lazy when the project is open source and you can literally ask CC: hey, dig into this code, did they build anything interesting or different? Peter's side projects like Vibetunnel and Openclaw have so many hidden gems in the code (a REST API for controlling local terminals, and Heartbeat, respectively) that you can integrate into your own project. Dismissing these projects as "AI slop" stops you from learning what amazing things people are building, especially when those people have different expertise. Lest we forget, AlphaFold borrowed the transformer from machine translation research, and sometimes the best discoveries come from completely unrelated fields.
I would love to meet these people that are getting joy out of seeing other people's random fucking vibe-coded apps that have zero rigor or skill applied to them lol.
That being said, THOSE projects at this point have enough "activity" around them to make them at least somewhat worthy of a post. Which none of the vibe code posts have going for them.
There are more projects I suspect were made predominantly using AI, but I don't want to speculate.
What, were they vibe coded in COBOL?
That is: I don't understand why the use of Claude Code itself renders them unworthy of discussion.
I'm not sure this article deserves all that much attention if the standard is a subjective interpretation of what is truly special: human-made, human-directed, or not.
Is there some sort of spectrum of not special, kind of special, pretty special, and truly special?
Does it have to be special for everyone or just some people?
Is it trying to say that people by default build and share things for external validation?
The argument about how people are using AI to solve a problem is akin to how people might feel about someone using a spreadsheet to solve one.
Sometimes projects are for learning. Sometimes projects are for solving a problem that's small to others, but okay to you to solve.
Insecurity about other people learning to build things for the first time, and then continuing to learn to build them better, might be what this is about, period.
There have always been a great number of problems that never could quite get the attention of software development.
I've genuinely met non-software folks who are interested in first solving a problem and then solving it better and better. And I think that type of self-directed learning in software is invaluable.
AI makes slop, but humans sure seem to like creating the same frameworks over and over in every language and thinking it's progress in some way. But every so often you get a positive shift forward, maybe a Ruby on Rails or something else.
If your spaces that were previously full of interesting things suddenly become deluged with uninteresting things, then that is something to complain about.
Come on. This site keeps promoting negative content.
It wasn't like I couldn't build before, it just makes it easier and a hell of a lot more fun now. I just did an AI side project and it was a blast. https://oj-hn.com
AI isn't going to take your job. People who know AI are.
It’s why I only focus on actual hardware “hacking” projects; they’re more fun to read and follow, and I know they weren’t vibecoded.
Writing such a quick note about AI slop, a few days after they posted a Show HN with something that looks a bit like AI slop, amused me.