I check in every few weeks and I don't understand how anyone can use that subreddit more frequently.
This is an interesting post, but not a Show HN.
And there is also this https://dewmal.medium.com/hacker-news-is-a-living-time-capsu...
which also includes the average voting scores; those actually fall at the same time the quantity increases (while the average story scores remain the same), which is interesting.
Original title: "Data on AI-related Show HN posts: More than 1 in 5 Show HN posts are now AI-related, but get less than half the votes or comments."
6 months ago, 155 comments https://news.ycombinator.com/item?id=44463249
I assume the vast, vast majority never get any upvotes.
Edit: maybe you could:
- remove outliers (anything that made the front page)
- normalise vote count by expected time in the first 20 posts of shownew, based on the posting rate at the time
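The normalisation suggested above could be sketched roughly like this. This is a hypothetical sketch, not the post's actual methodology: `posts_per_hour` and the 20-post /shownew window are taken from the comment, and the names are made up.

```c
#include <assert.h>

/* Hypothetical normalisation: a Show HN post is mostly visible while it
 * sits among the first 20 posts of /shownew, so its expected exposure
 * window shrinks as the posting rate rises. Dividing votes by that window
 * makes posts from busy and quiet periods comparable. */

/* Hours a post is expected to spend in the top 20, given posts per hour. */
static double expected_hours_in_top20(double posts_per_hour) {
    return 20.0 / posts_per_hour;
}

/* Votes per hour of front-of-queue exposure. */
static double normalised_votes(int votes, double posts_per_hour) {
    return (double)votes / expected_hours_in_top20(posts_per_hour);
}
```

So a post that got 10 votes while 5 posts/hour were arriving (4 expected hours of exposure) would score 2.5 votes per exposure-hour.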
(edit: struck) <strike>Is it deliberate that this post appears as “Show HN” itself? I hope not to be too negative, but to qualify as such I would expect much more than a page with two graphs.</strike>
https://old.reddit.com/r/selfhosted/comments/1qfp2t0/mod_ann...
Nowhere in my comment did I say this, so this is quite a non-sequitur you've based the following personal attack upon. Regardless of whether it's possible to use LLMs to generate good things, the vast majority of things generated with them are not good, and if the good things exist, they are being drowned out in a sea of spam, increasingly difficult to discover along with the good human-generated content.
I have to say, I would characterise both your comment and the original comment I replied to as being considerably more "unfair" than mine. The first comment was clearly written in such a way to get a rise out of people. Your reply is directly insinuating that I'm out-of-touch and ranting at clouds.
It's very tiresome. Like an idiot savant, they're an idiot most of the time, and every 10th try you go 'oh, but that's neat and clever'.
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
Quite a few of us are tired of being told that we're imagining doing what used to take weeks multiple times in an evening.
LLMs, when used like this, do not increase productivity on making software worth sharing with other people. While they can knock out a proof-of-concept, they cannot build it into something valuable to anyone but the prompter. And by short-circuiting the learning process, you do not learn the skills necessary to build upon the domain yourself, meaning you still have to spend weeks learning those skills if you actually want to build something meaningful. At least this is true of everything I have observed out of the vibe-coding bubble so far, and of my own extensive attempts to discover the 10x boost I am told exists. I am open to being shown something genuinely great that an LLM generated in an evening, if you wish to share evidence to the contrary.
There is also the question of the provenance of the code, of course. Could you have saved those weeks by simply using a library? Is the LLM saving you weeks by writing the library "from scratch", in actuality regurgitating code from an existing library one prompt at a time? If the LLM's productivity gain is that it normalized copying and pasting open-source code wholesale while calling it your own, I don't think that's the great advancement for humanity it is portrayed as.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with the mental model of another developer's assumptions about how people should use their library. Instead of a kitchen sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
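For readers who don't do embedded work, the kind of wrapper being described might look something like this. This is a purely illustrative sketch, not the commenter's actual code: the `spi_cmd`/`spi_data` functions are placeholder bus primitives (here they log bytes instead of driving hardware), and the LVGL flush-callback glue is omitted. The three command codes are the standard ST7789-family ones.

```c
#include <stdint.h>
#include <stddef.h>

#define ST7789_CASET 0x2A  /* column address set */
#define ST7789_RASET 0x2B  /* row address set */
#define ST7789_RAMWR 0x2C  /* memory write */

/* Stub SPI layer: capture bytes for inspection instead of real hardware. */
static uint8_t log_buf[64];
static size_t log_len;

static void spi_cmd(uint8_t c)  { log_buf[log_len++] = c; }
static void spi_data(uint8_t d) { log_buf[log_len++] = d; }

/* Set the drawing window; each axis takes a 16-bit start and end address.
 * LVGL's flush callback would call this, then stream pixel data. */
static void st7789_set_window(uint16_t x0, uint16_t y0,
                              uint16_t x1, uint16_t y1) {
    spi_cmd(ST7789_CASET);
    spi_data(x0 >> 8); spi_data(x0 & 0xFF);
    spi_data(x1 >> 8); spi_data(x1 & 0xFF);
    spi_cmd(ST7789_RASET);
    spi_data(y0 >> 8); spi_data(y0 & 0xFF);
    spi_data(y1 >> 8); spi_data(y1 & 0xFF);
    spi_cmd(ST7789_RAMWR);  /* subsequent writes fill the window */
}
```

The appeal is exactly what the comment says: a function this direct maps 1:1 to what the display needs, with no general-purpose driver API in between.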
I happen to believe that you're foolish for endlessly repeating the same blather about "vibe coding" instead of celebrating what you yourself described: lowering the barrier to entry for domains that are extremely rough and outside one's immediate skillset, and the incredible impact that has on project trajectory, motivation, and skill-stacking for future projects.
Your [projected] assumption that everyone using these tools learns nothing from seeing how problems can be solved is painfully narrow-minded, especially given that anyone with a shred of intellectual curiosity quickly finds that they can get up to speed on topics that previously seemed daunting to impossible. Yes, I really do believe that you have to expend effort to not experience this.
During the last few weeks I've built a series of increasingly sophisticated multi-stage audio amplifier circuits after literal decades of being quietly intimidated by audio circuits, all because I have the ability to endlessly pepper ChatGPT with questions. I've gone from not understanding at all to fully grasping the purpose and function of every node to a degree that I could probably start to make my own hybrids. I don't know if you do electronics, but the disposition of most audio electronics types does not lend itself to hours of questions about op-amps.
Where do we agree? I strongly agree that people are wasting our time when they post low-effort slop. I think that easy access to LLMs shines a mirror on the awkward lack of creativity and good, original ideas that too many people clearly [don't] have. And my own hot take is that I think Claude Code is unserious. I don't think it's responsible, or even particularly compelling, to get excited about treating never looking at the code as a goal.
I've used Cursor to build a 550k+ LoC FreeRTOS embedded app over the past six months that spans 45 distinct components which communicate via a custom message bus and event queue, juggling streams from USB, UART, half a dozen sensors, and a high-speed SPI display. It is well-tested, fully specified, and the product of about 700 distinct feature implementation plan -> chat -> debug loops. It is downright obnoxious reading the stuff you declare when you're clearly either doing it wrong or, well, confirmation of the dead internet theory.
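For context, the publish/subscribe pattern behind a "custom message bus" like the one described can be sketched in a few lines. This is my own minimal, hypothetical illustration, not the commenter's code; in the real FreeRTOS app the messages would presumably be marshalled through kernel queues between tasks, which is omitted here.

```c
#include <stddef.h>

#define MAX_SUBS 8

/* Each subscriber registers a callback for one integer topic ID. */
typedef void (*handler_fn)(int topic, const void *payload);

static struct { int topic; handler_fn fn; } subs[MAX_SUBS];
static size_t n_subs;

/* Register a handler for a topic; returns 0 on success, -1 if full. */
static int bus_subscribe(int topic, handler_fn fn) {
    if (n_subs == MAX_SUBS) return -1;
    subs[n_subs].topic = topic;
    subs[n_subs].fn = fn;
    n_subs++;
    return 0;
}

/* Deliver a payload to every handler registered on the topic. */
static void bus_publish(int topic, const void *payload) {
    for (size_t i = 0; i < n_subs; i++)
        if (subs[i].topic == topic)
            subs[i].fn(topic, payload);
}

/* Example subscriber: count deliveries on its topic. */
static int hits;
static void count_hits(int topic, const void *payload) {
    (void)topic; (void)payload;
    hits++;
}
```

The point of such a bus is that components only know topic IDs, not each other, which is what lets 45 components coexist without a web of direct dependencies.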
I honestly don't know which is worse.
Responsible people who use their knowledge to review LLM-generated code will produce more - up to their maximum rate of taking responsibility.
Irresponsible people will just smear shit all over the codebase.
The jury is still out on the net effect, and the agents' level of sophistication is a secondary factor.