79 points by surprisetalk 4 hours ago | 16 comments
  • kylecazar 3 hours ago
    Maybe add a category for posts and comments about AI on HN :)

    "Stories about AI" is not offensive to me. Its influence on the industry is undeniable and if I'm feeling tired of that content I just won't engage with it.

    AI-writing is another story, but yeah -- HN is downstream of that problem. You can encourage people not to submit articles that seem to be LLM authored, but it won't work.

    • tptacek 3 hours ago
      Part of the ethos of HN is that we don't do content/subject silos; it's a way in which HN is very distinct from Reddit. I don't think this will happen and I think if it does it's a bad idea (not least because I don't think a site dominated by software developers is going to separate itself from AI, any more than it will separate itself from programming language discussions), but I understand the impulse. They're not the funnest stories to comment on.
      • kylecazar 3 hours ago
        Couldn't agree more -- I meant a category in this post's chart :) I'll admit it was snarky.
        • tptacek 3 hours ago
          Sorry, I'm knee-jerk about the thing I said because it comes up constantly as a suggestion for how to fix things.
      • csande17 2 hours ago
        /ask and /show are sort of HN's version of content/subject silos; posts there can technically appear on the front page but are comparatively less likely to. I imagine they could add a /slop section for AI posts, and then tweak the ranking logic for the main /news page to prevent too many from showing up at once.
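
        A toy sketch of what that ranking tweak could look like; HN's actual ranking code isn't public, so the "ai" tag, the cap, and the post shape here are all invented:

          # Hypothetical sketch, not HN's real code: cap how many AI-tagged
          # stories show up on a single front page; the rest slide to later pages.
          from typing import Dict, List

          MAX_AI_POSTS_PER_PAGE = 3  # invented knob

          def build_front_page(ranked: List[Dict], page_size: int = 30) -> List[Dict]:
              page, ai_shown = [], 0
              for post in ranked:
                  is_ai = "ai" in post.get("tags", [])
                  if is_ai and ai_shown >= MAX_AI_POSTS_PER_PAGE:
                      continue  # demoted, not dropped: it can surface on a later page
                  page.append(post)
                  ai_shown += is_ai
                  if len(page) == page_size:
                      break
              return page

        Demoting rather than dropping keeps the silo soft, much like /ask and /show posts that can still reach the front page but are less likely to.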
        • tptacek 2 hours ago
          I understand the suggestion to be moving all posts about AI, agents, etc to a silo. Generated posts are generally already off-topic here (I gather they're about to add a new flag for that).

          I think it's going to be really difficult to segregate discussions about AI from discussions about software development over the next few years.

      • manwds 26 minutes ago
        [dead]
  • delichon 2 hours ago
    I'm afraid that we're in an interregnum. A few years ago AI could not pass a Turing test. A few years from now AI will be better at Turing tests than we are. We're now in this strange middle zone where we are dazedly grasping for solutions.

    But what happens next, when we just fail at the task of recognizing ourselves in cyberspace? Where LatestClaw is just plain better at mimicking you than you are? What happens to the living we used to claw out of the ether for ourselves?

    Do I need to learn to farm?

    • pastel8739 2 hours ago
      Maybe we get off all these useless websites and stop doing our useless jobs and go back to the real world
      • nine_k 2 hours ago
        Welders? Car mechanics? Nurses? Cooks? Cleaners?..
        • ryandrake 2 hours ago
          Whatever real-world jobs they expect knowledge workers to take on after we are all replaced by AI... we at least know they will pay less than our current "useless jobs".
          • bluefirebrand 2 minutes ago
            Really optimistic to assume such jobs will exist in the volumes needed to absorb all of the knowledge workers
          • georgemcbay 2 hours ago
            > we at least know they will pay less than our current "useless jobs".

            ...and they will also likely pay less than they do now because there will be more labor supply, which the people currently doing those jobs won't be happy about.

        • SoftTalker 2 hours ago
          Well, we need all those things. And AI can't do them.
    • andai 2 hours ago
      There was one paper recently where the AI beat humans at the Turing test two-thirds of the time.

      I think it's because they told it to type like a 13-year-old, and nobody could imagine AI talking like that.

      • CamperBob2 2 hours ago
        We don't post-train current frontier models to pass the Turing test, but if we did, it wouldn't be much of a challenge for them, IMHO. It's a dead benchmark. It tests the humans, not the machines.
  • est 3 hours ago
    > I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text

    I tried it against some of my AI-generated articles. It said 100% human.

    Turns out if one manually writes the structure and core idea first, nobody thinks it's AI.

  • _pdp_ 3 hours ago
    There is no doubt there is a lot of AI-generated content. We do it too -- code, tutorials, etc. It is just too convenient and useful to ignore.

    The question that I have is this.

    Is it possible that written language will converge towards AI mannerisms -- i.e., most people will naturally write like AI because they pick up the subtleties of its language from ChatGPT, Claude, etc.? In other words, there is an exposure effect at play.

    I just found out about Communication Accommodation Theory (CAT), which makes me think the answer is probably "yes".

    • calebelac an hour ago
      Great question. Heading off to read up on CAT now
  • rob 2 hours ago
    Time to switch to a $10 one-time fee like Something Awful Forums. No crypto.
    • tptacek 2 hours ago
      And never get a serendipitous first-time comment from the subject of an interesting or important story again. Sounds like a bad tradeoff.
      • bluefirebrand a minute ago
        No, if the tradeoff is that I never have to read a comment online written by an AI ever again, that's a great trade
  • ljhsiung 3 hours ago
    One of many things that bums me out about AI is whether content I create will be truly appreciated by humans, or will just be fed back into the algorithm.

    I often wonder how exactly you'd mitigate this. Further, as a user, I wonder what incentive there is for me to write anything at all online, let alone commenting on forums, if it will just be fed back into an LLM.

    Is paywalling or forcing user accounts the solution? That feels antithetical to the reason for the internet at all.

    Just musings.

    • altairprime 3 hours ago
      Simply putting up a basic auth wall that says “Enter any password to proceed” would stop all modern crawlers dead in their tracks, afaik. You could harden it against trivial workarounds by putting a rotating / per-source password in the basic auth message, but honestly, I think they're all coded not to invite a CFAA hacking lawsuit by trying random passwords on password-protected sites :)
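
      A minimal sketch of that kind of gate, assuming a Flask app (the framework choice and handler names are mine, not the commenter's); any non-empty password is accepted:

        # Illustrative only: an "any password works" basic-auth gate, as described above.
        # The realm string carries the hint from the comment (browsers display it differently).
        from flask import Flask, Response, request

        app = Flask(__name__)

        @app.before_request
        def soft_auth_wall():
            auth = request.authorization
            if auth is None or not auth.password:
                # No credentials supplied (what most crawlers would do): ask for some.
                return Response(
                    "Enter any password to proceed.",
                    status=401,
                    headers={"WWW-Authenticate": 'Basic realm="Enter any password to proceed"'},
                )
            # Any username/password pair is accepted; it is never checked.

        @app.route("/")
        def index():
            return "Hello, human."

      The wall isn't meant to be secure; it only has to exist, since the bet is that crawlers won't start guessing credentials at an auth prompt.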
    • dyauspitr 3 hours ago
      If it’s on here it will probably be read by a human. It may also then be fed back as training data but why do you care?
  • webprofusion 3 hours ago
    For an HN front-page article this is light on content. Should have used AI.
  • nunez 2 hours ago
    HN cargo-cults heavily for sure. That's more of a reflection of SV culture than something unique to HN.

    2016-2018 was Docker and Kubernetes. 2020 was COVID. 2021-2022 was WFH good, RTO bad...and lots of Web3 and crypto stuff. 2023 was the dawn of AI, and it hasn't let up since. These are vibes and likely inaccurate.

  • CharlesW an hour ago
    > Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions or outright misconceptions.

    Pot, kettle, black. "Remarkably good" drastically oversells the reliability of it and other AI detectors. It means very little that Pangram did better than other competitors in this snake-oily category in one 2025 benchmark.

  • deepsquirrelnet 3 hours ago
    > I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions

    Turing test is really in the rearview, huh?

    Humans need machines to detect if a machine wrote the text, because humans aren’t sure.

  • senectus1 2 hours ago
    I'm more interested in how many of the comments are AI
    • grebc an hour ago
      i’d wager 95% of the green names definitely are bots.
      • iso-logi an hour ago
        Not all of us are 100 years old.
        • grebc 39 minutes ago
          Good bot.
  • marysminefnuf 2 hours ago
    I think we should allow users to personally add a set of like 5 tags to content on our accounts, and see what other people at large are tagging stuff as. So if a blog that's written with AI is something you want to ignore, you can just tag that URL and it won't show, and you can see what other people tagged that blog as too.
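
    A rough sketch of how that could work, with every name and limit here made up for illustration:

      # Rough sketch of the per-user tagging idea above; all names are invented.
      from collections import Counter, defaultdict

      MAX_TAGS_PER_ITEM = 5  # "a set of like 5 tags" per user, per URL

      user_tags = defaultdict(lambda: defaultdict(set))   # user -> url -> {tags}
      global_tags = defaultdict(Counter)                   # url -> tag -> count

      def add_tag(user: str, url: str, label: str) -> None:
          """Record a personal tag and count it in the site-wide tally."""
          tags = user_tags[user][url]
          if label in tags:
              return
          if len(tags) >= MAX_TAGS_PER_ITEM:
              raise ValueError("personal tag limit reached for this item")
          tags.add(label)
          global_tags[url][label] += 1

      def visible_to(user: str, url: str, hidden=frozenset({"ai-written"})) -> bool:
          """Hide anything this user personally tagged with a hidden label."""
          return not (user_tags[user][url] & hidden)

    Nothing gets deleted; each reader only filters their own view, while the shared tally shows what everyone else called the same URL.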
  • halfcat 2 hours ago
    That’s a great question and a very realistic thing for us to answer. There is definitely no increase in AI here. If you’d like, I can walk you through how the best posters arrive at this conclusion in the normal human way. Just say the word.
  • marysminefnuf 3 hours ago
    Too much
  • zacklee1988 3 hours ago
    [dead]
  • cj 3 hours ago
    I haven't really noticed. Doesn't seem like HN has changed very much.

    Edit: Clearly the topics have evolved over time (AI, crypto, there will always be some topic taking up the majority of attention), but the type and worthiness of the content seem unchanged.

    • giancarlostoro 3 hours ago
      Compared to two years ago? HN has never been this saturated with AI; the level is pretty high. Even when crypto was at its peak, I don't think it ever dominated the HN front page to this extreme.
      • 1attice an hour ago
        In terms of dollar magnitude, AI is in a class of its own. The investments make crypto look like softball. Attention around here follows the dollars, for good and ill.