12 points by devrandoom 9 hours ago | 4 comments
  • remram 5 minutes ago
    Discussed here: https://news.ycombinator.com/item?id=43806940 (238 points, 168 comments)
  • remram an hour ago
    I'm of two minds about this.

    On the one hand, this is egregious and should not happen. People should not be stalked and lied to, and communities should not be manipulated at this scale.

    On the other hand, this almost certainly already happens, since it is incredibly easy for any group to use LLMs this way. So the process and its effectiveness should be studied and reported to the public... Though there might be a better way to do it?

  • devrandoom 9 hours ago
    Reddit is furious, understandably.

    I wonder if other groups are using AI to post on Reddit too?

    • feraloink 7 hours ago
      More seriously now, the Zurich researchers were very creepy. Reading further into the article, I saw that they first stalked some of the redditors' past posting history before trying to influence them in r/ChangeMyView, or whatever it is called.

      I wouldn't want to be a guinea pig for that purpose.

    • feraloink 7 hours ago
      I wonder if other groups are using AI to post on HN too?

      (Seems less likely than on reddit... TBQH)

      • krapp 7 hours ago
        Seems more likely to me. Reddit has a far broader userbase, whereas HN is primarily made up of the sort of people who would post with AI. And HN has a non-rate-limited API (even though most people here probably use Algolia).

        And people already paste comments from AI; it's becoming more and more common. At least they have the courtesy to point it out, but I expect that will stop before the AI comments do.
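        For context, the API krapp mentions is HN's official Firebase endpoint, which is public and unauthenticated. A minimal sketch of fetching one item from it (the `fetch_item` helper name is my own, not part of the API):

        ```python
        import json
        import urllib.request

        # Official Hacker News API base (Firebase-hosted, no API key required).
        API_BASE = "https://hacker-news.firebaseio.com/v0"

        def item_url(item_id: int) -> str:
            # Stories and comments share one endpoint, keyed by numeric id.
            return f"{API_BASE}/item/{item_id}.json"

        def fetch_item(item_id: int) -> dict:
            # Returns the raw JSON object: fields like "by", "text",
            # "kids" (child comment ids), and "type" ("story", "comment", ...).
            with urllib.request.urlopen(item_url(item_id)) as resp:
                return json.load(resp)

        if __name__ == "__main__":
            # E.g. the earlier discussion linked above:
            item = fetch_item(43806940)
            print(item.get("type"), item.get("descendants"))
        ```

        Algolia's HN search API covers full-text queries; the Firebase API only fetches items by id, which is why scripted crawling tends to use it directly.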

        • morkalork 7 hours ago
          While I do love seeing the comments that start with "I asked ChatGPT..." get absolutely eviscerated in the replies and downvoted, it is probably sending the wrong signal.
  • MartinGAugustin 6 hours ago
    [dead]