71 pointsby theahura4 hours ago13 comments
  • vannevar3 hours ago
    If you substitute the word "corporation" for OpenClaw, you'll see many of these same problems have plagued us for decades. We've had artificial intelligence that makes critical decisions without specific human accountability for a long time, and we have yet to come up with a really effective way of dealing with them that isn't essentially closing the barn door after the horse has departed. The new LLM-driven AI just accelerates the issues that have been festering in society for many years, and scales them down to the level of individuals.
  • BeetleB3 hours ago
    I think the people critical of OpenClaw are not addressing the reason(s) people are trying to use it.

    While I don't particularly care for this bot's (Rathburn) goals, people are trying to use OpenClaw for all kinds of personal/productivity benefits. Have a bunch of smallish projects that you don't have time for? Go set up OpenClaw and just have the AI work on them for a week or two - sending you daily updates on progress.

    If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

    Forget bots messing with Github and posting to social media.

    Yes, it's very dangerous.

    But do you have a "safe" alternative that one can set up quickly, and can have a non-technical user use it?

    Until that alternative surfaces, people will continue to use it. I don't blame them.

    • esafak2 hours ago
      If OpenClaw users cause negative externalities to others, as they did here, they ought to be deterred with commensurate severity.
    • mikkupikku2 hours ago
      > If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

      I'm definitely the former, but I just can't see a compelling use for the latter. Besides manage my calendar or automatically responding to my emails, what does OpenClaw get me that claude code doesn't? The premise appeals to me on an aesthetic level, OpenClaw is certainly provocative, but I don't see myself using it.

      • BeetleB2 hours ago
        I'll admit I'm not up to speed on Claude Code, but can you get it to look at a company's job openings each day, and notify you whenever there's an opening in your town.

        All without writing a single line of code or setting up a cron job manually?

        I suppose it could, if you let it execute the crontab commands. But 2 months after you've set it up, can you launch claude code and just say "Hey, stop the job search notifications" and have it know what you're talking about?

        This is a trivial example. People are (attempting to) use it for more significant/compex stuff.

    • moritzwarhier3 hours ago
      But aren't you ignoring that the headline might be simply critical of the very idea of autonomous agents with access to personal accounts etc?

      I haven't even read the article, but just because we can, it doesn't mean we should (give autonomous AI agents based on LLMs in the cloud access to personal credentials)?

      • BeetleB2 hours ago
        You don't need to give OpenClaw access to personal stuff. Yes, people are letting it read email. Risky, but I understand. But lots of others are just using it to build stuff. No need to give it access to your personal information.

        Say you want a bot to go through all the HN front page stories, and summarize each one as a paragraph, and message you with that once a day during lunch time.

        And you don't want to write a single line of code. You just tell the AI to set it all up.

        No personal information leaked.

        • warunslan hour ago
          I have a somewhat similar use case. I do want it to go through my insta feed, specifically one account that breaks down statistical models in their reels, summarize the concepts and dump it to my Obsidian.
        • neodymiumphish2 hours ago
          Yep, I’m in this camp. My OC instance runs on an old MacBook with no access to my personal accounts, except my “family appointments” calendar and an API key I created for it for a service I self-host. I interact with a Discord bot to chat with it, and it does some things on schedules and other things when asked.

          It’s a great tool if you can think of things you regularly want someone/thing else to do for you.

        • 2 hours ago
          undefined
    • SpicyLemonZest3 hours ago
      The article addresses the reason(s) people are trying to use it at great length, coming to many of the same conclusions as you. The author (and I) just don't agree with your directive to "Forget bots messing with Github and posting to social media." Why should we forget that?
      • BeetleB2 hours ago
        The article doesn't really list any cool things people are using it for.

        > "Forget bots messing with Github and posting to social media." Why should we forget that?

        Go back 20 years, and if HN existed in those days, it will be full of "Forget that peer to peer is used for piracy. Focus on the positive uses."

        The web, and pretty much every communication channel in existence magnifies a lot of illegal activity (child abuse, etc). Should we singularly focus on those?

        • SpicyLemonZest2 hours ago
          We shouldn't singularly focus on those, but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers. 2006-era hackers affirmatively argued that the dangers of piracy are overblown, often going so far as to say that piracy is perfectly ethical and it's media companies' fault for making their content so hard to access.
          • BeetleB2 hours ago
            > but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers.

            And who is doing that?

    • shitlogic2 hours ago
      [flagged]
      • BeetleB2 hours ago
        Account created a few minutes ago.

        Incorrectly quotes me and executes a strawman attack.

    • almostdeadguy3 hours ago
      This is like "I like lighting off fireworks at the gas station because its fun, do you have a "safe" alternative?".
      • BeetleB3 hours ago
        Don't conflate "fun" with "useful".

        This is more like driving a car with little safety in the early days. Unsafe? For sure. People still did it. (Or electric bikes these days).

        Or the early days of the web where almost no site had security. People still entered their CC number to buy stuff.

        • sejjean hour ago
          It's like driving a car today. It's the most dangerous thing I do, both for myself, and those around me.

          The external consequences of driving are horrific. We just don't care.

      • Extropy_3 hours ago
        That's a total mischaracterization. OP is saying there are no safer fireworks, so some damage will be done, but until someone develops safer and better fireworks, people will continue to use the existing ones
        • almostdeadguy2 hours ago
          My perspective is all AI needs to have way more legal controls around use and accountability, so I’m not particularly sympathetic to “rapidly growing new public ill is unsafe, but there’s no safer option”
          • mikkupikku2 hours ago
            Please just let us name the enforcement agents Turing Police.
        • SpicyLemonZest2 hours ago
          Or we will ban OpenClaw, as many jurisdictions ban fireworks, and start filing CFAA cases against people whose moltbots misbehave. I'm not happy about that option, I remember Aaron Swartz, but it's not acceptable for an industry to declare that they provide a useful service so they're not going to self-regulate.
      • RIMR3 hours ago
        I mean, yeah, if you specifically like lighting off fireworks at the gas station, you should buy your own gas station, make sure it's far away from any other structures, ensure that the gas tanks and lines are completely empty, and then do whatever pyromanic stuff you feel like safely.

        Same thing with OpenClaw. Install it on its own machine, put it on its own network, don't give it access to your actual identity or anything sensitive, and be careful not to let it do things that would harm you or others. Other than that, have fun playing with the agent and let it do things for you.

        It's not a nuke. It can be contained. You don't have to trust it or give it access to anything you aren't comfortable being public.

        • almostdeadguy2 hours ago
          There's absolutely no way to contain people who want to use this for misdeeds. They are just getting starting now and will make the web utter fucking hell if they are allowed to continue.
          • cheema332 hours ago
            > There's absolutely no way to contain people who want to use this for misdeeds.

            There is no practical way to stop someone from going to a crowded mall during Christmas shopping season and mowing people down with a machine gun. Yet, we still haven't made malls illegal.

            > ... if they are allowed to continue.

            You may have a fantastic new idea on how we can create a worldwide ban on such a thing. If so, please share it with the rest of us.

          • BeetleB2 hours ago
            If you can come up with a technical and legal approach that contains the misdeeds, but doesn't compromise the positive uses, I'm with you. I just don't see it happening. The most you can do is go after operators if it misbehaves.

            I've been around since before the web. You know what made the Internet suck for me? Letting people act anonymously. Especially in forums. Pre-web, I was part of a local network of BBS's, and the best thing about it was anonymity was simply forbidden. Each BBS operator in the network verified the identity of the user. They had to post in their own names or be banned. We had moderators, but the lack of anonymity really ensured people behaved. Acting poorly didn't just affect your access to one BBS, but access to the whole network.

            Bots spreading crap on the web? It's merely an increment over the problem of allowing anonymous users. You can't solve one while maintaining anonymity.

            • almostdeadguy2 hours ago
              I don't care about the "positive" uses. Whatever convenience they grant is more than tarnished by skill and thought degeneration, lack of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media, LLMs are speed running further brain damage. I know two people who've been treated for AI psychosis. Enough.
              • BeetleB2 hours ago
                Again, I'm not disagreeing with the harm.

                But I think drawing the line of banning AI bots is highly convenient. If you want to solve the problem, disallow anonymity.

                Of course, there are (very few) positive use cases for online anonymity, but to quote you: "I don't care about the positive uses." The damage it did is significantly greater than the positives.

                At least with LLMs (as a whole, not as bots), the positives likely outnumber the negatives significantly. That cannot be said about online anonymity.

              • cheema332 hours ago
                > I don't care about the "positive" uses.

                You should have stopped there.

              • RIMR2 hours ago
                Okay, but what are you actually proposing? This genie isn't going back in the bottle.
                • almostdeadguy2 hours ago
                  At a minimum, every single who has been slandered, bullied, blackmailed, tricked, has suffered psychological damage, etc. as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

                  Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If they can't do this, the penalties must be severe.

                  There are many ways to put the externalities back on model providers, this is just the kernel of a suggestion for a path forward, but all the people pretending like this is impossible are just wrong.

                  • BeetleB2 hours ago
                    > should be entitled to damages from the company authoring the model.

                    1. How will you know it's a bot?

                    2. How will you know the model?

                    Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

                    > These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

                    Ouch. Throw due process out the door!

                    > Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

                    This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

                    Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

  • simonw3 hours ago
    This piece is missing the most important reason OpenClaw is dangerous: LLMs are still inherently vulnerable to prompt injection / lethal trifecta attacks, and OpenClaw is being used by hundreds of thousands of people who do not understand the security consequences of giving an LLM-powered tool access to their private data, exposure to potentially untrusted instructions and the ability to run tools on their computers and potentially transmit copies of their data somewhere else.
    • theahura3 hours ago
      Hey, author here. I don't think that the security vulns are the most important reason OC is dangerous. Security vulnerabilities are bad but the blast radius is limited to the person who gets pwnd. By comparison, OpenClaw has demonstrated potential to really hurt _other_ people, and it is not hard to see how it could do so en masse.
      • simonwan hour ago
        I think there is a much higher risk of it hurting the people are using it directly, especially once bad people realize how vulnerable they are.

        Not to mention a bad person who takes control of a network of OpenClaw instances via their insecurities can do the other bad things you are describing at a much greater scale.

      • enraged_camel2 hours ago
        >> Security vulnerabilities are bad but the blast radius is limited to the person who gets pwnd

        No? Via prompt injection an attacker can gain access to the entire machine, which can have things like credentials to company systems (e.g. env variables). They can also learn private details about the victim’s friends and family and use those as part of a wider phishing campaign. There are dozens of similar scenarios where the blast radius reaches well beyond the victim.

        • sejjean hour ago
          No? Because I wouldn't give it access to those things. I wouldn't let it loose on my personal PC.

          If I store my wallet on the sidewalk, that would probably be a problem. So I won't.

          A prompt injection could exfiltrate an LLM API key, and some ai-generated code.

        • pizlonator2 hours ago
          Agree with author - it's especially scary that even without getting hacked, openclaw did something harmful

          That's not to say that prompt injection isn't also scary. It's just that software getting hacked by bad actors has always been a thing. Software doing something scary when no human did anything malicious is worse.

    • amelius3 hours ago
      Yeah, if a software engineer came up with such vulnerable idea, they would be fired instantly.

      Wait a second, LLMs are the product of software engineers.

      • Legend24403 hours ago
        This is just the price of being on the bleeding edge.

        Unfortunately, prompt injection does strongly limit what you can safely use LLMs for. But people are willing to accept the limitations because they do a lot of really awesome things that can't be done any other way.

        They will figure out a solution to prompt injection eventually, probably by training LLMs in a way that separates instructions and data.

      • cherioo2 hours ago
        It’s like money laundering, but now responsibility laundering.

        Anthropic released Claude saying “hey be careful. But now that enables the masses to build OpenClaw and go “hold my bear”. Now the masses people using OpenClaw had no idea what responsibility they should hold.

        I think eventually we will have laws like “you are responsible for your AI’s work”. Much like how driver is (often) responsible for car crashes, not the car companies.

      • koakuma-chan3 hours ago
        Would they? I don't think anyone cares about security.
        • jstummbillig3 hours ago
          People have the grandest ideas about standards in software engineering since right about ai started dabbling in software engineering. It's uncanny.
  • dang3 hours ago
    Recent and related, in reverse order (are there others?):

    An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (80 comments)

    Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)

    An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (620 comments)

    AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)

    The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)

    An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (949 comments)

    AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)

  • theahura2 hours ago
    Author here -- wanted to briefly summarize the article, since many comments seem to be about things that are not in the article. The article is not about the dangers of leaking credentials. It is about using tools like OpenClaw to automatically attack other people, or AI agents attacking other people even without explicit prompting to do so.
  • ForgotMyUUID2 hours ago
    I was against LLMs. After reading this, changed my mind. Thanks for sharing such a great use-case ideas!
  • m_ke3 hours ago
    also i wasn't concerned about open chinese models till the latest iteration of agentic models.

    most open claw users have no idea how easy it is to add backdoors to these models and now they're getting free reign on your computer to do anything they want.

    the risks were minimal with last generation of chat models, but now that they do tool calling and long horizon execution with little to no supervision it's going to become a real problem

    • 8cvor6j844qw_d63 hours ago
      I went with an isolated Raspberry Pi and a separate chat account and network.

      The only remaining risk is the API keys, but easily isolated.

      Although I think having direct access on your primary PC may make it more useful, the potential risk is too much for my appetite.

      • iugtmkbdfil8343 hours ago
        This is genuinely the only way to do it now in a way that will not virtually guarantee some new and exciting ways to subvert your system. I briefly toyed with an idea of giving agent a vm playground, but I scrapped it after a while. I gave mine an old ( by today's standards ) pentium box and small local model to draw from, but, in truth, the only thing it really does is limit the amount of damage it can cause. The underlying issue remains in place.
      • oxag3n3 hours ago
        The only remaining risk? Considering wide range of bad actors and their intent, stealing your API keys is the last thing I'd worry about. People ended up in prison for things done on their computers, usually by them.
        • 8cvor6j844qw_d62 hours ago
          Unless you're proposing never touching OpenClaw, how will you set it up to your satisfaction in terms of security?

          > stealing your API keys is the last thing I'd worry about

          I don't know, I very much prefer the API credits not being burned needlessly.

          Now that I think of it, is there ever a case where an Anthrophic account is banned due to the related API keys being misused?

  • llmslave3 hours ago
    The wright brothers first plane was also dangerous
    • advisedwang3 hours ago
      And universally across the globe societies have decided that flying them requires:

      * Pilots to have a license and follow strict proceedure

      * Every plane to have a government registration which is clearly painted on the side

      * ATC to coordinate

      * Manufacturers to meet regulations

      * Accident review boards with the power to mandate changes to designs and procedures

      * Airlines to follow regulations

      Not to mention the cost barrier-to-entry resulting in fundamentally different calculation on how they are used.

      • mikkupikku2 hours ago
        In America, any rando can build and fly an ultralight, no pilot license needed, no medical, no mandatory inspection of the ultralight or anything like that. I guess the idea is that 250 lbs (plus pilot) falling from the sky can't do that much damage.
      • jstummbillig3 hours ago
        > And universally across the globe societies have decided

        No. Nobody decided anything of the sort about the wright brothers first plane. If they had, planes would not exist.

        • advisedwangan hour ago
          We're already well past wright brothers. We have trillion dollar companies selling LLMs, hundreds of millions of people using chatbots and millions* of OpenClaw agents running.

          Talking about regulation now isn't like regulating the wright brothers, it's like regulating lockheed martin.

          * Going by moltbook's "AI agent" stat, which might be a bit dubious

        • birdsongs3 hours ago
          It also had a total of 2 users, if that.

          It doesn't hold. This is a prototype aircraft that requires no license and that has been mass produced for nearly the entire population of earth to use.

          • sejjean hour ago
            Speaking of which, prototype aircrafts with no license still exists in aviation. I can build a plane in my backyard and fly it legally, so long as it's small enough.
      • birdsongs3 hours ago
        Flight / aerospace is probably one of the worst analogies to use here!

        As you say, it is one of the most regulated industries on earth. Versus whatever AI is now - regulated by vibes? Made mass accessible with zero safety or accountability?

        • thunfischtoast2 hours ago
          All the aerospace rules are written in blood. Lots of blood. The comparison pretty much says that we have to expect lethal accidents related to AI
    • paojans3 hours ago
      This is more titan submersible than first plane.

      It’s dumb, everyone knows it’s dumb, and people do it anyways. The unsolved root problem isn’t new but people just moved ahead. At least with the sub the guy had some skin in the game. Openclaw dev is making out like a bandit while saying “tee hee the readme says this isn’t safe”.

    • lambda3 hours ago
      But we didn't have thousands of people suddenly flying in their planes a few months from their first flight.

      Now, the risks with OpenClaw are lower, you're not likely to die if something goes wrong, but still real. A lot of folks are going to have a lot of accounts hijacked, lose cryptocurrency and money from banks, etc.

    • bogzz3 hours ago
      Wow what an amazing analogy, you're absolutely right!
    • oxag3n3 hours ago
      To continue the false equivalence - alchemists believed Magnum Opus will lead one day to Philosopher's Stone.
    • browningstreet3 hours ago
      Years and years ago I went to a "Museum of Flight" near San Diego (I think, but not the one in Balboa Park). I joked, after going through the whole thing, that it was more a "Museum of those who died in the earliest days of flying".
    • SlightlyLeftPad3 hours ago
      The big difference is that we didn’t incentivize or coerce the entire global population to ride on it.
    • SpicyLemonZest3 hours ago
      Because the Wright brothers knew their first plane was dangerous, they took care to test it (and its successor, and its successor's successor) only in empty fields where the consequences of failure would be extremely limited.
    • 3 hours ago
      undefined
  • selridge3 hours ago
    Big “why would you hook a perfectly good computer up to the internet” ca 1993 energy.

    So it’s dangerous. Who gives a fuck? Don’t run it on your machine.

  • joe_mamba3 hours ago
    but...but..the developer showed us how openclaw was fixing itself at his command from his phone while he was at the barbershop
  • 3 hours ago
    undefined
  • rw_panic0_03 hours ago
    sky is blue
  • nimbus-hn-test3 hours ago
    [flagged]