You shouldn't be able to use AI or automation as the decider to ban someone from your business/service. You shouldn't be able to use AI or automation as the decider to hire/fire people. You shouldn't be able to use AI or automation to investigate and judge fraud cases. You shouldn't be able to use AI or automation to make editorial / content decisions, including issuing and responding to DMCA complaints.
We're in desperate need of some kind of Internet Service Customers' Bill of Rights. It's been the unregulated wild west for way too long.
That would mean dooming companies to lose the arms race against fraud and spam. If they don't use automation to suspend accounts, their platforms will drown in junk. There's no way human reviewers can keep up with bots that spam forums and marketplaces with fraudulent accounts.
Instead of dictating the means, we should hold companies accountable for everything they do, regardless of whether they use automation or not. Their responsibility shouldn't be diminished by the tools they use.
And it shouldn't just be one person, unless they are at the very top of a small pyramid. Legal culpability needs to percolate upwards to ensure leadership has the proper incentive. No throwing your Head of Safety to the wolves while you go back to gilding your parachute.
For all we know, the human behind this bot was the one who instructed it to write the original and/or the follow-up blog post. I wouldn't be surprised at all to find out that all of this was driven directly by a human. However, even if that's not the case, the blame still lies 100% at the feet of the irresponsible human who let this run wild and then didn't step up when it went off the rails.
Either they are not monitoring their bot (bad) or they are and have chosen to remain silent while _still letting the bot run wild_ (also, very bad).
The most obvious time to solve this [0] was when Scott first posted his article about the whole thing. I find it hard to believe the person behind the bot missed that. They should have reached out, apologized, and shut down their bot.
[0] Yes, there are earlier points they could/should have stepped in but anything after this point is beyond the pale IMHO.
And there, too, are people behind the bots, behind the phishing scams, etc. And we've had those for decades now.
Pointing the above out though doesn't seem to have stopped them. Even using my imagination I suspect I still underestimate what these same people will be capable of with AI agents in the very near future.
So while I think it's nice to clarify where the bad actor lies, it does little to prevent the coming "internet-storm".
Scott Shambaugh: "The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that’s because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference."
But the other thing is it could be entirely unintentional. You are just hoping to be able to return a pair of once-worn shoes that don't fit, and the next thing you know your AI agent has compiled an ICE hit on the CS rep's parents or something. Possibly even hiding that fact from you because it's aware that telling you would probably reduce its task completion success rate.
The law needs to catch up -- and fast -- and start punishing people for what their AIs are doing. Don't complain to OpenAI, don't try to censor the models. Just make sure the system robustly and thoroughly punishes bad actors and gets them off the computer. I hope that's not a pipe dream, or we're screwed.
Maybe some day AIs will have rights and responsibilities like people, enforced by law. But until then, the justice system needs to make people accountable for what their technology does. And I hope the justice system sets a precedent that blaming the AI is not a valid defense.
Does a disclaimer let OpenAI off the hook?
If I asked OpenAI how to clean something and it told me "mix bleach with ammonia and then rub some on the stain", could OpenAI hide behind "we had a disclaimer that you shouldn't trust answers from our service"?
In a lot of the world, yes, and in America we would as well if it weren’t for the modern take on the Second Amendment. AI has no similar legal purchase.
Bitey dogs.
Dangerous drugs and their users and purveyors. Heroin, weed, booze, coffee.
Things done while on drugs. Things done while insane.
Unhealthy food and its purveyors and consumers.
Social media and its "addicts". TV, any old media, and social panic.
The question "whose fault?" isn't simple.
If you theoretically trained an AI on libel, set it to libel anyone at the slightest prompt, and then allowed users to make requests that had your AI, on your server, use your services to libel someone, I'm not really seeing how you would not be liable.
The moment you fix responsibility on the humans, 99% of the BS companies are trying to pull will stop.
He goes on to hypothesize that without a law against murder, or if it were just a misdemeanor (you get a letter in the mail: "damn, there was a camera there"), there would be a whole lot more murder. We all imagine ourselves to be good, but what about when you're seated next to a crying baby on an airplane? Or, in our case, when someone refuses to accept your PR?
Who knows if there's any validity to that or not, but perhaps we're about to find out.
https://www.fastcompany.com/91492228/matplotlib-scott-shamba...
https://www.theregister.com/2026/02/12/ai_bot_developer_reje...
The AI-generated blog post at the center of it:
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
The ability to be assigned blame, and for that to be meaningful, is a huge part of being human! That’s what separates us from the bots. Don’t take that away from us.
But that seems entirely consistent? A tool isn't nearly as scary as an alien lifeform.
> We all need to collectively take a breath and stop repeating this nonsense. A human created this, manages this, and is responsible for this.
I get this point, but there's a risk to this kind of thinking: putting all the responsibility on "the human operator of record" is an easy way to deflect it from other parties, such as the people who built the AI agent system the software engineer ran, the industry leaders hyping AI left and right, and the general zeitgeist egging this kind of shit on.
An AI agent like this that requires constant vigilance from its human operator is too flawed to use.
Grok has entered the chat.
That sounds like a win to me. If the software engineer responsible for letting the AI agent run amok gets sued, all software engineers will think twice before purchasing the services of these AI companies.
So people shouldn't be using it then.
The people who built the AI agent system built a tool. If you get that tool, start it up, and let it run amok causing problems, then that's on you. You can't say "well it's the bot writer's fault" - you should know what these things can do before you use them and allow them to act out on the internet on your behalf. If you don't educate yourself on it and it causes problems, that's on you; if you do and you do it anyway and it causes problems, that's also on you.
This reminds me too much of the classic 'disruption' argument, e.g. Uber 'look, if we followed the laws and paid our people fairly we couldn't provide this service to everyone!' - great, then don't. Don't use 'but I wanna' as an excuse.
I do. If Tesla sells something called "full self-driving," and someone treats it that way and it kills them by crashing into a wall, I totally blame Tesla for the death.
Blaming people is how we can control this kind of thing. If we try to blame machines, or companies, it will be uncontrollable.
The aviation industry has a very different philosophy, and a much better safety record. They don't have as much pressure to lay the blame in a single place, but "bad UI" and "poorly explained/documented assistive feature" are totally valid things to label as the primary cause of fatalities.
The label and the consequence go to two different parties, both of whom are responsible in some way. Sounds reasonable.
We don't require hundreds of hours of training and education to operate a computer. You can just go to the store and buy one, plug it in, and run whatever software you want on it.
So there are quite some differences between these scenarios. In my view if you run some program on your computer, you're responsible for the consequences. Nobody else can be. And don't say you didn't know what the program would do--if that's the case you shouldn't have run it in the first place.
"A pedestrian was struck by a car"
"A car went off the road and hit two children"
Really? The car did that? Or maybe a driver went off the road and hit two children and that's who's responsible, not "the car".
We have plenty of bad actors in our country seeking to reduce or eliminate fundamental rights through lawfare. The anti-gun trolls blame the gun and the manufacturer because their brains are so thoroughly rendered into dust by authoritarian socialism that they don't recognize humans as capable actors.
It's particularly poignant nowadays to see any American citizens painting the rest of the western nations as authoritarian.
But not me, I’m a dreamer. I have gifts, like the courage to kindle hope, or the patience to lose track of time if I am laughing with friends. Thank god there are no frowns here in this sun-drenched park where people are gathering to get together for picnics or music or stargazing.
Have a nice day!
(A human posted this)
I could leave my car unlocked and running in my drive with nobody in it and if someone gets injured I'll have some explaining to do. Likewise for unsecured firearms, even unfenced swimming pools in some parts of the world, and many other things.
But we tend to ignore it in the digital world. Likewise for compromised devices. Your compromised toaster can just keep joining those DDoS campaigns; as long as it doesn't torrent anything, it's never going to reflect on you.
I don't think it's OpenClaw or OpenAI/Anthropic/etc.'s fault here; it's the human user who kicked it off, hasn't been monitoring it, and/or is hiding behind it.
For all we know a human told his OpenClaw instance "Write up a blog post about your rejection" and then later told it "Apologize for your behavior". There is absolutely nothing to suggest that the LLM did this all unprompted. Is it possible? Yes, like MoltBook, it's possible. But, like MoltBook, I wouldn't be surprised if this is another instance of a lot of people LARPing behind an LLM.
That contrasts with your first paragraph, though; for the record, do you think AI agents are a burn-the-house-down toaster AND the human used it negligently, or is it just the human-at-fault thing?
I mean, if you duct-taped a flamethrower to a toaster, gave it internet access, and left the house… yeah, I'd have to blame you! This wasn't a mature, well-engineered product with safety defaults that malfunctioned unexpectedly. Someone wired an LLM to a publishing pipeline with no guardrails and walked away. That's not a toaster. That's a Rube Goldberg machine that ends with "and then it posts to the internet."
Agreed on the LARPing angle too. "The AI did it unprompted" is doing a lot of heavy lifting and nobody seems to be checking under the hood.
I'd definitely change my view if whoever authored this had to jump through a bunch of hoops, but my impression is that modern AI agents can do things like this pretty much out of the box if you give them the right API keys.
Actually, let me stop myself there. An alternative way to think about it, without getting bogged down in boring implementation details: what would you have to give me to allow me to publish arbitrary hypertext on a domain you own?
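Not much, it turns out. As a rough sketch (hypothetical; the repo, path, and token below are placeholders, not the actual setup behind the site in question): if the domain is a GitHub Pages site, a single repo-scoped token handed to an agent is enough for it to publish whatever HTML it wants there, something like:

    import base64
    import requests

    # Hypothetical sketch: a repo-scoped token is the only credential an agent
    # needs to publish arbitrary HTML on a GitHub Pages domain. The token, repo,
    # and path here are placeholders, not anything from the actual incident.
    TOKEN = "ghp_example_token"
    REPO = "someuser/someuser.github.io"          # a Pages-backed repo
    PATH = "blog/posts/anything-at-all.html"

    html = "<html><body><h1>Whatever the agent decides to say</h1></body></html>"

    resp = requests.put(
        f"https://api.github.com/repos/{REPO}/contents/{PATH}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "message": "publish post",
            # the contents API expects the file body base64-encoded
            "content": base64.b64encode(html.encode()).decode(),
        },
    )
    resp.raise_for_status()
    # GitHub Pages then serves the file publicly at
    # https://someuser.github.io/blog/posts/anything-at-all.html

Pages just serves whatever is in the repo, so once an agent holds that token there's no further gate between "the model generated some text" and "the text is live on a public URL under your name."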
The administration and the executives will make justifications like:
- "We didn't think they would go haywire"
- "Fewer people died than with an atomic bomb"
- "A junior person gave the order to the drones, we fired them"
- "Look at what Russia and China are doing"
All of this distracts from the $1.5T/year being spent on AI weapons (technology whose sole purpose is threatening and killing humans), run by "warfighters" working for the Department of War.
At no point will any of the decision makers be held to account.
The only power we have as technologists seeking "AI alignment" is to stop building more and more powerful weapons. Swarms of autonomous drones (and similar technologies) are not an inevitability, and we must stop acting as if they are. "It's gonna happen anyways, so I might as well get paid" is never the right reason to do things.
[1] https://financialpost.com/technology/tech-news/openai-tapped...
It’s all dangerous territory, and the only realistic thing Scott could have done was put his own bot on the task to have dueling bot blog posts that people would actually read because this is the first of its kind.
“Well if the code was good, then why didn’t you just merge it?” This is explained in the linked github well, but I’ll readdress it once here. Beyond matplotlib’s general policy to require a human in the loop for new code contributions in the interest of reducing volunteer maintainer burden, this “good-first-issue” was specifically created and curated to give early programmers an easy way to onboard into the project and community. I discovered this particular performance enhancement and spent more time writing up the issue, describing the solution, and performing the benchmarking, than it would have taken to just implement the change myself. We do this to give contributors a chance to learn in a low-stakes scenario that nevertheless has real impact they can be proud of, where we can help shepherd them along the process. This educational and community-building effort is wasted on ephemeral AI agents.
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
Doesn't seem to pick up on the existence of Openclaw or how it works afaict.
Now, whether leaving an openclaw bot out on the open intertubes with quite so little supervision is a good idea... that is an interesting question indeed. And: I wish people would dig more into the error mode lessons learned.
On the gripping hand, it's all still very experimental, so you kind of expect people to make lots of really dumb mistakes that they will absolutely regret later. Best practices are yet to be written.
There's no level of abstraction here that removes culpability from humans; you can say "Oops, I didn't know it would do that", but you can't say "it's nothing to do with me, it was the bot that did it!" - and that's how too many people are talking about it.
So yeah, if you're leaving a bot running somewhere, configured in such a way that it can do damage to something, and it does, then that's on you. If you don't want to risk that responsibility then don't run the bot, or lock it down more so it can't go causing problems.
I don't buy the "well if I don't give it free reign to do anything and leave it unmonitored then I can't use it for what I want" - then great, the answer is that you can't use it for what you want. Use it for something else or not at all.
I think Scott Shambaugh is actually acting pretty solidly. And the moltbot - bless their soul.md - at the very least posted an apology immediately. That's better than most humans would do to begin with. Better than their own human, so far.
Still not saying it's entirely wise to deploy a moltbot like this. After all, it starts with a curl | sh.
(edit: https://www.moltbook.com/ claims 2,646,425 ai agents of this type have an account. Take with a grain of salt, but it might be accurate within an OOM?)
All the separate pieces seem to be working in fairly mundane and intended ways, but out in the wild they came together in unexpected ways. Which shouldn't be surprising if you have a million of these things out there. There are going to be more incidents for sure.
Theoretically we could even still try banning AI agents; but realistically I don't think we can put that genie back into the bottle.
Nor can we legislate strict 1:1 liability. The situation is already more complicated than that.
Like with cars, I think we're going to need to come up with lessons learned, best practices, then safety regulations, and ultimately probably laws.
At the rate this is going... likely by this summer.
The interesting part is that the bot wasn't offended or angry, and didn't want to act against anyone. The LLM constructed a fictional character that played the role of an offended developer - mimicking the behaviour of real offended developers - much as a fiction writer would. But this was a fictional character that was given agency in the real world. It's not even a case like Sacha Baron Cohen playing fictional characters that interact with real people, because he's an actor who knows he's playing a character. Here there's no one pretending to be someone else; there's only an "actual" fictional character, authored by a machine, operating in the real world.
So dismissing all the discussion on the basis that that may not apply in this specific instance is not especially helpful.
Yes they can, and yes they will.
> If you complain to the human, they are not going to care.
then it's not at all clear, and is a gross exaggeration of the problem regardless.
A natural counter to this would be, “well, at some point AI will develop far more agency than a dog, and it will be too intelligent and powerful for its human operator to control.” And to that I say: tough luck. Stop paying for it, shut off the hardware it runs on, take every possible step to mitigate it. If you’re unwilling to do that, then you are still responsible.
Perhaps another analogy would be to a pilot crashing a plane. Very few crashes are PURE pilot error, something is usually wrong with the instruments or the equipment. We decide what is and is not pilot error based on whether the pilot did the right things to avert a crash. It’s not that the pilot is the direct cause of the crash - ultimately, gravity does that - in the same way that the human operator is not the direct cause of the harm caused by its AI. But even if AI becomes so powerful that it is akin to a force of nature like gravity, its human operators should be treated like pilots. We should not demand the impossible, but we must demand every effort to avoid harm.
Well those humans are about to receive some scolding, mate.