Security is one failure mode. But "agent did something subtly wrong that didn't trigger any errors" is another. And unlike a hacked system where you notice something's off, a flaky agent just... occasionally does the wrong thing. Sometimes it works. Sometimes it doesn't. Figuring out which case you're in requires building the same observability infrastructure you'd use for any unreliable distributed system.
People connecting these agents to their email or filesystem aren't just accepting prompt injection risk. They're also accepting that the system will randomly succeed or fail at tasks depending on model performance that day, and they may not notice the failures until later.
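To make the observability point concrete, here's a minimal sketch (all names hypothetical) of one way to treat the agent like any other unreliable distributed system: wrap every tool call in structured logging, so "subtly wrong with no error" can at least be audited after the fact.

```python
import functools
import json
import time
import uuid


def logged_tool(fn):
    """Wrap an agent tool call with structured logging so silently-wrong
    results can be audited later, like any unreliable distributed system."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {
            "tool": fn.__name__,
            "call_id": str(uuid.uuid4()),
            "args": repr(args),
            "kwargs": repr(kwargs),
            "ts": time.time(),
        }
        try:
            result = fn(*args, **kwargs)
            record.update(status="ok", result=repr(result)[:200])
            return result
        except Exception as e:
            record.update(status="error", error=repr(e))
            raise
        finally:
            # In a real setup this would go to a log pipeline, not stdout.
            print(json.dumps(record))
    return wrapper


@logged_tool
def search_notes(query: str) -> list[str]:
    # Hypothetical tool: stand-in for a lookup in a local vault.
    notes = ["moltbot setup", "tax notes"]
    return [n for n in notes if query in n]
```

None of this stops the agent from being wrong, but it turns "sometimes it doesn't work" into a log you can actually grep through later.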
[1] https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/...
On the other hand, I just wanna point out
> Firstly, Cloudflare Workers has never been so compatible with Node.js. Where in the past we had to mock APIs to get some packages running, now those APIs are supported natively by the Workers Runtime.
Deployed a project a couple of days ago, and compared to past attempts where I had to wrangle (pun intended) with certain configs to get Node-based applications deployed, the normal build tooling just worked out of the box. Planning to move a couple of my free-from-me, high-DAU projects that are on the Vercel premium tier over to CF Workers.
showing how many insecure deployments there are
Insecure how? Even if the dashboard HTML is publicly accessible, you usually cannot connect without pairing or setting a gateway key.

Running this kind of agent in the cloud certainly has upsides, but also:
- All home/local integrations are gone.
- Data needs to be stored in the cloud.
No thanks.
A local agent has zero ping to your smart home and files, but high latency to the outside world (especially with bad upload speeds). A cloud agent (Cloudflare) has a fat pipe to APIs (OpenAI/Anthropic) and the web, but can't see your local printer.
The ideal future architecture is hybrid: a dumb local executor running commands from a smart cloud brain over a secure tunnel (like Cloudflare Tunnel). Running the agent's brain locally is a bottleneck unless you have the hardware to serve something like Llama 3 yourself.
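A minimal sketch of what the "dumb local executor" half of that hybrid could look like (endpoint and command names are hypothetical): it polls the cloud brain through the tunnel, but will only run commands from a fixed allowlist, so the smarts — and the blast radius — stay on the cloud side.

```python
import subprocess
import urllib.request

# Allowlist: the local side stays "dumb" and only runs vetted commands,
# so even a compromised cloud brain can't execute arbitrary code at home.
ALLOWED = {
    "printer_status": ["lpstat", "-p"],        # local printer the cloud can't see
    "say_hello": ["echo", "hello from executor"],
}


def fetch_next_task(brain_url: str) -> str:
    # Hypothetical endpoint on the cloud brain, reached over a secure tunnel.
    with urllib.request.urlopen(f"{brain_url}/next-task") as resp:
        return resp.read().decode().strip()


def run_task(name: str) -> str:
    """Execute an allowlisted task by name; refuse anything else."""
    if name not in ALLOWED:
        return f"refused: {name!r} not in allowlist"
    out = subprocess.run(ALLOWED[name], capture_output=True, text=True)
    return out.stdout
```

The design choice here is that the local process never interprets free-form instructions; the cloud brain can only pick from a menu the owner wrote, which is one answer to the "security nightmare" question below.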
Hosting Moltbot on your own hardware reigns supreme.
I'm running it on my old Mac mini right now and I have not given it access to untrusted inputs like my email inbox. It only has access to my filesystem (synced to my laptop with Syncthing), local applications like Apple Reminders, and OpenRouter. I already find it useful for augmenting web searches with stuff that's in my Obsidian vault.
There's nothing new here; it's 'just' conveniently packaged for the gamers, /r/battlestation owners, and distro-ricing crowd to install and run. There'll be similar hype waves, with similar confusion that nothing is actually new, once it's easy enough for our not-technically-inclined older relatives etc. to run somehow (not from GitHub!).
Ultimately it's a convenience wrapper that makes it easy to wire up Claude or ChatGPT to a chat platform like Discord, but it's claiming to be far more revolutionary for reasons I don't yet know.
Also clawdbot is objectively a pretty inconvenient way to hook Claude Code up to a chat app. I made a bare-bones one that takes 2 minutes to run with npx: https://github.com/clharman/afk-code
It's an obvious move in hindsight, but I hadn't thought of it. Now, the number of people running it outside of a sandbox or isolated machine and giving it that kind of access would probably make me cry.
https://github.com/caesarnine/binsmith
Been running it on a locked down Hetzner server + using Tailscale to interact with it and it's been surprisingly useful even just defaulting to Gemini 3 Flash.
It feels like the general shape of things to come - if agents can code then why can't they make their own harness for the very specific environments they end up in (whether it's a business, or a super personalized agent for a user, etc). How to make it not a security nightmare is probably the biggest open question and why I assume Anthropic/others haven't gone full bore into it.
This has come up in a few recent statements by the project lead, including scammy memecoins and name-sniping. One source:
https://www.theregister.com/2026/01/27/clawdbot_moltbot_secu...
I saw an AI-generated video ad for lobster/clawdbot on r/localllama (not even a local LLM but some cloud model, Sora), posted not as a Reddit ad (which gets blocked by uBO) but by a human.
It really pissed me off, and there was one comment that was pissed too; I really resonated with it. Clawdbot is really dumb, I seriously don't understand the hype.
We're getting a purely crypto-style version of AI, somehow (with all of its weird hype, mostly). The bubble is near, imo.
> Workers Rate limit Degradation
> Update - We are continuing to work on a fix for this issue.
Cloudflare: Hold my beer, we'll run it in the cloud.
The irony is that the whole point of the "self-hosted" movement was leaving the cloud to own your data and compute. Cloudflare suggests moving it back to the cloud, just labeled serverless. Technically elegant, but ideologically funny.
Though honestly, administering Kubernetes at home gets old faster than paying $5 a month.