It uses kernel-level security primitives (Landlock on Linux, Seatbelt on macOS) to create sandboxes where unauthorized operations are structurally impossible. API keys are stored in Apple's Secure Enclave (or the kernel keyring on Linux), injected at runtime, and zeroized from memory after use. There is also some blocking of destructive actions (e.g., rm -rf ~/).
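To make the Landlock part concrete, here's a minimal sketch in C of the kind of kernel-enforced ruleset such a sandbox sets up. This is not nono's actual code; the path and access flags are illustrative, and ABI version probing is omitted for brevity:

  /* Grants read+execute beneath /usr only; writing and creating files
     are denied process-wide, enforced by the kernel (Linux >= 5.13). */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/landlock.h>
  #include <stdio.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void) {
      /* Declare which filesystem accesses this ruleset governs. */
      struct landlock_ruleset_attr ruleset_attr = {
          .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                               LANDLOCK_ACCESS_FS_WRITE_FILE |
                               LANDLOCK_ACCESS_FS_MAKE_REG |
                               LANDLOCK_ACCESS_FS_EXECUTE,
      };
      int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                               &ruleset_attr, sizeof(ruleset_attr), 0);
      if (ruleset_fd < 0) { perror("landlock_create_ruleset"); return 1; }

      /* Whitelist one subtree: read + execute beneath /usr. */
      struct landlock_path_beneath_attr path_beneath = {
          .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                            LANDLOCK_ACCESS_FS_EXECUTE,
          .parent_fd = open("/usr", O_PATH | O_CLOEXEC),
      };
      if (syscall(SYS_landlock_add_rule, ruleset_fd,
                  LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0)) {
          perror("landlock_add_rule");
          return 1;
      }

      /* Mandatory before self-restriction; also survives execve(). */
      prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
      if (syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
          perror("landlock_restrict_self");
          return 1;
      }
      close(ruleset_fd);

      /* Structurally impossible now: creating a file outside /usr. */
      if (open("/tmp/owned", O_WRONLY | O_CREAT, 0600) < 0)
          perror("open /tmp/owned (denied, as intended)");
      return 0;
  }

Once landlock_restrict_self() succeeds, the restriction is irrevocable for the process and everything it spawns, which is what makes violations "structurally impossible" rather than merely checked.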
It's as simple to run as:

  nono run --profile openclaw -- openclaw gateway
You can also use it to sandbox things like npm install:
  nono run --allow node_modules --allow-file package.json --allow-file package-lock.json -- npm install pkg
It's early days, so there will be bugs! PRs welcome and all that!
On the other hand, the people most inclined to hand over access to everything to this bot also strike me as people without a lot to lose? I don't want to make an unfair characterization or anything; it just strikes me that handing over the keys to your entire life/identity is a lot more palatable if you don't have much to lose anyway?
Am I missing something?
You might think: "But that's great, right?"
I had a chat with a friend who's also in IT: ChatGPT and the like are doing all the "brain part" and the execution in most cases. Entire workflows are done by AI tools; in some cases he just presses a button.
People forget that our brains need stimulation: if you don't use yours, you forget things and it gets duller. Watch the next generation of engineers, who are very good at using AI but unable to troubleshoot on their own.
Look at what happened with ChatGPT 4 -> 5: company workflows worldwide stopped working, setting companies back by months.
Do you want a real-world example?
Watch people who spent their entire lives inside a university collecting all sorts of qualifications, but who never touched the real thing, be unable to do anything.
Sure, there are smarter ones who put things to the test and found awesome jobs, but many are jobless because all they did was "press a button." They're just like the AI enthusiasts: remove such tools and they can no longer work.
The prompt injection possibilities are incredibly obvious... the entire world has write access to your agent.
???????
It's definitely not in its final form, but it's showing potential.
All this is running on a cheap VPS, where the worst it has access to is the LLM and Discord API keys and AnkiWeb login.
https://moltroad.com/ comes to mind. The "top rated" on there describes itself as "trading in neural contraband".
That's in addition to all of the actual hijacking hacks that have been going on.
I'm not saying any of this is successful, but people are certainly trying.
That said, I still don't trust it and have it quarantined in a VPS. It's still surprisingly useful even though it doesn't have access to anything that I value. Tell it to do something and it'll find a way!
Thinking about this more: given all the AI-generated code being put into production these days (I routinely see posts from Anthropic and others boasting about how much code is being written by AI), I can see it becoming much, much harder to review all the code being written by AIs. It makes a lot of sense to use an AI system to find vulnerabilities that humans don't have time to catch.
At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.
It's just as bad as a lot of the vibe-coders I've seen. I literally saw a vibe-coder who created an app without even knowing what they wanted to create (as in, what it would do), and the AI they were using to vibe-code hand-wrote a PE parser to load DLLs instead of using LoadLibrary or delay loading. Which, really, is the natural consequence of giving someone access to software engineering tools when they don't know the first thing about it. Is that gatekeeping of a sort? Maybe, but I'd rather have that than "anyone can write software, and oh by the way this app reimplements wcslen in Rust because the vibe-coder had no idea what they were even doing".
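For anyone unfamiliar with why that's absurd: loading a DLL on Windows is a couple of documented Win32 calls, not a hand-rolled PE parser. A minimal sketch (the module and symbol names here are just examples, not anything from the app in question):

  #include <windows.h>
  #include <stdio.h>

  int main(void) {
      /* The idiomatic way to load a DLL: let the OS loader parse the
         PE file, map it, and resolve its imports. */
      HMODULE mod = LoadLibraryW(L"user32.dll");
      if (mod == NULL) {
          fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError());
          return 1;
      }

      /* Resolve an exported symbol by name. */
      FARPROC fn = GetProcAddress(mod, "MessageBoxW");
      printf("MessageBoxW is at %p\n", (void *)fn);

      FreeLibrary(mod);
      return 0;
  }

Delay loading, the other option the parent mentions, is essentially the linker (e.g., MSVC's /DELAYLOAD) arranging for the same load to happen lazily on first call.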
That is indeed the point. Moltbot reminds me a lot of the demon core experiment(s): Laughably reckless in hindsight, but ultimately also an artifact of a time of massive scientific progress.
> Is that gatekeeping of a sort? Maybe, but I'd rather have that
Serious question: What do you gain from people not being able to vibe code?
I suppose it's going to be harder to identify obvious slop at a first glance, but fundamentally, what changes?
Some people actually fell for "move fast and break things".
Just ship anything and everything as fast as possible, because all that matters is growth at all costs. Security is hard and takes time, diligence, and effort, and investors aren't going to be looking at the metric of "days without a security incident" when flinging cash into your dumpster fire.
Here's the thing. People who don't see a problem with the former obviously have no interest in addressing the latter.
Also, if you think about it, billions of people aren't running Moltbot at all.
Is the only real answer sandboxing + zero trust + treating agents as hostile by default? Or is this category fundamentally incompatible with least privilege?
yikes
no, they documented it
https://docs.openclaw.ai/gateway/security#node-execution-sys...
All these companies/projects break decades of security practice to sell you an AI browser or an AI agent for... I don't know what?
I mean, there are literally people spending $200 and more per month to have their personal, somewhat schizophrenic assistant engage in conspicuous consumption for them.
Now, as to my take on it: I think energy, at the scale of 8 billion humans, is basically infinite; it's only a matter of converting enough of the energy that is on, or reaches, our planet into a usable form. So I don't mind the energy consumption.
But it would be nice if those who use AI could at least not be hypocrites and stop criticizing Bitcoin mining and ICE cars. (By ICE I mean "internal combustion engine," in case you thought I was talking about some other kind of cars.)
From now on you're only allowed to criticize ICE cars and Bitcoin mining if you don't use AI.