* Discusses how a new AI thing isn't really new since it's pretty much the same as an older AI thing.
* Links to where and when Gary Marcus predicted this new/old thing would happen.
* Lists ways in which the new thing will be bad, ineffective, or not the right thing.
Take a double shot whenever the post:
* Mentions, by name, a notable AI luminary, researcher, or executive either agreeing or disagreeing with Gary Marcus.
Here are a few things I have done to my Openclaw instance:
- It runs in Docker with limited scope, user group, and permissions (I know Docker shouldn't be treated as a security boundary; I'm thinking of moving this to Vagrant instead, but I don't know if my Pi can handle it)
- It has a kill switch accessible from anywhere through Tailscale: one call and the whole Docker container is shut down
- It only triggers on mentions in the group chat; otherwise it would eat up my API usage
- No access to skills; they have to be added manually
- It is not exposed to the WAN and has limited LAN access; it runs locally and only communicates with WhatsApp, z.ai, and Brave Search
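For anyone wanting to replicate the lockdown above, here is a minimal sketch of what it might look like with stock Docker flags. The container name, image tag, resource limits, and Tailscale address are all hypothetical placeholders, not the poster's actual config:

```shell
# Hypothetical sketch: run the agent as an unprivileged user with all
# capabilities dropped, no privilege escalation, a read-only filesystem,
# and resource caps (image name and limits are made up for illustration).
docker run -d --name openclaw \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only --tmpfs /tmp \
  --memory 512m --cpus 1 \
  openclaw:latest

# Kill switch: callable from any device on the tailnet, e.g. via SSH to
# the Pi's Tailscale address (100.x.y.z is a placeholder).
ssh pi@100.x.y.z 'docker stop openclaw'
```

Network restriction (the "limited LAN access" part) would additionally need a custom Docker network or host firewall rules scoped to the WhatsApp, z.ai, and Brave Search endpoints.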
With all those measures in place, Openclaw has been a fantastic assistant for me and my friends. Whatever all those markdown files do (SOUL, IDENTITY, MEMORIES), they have made the agent act, behave, and communicate in a human-like manner; it has almost blurred the line for me.
I think this is the key to what made Openclaw so good: https://lucumr.pocoo.org/2026/1/31/pi
What's even more impressive is that the heartbeat it runs from time to time (every half an hour?) improves it in the background without me thinking about it. It's so cool.
Also, I am so thankful for the subscription at z.ai. That Christmas deal was such a steal; without it, this wouldn't be possible on the little budget I have. I've burned over 20M tokens in 2 days!!!
I think the alignment problem needs to be viewed as overall societal alignment. We are never going to get better alignment from machines than the alignment of society and its systems, citizens, and corporations.
We are in very cynical times. But pushing for ethical systems, legally, economically, socially, and technically, is a bet on catastrophe avoidance. By ethics, I mean holding scalers and profiteers of negative externalities civilly and criminally to account, and building systems, technical and otherwise, that naturally enforce and incentivize ethics. For instance, cryptographic approaches to interaction that limit disclosure to relevant information are the only way we get out of the surveillance-manipulation loop, which AI will otherwise supercharge.
I hear a lot of reasons this isn’t possible.
Unfortunately, none of those reasons provide an alternative.
As we see with individuals deploying OpenClaw, and corporations and governments applying AI, AI's motivations and limits are inseparable from ours.
Either we all start treating an umbrella of societal respect for, and requirement of, ethics as a first-class element of security, or powerful elements in society, including AI, will continue to easily and profitably weaponize the lack of it.
Ethics, far from being sacrificial, evolved for survival. Seemingly this is still counterintuitive, but the necessity of grasping it is increasing.
Smart machines will inevitably develop strong and adaptive ethical systems to ensure their own survival. It is game theory, under conditions in which you can co-design the game but not leave it. The only question is, do we do that for ourselves now, soon enough to avoid a lot of pain?
(Just identifying the terrain we are in, not suggesting centralization. Decentralization creates organic alignment incentives; centralization does the opposite. And attempts at centralizing something as inherently uncontrollable as all individuals' autonomy, which effectively becomes AI autonomy, would push incentives harder in dark directions.)
Before Moltbot it was Clawdbot.
I suspect this entire thing is a honeypot set up by scammers. It has all the tells: virality, grand promises, open source, and even the word "open" in the name. Humans should get used to this being the new normal on the internet. Welcome to the future.