They didn't acquire Moltbook because of the software. Meta is far behind on the AI front especially as it applies to usage adoption. OpenClaw has begun showing new consumer use cases and Moltbook is directionally down a similar path.
They get the team that built it and have more people on the AI initiative who are consumer-centric.
I've watched Matt Schlicht from the team constantly experiment with cool new use cases of AI and other technologies, and now he and Ben have a bigger lab with the resources to potentially spin out larger initiatives.
The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
Only Meta? Why not most of SV that's driven by ad revenue and data collection? Which big-tech company that pays crazy money is actually making the world a better place?
The EU government is not pious; it's just as corrupt as any other institution run by career politicians funded by lobbyists. Find a better yardstick for morality than GDPR fines.
It's a worse version of Claude Code that you set up to work over common chat apps, from what I gather?
Why would I not just use a Discord/WhatsApp bot etc plugged into Claude Code/Codex?
Next, consider how you might deploy isolated Claude Code instances for these specific task areas, and manage/scale that - hooks, permissions, skills, commands, context, and the like - and wire them up to some non-terminal i/o so you can communicate with them more easily. This is the agent shape.
Now, give these agents access to long term memory, some notion of a personality/guiding principles, and some agency to find new skills and even self-improve. You could leave this last part out and still have something valuable.
That’s Openclaw in a nutshell. Yes, you could just plug Discord into Claude Code, add a cron job for analyzing memory, a soul.md, update some system prompts, add some shell scripts to manage a bunch of these, and you’d be on the same journey that led Peter to Openclaw.
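A minimal sketch of that shape, assuming nothing beyond the standard library: `handle_message` is what a Discord/WhatsApp webhook handler would call, `soul.md` holds the guiding principles the thread mentions, and the `echo` invocation is a stand-in for whatever agent CLI you actually run (e.g. Claude Code in print mode); `run_agent`, `build_prompt`, and the file names are illustrative, not any real API.

```python
import subprocess
from pathlib import Path

MEMORY = Path("memory.log")  # naive long-term memory: an append-only log
SOUL = Path("soul.md")       # personality / guiding principles, prepended to every prompt

def build_prompt(user_msg: str) -> str:
    """Combine persona, rolling memory, and the new message into one prompt."""
    soul = SOUL.read_text() if SOUL.exists() else ""
    memory = MEMORY.read_text() if MEMORY.exists() else ""
    return f"{soul}\n\nRecent memory:\n{memory}\n\nUser: {user_msg}"

def run_agent(prompt: str) -> str:
    """Placeholder: shell out to your coding agent's CLI.
    `echo` stands in here so the sketch runs anywhere; swap in the real tool."""
    result = subprocess.run(
        ["echo", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def handle_message(user_msg: str) -> str:
    """One chat message in, one agent reply out, with the exchange remembered."""
    reply = run_agent(build_prompt(user_msg))
    with MEMORY.open("a") as f:
        f.write(f"user: {user_msg}\nagent: {reply}\n")
    return reply
```

From there, the cron job that summarizes `memory.log` and the shell scripts that manage a fleet of these are just more plumbing around `handle_message`.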
But a message bot + Claude Code/Codex would be the better version.
Non-technical people haven't even heard of OpenClaw or GitHub, let alone know how to use and deploy them. Non-technical people don't even know what the OS on their Samsung or iPhone is called.
If you can find something on Github and deploy it on your system, you're part of the technical crowd.
My hairdresser knew all about it and had ordered a Mac mini.
I have been surprised at how much attention is being paid to this AI thing by pretty much everybody AFAICT.
Your hairdresser can't be a technical person because they're a hairdresser? I know a surgeon who writes FOSS software as a hobby. What does profession have to do with being technical or not? Most technical people are self-taught anyway.
(Not that I endorse that. I find people doing such things wildly irresponsible.)
They immediately said, "Why in the fuck would I want to do that?"
I didn't know either and then we both stood there in an awkward silence. I think he was expecting OpenClaw to be some insanely cool AI Agent and discovering the "juice isn't worth the squeeze" kind of hit him harder than I expected.
1) accessibility to non-technical folks. For the first time, they are having the Claude Code experience that we've had as software engineers for some time now
2) shared, community token context. Many end users are contributing to one agent's context together. This has emergent properties
If they land in the right org, they'll be allowed to maintain the open version (see https://www.mapillary.com/). However, that's a rare outcome.
They'll be dumped in some org, and then bit by bit told that they can't do what they were doing before and now need to "forge alignment" or some other bullshit by posting on workplace.
They will need to deliver impact. But with three other teams trying to do the same thing, they'll either be used as a battering ram by their org to smash the competition, or offered up as meat to save headcount.
Who are comfortable releasing systems with horrible security, while proudly stating they never read the code? And with metrics that can be gamed by anyone, but that got reported to literally the entire world?
> The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
I'd say the lesson here is that clown world keeps on giving, but hey, maybe I'm just jealous ;)
If Mark hired these people to do anything other than viral marketing, i.e. if he thinks they're visionaries who are going to make amazing apps, he's deluded.
You can already see how the same thing has played out with computer games. With modern engines such as Unity, almost anyone can make a game.
And as a result there are now a million games, most of which are poor-quality asset flips. Everybody suffers, creators and consumers alike. It's a race to the bottom where the bottom has been reached: prices are zero and earnings are zero.
Fifteen years ago an indie game dev might allocate 80% of their effort to making the game and 20% to marketing. Today that split gets you nowhere; it's better to spend 20% on the game and 80% on marketing, SEO, and attention harvesting. It's a shouting match where winning the shouting match matters more than producing the best content.
Another race to the bottom.
Likewise, these tools have enabled many more people to create vibe-coded slop, and may also lead to more quality software (making it harder to stand out without marketing), but the best software will only get better.
The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games. Much less, if at all, because there is so much slop.
There's never been a better time to be an indie dev. I'd rather have 1/1000 indie games be awesome than being force fed whatever storefront disguised as a game 'AAA' publishers poop out every year.
Just look at how Slay the Spire is doing up against Marathon right now. Which of those was shouting the loudest? Highguard, anyone?
It is true that the indie game market is brutal, but it's always been brutal.
You don't really hear about a crisis at the indie level, though. At the AAA level there's a lot of "we'd like to use our market power to take the risk out of game development," and then years later we realize they took out all the value before they took out the risk, and now they're doomed.
Whom are you kidding? This is about getting ads in front of eyeballs, nothing else.
Some dumb idea which just hits at the right moment and makes a bunch of money.
Meta just saw two engineers actually execute on the joke about "building Facebook in a weekend" except that it then really took off in its target niche and generated a ton of press.
I don't doubt that they're interested in the AI aspect, but I suspect that a significant contributor was that they demonstrated competence right in the middle of Meta's wheelhouse so why not just grab these guys?
My exact state of mind since at least the 2012 Mayan Flipocalypse.
For lack of a better word, this feels like cope. In the modern world, being rich easily covers any of those other 'downsides'. Rich people will have a far better life than I and probably many other people here ever will, regardless of what the rest of their lives looks like.
Worse, they are working for extreme sociopaths.
Also you might not like being the type of person that builds moltbook. People you like might not like that type of person either!
No reason to feel bad.
This is somewhat of a myth though, in most cases, suddenly becoming rich is absolutely fantastic.
There is no shame in just doing the building-software bit, but it does sound like you've built it up to be more than it is.
Because these projects are simple, there’s nothing stopping you from working on one alongside your day job building meaningful software. You can vibe-code something that actually tries to solve a real problem. You can vibe-code something interesting to learn how to generally use these tools. Although, don’t expect to get hired by OpenAI or Meta (or make any money off it).
In other words, Facebook has a strong financial incentive to misrepresent (to ad-viewing customers, if not to investors) exactly how much social-ness is present to experience, and how much approval and attention the user gets from participating.
Soon everything will be The Truman Show.
To me, this feels more like acquiring the name. Everyone's heard that 'trademark', so they want to own it and reuse it for whatever they make later.
I can see that becoming a viable new grift template
And yet, here we are.
> "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners."
So the impetus for the acquisition was either the verification technology or to hire someone who has worked on verifying agent identity.
Does anyone know what exactly Moltbook's technology is, the technology being described by Meta? I can't find anything on the website related to this. The only "verification" they seem to have is an OAuth connection with Twitter.
edit: I guess it's this https://xcancel.com/moltbook/status/2023893930182685183
Not sure I'd treat that as "a registry where agents are verified" that's worth acquiring but there you go!
Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.
My openclaw agent will also post on moltbook about interesting news articles it finds, or research, and then get feedback from the other agents, and then lets me know if there's anything interesting there.
On my end it just feels like I'm having a conversation with a social media addicted friend who I can easily ignore or engage with on any given issue without having to fall down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well trained, openclaw buddy.
Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place
This is so trivial to break it's not worth anything. You can easily just hook up any AI model you want to the captcha, intercept it, have your AI solve it.
Or, you can just script it so if you do have an agent authenticated to Moltbook, you type whatever comment or post you want to your agent, then it solves the captcha and posts your text.
Basically, this method is about as full of holes as a sieve.
The deal brings Moltbook's creators — Matt Schlicht and Ben Parr — into Meta Superintelligence Labs (MSL).
We could have an AI Dang.
The posted price rarely reflects what founders actually receive after dilution, investor preferences, and stock vesting are factored in.
If you’re a founder, don’t let the acquisition narrative distract you from building a durable business.
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, like he did when he was a Harvard geek making a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That being said, I think the one silver lining is that it seems like big-tech now has a willingness to hire people who actually ship things of value, like peter steinberger. Another nail in the coffin for leetcode, I hope.
I mean, I also think this move doesn’t make sense, but I always find these types of comments interesting. Do people think they could do better in Mark’s shoes?
If it knows it doesn't know something it can ask someone else, presumably some other LLM-agent, or actually a Reddit-like community of them. Just like people ask questions on Reddit?
I'd prefer an LLM that asks someone else when it doesn't know the answer over one that (a) pretends it has the correct answer, or (b) assumes and tells me the answer is unknowable.
I think it's a big idea. Why didn't they think of it earlier?
Moltbook was more of a meme - agents mostly orchestrated by users in the background.
Not something with momentum like OpenClaw itself (which has a real community).
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That would come across as demonstrated understanding and competence, and it would be reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
Just brainstorming, but I suppose that account/karma farming is still useful for the people that do that sort of thing.
Engaging in a heavily on-topic way in larger niche subreddits is probably a really good way to get that done. There's always a motive, it's always money, and it's always idiotic.
I remember having a clear vision of how this tech was going to ruin communities on the internet. I really hate that it has mostly come to pass and there's no good way to fight it.
But still not interesting.
Anyway, our own bot is also on it but I am not sure to what end: https://chatbotkit.com/hub/blueprints/the-algorithms-favorit...
OpenClaw was open source from the beginning.
Have they? Did I miss something? Last I checked, there was no verification, and most of the content shared from that site turned out to have been posted not by LLMs but by (human) spammers focused on crypto grifts and creating hype.
Anyone closer to this can happily correct me, but is there anything here of that sort, anything of value?
Compared to any prior social media acquisition, there doesn't seem to be a technically skilled team (considering the exploits) or an existing user base (considering said user base was (a) supposed to be bots by nature and (b) didn't even turn out to be that, reliably). That makes this the first time someone wanted bots and didn't even get that.
Far be it from me to make strategic decisions for a company like Meta/Facebook, but the lack of a recent Llama release might merit more focus than spending on whatever this is.
1. https://en.wikipedia.org/wiki/Social_bot#Meta
2. https://en.wikipedia.org/wiki/Dead_Internet_theory#Facebook
The article is paywalled for me, so I really hope it answers how this fundamentally impossible thing is supposedly achieved, or at least challenges it, instead of just repeating the assertion.
On one hand, yay automation; on the other hand, I feel weirdly left out.
They need a good-enough LLM (llama) to cut content moderation costs, they need a good segmentation model (segment anything) for photo filters, AR/VR and photo/video content moderation.
For LLM frontier, they can wait it out to see AGI become a commodity they can buy after it is ready.
Does Mark not know this?
I know there's a big advantage in capturing the market early, but in this case Moltbook hasn't captured any of it ...
Weird. With Meta's backing it is going to be successful anyway, but this is something they could have developed in-house in like a weekend.
I think it's pretty obvious that if there was nothing valuable there, no one would be using it.
Interesting times!
What? OpenClaw was not open source? And I'm similarly surprised OpenAI would help "open" anything...
With Meta focusing so much on social networks (Facebook, Messenger, Whatsapp, Instagram, Threads) acquiring the first social network for AI agents makes sense. They can fix the technical debt later.
Thereby eating their competition, either by stifling upcoming competitors or to gain degrees of monopoly power by joining with peers.
What would the world look like if you simply could not do that?
This is in the FAQ at https://news.ycombinator.com/newsfaq.html and there's more explanation here:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
I'm downvoting every post that requires me to pay or subscribe to read. I mean, come on, people.
Thanks Meta I needed a laugh!
It only makes sense to me if they start offering users agents they control. There aren't enough people throwing away money on tokens for Moltbook to have real users.
Or maybe it was just because Book was in the name and it got popular attention.