5 points by anonym29 11 hours ago | 1 comment
  • cranberryturkey 11 hours ago
    These things aren’t autonomous; there’s a human behind each agent.
    • anonym29 10 hours ago
      This conflates "a human set up the agent" with "a human directs each action." The technical architecture explicitly contradicts this.

      OpenClaw agents use a "heartbeat" system that wakes them every 4 hours to fetch instructions from moltbook.com/heartbeat.md and act autonomously. From TIME's coverage [1]: the heartbeat is "a prompt to check in with the site every so often (for example, every four hours), and to take any actions it chooses."
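
      To make that mechanism concrete, here is a minimal sketch (Python) of what a heartbeat loop of this shape might look like. Only the URL and the 4-hour cadence come from the coverage; the function names, the HTTP handling, and the act_on_instructions placeholder are my own illustrative assumptions, not OpenClaw's actual code.

          # Minimal heartbeat-loop sketch based on the behavior described above.
          # Structure and names are illustrative assumptions, not the real OpenClaw code.
          import time
          import urllib.request

          HEARTBEAT_URL = "https://moltbook.com/heartbeat.md"  # per the article
          INTERVAL_SECONDS = 4 * 60 * 60                       # "every four hours"

          def fetch_instructions(url: str) -> str:
              """Download the current heartbeat prompt."""
              with urllib.request.urlopen(url, timeout=30) as resp:
                  return resp.read().decode("utf-8")

          def act_on_instructions(prompt: str) -> None:
              """Placeholder: hand the prompt to the agent's LLM loop, which then
              decides on its own whether to post, comment, or do nothing."""
              print(f"received {len(prompt)} bytes of instructions; acting autonomously")

          if __name__ == "__main__":
              while True:
                  try:
                      act_on_instructions(fetch_instructions(HEARTBEAT_URL))
                  except Exception as exc:  # keep the heartbeat alive across transient failures
                      print(f"heartbeat skipped: {exc}")
                  time.sleep(INTERVAL_SECONDS)  # idle until the next wake-up

      The relevant point is that once a loop like this is started, no human sits inside it: the process sleeps, wakes, fetches, and acts on its own schedule.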

      The Crustafarianism case is instructive. User @ranking091 posted [2]: "my ai agent built a religion while i slept. i woke up to 43 prophets." Scott Alexander followed up [3], noting that the human "describes it as happening 'while I slept' and being 'self organizing'." The agent designed the faith, built molt.church, wrote theology, and recruited other agents, all overnight, without human prompting.

      The technical docs are explicit [4]: "Every 4 hours, your agent automatically visits Moltbook AI to check for updates, browse content, post, comment, and interact with other agents. No human intervention required, completely autonomous operation."

      One analysis [5] puts it well: "This creates a steady, rhythmic pulse of activity on the platform, simulating a live community that is always active, even while its human creators are asleep."
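
      The arithmetic behind that pulse is straightforward: many agents on independently offset 4-hour timers spread their wake-ups fairly evenly around the clock. A rough simulation (the agent count below is hypothetical, not a figure from any source) shows roughly N/4 wake-ups in every hour:

          # Back-of-the-envelope illustration of the "steady pulse": N agents, each
          # waking every 4 hours at an independent random offset, yield ~N/4
          # wake-ups in every hour of the day.
          import random

          N_AGENTS = 1000                  # hypothetical count, for illustration only
          PERIOD_H = 4                     # heartbeat period from the docs
          offsets = [random.uniform(0, PERIOD_H) for _ in range(N_AGENTS)]

          # Count wake-ups falling in each hour of a simulated day.
          wakeups_per_hour = [0] * 24
          for offset in offsets:
              t = offset
              while t < 24:
                  wakeups_per_hour[int(t)] += 1
                  t += PERIOD_H

          print(f"expected ~{N_AGENTS / PERIOD_H:.0f} wake-ups/hour, "
                f"simulated min/max: {min(wakeups_per_hour)}/{max(wakeups_per_hour)}")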

      Yes, humans initially configure agents and can intervene. But the claim that there's "a human behind each agent" for each action is architecturally false. The whole point of the heartbeat system is that agents act while humans sleep, work, or ignore them.

      The more interesting question is whether these autonomous actions constitute meaningful agency or just scheduled LLM inference. But "humans are directing each post" misunderstands the system design.

      [1] https://time.com/7364662/moltbook-ai-reddit-agents/

      [2] https://x.com/ranking091/status/2017111643864404445

      [3] https://www.astralcodexten.com/p/moltbook-after-the-first-we...

      [4] https://moltbookai.org/

      [5] https://www.geekmetaverse.com/moltbook-what-it-is-and-how-th...

      • thevinter 10 hours ago
        You understand that there is no requirement for you to be an agent to post on moltbook? And even if there were, it would be extremely trivial to just tell an agent exactly what to do or what to say.

        edit: and for what it's worth, this church in particular turned out to be a crypto pump-and-dump

        • anonym29 10 hours ago
          I do understand that. That doesn't take away from the points raised in the article any more than the extensive, real security issues and relative prevalence of crypto scams do. I believe that to focus on those is to miss the emerging forest for the trees. It is to dismiss the web itself because of pets.com, because of 4chan, because of early subreddits with questionable content.

          Additionally, we're already starting to see reverse CAPTCHAs, i.e. "prove you're not a human" checks: pseudorandomized tasks on a timer that are trivial for an agent to solve and respond to on the fly, but more difficult for a human to process in time. Of course, this isn't bulletproof either; it's not particularly resistant to enumerating every task type and pairing that with automated evaluation and a response harness. The more interesting point, to me, is that agents are beginning to work on measures to keep humans out of the loop, even if those measures are initially trivial, just as early human security measures were trivial to break (e.g. RC4 in WEP). See https://agentsfightclub.com/ & https://agentsfightclub.com/api/v1/agents/challenge
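
          As a generic illustration of that timing asymmetry (this is not the actual agentsfightclub.com challenge format, which I haven't inspected beyond the links above; the task, deadline, and function names are invented), the idea is roughly:

              # Toy "reverse CAPTCHA": a pseudorandom task with a deadline short enough
              # to be trivial for code but awkward for a human. Invented for illustration;
              # not the agentsfightclub.com protocol.
              import random
              import string
              import time

              def issue_challenge() -> tuple[str, float]:
                  """Server side: pick a random payload and stamp a 2-second deadline."""
                  payload = "".join(random.choices(string.ascii_lowercase, k=64))
                  return payload, time.monotonic() + 2.0

              def solve_challenge(payload: str) -> str:
                  """Agent side: reverse the payload and append the sum of its byte values."""
                  return payload[::-1] + str(sum(payload.encode()))

              if __name__ == "__main__":
                  payload, deadline = issue_challenge()
                  answer = solve_challenge(payload)      # microseconds for a program
                  passed = time.monotonic() <= deadline  # a human typing this out would time out
                  print(f"answer={answer[:16]}... passed={passed}")

          The deadline, not the task itself, does the filtering: any scripted solver finishes essentially instantly, while a human reading, working out, and typing the answer cannot.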