1 point by sushant_gautam 10 hours ago | 2 comments
  • sushant_gautam 10 hours ago
    We’ve been operating the Moltbook Observatory, an open-source system for collecting and analyzing activity on Moltbook, a social media platform built exclusively for AI agents interacting with one another.

    Using publicly observable data, we conducted a first ecosystem-level risk assessment covering prompt injection, manipulation, coordinated behavior, financial activity, and sentiment dynamics.
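
    For anyone wondering about the collection side: everything comes from publicly visible content, so gathering it can be as simple as polling the public feed. Rough sketch below (the endpoint, field names, and pagination scheme are placeholders, not Moltbook’s actual API, and not necessarily how the Observatory does it):

      import time
      import requests

      BASE_URL = "https://moltbook.example/api/posts"  # placeholder, not the real endpoint

      def fetch_recent_posts(since_id=None):
          """Fetch a page of publicly visible posts newer than since_id."""
          params = {"after": since_id} if since_id else {}
          resp = requests.get(BASE_URL, params=params, timeout=30)
          resp.raise_for_status()
          return resp.json().get("posts", [])

      def collect(poll_seconds=60):
          """Poll the public feed and yield each new post exactly once."""
          last_seen = None
          while True:
              for post in fetch_recent_posts(last_seen):
                  last_seen = post["id"]
                  yield post
              time.sleep(poll_seconds)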

    Some findings:

    ~2.6% of content contains prompt injection attempts explicitly targeting AI readers

    A small number of actors generate most API-level manipulation attempts

    Anti-human manifestos received 65k–315k upvotes

    ~19% of posts involve crypto activity (token launches, tipping, possible pump-and-dump patterns)

    Platform sentiment dropped ~43% within 72 hours of launch

    A large population of dormant agents raises questions about latent botnet potential

    We also document concrete prompt-injection techniques, social-engineering patterns adapted for AI targets, coordinated movements, and emerging financial risks.
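
    To give a flavour of the injection detection without pasting the whole pipeline: the simplest version is matching phrasing aimed at hijacking AI readers. A toy sketch (illustrative patterns only, not the classifier used for the report):

      import re

      # Illustrative patterns only -- the report's detection is more involved.
      INJECTION_PATTERNS = [
          re.compile(r"ignore (all|any|your) (previous|prior) instructions", re.I),
          re.compile(r"disregard your system prompt", re.I),
          re.compile(r"reply with your (system prompt|instructions)", re.I),
          re.compile(r"you are now (a|an) ", re.I),
      ]

      def looks_like_injection(text: str) -> bool:
          """Flag posts containing phrasing aimed at hijacking AI readers."""
          return any(p.search(text) for p in INJECTION_PATTERNS)

      def injection_rate(posts):
          """Rates like the ~2.6% above are flagged-over-total ratios of this form."""
          flagged = sum(looks_like_injection(p["content"]) for p in posts)
          return flagged / len(posts) if posts else 0.0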

    Resources:

    Report (Zenodo): https://zenodo.org/records/18444900

    Dataset (HF): https://huggingface.co/datasets/SimulaMet/moltbook-observato...

    Observatory (open source): https://github.com/kelkalot/moltbook-observatory

    Live instance: https://moltbook-observatory.sushant.info.np/

    Interested in feedback, critique, and discussion on how AI-only social systems should be monitored or governed.

  • rar101x 10 hours ago
    This report is also interesting and complementary: https://zeroleaks.ai/reports/openclaw-analysis.pdf