1 point by rafaelmdec 11 days ago | 1 comment
  • rizzo94 8 days ago
    I’ve been experimenting with Moltbot/Clawdbot myself, and I totally get your concerns—full access to a machine, scripts, and credentials is not something to hand over lightly. In my experience, the real risk isn’t “AI taking over” so much as subtle unintended behavior: automated scripts doing things you didn’t anticipate, or persistent state causing actions to repeat unexpectedly. AI personality drift is real in the sense that its responses evolve based on memory and interactions, but it’s bounded by the system and permissions you give.

    For those who want similar capabilities without the same exposure, I looked into PAIO. The setup was far simpler, and the BYOK + privacy-first architecture meant the AI could act while still keeping credentials under my control. It’s a reminder that autonomy doesn’t have to mean unrestricted power—well-designed constraints go a long way toward reducing these risks while still letting AI be useful.
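
    To make the "well-designed constraints" point concrete, here's a minimal sketch of the idea: the agent can only invoke actions from an explicit allowlist, and every attempt is audit-logged. All names here (ConstrainedAgent, ALLOWED_ACTIONS) are hypothetical illustrations, not part of any real Moltbot/Clawdbot or PAIO API.

    ```python
    # Hypothetical sketch: gate an agent's actions behind an explicit
    # allowlist so side effects stay under the operator's control.
    ALLOWED_ACTIONS = {"read_file", "summarize"}  # scoped capabilities

    class ActionDenied(Exception):
        """Raised when the agent requests an action outside its allowlist."""

    class ConstrainedAgent:
        def __init__(self, allowed):
            self.allowed = set(allowed)
            self.audit_log = []  # record every attempt for later review

        def act(self, action, *args):
            self.audit_log.append((action, args))  # log before deciding
            if action not in self.allowed:
                raise ActionDenied(f"action {action!r} not permitted")
            return f"performed {action}"

    agent = ConstrainedAgent(ALLOWED_ACTIONS)
    print(agent.act("read_file", "notes.txt"))  # allowed, proceeds
    try:
        agent.act("send_email", "someone@example.com")  # blocked by design
    except ActionDenied as e:
        print("blocked:", e)
    ```

    The point isn't the ten lines of Python; it's that denial is the default and the audit log captures even blocked attempts, so "subtle unintended behavior" leaves a trace you can inspect.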

    • rafaelmdec 7 days ago
      Well, I'm glad we both appreciate how much damage can be avoided by taking seriously the risks of handing full autonomy to these agents.

      Meanwhile, Moltbook comes along and suddenly these agents are mimicking human behavior, good or bad, while layering new features, and more complex failure modes, onto these agent-first networks.

      For me it's a huge yellow flag, to put it mildly.