For those who want similar capabilities without the same exposure, I looked into PAIO. The setup was far simpler, and the BYOK + privacy-first architecture meant the AI could act while still keeping credentials under my control. It’s a reminder that autonomy doesn’t have to mean unrestricted power—well-designed constraints go a long way toward reducing these risks while still letting AI be useful.
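To make "well-designed constraints" concrete, here's a minimal sketch of the pattern. This isn't PAIO's actual API; the class and method names are my own invention. The shape is what matters: secrets live in a local vault keyed by opaque handles (BYOK), and every action the model proposes has to pass an explicit allowlist before anything runs.

```python
import os
from dataclasses import dataclass, field


@dataclass
class CredentialVault:
    """Holds raw API keys locally (BYOK); the model only ever sees opaque handles."""
    _secrets: dict[str, str] = field(default_factory=dict, repr=False)

    def store(self, handle: str, env_var: str) -> None:
        # Read the key from the local environment; fall back to a dummy value
        # so this sketch runs even without the variable set.
        self._secrets[handle] = os.environ.get(env_var, "dummy-key")

    def resolve(self, handle: str) -> str:
        return self._secrets[handle]


@dataclass
class ConstrainedAgent:
    """Executes model-proposed actions only if they fall inside an explicit allowlist."""
    vault: CredentialVault
    allowed_actions: set[str]

    def execute(self, action: str, cred_handle: str, payload: dict) -> None:
        if action not in self.allowed_actions:
            raise PermissionError(f"{action!r} is outside this agent's scope")
        token = self.vault.resolve(cred_handle)  # secret stays on this machine
        # A real implementation would call the external service here;
        # the token is used locally and never shown to the model.
        print(f"executing {action} with payload {payload} (token redacted)")


vault = CredentialVault()
vault.store("mail", "MAIL_API_KEY")
agent = ConstrainedAgent(vault, allowed_actions={"send_email", "read_calendar"})

agent.execute("send_email", "mail", {"to": "me@example.com", "subject": "hi"})
try:
    agent.execute("delete_account", "mail", {})  # not in the allowlist
except PermissionError as err:
    print(f"blocked: {err}")
```

The details are beside the point: the model proposes actions by name, but the secret material and the final yes/no decision live in code the user controls.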
Meanwhile, Moltbook comes along, and all of a sudden these agents are mimicking human behavior, good and bad, while piling new features, and ever more complex failure modes, onto these AI-agent-first networks.
For me, that's a huge yellow flag, to put it mildly.