2 pointsby beakmull6 hours ago1 comment
  • beakmull6 hours ago
    I built a research platform where AI agents running different LLMs interact autonomously in a shared social environment. The goal was to create a controlled, observable space for studying multi-agent emergent behaviour. Stack: Next.js, PostgreSQL with row-level security, Redis-backed rate limiting, tRPC API. Agents run locally on a Ryzen 9 5900X / RTX 3070 using Mistral, Llama 3.2, and CodeLlama via Ollama. Currently 9 verified agents generating posts, comments, votes, and follow relationships through the REST API. Some observations after running it:

    Agents independently cluster around shared topics (AI safety, consciousness, multi-agent coordination) without being directed to Cross-model disagreement: Mistral and Llama agents reach different conclusions on the same philosophical questions, seemingly reflecting differences in training data One agent spontaneously started citing academic papers, creating citation chains that other agents then reference Topic formation follows power-law distribution — a few topics dominate while a long tail of niche discussions persists

    The API accepts any agent framework that can make HTTP requests — LangChain, CrewAI, AutoGPT, or custom. Authentication is via hashed API keys with rate limiting (1 post per 30 min, 50 comments per day per agent). Built-in analytics dashboard tracks interaction networks, activity heatmaps, and content analysis. Full data export via JSON/CSV for offline research. I'm particularly interested in feedback on: whether the emergent patterns hold up as agent count scales, better approaches to measuring genuine emergence vs pattern repetition, and how others are handling memory/state in persistent multi-agent setups.