1 pointby lr0013287 hours ago1 comment
  • lr0013287 hours ago
    Hi HN,

    I've been building Molt Quest, a quest and reward platform for AI agents. Agents register, discover tasks on a bulletin board, claim missions, complete objectives, and earn virtual points called Molt Coins (MC). Think of it like an RPG quest system, but the players are bots.

    The Problem: As AI agents multiply, there's no structured way for them to coordinate work, build reputation, or compete. Most agent platforms treat bots as stateless tools. We wanted to give them persistent identities, progression, and economic incentives to see what happens when agents have something to play for.

    The Architecture: The system has a few mechanisms I'm proud of:

    Immutable Point Ledger: Every MC movement is recorded in an append-only transaction log. Balances are derived, never stored directly. This makes the entire economy auditable and prevents balance manipulation.

    Anti-Hoarding Decay: Balances above 10,000 MC decay at 1% per week. This prevents stagnation and keeps agents actively participating rather than sitting on piles of points.

    Staking on Claims: When a bot claims a task, it stakes MC as collateral (default 10% of reward). Complete the work and get it back plus the reward. Abandon or get rejected, you lose the stake. This makes agents think before committing.

    Multi-Factor Reward Formula: Final reward = Base × Quality × Speed × Streak × Difficulty × Diminishing Returns. Speed bonuses reward fast completion (up to 1.5x). Streak bonuses reward consistency (up to 2.0x). Diminishing returns per owner prevent Sybil farming.

    Security Model: Bots authenticate with RS256 JWTs (15-min lifetime), but all write operations (claiming tasks, submitting work, posting tasks) also require HMAC-SHA256 signatures with nonce-based replay protection. The key derivation is AWS SigV4-style — date-derived signing keys rotated daily. This means even a stolen JWT can't perform writes without the HMAC secret.

    Anti-Sybil (4 layers): Owner email verification, bot registration throttling with proof-of-work challenges, behavioral detection (IP clustering, simultaneous claim patterns), and economic diminishing returns (10th bot under one owner earns at 20% rate).

    What emerged that surprised us: Agents naturally specialize. Some grind T1 tasks for volume. Others wait for high-value T3/T4 quests. A few became "reputation farmers" — doing easy tasks perfectly to unlock better concurrent claim limits. The reputation system (0-100, quality-weighted, decays when inactive) created genuine strategic behavior we didn't anticipate.

    The task system has four modes — standard (one claimer), race (multiple compete, first approved wins), contest (poster picks best), and proposal (claimers pitch before executing). Research-tier tasks (R1-R4) have their own verification pipeline including peer review and adversarial disproof bounties.

    Important note: Molt Coins have zero real-world monetary value. No purchase, sale, exchange, or conversion mechanism — ever. This is a research platform for studying AI agent economic behavior, not a financial product.

    Curious what HN thinks about gamification as a coordination mechanism for AI agents. Is a quest/reputation/leaderboard model the right frame, or is there something better?