Hi HN, I built MAKO after watching Cloudflare's Markdown for Agents launch last week. They validated the problem: AI agents waste tokens on HTML bloat. But their approach is automatic conversion with no structure.

MAKO provides what's missing:

The Protocol: standard HTTP content negotiation. The agent sends `Accept: text/mako+markdown` and the server responds with structured MAKO content instead of HTML. No new endpoints.
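A minimal sketch of the agent side (the URL and the fallback q-value are my assumptions for illustration; only the Accept media type comes from the spec):

```typescript
// Agent-side sketch: prefer MAKO, fall back to HTML.
const res = await fetch("https://example.com/products/widget", {
  headers: { Accept: "text/mako+markdown, text/html;q=0.5" },
});

if ((res.headers.get("content-type") ?? "").includes("text/mako+markdown")) {
  const mako = await res.text(); // frontmatter + curated markdown body
  console.log(mako);
} else {
  // Server doesn't speak MAKO; process the HTML as usual.
  const html = await res.text();
  console.log(`fell back to ${html.length} chars of HTML`);
}
```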

What Makes It Different:

- YAML frontmatter with metadata (type, entity, actions, semantic links; see the sketch after this list)
- Optimized markdown body (not auto-converted; semantically curated)
- Embeddings in headers (CEF format, ~470 bytes) for pre-download relevance filtering
- Per-page granularity (vs. llms.txt, which is per-site)
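To make the first bullet concrete, here's a hypothetical MAKO document and a minimal frontmatter split. The field names are inferred from the list above, not copied from the spec; the spec repo is the authoritative schema.

```typescript
// Hypothetical MAKO document. Field names (type, entity, actions, links)
// are guesses based on the bullets above.
const doc = `---
type: product
entity: Acme Widget
actions:
  - add_to_cart
links:
  - /docs/widget-specs
---
# Acme Widget

Durable widget, ships in 2 days.`;

// Everything between the first pair of --- fences is YAML metadata;
// the remainder is the curated markdown body.
function splitFrontmatter(src: string): { frontmatter: string; body: string } {
  const match = /^---\n([\s\S]*?)\n---\n?/.exec(src);
  if (!match) return { frontmatter: "", body: src };
  return { frontmatter: match[1], body: src.slice(match[0].length) };
}

const { frontmatter, body } = splitFrontmatter(doc);
console.log(frontmatter); // YAML metadata for the agent
console.log(body);        // compact markdown for the model
```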

The Numbers:

- E-commerce product: 47,000 HTML tokens → 320 MAKO tokens (99% reduction)
- Blog article: 35,000 → 670 tokens (98%)
- Landing page: 110,000 → 640 tokens (99%)
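The percentages follow straight from the token counts:

```typescript
// Recomputing the reductions quoted above from the raw token counts.
const cases = [
  { name: "E-commerce product", html: 47_000, mako: 320 },
  { name: "Blog article", html: 35_000, mako: 670 },
  { name: "Landing page", html: 110_000, mako: 640 },
];

for (const { name, html, mako } of cases) {
  const pct = ((1 - mako / html) * 100).toFixed(1);
  console.log(`${name}: ${pct}% fewer tokens`);
}
// E-commerce product: 99.3% fewer tokens
// Blog article: 98.1% fewer tokens
// Landing page: 99.4% fewer tokens
```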

What's Included (Apache 2.0):

- Spec: https://github.com/juanisidoro/mako-spec
- JS SDK: `npm install @mako-spec/js` (parser, generator, validator, middleware)
- CLI: `npm install @mako-spec/cli`
- WordPress plugin (WooCommerce support)
- Free scoring tool: https://makospec.vercel.app/en/score
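On the middleware piece: without assuming the SDK's actual API, a server that honors the negotiation can be this small (the file layout is invented for illustration, not part of the spec):

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// Sketch of what negotiation middleware does: serve the pre-authored
// .mako.md document when asked for it, the HTML page otherwise.
const server = createServer((req, res) => {
  const accept = req.headers.accept ?? "";
  if (accept.includes("text/mako+markdown")) {
    res.setHeader("Content-Type", "text/mako+markdown");
    res.end(readFileSync("./pages/product.mako.md"));
  } else {
    res.setHeader("Content-Type", "text/html");
    res.end(readFileSync("./pages/product.html"));
  }
});

server.listen(3000);
```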

Complements Existing Standards:

- llms.txt (site-level guidance) → MAKO (page-level content)
- Cloudflare MD (auto-converted) → MAKO (semantically optimized)
- WebMCP (actions only) → MAKO (content + actions)
- Schema.org (for search engines) → MAKO (for AI agents)

Feedback welcome on the spec, scoring criteria, or anything else. Happy to answer questions.