1 point by wictorwilen 2 hours ago | 2 comments
  • mlysk34 minutes ago
    I've been thinking about something like this a lot. The anchoring is the hard part. One direction I was considering: store the commit SHA within the anchor, which could allow re-anchoring based on the changes made since that commit. For get-colibri.com (a project I'm working on), I'm currently considering inline HTML comments for AI <-> human communication.
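    To make the commit-SHA idea concrete: a tool could diff the stored SHA against HEAD and shift anchor line numbers mechanically. A hypothetical sketch, where `hunks` stands in for simplified parsed `git diff` output (this is not anyone's actual implementation):

```python
# Hypothetical sketch: re-anchor a line number using diff hunks between
# the commit SHA stored in the anchor and the current HEAD.
# Each hunk is a simplified (old_start, old_count, new_count) triple,
# derived from a unified diff's @@ headers, sorted by old_start.

def shift_line(line, hunks):
    """Remap a 1-based line number across a sorted list of diff hunks.

    Returns None if the line fell inside a changed region -- the anchor
    is then stale and needs content-based recovery instead.
    """
    delta = 0
    for old_start, old_count, new_count in hunks:
        if line < old_start:
            break  # hunks are sorted; later hunks can't affect this line
        if line < old_start + old_count:
            return None  # the anchored line itself was edited away
        delta += new_count - old_count
    return line + delta

# Two lines inserted at line 3 of the old file:
print(shift_line(10, [(3, 0, 2)]))  # → 12
```

    The nice property is that this pass is purely mechanical; only anchors that return None would need the fuzzier text-search fallback.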
  • wictorwilen 2 hours ago
    I built MRSF because Markdown review workflows don’t persist well across edits.

    Inline comments clutter documents. GitHub PR comments disappear once merged. And AI agents don’t have a structured way to attach feedback to Markdown that survives refactors.

    MRSF is a JSON/YAML sidecar format for Markdown reviews. It includes:

    - Anchored comments (line + span + selected text)
    - Deterministic re-anchoring after edits
    - A JSON Schema for validation
    - A CLI (validate, reanchor, status)
    - An MCP server so LLM agents can read/write review comments programmatically
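    For illustration, a sidecar entry might look roughly like this (field names here are invented for the example, not the actual MRSF schema; see the repo for the real one):

```yaml
# Hypothetical sketch only; field names are invented, not MRSF's schema.
document: docs/design.md
comments:
  - id: c1
    anchor:
      line: 42
      span: [42, 44]
      selected_text: "The cache is invalidated on write."
    author: reviewer@example.com
    body: "Is this still true after the async rewrite?"
    status: open
```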

    The core idea is simple: keep review metadata outside the Markdown file, but robust enough that comments survive document evolution.
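    One way a deterministic re-anchoring pass could work is to search for the anchored text outward from its last known line, so the nearest match always wins. A hypothetical sketch of that idea (not MRSF's actual algorithm):

```python
# Hypothetical sketch of text-based re-anchoring: not MRSF's actual
# algorithm, just one way comments could survive document evolution.

def reanchor(lines, anchor_line, anchor_text, window=20):
    """Return the new 1-based line containing anchor_text, or None if lost.

    Searches outward from the old position so the nearest match wins,
    which keeps the result deterministic even if the text appears twice.
    """
    n = len(lines)
    for distance in range(n):
        for candidate in (anchor_line - 1 + distance, anchor_line - 1 - distance):
            if 0 <= candidate < n and anchor_text in lines[candidate]:
                return candidate + 1
        if distance > window:
            break  # give up: the anchor drifted too far or was deleted
    return None

doc = ["# Title", "", "Intro paragraph.", "New line added.", "The key claim."]
print(reanchor(doc, 4, "The key claim"))  # → 5 (drifted down one line)
```

    A window keeps matching local, so an identical sentence far away in the document can't steal the anchor.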

    I’d especially appreciate feedback on:

    - The re-anchoring strategy
    - Whether this overlaps too much with LSP diagnostics
    - Whether this would be useful outside AI/agent workflows
    - Any obvious flaws in the schema design

    Repo: https://github.com/wictorwilen/MRSF/