  • robigewii 7 hours ago
    I built this because re-explaining the same project to the same AI every time was eating more time than the actual work.

    The usual workarounds don't hold up. Copy-pasted context grows every session until it fills the window, structured project state gets lost along the way, and the fixes all mean building a dedicated environment around the LLM, when all I want is for the next chat to remember what I did in the previous one.

    Platform memory knows your name but not that you rejected approach X three sessions ago, or why. AIST is a structured plain-text format for capturing project state at session end and restoring it at session start. A typical handoff is 950-1200 tokens, roughly 40-60x compression of the original conversation, with multiple compression levels so you decide what to keep and what to drop while preserving context.
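
    To make that concrete, here's the rough shape of a handoff. This is my own sketch with invented field names, not the actual AIST syntax (that's in the spec repo):

        # illustrative handoff -- field names made up for this comment
        PROJECT: billing-service refactor
        STATE: migration passes on staging; prod run blocked on index rebuild
        DECISIONS:
          - rejected ORM-level batching (session 3): deadlocked under load
          - switched to raw COPY for bulk inserts (session 4)
        NEXT: verify rollback path, then schedule the prod window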

    The key feature is the Transfer Budget: at handoff time, it shows you exactly what each compression level keeps and what it loses. You decide. Nothing is silently dropped.
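
    As a mock-up of what that choice could look like at handoff time (invented output, not what the tooling actually prints; only the 950/1200 figures come from the numbers above):

        Transfer Budget -- pick a level:
          L1  ~400 tokens   keeps decisions + next steps; loses rationale, dead ends
          L2  ~950 tokens   adds rejection reasons and open questions
          L3 ~1200 tokens   adds file/function references; nothing above is lost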

    Where this matters most: small context windows. On a 200K model, a 950-token handoff is nice but not a killer. On a 4K local model, it's the difference between multi-session work being possible or not. Without structured transfer, project recaps grow until they consume the window, and the project is usually dead after 2-3 sessions. With AIST, the handoff stays roughly constant in size indefinitely.
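
    Rough arithmetic behind that claim, using the sizes above:

        950 / 200,000 ≈ 0.5% of a 200K window -- negligible
        950 / 4,096   ≈ 23% of a 4K window -- steep but fixed, leaving ~3,100
        tokens per session; a recap that grows ~1K tokens per session eats
        that in 2-3 sessions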

    It's a spec, not a product. CC BY-SA 4.0 for the protocol, Apache 2.0 for tooling. Plain text that any LLM can read and generate. The protocol was designed across multiple AI sessions using itself. You can see the self-handoff in the examples folder.

    Interested in feedback on whether the Transfer Budget concept (explicit cost/loss tradeoffs at handoff time) is useful or overengineered.