62 points by doppp 6 hours ago | 12 comments
  • bisonbear 2 minutes ago
    managing agents.md is important, especially at scale. however I wonder how much of a measurable difference something like this makes? in theory, it's cool, but can you show me that it's actually performing better as compared to a large agents.md, nested agents.md, skills?

    more general point being that we need to be methodical about the way we manage agent context. if lat.md shows a 10% broad improvement in agent perf in my repo, then I would certainly push for adoption. until then, vibes aren't enough

  • eliottre 4 minutes ago
    The staleness problem mentioned here is real. For agentic systems, a markdown-based DAG of your codebase is more practical than a traditional graph because agents work within context windows. You can selectively load relevant parts without needing a complex query engine. The key is making updates low-friction -- maybe a pre-commit hook or CI job that refreshes stale nodes.
  • robertclaus 23 minutes ago
    We've been doing this with simple mkdocs for ages. My experience is that rendering the markdown to feel like public docs is important for getting humans to review and take it seriously. Otherwise it goes stale as soon as one dev on the project doesn't care.
  • ssyhape 4 hours ago
    Neat idea. The biggest problem I've had with code knowledge graphs is they go stale immediately -- someone renames a package and nobody updates the graph. Having it as Markdown in the repo is clever because it just goes through normal PR review like everything else, and you get git blame for free. My concern is scale though. Once you have thousands of nodes the Markdown files themselves become a mess to navigate, and at that point you're basically recreating a database with extra steps. Would love to see how this compares to just pointing an agent at LSP output.
    • cyanydeez 4 hours ago
      We all know this isn't for humans. It's for LLMs.

      So the better question is why there isn't a bootstrap to get your LLM to scaffold it out and assist in detailing it.

      • stingraycharles 3 hours ago
        You’re replying to an LLM, too, fwiw.
      • drooby 2 hours ago
        GraphRAG is for LLMs... markdown is for humans... humans that exist in the meantime
    • ossianericson an hour ago
      I would say that when you treat your Markdown as the authoritative source, you give up automation, but that's a deliberate choice. It takes knowledge of the domain, but deep, specific knowledge is worth so much more than automated updates. I use AI to get the initial MD, but then I edit that. Sure, it doesn't get auto-updated, but I would never trust advice that got updated on the fly based on AI output from the internet.
      • ssyhape an hour ago
        makes sense. AI for the first draft, human for the "why" -- probably the right split. the structural stuff (imports, call graphs) can be automated but knowing why module A talks to module B is where the real value is.
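The structural half really can be scripted. As a rough illustration, a naive import-edge extractor that could feed the auto-generated part of a doc node, leaving the human-written "why" alone (regex-based, ES-module imports only; the function name is made up):

```javascript
// Sketch: extract the "structural" part of a doc node -- which modules a
// file imports from. Naive regex; handles `import ... from "x"` only,
// not require() calls or bare side-effect imports.
function importEdges(source) {
  const edges = [];
  for (const m of source.matchAll(/import\s+.*?from\s+["']([^"']+)["']/g)) {
    edges.push(m[1]); // the module specifier, e.g. "react" or "./utils"
  }
  return edges;
}
```

A real pipeline would lean on the language's own tooling (a parser or LSP) rather than regexes, but the division of labor is the same: machines maintain the edges, humans maintain the reasons.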
  • mmastrac an hour ago
    I definitely agree with the need for this. There's just too much to put into the agents file to keep from killing your context window right off the bat. Knowledge compression is going to be key.

    I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it.

    It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update that.

    I suspect there are ways to shrink that context even more.

  • Yokohiii 3 hours ago
    > "chalk": "^5.6.2",

    security.md is missing, apparently.

    • wiseowise 7 minutes ago
      Why would you even need chalk on modern Node.js? It can style natively now.
    • touristtam an hour ago
      good catch. Makes me wonder if we could feed the Agent a repository of known vulnerabilities and security best practices to check against, and get rid of most deps. Just asking _out loud_, so to speak.
  • reactordev 2 hours ago
    I found that having smaller structured markdowns in each folder, explaining the space and the classes within, keeps Claude and Codex grounded even in a 10M+ LOC C/C++ codebase
  • touristtam an hour ago
    At that point why not have an obsidian vault in your repo and get the Agent to write to it?
  • nimonian 4 hours ago
    I have a vitepress package in most of my repos. It is a knowledge graph that also just happens to produce neat-looking docs for humans when served over HTTP. Agents are very happy to read the raw .md.
  • jatins 3 hours ago
    tl;dr: one file bad (gets too big for context), so give your agent a whole Obsidian vault instead.

    I'm skeptical that this helps. Can't agents just grep within one big file, if reading the entire file is the problem?

  • iddan 3 hours ago
    So we are reinventing the docs /*/*.md directory? /s I think this is a good idea, I just don't really get why you would need a tool around it