19 points by jer0me 6 hours ago | 4 comments
  • tatrions 31 minutes ago
    The bounded-surface-area insight is right, but the actual forcing function is context window size: a small codebase fits in context, so the LLM can reason about it end-to-end. You get the same containment from well-defined modules in a monolith, provided your tooling picks the right files to feed into the prompt.

    Interesting corollary: as context windows keep growing (8k to 1M+ tokens in about two years), this architectural pressure should reverse. Once a model can hold your whole monolith in working memory, you get all the blast-radius containment without the operational overhead of separate services, billing accounts, and deployment pipelines.
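    The "tooling picks the right files" idea above can be sketched as a tiny context-selection helper. This is a hypothetical illustration, not any real tool's API: the function name, the `*.py` glob, and the character-based budget are all assumptions standing in for token counting and smarter file ranking.

    ```python
    # Hypothetical sketch: gather one module's source files for an LLM prompt,
    # stopping at a rough context budget. Character count is a crude stand-in
    # for token counting; a real tool would rank files by relevance, too.
    from pathlib import Path

    def collect_module_context(root: str, module: str, budget_chars: int = 32_000) -> str:
        """Concatenate a module's files until the context budget would be exceeded."""
        chunks, used = [], 0
        for path in sorted(Path(root, module).rglob("*.py")):
            text = path.read_text()
            if used + len(text) > budget_chars:
                break  # stay inside the model's context window
            chunks.append(f"# file: {path}\n{text}")
            used += len(text)
        return "\n\n".join(chunks)
    ```

    With a budget sized to the model, a well-bounded module behaves like the "small codebase" case: everything the model needs fits in one prompt.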

  • c1sc0 11 minutes ago
    Why microservices, when small composable CLI tools seem a better fit for LLMs?
  • jeremie_strand 6 hours ago
    [dead]
  • benh2477 6 hours ago
    [dead]