1 point by nguthiru 3 hours ago | 1 comment
  • nguthiru 3 hours ago
    Most neural networks store knowledge and computation in the same weights, and that one decision is the root of a surprising number of headaches: catastrophic forgetting, expensive retraining, frozen cutoffs, and the difficulty of editing or auditing what a model knows.

    We've been working on a design principle we call Dynamics–Knowledge Separation (DKS). Dynamics are the rules of computation, fixed after training. Knowledge is accumulated in states that grow continuously. When knowledge changes, you add new states instead of updating weights.
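    To make the principle concrete, here is a minimal toy sketch (not our actual implementation; all names like `DKSMemory` and `add_knowledge` are hypothetical). Dynamics are frozen projection matrices; knowledge is an append-only store of states that a fixed attention rule reads from. "Learning" a new fact appends a state and never touches a weight.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

class DKSMemory:
    """Toy illustration of Dynamics-Knowledge Separation (hypothetical API)."""

    def __init__(self, dim):
        # Dynamics: fixed after "training" (random stand-ins here).
        self.W_k = rng.standard_normal((dim, dim))
        self.W_v = rng.standard_normal((dim, dim))
        # Knowledge: append-only states; nothing is ever overwritten.
        self.keys = []
        self.values = []

    def add_knowledge(self, x):
        """Learning = appending new states, not editing weights."""
        self.keys.append(self.W_k @ x)
        self.values.append(self.W_v @ x)

    def query(self, q):
        """Computation = a fixed attention rule over the growing state."""
        K = np.stack(self.keys)
        scores = K @ q
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ np.stack(self.values)

mem = DKSMemory(DIM)
for _ in range(5):
    mem.add_knowledge(rng.standard_normal(DIM))

before = mem.query(np.ones(DIM))
mem.add_knowledge(rng.standard_normal(DIM))  # new fact, zero retraining
after = mem.query(np.ones(DIM))
# Old queries remain answerable; the dynamics never changed.
```

    The point of the sketch is the asymmetry: `W_k`/`W_v` are written once and read forever, while `keys`/`values` only grow, so there is no gradient step that could clobber earlier knowledge.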

    The intuition borrows from physics: laws stay constant while states evolve. We've been taking that seriously as an architectural constraint and thinking about what it actually implies for how you'd build learning systems. The post gets into how this compares to fine-tuning and RAG, and where we think viable architectures could come from.

    Curious to hear feedback.