1 point by essenceX 8 hours ago | 2 comments
  • essenceX 8 hours ago
    Tried connecting the dots between continual learning, memory, and context limits in LLMs, and how this lines up with ideas from the Nested Learning paper. The core gap seems to be the same across all of these: models can process more tokens, but they still don't accumulate knowledge over time. Long context and RAG look like scaffolding; nested or hierarchical learning feels closer to what persistent, evolving intelligence would actually require.
  • matcha_coffee 8 hours ago
    [dead]