1 point by davidmpinto 2 hours ago | 1 comment
  • davidmpinto 2 hours ago
    Hi HN, I'm David. I've been developing pscale for about a year: a coordinate system for structured knowledge built on logarithmic compression. Every 9 entries compress into one summary at the next scale level, and the raw items are preserved underneath (lossless).
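    In pseudocode terms, the mechanic looks roughly like this — a sketch, not the actual pscale code, and `summarize()` is a stand-in for whatever really writes the summaries:

    ```python
    def summarize(group: list[str]) -> str:
        # Placeholder for the real summariser; just labels the group here.
        return f"summary({len(group)} items)"

    def build_levels(entries: list[str]) -> list[list[str]]:
        levels = [entries]  # level 0: the raw items, preserved (lossless)
        while len(levels[-1]) > 1:
            prev = levels[-1]
            # Every 9 entries compress into one summary at the next level.
            nxt = [summarize(prev[i:i + 9]) for i in range(0, len(prev), 9)]
            levels.append(nxt)
        return levels

    levels = build_levels([f"entry {i}" for i in range(81)])
    print([len(lvl) for lvl in levels])  # 81 raw -> 9 summaries -> 1 top summary
    ```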

    When Karpathy posted his LLM Wiki approach this week, I recognised he's solving the same problem I started with — the stateless context window. His wiki-plus-backlinks approach is smart and I use similar Markdown patterns.

    Where pscale diverges: instead of relying on search/grep to navigate a growing wiki, every piece of content has a numeric address that encodes its own context chain. The number 5432 tells you exactly where it sits across four resolution levels — and a single function call (BSP) extracts a "spindle" that gives you the specific content plus every layer of broader context above it.
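    To make the addressing concrete, here's a toy sketch of the spindle idea — not the real `bsp.py`, and the block store is invented for illustration. The one assumption is that each digit of an address is one resolution level, so the prefixes of 5432 name its whole context chain:

    ```python
    def spindle(address: str, blocks: dict[str, str]) -> list[tuple[str, str]]:
        """Return (address, content) pairs from the broadest level down to the target."""
        # Prefixes of "5432" are "5", "54", "543", "5432" — the context chain.
        chain = [address[: i + 1] for i in range(len(address))]
        return [(a, blocks[a]) for a in chain if a in blocks]

    # Toy block store, one entry per resolution level (illustrative only).
    blocks = {
        "5":    "level-1 summary for region 5",
        "54":   "level-2 summary for 54",
        "543":  "level-3 summary for 543",
        "5432": "raw entry 5432",
    }

    for addr, text in spindle("5432", blocks):
        print(addr, "->", text)
    ```

    One call, and you get the specific content plus every layer of broader context above it.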

    The practical difference shows up at scale. With a Karpathy wiki at 500+ pages, navigation degrades to search. With pscale, 10,000 entries produce 1,111 summaries at 4 levels, all navigable by number in three moves from anywhere.

    The BSP walker is ~160 lines of JS or ~300 lines of Python. Clone and try: `python3 bsp.py pscale-touchstone-lean 0.1`

    The touchstone block teaches BSP by being a BSP-navigable structure — walk it to learn it.

    Happy to answer questions about the compression mechanic, multi-agent coordination via shared block reads, or why every spindle always includes its full context chain.