3 points by MPender08 | 6 hours ago | 1 comment
  • MPender08 | 6 hours ago
    Hey HN, I’m an independent researcher. I've been exploring the thermodynamic limits of AI and have built a theoretical hardware framework, plus a PyTorch simulation, to address them.

    The Problem: Current digital AI is hitting the "Landauer Wall": the irreducible thermodynamic cost of erasing bits (k_B·T·ln 2 per erased bit), compounded by brute-forcing hierarchical data through flat, static Euclidean matrices.
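    For scale, the Landauer bound itself is a one-liner to compute. This quick sketch (mine, not from the repo) shows the per-bit energy floor at room temperature:

```python
import math

# Landauer limit: the minimum energy to irreversibly erase one bit
# at absolute temperature T is k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, SI 2019)

def landauer_energy(temp_kelvin: float) -> float:
    """Minimum erasure energy per bit, in joules."""
    return K_B * temp_kelvin * math.log(2)

# At ~300 K this is roughly 2.87e-21 J per erased bit.
e_bit = landauer_energy(300.0)
print(f"{e_bit:.3e} J/bit")  # → 2.871e-21 J/bit
```

    Real gates today dissipate orders of magnitude more than this per switching event, which is the gap the analog approach is aimed at.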

    The Project: This is the Manifold Chip, a Dynamically Gated Analog Crossbar (DGAC) architecture. It uses analog Field Effect Transistors (FETs) as variable shunts to warp its own effective geometry on the fly. Instead of routing sparse data through a static grid, the chip physically "plunges" into a non-Euclidean/hyperbolic state when it encounters error stagnation, creating geodesic shortcuts.
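    To make the shunt idea concrete, here is a toy model of how I'd describe the gating (my own illustration and naming, not the repo's code): each crossbar cell's base conductance is continuously scaled by an analog gate voltage, so the effective geometry of the array is a function of the gate pattern.

```python
import torch

# Toy DGAC cell model (illustrative, not the actual chip physics):
# sigmoid(gate_v) in (0, 1) plays the role of the FET shunt's
# continuous "openness", warping the effective conductance map.

def effective_conductance(g_base: torch.Tensor, gate_v: torch.Tensor) -> torch.Tensor:
    """Base conductances modulated per-cell by analog gate voltages."""
    return g_base * torch.sigmoid(gate_v)

g_base = torch.ones(4, 4) * 1e-3           # siemens, uniform baseline
gate_v = torch.zeros(4, 4)                 # 0 V gates -> sigmoid = 0.5 everywhere
v_in = torch.tensor([1.0, 0.5, 0.0, 0.0])  # input line voltages

# Column output currents: I_j = sum_i G_eff[i, j] * V_i
# (Ohm's law per cell, Kirchhoff current summation per column)
i_out = effective_conductance(g_base, gate_v).T @ v_in
```

    Changing `gate_v` rather than the stored weights is what lets the array reroute signal flow without rewriting memory.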

    The Code: The repo includes a PyTorch simulation (run_brain_sim.py) that models this continuous thermodynamic decay. In a non-linear XOR "stamina" test, the "VIP Voltage" (a global error signal) triggers the geometric transition, yielding a 5x convergence speedup while drastically cutting maintenance power.
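    The control loop is easier to discuss with a minimal sketch in hand. This is my reading of the trigger logic, not run_brain_sim.py itself: train on XOR, watch the global loss as a stand-in for the "VIP Voltage", and fire a one-shot transition when improvement stalls (here the transition is a placeholder learning-rate change, standing in for the hyperbolic plunge):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Classic non-linear XOR task
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.SGD(net.parameters(), lr=0.5)
loss_fn = nn.BCELoss()

best, patience, transitioned = float("inf"), 0, False
for step in range(2000):
    loss = loss_fn(net(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Global error signal: count steps without meaningful improvement
    if loss.item() < best - 1e-4:
        best, patience = loss.item(), 0
    else:
        patience += 1
    # One-shot trigger on stagnation -- in the real design this is
    # where the geometry would switch to the hyperbolic state
    if patience > 50 and not transitioned:
        transitioned = True
        for g in opt.param_groups:
            g["lr"] *= 2.0  # placeholder for the geometric transition
```

    The interesting question for me is the hysteresis: once the transition fires, what condition (if any) relaxes the chip back toward the Euclidean state.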

    The Theory: This architecture translates the biophysics of Martinotti/SST-mediated dendritic gating into silicon. It builds on two broader papers I recently published on the thermodynamics of dynamic curvature adaptation (links are in the repo, at the bottom of the README.md).

    I used LLMs (Gemini) strictly as a technical sounding board to help formalize the math, prototype the PyTorch geometry, and apply my biophysics theory to silicon.

    I’m a self-taught researcher and would love to hear the community's honest thoughts on the memristor-shunt logic, the analog fault tolerance, and the manifold scaling laws.