5 points by varshith17 a month ago | 4 comments
  • realitydrift a month ago
    This reads more like a semantic fidelity problem at the infrastructure layer. We’ve normalized drift because embeddings feel fuzzy, but the moment they’re persisted and reused, they become part of system state, and silent divergence across hardware breaks auditability and coordination. Locking down determinism where we still can feels like a prerequisite for anything beyond toy agents, especially once decisions need to be replayed, verified, or agreed upon.
  • codingdave a month ago
    > We assume that if we generate an embedding and store it, the "memory" is stable.

    Why do you assume that? In my experience, the "memory" is never stable. You seem to have higher expectations of reliability than would be reasonable.

    If you have proven that unreliability, that proof is actually interesting. But it seems less like a bug and more like an observation of how things work.

    • varshith17 a month ago
      "You seem to have higher expectations of reliability than would be reasonable."

      If SQLite returned slightly different rows depending on whether the server was running an Intel or AMD chip, we wouldn't call that "an observation of how things work." We would call it data corruption.

      We have normalized this "unreliability" in AI because we treat embeddings as fuzzy probabilistic magic. But at the storage layer, they are just numbers.

      If I am building a search bar? Sure, 0.99 vs 0.98 doesn't matter.

      But if I am building a decentralized consensus network where 100 nodes need to sign a state root, or a regulatory audit trail for a financial agent, "memory drift" isn't a quirk, it's a system failure.

      My "proof" isn't just that it breaks; it's that it doesn't have to. I replaced the f32 math with a fixed-point kernel (Valori) and got bit-perfect stability across architectures.

      Non-determinism is not a law of physics. It’s just a tradeoff we got lazy about.
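
      For the curious, the core trick is small. Here's a rough Rust sketch of the idea (illustrative only, not the actual Valori kernel): store each coordinate as a Q16.16 integer and do the dot product entirely in integer math, which is associative and therefore bit-identical everywhere.

          // Rough sketch, assuming a Q16.16 representation: a real value x is
          // stored as round(x * 2^16) in an i32. Everything below the conversion
          // is integer math, so the result does not depend on architecture or
          // summation order.

          const FRAC_BITS: u32 = 16;

          fn to_q16_16(x: f32) -> i32 {
              // The one rounding step happens here, once, at the storage boundary.
              (x * (1u32 << FRAC_BITS) as f32).round() as i32
          }

          fn dot_q16_16(a: &[i32], b: &[i32]) -> i64 {
              // Products are Q32.32; accumulate in i64 so typical embedding
              // dimensions cannot overflow, then shift back to a Q16.16-scaled score.
              a.iter()
                  .zip(b)
                  .map(|(&x, &y)| (x as i64) * (y as i64))
                  .sum::<i64>()
                  >> FRAC_BITS
          }

          fn main() {
              let q = |v: &[f32]| v.iter().map(|&x| to_q16_16(x)).collect::<Vec<i32>>();
              let (a, b) = (q(&[0.25, -0.5, 0.125]), q(&[0.5, 0.5, 1.0]));
              println!("dot = {}", dot_q16_16(&a, &b)); // same bits on x86 and ARM
          }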

  • chrisjj a month ago
    > Am I the only one worried about building "reliable" agents on such shaky numerical foundations?

    You might be the only one expecting a reliable "AI" agent period.

    • varshith17 a month ago
      "You might be the only one expecting a reliable 'AI' agent period."

      That is a defeatist take.

      Just because the driver (the LLM) is unpredictable doesn't mean the car (the infrastructure) should have loose wheels.

      We accept that models are probabilistic. We shouldn't accept that our databases are.

      If the "brain" is fuzzy, the "notebook" it reads from shouldn't be rewriting itself based on which CPU it's running on. Adding system-level drift to model-level hallucinations is just bad engineering.

      If we ever want to graduate from "Chatbot Toys" to "Agentic Systems," we have to lock down the variables we actually control. The storage layer is one of them.

      • michalsustr a month ago
        It actually gets worse. The GPUs are numerically non-deterministic too. So your embeddings may not be fully reproducible either.
        • varshith17 a month ago
          You are absolutely right. GPU parallelism (especially reduction ops) combined with floating-point non-associativity means the same model can produce slightly different embeddings on different hardware.
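
          If anyone wants to see how little it takes, here's a trivial Rust demo of the CPU-side version of the same effect (purely illustrative): just changing the reduction order of an f32 sum, which is exactly what different GPUs, SIMD widths, and thread counts do, can change the final bits.

              fn main() {
                  // A million small f32 values.
                  let xs: Vec<f32> = (0..1_000_000).map(|i| (i as f32).sin() * 1e-3).collect();

                  // Left-to-right sequential sum.
                  let seq: f32 = xs.iter().sum();

                  // "Parallel-style" sum: sum 1024-element chunks, then sum the partials.
                  let par: f32 = xs.chunks(1024).map(|c| c.iter().sum::<f32>()).sum();

                  println!("sequential: {}", seq);
                  println!("chunked:    {}", par);
                  println!("bit-identical: {}", seq.to_bits() == par.to_bits()); // usually false
              }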

          However, that makes deterministic memory more critical, not less.

          Right now, we have 'Double Non-Determinism':

          1. The Model produces drifting floats.

          2. The Vector DB (using f32) introduces more drift during indexing and search (different HNSW graph structures on different CPUs).

          Valori acts as a Stabilization Boundary. We can't fix the GPU (yet), but once that vector hits our kernel, we normalize it to Q16.16 and freeze it. This guarantees that Input A + Database State B = Result C every single time, regardless of whether the server is x86 or ARM.

          Without this boundary, you can't even audit where the drift came from.
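
          To make 'freeze' concrete, here's roughly what the boundary looks like (a simplified sketch, not our actual code): floats exist only on the way in; everything the store keeps and compares afterwards is integers.

              // Simplified sketch of the stabilization boundary idea.
              const FRAC_BITS: u32 = 16;

              struct FrozenVector(Vec<i32>); // canonical Q16.16 state, architecture-independent

              impl FrozenVector {
                  // One-time boundary crossing: L2-normalize in f64, quantize to Q16.16.
                  // Whatever tiny float wobble existed upstream is frozen into this
                  // state and never re-derived.
                  fn freeze(embedding: &[f32]) -> Self {
                      let norm: f64 = embedding
                          .iter()
                          .map(|&x| (x as f64) * (x as f64))
                          .sum::<f64>()
                          .sqrt();
                      let scale = (1u32 << FRAC_BITS) as f64;
                      FrozenVector(
                          embedding
                              .iter()
                              .map(|&x| ((x as f64 / norm) * scale).round() as i32)
                              .collect(),
                      )
                  }

                  // Integer-only similarity: same bits on every CPU, every run.
                  fn dot(&self, other: &FrozenVector) -> i64 {
                      self.0
                          .iter()
                          .zip(&other.0)
                          .map(|(&a, &b)| (a as i64) * (b as i64))
                          .sum()
                  }
              }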

        • chrisjj a month ago
          One could switch ones GPU arithmetic to integer...

          ... or resign oneself to the fact we've entered the age of Approximate Computing.

          • varshith17 a month ago
            Switching GPUs to integer (Quantization) is happening, yes. But that only fixes the inference step.

            The problem Valori solves is downstream: Memory State.

            We can accept 'Approximate Computing' for generating a probability distribution (the model's thought). We cannot accept it for storing and retrieving that state (the system's memory).

            If I 'resign myself' to approximate memory, I can't build consensus, I can't audit decisions, and I can't sync state between nodes.

            'Approximate Nearest Neighbor' (ANN) refers to the algorithm's recall trade-off, not an excuse for hardware-dependent non-determinism. Valori proves you can have approximate search that is still bit-perfectly reproducible. Correctness shouldn't be a casualty of the AI age.
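
            A tiny sketch of what 'reproducible ANN' means in practice (illustrative, not our actual index code): score candidates with integer math and break ties on the vector id, so the same query against the same index returns the same ids in the same order on any machine, even though recall is still approximate.

                // candidates: (vector id, Q16.16 integer score from the fixed-point kernel)
                fn top_k(candidates: &[(u64, i64)], k: usize) -> Vec<u64> {
                    let mut ranked = candidates.to_vec();
                    // Highest score first; ties broken by id so ordering never depends
                    // on thread scheduling or iteration order.
                    ranked.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(&b.0)));
                    ranked.into_iter().take(k).map(|(id, _)| id).collect()
                }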