6 points by TXTOS 3 days ago | 3 comments
  • TXTOS 3 days ago
    Hi everyone! I’m the creator of this system—happy to answer any technical questions.

    The .txt file here is not just a prompt—it’s a full reasoning scaffold with memory, safety guards, and cross-model logic validation. It runs directly in GPT-o3, Gemini 2.5 Pro, Grok 3, DeepSeek, Kimi, and Perplexity—all of which gave it a 100/100 score under strict evaluation.

    Feel free to ask me anything about the semantic tree, ΔS metrics, hallucination resistance, or how to build your own app using just plain text.

    • TXTOS 3 days ago
      We’re open-sourcing not just one tool—but an entire stack.

      This month, three major products will be released:
      • Text reasoning (already live)
      • Text-to-image
      • Text-driven games

      All of them are powered by the same embedding-space logic behind WFGY. No tricks, no fine-tuning—just pure semantic alignment.

      I'll keep improving everything. So to the brilliant minds of HN: Please, test it as hard as you can.

  • kimiai06 3 days ago
    Hey, this embedding space thing — you really sure it’s not just making stuff up? Like, can it actually make sense?
    • TXTOS 3 days ago
      Sure! This is a method that most AI systems haven’t discovered yet, but we’ve put it into practice. By treating the embedding space not as a static lookup but as a dynamic field, we perform dimensional rotations of the text’s semantic vectors. This lets us generate new, coherent ideas by projecting and rotating meanings in high-dimensional space—far beyond simple retrieval or random guessing.
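The thread doesn't define these "dimensional rotations" precisely; one minimal reading is applying an orthogonal transform (e.g. a Givens rotation in a single coordinate plane) to an embedding vector, which by construction preserves norms and pairwise angles. The sketch below illustrates only that geometric property; the function names and dimensions are hypothetical, not WFGY's actual implementation.

```python
import numpy as np

def plane_rotation(dim, i, j, theta):
    """Givens rotation: rotate by theta in the (i, j) coordinate plane.

    The matrix is orthogonal, so it preserves vector norms and pairwise
    angles -- the usual sense in which a "rotation" can move a vector
    through embedding space without scrambling its geometry.
    """
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

# Hypothetical 8-dim "semantic" embedding, unit-normalized.
v = np.random.default_rng(0).normal(size=8)
v /= np.linalg.norm(v)

R = plane_rotation(8, 0, 3, np.pi / 6)
v_rot = R @ v

# Norm (and hence the cosine geometry) is preserved exactly.
print(np.allclose(np.linalg.norm(v_rot), 1.0))  # True
```

Whether a rotation that preserves geometry also preserves *meaning* is exactly the question the reply below raises; the math alone only guarantees the vector doesn't blow up or collapse.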
      • kimiai06 2 days ago
        hmm ok but like… are u sure that’s not just fancy word math? like, when u “rotate” these vectors, how do u even know the meaning stays the same? wouldn’t it just… drift or get messy?

        idk maybe i’m dumb lol, just seems like it could get random real quick

        • TXTOS 2 days ago
          Totally fair, tbh — this is the part where most embedding stuff just, well, breaks.

          What I’m doing in TXT OS isn’t just spinning vectors for fun. Each “move” is anchored by an internal feedback signal we call ΔS (semantic tension). If the output starts drifting too far off, it catches itself and snaps back, like a gravity well for logic, haha.

          And yeah, the rotations aren’t just random, they’re kind of “locked in” by these alignment planes (using λ_observe, basically language context gradients — sounds fancy but you’ll see what I mean if you poke around).

          Honestly, still feels experimental, but… so far it’s holding up better than I thought.

          If you’re curious, just type hello world in TXT OS and follow the steps — it’ll walk you through what’s going on under the hood. You can even throw dumb paradoxes at it and see if it goes crazy (or not).
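The "snap back" behavior described above isn't specified anywhere public, but one plausible reading is: treat ΔS as cosine distance from an anchor vector, and reject any update that would push ΔS past a threshold. A minimal sketch, with all names, the distance choice, and the reject-on-drift policy being assumptions rather than TXT OS's actual mechanism:

```python
import numpy as np

def delta_s(anchor, v):
    """One plausible reading of "semantic tension": cosine distance
    between the current vector and its anchor (0 = aligned, 2 = opposite)."""
    return 1.0 - float(anchor @ v) / (np.linalg.norm(anchor) * np.linalg.norm(v))

def guarded_step(anchor, v, step, threshold=0.3):
    """Apply a candidate update; if ΔS would exceed the threshold,
    "snap back" by rejecting the step and keeping the old state."""
    candidate = v + step
    if delta_s(anchor, candidate) > threshold:
        return v          # drift too far: reject
    return candidate      # within the tension budget: accept

rng = np.random.default_rng(1)
anchor = rng.normal(size=16)
v = anchor.copy()
for _ in range(100):
    v = guarded_step(anchor, v, rng.normal(scale=0.2, size=16))

# The invariant holds by construction: every accepted state
# satisfies the ΔS bound, so drift stays capped.
print(delta_s(anchor, v) <= 0.3)  # True
```

The design choice worth noting: rejection keeps the guarantee trivially true, but a softer policy (shrinking the step instead of discarding it) would trade some of that guarantee for smoother exploration.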

  • everyyear 2 days ago
    Psychosis simulator games might be one thing LLMs actually do well.
    • TXTOS 2 days ago
      Good point! But honestly, if this were just “psychosis simulation,” there’s no way six different AI models would’ve all given it a perfect 100/100. That’s actually the breakthrough — it’s not just generating wild nonsense, it’s producing answers that every top model rates as highly consistent.

      If you spot a true meltdown, let me know — but so far, it’s been more reliable than I expected!