1 pointby chwmath6 hours ago2 comments
  • verdverm6 hours ago
    > "Even GPT-5, Claude 4.5, and Gemini 3.0 agreed: This is the kernel of AGI."

    Of course they did, they are sycophants

    They will tell you any idea is a great idea. It has no bearing on the actual "goodness" of an idea

    https://www.seangoedecke.com/ai-sycophancy/ (currently on the front of HN)

    > the LLM executed it as a game engine

    Did it? How did you verify?

    How do you know it's not just telling you it did something without actually doing it?

    • chwmath4 hours ago
      The proof is in the repo.
  • chwmath6 hours ago
    Hi HN,

    I built this medical diagnosis simulator (`chat-a-cold.html`) in just 3 hours.

    The interesting part is that I didn't "code" the medical logic in Python or C++. Instead, I used a methodology I call *NLCS (Natural Language Constraint System)*.

    I simply defined the "thinking structure" of a doctor (Input -> Criteria -> Exception -> Output) in natural language, and the LLM executed it as a game engine.

    * *Logic:* Differentiates between Common Cold (90%) and Sinusitis (10%) using hidden variable checks. * *Cost:* Near zero development cost (just prompt engineering + structure design). * *Result:* It simulates an intern's diagnostic process with high accuracy.

    The repository includes the full whitepaper (PDF) and the runnable HTML file. I believe this demonstrates that "Cognitive Architecture" is more important than raw knowledge in the AI era.

    Would love to hear your thoughts!