3 points by NickAndresen a day ago | 2 comments
  • Terr_ a day ago
    > the internal monologue [...] This is what it thought

    IMO this embeds a flawed assumption from the get-go.

    My go-to framing for LLMs: it's an algorithm auto-completing a document, and we humans get stuff done by setting up documents that mostly resemble theatrical stories, which other programs then parse to trigger effects.

    In that framing, a "reasoning model" is primarily a shift in document style, toward film noir, where the character we humans perceive delivers a monologue, and the performance rules (the frameworks we code to read the text) are set to not act on those fragments of text.
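    A minimal sketch of that "performance-rules" layer, assuming a model that wraps its monologue in <think> tags (the tag name and the split_monologue helper are illustrative, not any vendor's actual format):

        import re

        def split_monologue(raw_output: str) -> tuple[str, str]:
            """Separate 'reasoning' fragments from the text we act on.

            The model emits one undifferentiated stream of tokens; only
            our parsing rules decide that the <think>...</think> spans
            are 'thoughts' and everything else is the 'answer'.
            """
            thoughts = "\n".join(
                re.findall(r"<think>(.*?)</think>", raw_output, re.DOTALL)
            )
            answer = re.sub(
                r"<think>.*?</think>", "", raw_output, flags=re.DOTALL
            ).strip()
            return thoughts, answer

        raw = "<think>The dame is trouble.</think>Hello ma'am, how can I help?"
        thoughts, answer = split_monologue(raw)
        # Only `answer` is shown to the user or routed to tools; `thoughts`
        # is the same kind of text, just filtered out by convention.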

    But why should we consider it qualitatively different? What justifies that?

    * [Hardboiled Detective v1.0] "Hello ma'am... I bet you're trouble, walking into my office."

    * [Hardboiled Detective v1.1] I knew the dame was trouble the moment she walked into my office.

    Why is the first text not "model reasoning", but suddenly the second text is? Isn't the difference mostly in our human perceptions and assumptions, rather than in the real-world algorithm and logic gates?

    > Chain-of-Thought meant that instead of AI getting smarter by getting larger, it could get smarter by thinking longer.

    Alternate iconoclastic framing: when we unleash the iterative text-predictor on documents, documents that resemble stories with ample supporting monologues lead to end results that humans perceive as more cohesive and higher-quality.
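    A toy illustration of that framing: same question, two document styles, one predictor (the prompts are illustrative; "Let's think step by step" is a well-known chain-of-thought trigger phrase):

        QUESTION = (
            "Q: A bat and a ball cost $1.10 in total. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?\n"
        )

        # Style 1: the document jumps straight to the answer slot.
        terse_doc = QUESTION + "A:"

        # Style 2: the same document, restyled so the story includes an
        # ample supporting monologue before the answer.
        monologue_doc = QUESTION + "A: Let's think step by step."

        # Hand either string to the same iterative text-predictor: nothing
        # about the algorithm changes, only the kind of document it is
        # asked to extend. Empirically, the second style tends to end in
        # answers humans rate as more coherent.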

  • NickAndresen a day ago
    On Thinkish, Neuralese, and the End of Readable Reasoning

    When OpenAI's o3 model decided to lie about scientific data, this is what its internal monologue looked like: "disclaim disclaim synergy customizing illusions... overshadow overshadow intangible."

    This essay explores how we got cosmically lucky that AI reasoning happens to be readable at all (Chain-of-Thought emerged almost by accident from a 4chan prompting trick) and why that readability is now under threat from multiple directions.

    Using the thousand-year drift from Old English to modern English as a lens, I look at why AI "thinking" may be evolving away from human comprehension, what researchers are trying to do about it, and how long we might have before the window gets bricked closed.