29 points by ImJasonH 5 hours ago | 10 comments
  • AlexCoventry 11 minutes ago
    I'm looking for a language optimized for use with coding agents. Something which helps me to make a precise specification, and helps the agent meet all the specified requirements.
  • kburman 2 hours ago
    An LLM is optimized for its training data, not for newly built formats or abstractions. I don’t understand why we keep building so-called "LLM-optimized" X or Y. It’s the same story we’ve seen before with TOON.
  • forgotpwd16 22 minutes ago
    There was one other just yesterday: https://news.ycombinator.com/item?id=46571166
  • evacchi 32 minutes ago
    Weeks ago I was also noodling around with the idea of programming languages for LLMs, but as a means to co-design DSLs: https://blog.evacchi.dev/posts/2025/11/09/the-return-of-lang...
  • cpeterso 2 hours ago
    If a new programming language doesn’t need to be written by humans (though should ideally still be readable for auditing), I hope people research languages that support formal methods and model checking tools. Formal methods have a reputation for being too hard or not scaling, but now we have LLMs that can write that code.

    https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
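
    As a toy illustration of that idea (a minimal Lean 4 sketch, not taken from the linked post), a property can live right next to the definition and be checked by the proof kernel rather than a human reviewer, so auditing reduces to reading the statement:

    ```lean
    -- A tiny function together with a machine-checked property of it.
    -- If the file compiles, the theorem holds; no human review of the
    -- proof itself is required.
    def double (n : Nat) : Nat := n + n

    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega
    ```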

  • discrisknbisque 4 hours ago
    The Validation Locality piece is very interesting and really got my brain going. Would be cool to denote test conditions inline with definitions. Would get gross for a human, but could work for an LLM with consistent delimiters. Something like (pseudocode):

    ```
    fn foo(name::"Bob"|genName(2)):
        if len(name) < 3: Err("Name too short!")
        print("Hello ", name)
        return::"Hello Bob"|Err
    ```

    Right off the bat I don't like that it relies on accurately remembering list indexes to keep track of tests (something you brought up), but it was fun to think about this and I'll continue to do so. To avoid the counting issue you could provide tools like "runTest(number)", "getTotalTests", etc.
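
    As a rough sketch of the same idea in an existing language (hypothetical helper names, Python decorators standing in for the proposed `::` syntax), example inputs and expected outputs can be attached directly to the definition and replayed by a tool, sidestepping the index-counting problem by keying tests to the function itself:

    ```python
    # Hypothetical sketch of "validation locality": test cases are
    # declared next to the definition they validate.
    def with_examples(*cases):
        """Decorator storing (args, expected) pairs on the function."""
        def wrap(fn):
            fn.examples = cases
            return fn
        return wrap

    @with_examples((("Bob",), "Hello Bob"),
                   (("Al",), ValueError))       # short name -> expected error
    def greet(name):
        if len(name) < 3:
            raise ValueError("Name too short!")
        return "Hello " + name

    def run_examples(fn):
        """Replay each attached example; return the number that pass."""
        passed = 0
        for args, expected in fn.examples:
            try:
                ok = fn(*args) == expected
            except Exception as e:
                # An expected-exception case passes if the raised type matches.
                ok = isinstance(expected, type) and isinstance(e, expected)
            passed += ok
        return passed

    print(run_examples(greet))  # → 2
    ```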

    One issue: The Loom spec link is broken.

  • internet_points 28 minutes ago
    "LLM-optimized" in reality would mean you asked and answered millions of Stack Overflow questions about it and then waited a year or so for all the major models to retrain.
  • Mathnerd314 4 hours ago
    I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the language model zero-shot to design a language unprompted. You could at least use the Rosetta code examples and ask it to identify design patterns for a new language.
    • forgotpwd16 16 minutes ago
      There's also the issue, noted by the author as well, that LLM-optimization quite often becomes mere token-minimization, when it shouldn't be just that.
    • Snacklive 3 hours ago
      I was thinking the same. The premise is interesting: "We optimize languages for humans, maybe we can do something similar for LLMs." But then he just asks the model to do the thing instead of thinking about the problem himself; rather than prompting "Hey, make this," a more granular, guided approach could have been better.

      For me this is a loss of potential on the topic, and an interesting read that turned boring pretty fast.

  • petesergeant 4 hours ago
    A language is LLM-optimized if there's a huge amount of high-quality prior art, and if the language tooling itself can help the LLM iterate and catch errors.
  • rvz 4 hours ago
    > Humans don't have to read or write or understand it. The goal is to let an LLM express its intent as token-efficiently as possible.

    Maybe in the future, humans won't have to verify the spelling, logic, or ground truth of programs either, because we'll all have to give up and assume the LLM knows everything. /s

    Sometimes, when I read these blogs from vibe-coders who have become completely complacent with LLM slop, I have to keep reminding others why regulations exist.

    Imagine if LLMs became fully autonomous pilots on commercial planes, or planes were optimized for AI control, and the humans just boarded and flew for the vibes; maybe call it "Vibe Airlines."

    Why hasn't anyone thought of that great idea? Why not completely remove the human from the loop while we're at it?

    Good idea isn't it?

    • eadwu 3 hours ago
      There are multiple layers and implicit perspectives here that I think most people are purposefully omitting, as a play for engagement or something else.

      The reason LLMs are still restricted to higher-level programming languages is that there are no guarantees of correctness - any guarantee has to be provided by a human - and it is already difficult for humans to review other humans' code.

      If there comes a time when LLMs can generate code - whether some would term it slop or not - that carries a guarantee of correctness, it would indeed probably be the correct move to have a more token-efficient language, or at least a different abstraction from the ones designed for humans.

      Personally, I think in the coming years there will be a subset of programming that LLMs can perform while providing a guarantee of correctness - likely using other tools, such as Lean.

      I believe this capability can be stated as: LLMs should be able to obfuscate any program code - which is a pretty decent guarantee.