It had the usual functions and I/O library stuff. In fact, I wrote a tool to absorb other library headers (e.g. C or C++) and produce blocks that my compiler could link with, and voila, your program could call those external libraries.
We used it for a couple of contracts. Some of the control engineers were enthusiastic; some not so much. One more thing to learn.
But it's also hard for me to grasp the exact value add from the README, or why I should buy their story, so I'm not sure.
If I have 10 bugs in production I can just regenerate my app and now I’ll have 10 completely different new bugs. New bugs on everyone’s machine! Fun for the whole family.
How does this handle multiple different “blocks” that need proper interfaces to communicate with each other?
I can only imagine the safety nightmares that would be generated in C++ this way.
Seriously…what is this for?
You could take it a step further and have a deterministic agent inside a deterministic VM, and you can share a whole project as {model hash, vm image hash, prompt, source tree hash} and have someone else deterministically reproduce it.
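The {model hash, vm image hash, prompt, source tree hash} tuple could be collapsed into a single fingerprint for sharing. A minimal sketch (hypothetical, not from any existing tool): hash the components in a fixed order with length prefixes, so no two distinct tuples can serialize to the same byte stream.

```python
import hashlib

def bundle_id(model_hash: str, vm_image_hash: str,
              prompt: str, source_tree_hash: str) -> str:
    # Fixed component order + length prefixes make the encoding
    # unambiguous, so equal IDs imply equal tuples.
    h = hashlib.sha256()
    for part in (model_hash, vm_image_hash, prompt, source_tree_hash):
        data = part.encode("utf-8")
        h.update(len(data).to_bytes(8, "big"))
        h.update(data)
    return h.hexdigest()
```

Anyone who reproduces the build can recompute the same ID and compare, without trusting the sender.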
Is this useful? Not sure. One use case I had in mind was as a mechanism for distributing "forbidden software". You can't distribute software that violates the DMCA, for example, but can you distribute a prompt?
Temperature is an easy knob to twist, after all. Somebody (not me I’m too poor to pay the lawyers) should do a search and find where the crime starts.
At that point it's retrieving results from a database.
EDIT: how would OP address my main point, which is that det. inference is functionally equivalent to any arbitrary keyed data storage/retrieval system?
This is not true. Fabrice Bellard's ts_zip [0] and ts_sms [1] use an LLM to compress text. They beat tools like xz, but of course are much slower. Now, if the model were non-deterministic, you would have trouble decompressing exactly into what was compressed. So it uses a deterministic LLM.
[0] https://bellard.org/ts_zip/ https://news.ycombinator.com/item?id=37152978
[1] https://bellard.org/ts_sms/ https://lobste.rs/s/5srkwz/fabrice_bellard_s_ts_sms_short_me... (funny enough, many people comment that if it uses an LLM it must be lossy. This is not the case; it's compared to xz on the page precisely because it's lossless)
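A toy sketch of why determinism matters here (this is an illustration, not ts_zip's actual algorithm, which uses arithmetic coding over LLM probabilities): encode each symbol as its rank under the model's prediction, and decode by re-running the same model to invert the ranks. If the model's output varied between runs, the same codes would decode to different symbols.

```python
from collections import Counter

def ranks_for(counts, alphabet):
    # Deterministic "model": order symbols by observed frequency,
    # ties broken alphabetically, so every run yields the exact
    # same ordering for the same history.
    return sorted(alphabet, key=lambda s: (-counts[s], s))

def compress(text):
    counts, alphabet, codes = Counter(), sorted(set(text)), []
    for ch in text:
        order = ranks_for(counts, alphabet)
        codes.append(order.index(ch))  # frequent symbols -> small ranks
        counts[ch] += 1
    return alphabet, codes

def decompress(alphabet, codes):
    counts, chars = Counter(), []
    for rank in codes:
        order = ranks_for(counts, alphabet)
        ch = order[rank]
        chars.append(ch)
        counts[ch] += 1
    return "".join(chars)
```

Feed the small ranks to an entropy coder and you get compression; swap in a model whose predictions drift between runs and the round trip breaks.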
I'd strongly recommend going over the README by hand. What you currently have is redundant and disorganized, and header sizes/depths don't make a lot of sense. The "manual build" instructions should also describe the dependencies that the install script is setting up.
This seems to scratch that itch. The non-determinism probably makes it unsuitable for most uses, though.
> Yet, with LLMs, we commit our generated source code, completely throwing away the English language abstraction.
Unless you want to commit your chats, that's very much a bonus. You don't want two different people "compiling" completely different versions of your application, because LLMs aren't deterministic.
I understand why that's the case, and I believe this is the main hurdle for adoption of a tool like this.
> This is not negotiable. This is not optional. You cannot rationalize your way out of this.
Some days I really miss the predictability of a good old if/else block. /s
One example:
Best of all, they work together. You can store your .glp blueprints in a Docker container—creating software that is immortal in both environment and logic.
This is nonsensical. The entire point of a container is that it ought to contain only what's necessary to run the underlying software; it's just the production filesystem. Why would I put LLM prompts that don't get used at runtime in a container?

What other language-agnostic methods of describing complex systems is your project inspired by? In competition with?
---
By using this tool, a programmer or team is sending the message that:
"We expect LLM generated code to remain a deeply coupled part of our delivery process, indefinitely"
But we didn't know about LLMs 5 years ago. What is the argument for defining your software in a way that depends on such a young technology? Most of the "safety" features here are related to how unsafe the tech itself still is.
"Nontrivial LLM driven rewrites of the code are expected, even encouraged"
Why is the speedy rewriting of a system in a new language such a popular flex these days? Is it because it looks impressive, and LLMs make it easy? It's so silly.
And if the language allows for limiting the code the LLM is allowed to modify, how is it going to help us keep our overall project language-agnostic?
$$$PROGRAM vector X=[12. 17] -> rules
sequential output so AI doesn't hallucinate.
I have no problem harnessing LLMs for building my application. I don't need another unreadable mess. Why do I need this?
You fail to communicate the problem this solves.
It's a more involved way to prompt?
> 1- installing with irm https://raw.githubusercontent.com/alonsovm44/glupe/master/in... | iex
That is high risk; don't ask people to do that, especially when it's completely unnecessary for what the language is. The language isn't providing value; it's just esoteric.
>2. "Glupe isolates AI logic into semantic containers, so your manual code stays safe."
Watch out for light AI psychosis. This existed before AI, to be fair: using words in a way that doesn't convey meaning. Maybe what's going on is that you use them with ChatGPT and it either understands or doesn't but follows along anyway. So make sure to prioritize language that you develop with humans, not AI. And try to simplify your language and the message you were trying to convey, because you missed big time with that sentence.
>3. The language itself misses the mark. It looks like it's C++ with some modifications?
>4. It's also not a language but a terminal? Try to get the trust of your users by doing one thing well before promising to do it all. A bit of humility pays off; you can't do everything anyway.
I may have misunderstood, but my interpretation was that the "language" is really just the `$${ }$$` blocks, and the code outside of that is just written in whatever "real" (traditional?) language you want the blocks to be implemented in.