The reason it hasn’t taken off is that it’s a supremely bad and unmaintainable idea. It also just doesn’t work very well, because the LLM doesn’t have access to the rest of the codebase without an agentic loop to ground it.
> You write a Python function with a natural language specification instead of implementation code. You attach post-conditions – plain Python assertions that define what correct output looks like.
Vs
> You write a Python function with ~~a natural language specification instead of~~ implementation code.
In many cases.
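The pattern the quoted text describes might look something like the sketch below. The function name and post-conditions are illustrative, not from the original, and the body is hand-written here to stand in for whatever the LLM would generate from the docstring:

```python
def sorted_unique(xs):
    """Return the distinct elements of xs in ascending order."""
    # In the scheme under discussion, the LLM would supply this body
    # from the docstring alone; a hand-written version stands in here.
    result = sorted(set(xs))
    # Post-conditions: plain Python assertions defining correct output.
    assert result == sorted(result), "output must be ordered"
    assert len(result) == len(set(result)), "output must be duplicate-free"
    assert set(result) == set(xs), "output must preserve the input's elements"
    return result

print(sorted_unique([3, 1, 2, 3, 1]))  # [1, 2, 3]
```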
What is the BENEFIT of all this?
Let's use Blockchain instead of a database - because we can.
Let's create a maze of microservices - because we can.
Let's make every function a lambda function - because we can.
Let's make AI write code, run it, verify it, fix it, then run it again - because we can.
Let's burn untold amounts of energy to do simple things - because we can.
Sure, every bit of f--ing around is research, but ROI is far from constant.
It might even be fun if the first call generates Python (or another language), and subsequent calls go through it. This "optimized" or "compiled" natural language is "LLMJitted" into Python. With interesting tooling, you could then click on the implementation and see the generated code, a bit like looking at generated assembly. Usually you'd just write in some hybrid of Python and natural language, but have the ability to look deeper.
I can also imagine some additional tooling that keeps track of good implementations of ideas that have been validated. This could extend to the community. Package manager. Throw in TRL + a web of trust and... this could be wild.
Really tricky functions that the LLM can't solve could be delegated back for human implementation.
As an experiment, it's kind of cool. I'm kind of at a loss as to what useful software you'd build with it, though. Surely once you've run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?
Can anyone think of any uses for this?
[0] e.g. something like the below which I expect to use maybe a dozen times total.
Main routine: In folder X are a bunch of ROM files (iso, bin, etc) and a JSON file with game metadata for each. Look for missing entries, and call [subroutine] once per file (can be called in parallel). When done, summarise the results (successes/failures) based on the now updated metadata.
Subroutine: (...) update XYZ, use metacritic to find metadata, fall back to Google.
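A sketch of what generated code for the main routine above might look like, assuming a hypothetical `update_one` subroutine and a `metadata.json` file keyed by filename (both layouts are invented for illustration; the real subroutine would hit Metacritic and fall back to Google):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

ROM_EXTENSIONS = {".iso", ".bin"}  # illustrative subset of "iso, bin, etc"


def update_one(path):
    """Hypothetical stand-in for the subroutine: fetch metadata for one
    ROM file. Returns (filename, succeeded)."""
    return path.name, True  # placeholder; would query Metacritic/Google


def fill_missing_metadata(folder):
    """Main routine: find ROM files missing from metadata.json, update
    them in parallel, and summarise successes/failures."""
    folder = Path(folder)
    meta_file = folder / "metadata.json"
    metadata = json.loads(meta_file.read_text()) if meta_file.exists() else {}

    roms = [p for p in folder.iterdir() if p.suffix.lower() in ROM_EXTENSIONS]
    missing = [p for p in roms if p.name not in metadata]

    # Subroutine calls can run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(update_one, missing))

    successes = [name for name, ok in results if ok]
    failures = [name for name, ok in results if not ok]
    return {"updated": successes, "failed": failures}
```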
Surely, you'll run a function that does an AI call to cache the resulting code.
(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)
I’m not just making this stuff up, of course; I got the idea yesterday after reading Karpathy’s tweet about Nanoclaws' contribution model (don’t submit PRs with features, submit PRs that tell an LLM how to modify the program). Now I can’t concentrate on my day job. Can’t stop thinking about my little Elixir/BEAM project.
I'm sure there's a lot of effort put into this, god knows why, but I pray I never have to have this in a production environment I'm on.
https://kylekukshtel.com/incremental-determinism-heisenfunct...
A lot of this was also inspired by Ian Bicking's work here:
https://github.com/Gabriella439/grace
It's still probably not a great idea.
Nobody except maybe NASA would make software in this scenario.
In my experience it’s a huge leap in terms of the agent being able to test and debug functionality. It’ll often write small code snippets to test that individual functions work as expected.
For example, connecting to endpoints, etc... then the logic of your script can run.
Eventually, perhaps. I've yet to see a use case for blockchains that isn't merely a worse facsimile of something already existing.
But the electron was useless when it was discovered, so maybe one day.
These attempts at generating code that adheres to whatever spec, in Python of all languages, are futile and just please investors.
There is a reason that really proving adherence to a spec or making arguments that the spec is reasonable in the first place is hard.
But hey, thinking is hard, let's go AI shopping.
nah, I'm skipping this update.