I think the intuition that lots of people jumped to early about how "specs are the new code" was always correct, but at the same time it was absolutely nuts to think that specs can be represented well with natural language and bullet lists in markdown. We need chain-of-spec that leverages something semi-formal and then iterates on that representation, probably with feedback from other layers. Natural language provides constraints, and guess-and-check code generation is sort of at the implementation level, but neither is actually the specification, which is the heart of the issue. A perfect intermediate language will probably end up being something pretty familiar that leverages and/or combines existing formal methods from model checkers, logic, games, discrete simulations, graphs, UML, etc. Why? It's just very hard to beat this stuff for compression, and this is what all the "context compaction" things are really groping towards anyway. See also the wisdom about "programming is theory building" and so on.
I think if/when something like that starts getting really useful you probably won't hear much about it, and there won't be a lot of talk about the success of hybrid systems and LLMs+symbolics. Industry giants would have a huge vested interest in keeping the useful intermediate representations/languages a secret sauce. Why? Well, they can pretend they are still doing something semi-magical with scale and sufficiently deep chain-of-thought and bill for the extra tokens. That would tend to preserve the appearance of a big-data and big-compute moat for training and inference even if it is gradually drying up.
I've been working on a project to turn markdown into a computational substrate, sort of Skills+. It embeds the computation in the act of reading the file so you don't have to teach anything (or anyone) how to do anything other than read the data on the page along with the instructions on what to do with it. It seemed the simplest way of interacting with a bunch of machines that really love to read and write text.
I use a combination of reference manuals and user guides to replace the specs as a description of intent for the input to the process. They need to be written and kept accurate anyway, and if they're the input to the process, how can they not be? After all,
requirements = specs = user stories = tests = code = manuals = reference guides = runbooks
They're all different projections of the same intent and they tend to be rather difficult to keep in sync, so why not compress them?
https://tech.lgbt/@graeme/115642072183519873
This lets one artifact play all of the roles in the process, or for anything non-trivial, you can use compositional vectors like ${interpolation:for-data.values}, {{include:other:sections-as.whole-units}} or run special ```graphnode:my-funky-cold-medina:fetch fences that execute code found on other nodes and feed the output through middleware to transform and transpile it into the parent document.
Think of it like Rack for your thoughts.
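For illustration, here's a minimal sketch of how a substitution pass over those directives might work. The directive syntax (`${interpolation:...}`, `{{include:...}}`) comes from the comment above, but the resolution logic, the stand-in data sources, and all function names here are my own assumptions, not the project's actual implementation:

```python
import re

# Hypothetical data sources the directives would resolve against.
DATA = {"for-data.values": "42, 17, 99"}
SECTIONS = {"other:sections-as.whole-units": "## Included section\nBody text.\n"}

def resolve(markdown: str) -> str:
    # ${interpolation:key} -> a value pulled from a data table
    markdown = re.sub(
        r"\$\{interpolation:([^}]+)\}",
        lambda m: DATA.get(m.group(1), m.group(0)),
        markdown,
    )
    # {{include:key}} -> another section spliced in as a whole unit
    markdown = re.sub(
        r"\{\{include:([^}]+)\}\}",
        lambda m: SECTIONS.get(m.group(1), m.group(0)),
        markdown,
    )
    return markdown

doc = "Totals: ${interpolation:for-data.values}\n\n{{include:other:sections-as.whole-units}}"
print(resolve(doc))
```

The executable ```graphnode``` fences would be a third pass in the same pipeline, with each node's output run through middleware before being spliced into the parent document.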
I threw the AST thing on it because I'd been playing with that node and thought a symbol table would be useful for two reasons: it's hard to hallucinate a symbol table that's being created deterministically, and if I've got it, it saves scanning entire files when I'm just looking for a method.
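A minimal sketch of the kind of deterministic symbol table I mean, using Python's ast module; the file name and output shape are just examples, not the project's actual format:

```python
import ast
from pathlib import Path

def symbol_table(path: str) -> dict:
    """Walk a Python file and record the line where each class/function is defined."""
    tree = ast.parse(Path(path).read_text())
    table = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            table[node.name] = node.lineno
    return table

# Example: jump straight to a method without rereading the whole file.
# print(symbol_table("my_module.py").get("parse_config"))
```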
New computing paradigms sometimes require new tools.
I think you're absolutely right about the rest of it. LLM assisted development is the process of narrowing the solution space by contextual constraints and then verifying the outcome matches the intent. We need to develop tools on both ends of that spectrum in order to take full advantage of them.
Try /really/ telling one what you want next time, instead of how to do it. See if your results are any better.
It seems obvious that language models are not suitable for determinative number-crunching unless they generate a program to compute the response as an interim step.
https://www.reddit.com/r/ChatGPT/comments/14sqcg8/anyone_els...
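A rough sketch of that "generate a program as an interim step" pattern, for illustration only: the model name and prompt are placeholders I've chosen, and eval'ing model output like this is only acceptable in a toy demo.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; model name is a placeholder

question = "What is 123456789 * 987654321?"

# Ask for a program instead of a numeric answer.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Write a Python expression that computes: {question} "
                   "Reply with only the expression.",
    }],
)
expr = resp.choices[0].message.content.strip()

# Evaluate deterministically instead of trusting the model's arithmetic.
# (eval on untrusted model output is unsafe outside of a demo.)
print(eval(expr))
```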
If you compare the outputs from a CoT prompt vs. a control prompt, the outputs will include the reasoning step either way for the current generation of models.
What is great is that you can define DSPy signature of the type “question, data -> answer” where “data” is a pandas dataframe, then DSPy prompts the llm to answer the question using the data and python code. Extremely powerful.
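A sketch of that pattern as I understand it. The model id, the example dataframe, and the choice to pass the data in as CSV text are assumptions on my part, and dspy.ProgramOfThought is used here as one module that has the LM write and run Python code to produce the answer:

```python
import dspy
import pandas as pd

# Assumed setup; any DSPy-supported model id works here.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

df = pd.DataFrame({"region": ["north", "south"], "sales": [120, 95]})

# "question, data -> answer": the LM is prompted to write Python code
# that derives the answer from the inputs, then that code is executed.
analyst = dspy.ProgramOfThought("question, data -> answer")

result = analyst(question="Which region had higher total sales?",
                 data=df.to_csv(index=False))
print(result.answer)
```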
Afaik PaLM (Google's OG big models) tried this trick, but it didn't work for them. I think that's because PAL used descriptive inline comments + meaningful variable names, while PaLM didn't. Compare the following:
```python
# calculate the remaining apples
apples_left = apples_bought - apples_eaten
```
vs.
```python
x = y - z
```
We have ablations in https://arxiv.org/abs/2211.10435 showing that both are indeed useful (see "Crafting prompts for PAL").
See "Programmatic Tool Calling"
And there was an AI productivity startup called Lutra AI doing this, although they've since pivoted to some kind of MCP infra thing: https://lutra.ai/
"One of the hardest things in programming?" for $1000, Alex.