The framework idea and YAML prompt were developed with the assistance of Kagi Assistant and Claude 3.7 Sonnet (Thinking).
The site was vibe coded with Windsurf Cascade and Claude 3.7 Sonnet (Thinking).
For instance, sci-fi writer Charlie Stross wrote "Keir Starmer is a fascist", which clearly abuses "to be", but you can stuff adjectives just fine in E-Prime: "Fascist Keir Starmer never stops pushing fascist policies with his fascist attitudes and fascist friends." You could make the case that E-Prime frequently improves on English, but some constructions become terribly tortured.
The thing is, though, that LLMs don’t appear to trouble themselves at all when following E-Prime!
After a lot of conceptual refinement of the overall idea I had (minimizing hallucinations by prompt alone), it was almost trivial to make the LLM consistently use E-Prime everywhere.
You raise an interesting thought, though: how to tweak this prompt so that the LLM avoids E-Prime where it significantly reduces readability or dramatically increases cognitive load.
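Purely as a hypothetical sketch, such a carve-out might read something like the clause below; the wording and the constant name are invented here, not taken from the actual YAML prompt.

```python
# Hypothetical sketch only: the clause wording and the constant name are
# invented for illustration, not copied from the real prompt.
EPRIME_CLAUSE = (
    "Write in E-Prime: avoid every form of the verb 'to be'. "
    "Exception: if rewriting a sentence without 'to be' would make it "
    "noticeably harder to read or would require tortured phrasing, "
    "keep the plain construction and favor clarity over strict E-Prime."
)
```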
A classifier for “bullshit” detection has been on my mind.
An emotional tone or hostility detector, on the other hand, is ModernBERT + BiLSTM for the win. I'd argue the problem with fake news is not that it is fake but that it works on people's emotions and that people prefer it to the real thing.
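As for the ModernBERT + BiLSTM pairing, here's a minimal sketch of what I mean: the encoder produces contextual token embeddings, a bidirectional LSTM reads them in both directions, and a small head classifies the pooled result. The checkpoint name, hidden sizes, and label count are assumptions for illustration, not a tested recipe (and ModernBERT needs a recent version of `transformers`).

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ToneClassifier(nn.Module):
    def __init__(self, encoder_name="answerdotai/ModernBERT-base",
                 lstm_hidden=256, num_labels=2):
        super().__init__()
        # Pretrained encoder produces contextual token embeddings.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # BiLSTM reads the token sequence in both directions.
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                      # (batch, seq, dim)
        seq_out, _ = self.bilstm(hidden)         # (batch, seq, 2*lstm_hidden)
        # Mean-pool over real (non-padding) tokens before classifying.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (seq_out * mask).sum(dim=1) / mask.sum(dim=1)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = ToneClassifier()
batch = tokenizer(["You people never listen."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Pooling over the BiLSTM outputs rather than taking only the final hidden state keeps the head sensitive to hostile phrasing anywhere in the text, not just near the end.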
You can detect common, established bullshit patterns and probably new ones that resemble the old ones. Thirty years from now, there will be new bullshit patterns your model won't recognize.
Though I suppose Gödel's completeness theorem does relate provability to truth, in that it shows (for systems with the right kind of rules of inference) that provability is equivalent to a statement being true in all models...
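Written out for a first-order theory $T$ and a sentence $\varphi$ in its language, the completeness theorem says:

$$ T \vdash \varphi \;\Longleftrightarrow\; T \models \varphi $$

i.e. $\varphi$ follows from the inference rules exactly when it holds in every model of $T$.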
Still, are you sure Tarski's undefinability theorem isn't more relevant to your point?