There is a real crisis of AI slop getting posted to this forum. I don't even bother reading submitted articles related to AI anymore, but now the slop seems to be spreading to everything else.
When LLMs are good enough to be undetectable, what happens then? They aren't that far away atm, so it's only a matter of time until _everyone_ is assumed to be an LLM.
> For example, a system program is often designed to gather statistics about its own operation, but such statistics-gathering is pointless unless someone is actually going to use the results. In order to make the instrumentation code optional, I include the word ‘stat’ just before any special code for statistics, and ‘tats’ just after such code; and I tell WEAVE to regard stat and tats as if they were begin and end. But stat and tats are actually simple macros.
I encourage you to read that paragraph† a few times. Even if you have no idea what the context is, you get that there's a point, that there's something else to dig into, that the author might be being a bit cheeky. In other words, you can feel Knuth behind the ink. Philosophers would call this intentionality[2]. LLMs produce the polar opposite of garden path sentences[3]: text so predictable it never forces you to re-parse (and, imo, that's why they're so easy to spot).
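For anyone to whom the mechanism is opaque: stat and tats simply bracket the optional statistics-gathering code so a build can keep it or strip it out. Here's a rough C analog of the same idea; it's a sketch of my own, not Knuth's actual WEB macros (the STAT macro and the STATS flag are my inventions for illustration):

    #include <stdio.h>

    /* Analog of Knuth's stat/tats: bracket statistics-gathering code
       so it can be compiled in or out. Build with -DSTATS to keep it. */
    #ifdef STATS
      #define STAT(code) code  /* keep the instrumentation */
    #else
      #define STAT(code)       /* strip it entirely */
    #endif

    STAT( static long swap_count = 0; )  /* exists only in a STATS build */

    static void swap(int *a, int *b) {
        int t = *a; *a = *b; *b = t;
        STAT( swap_count++; )  /* pointless unless someone reads the results */
    }

    int main(void) {
        int x = 1, y = 2;
        swap(&x, &y);
        STAT( printf("total swaps: %ld\n", swap_count); )
        return 0;
    }

None of which diminishes the point about the prose, of course.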
† I specifically picked something technical to illustrate that even in domains where semantic precision is of utmost importance, human expression is still just that: human.
[1] https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...
[2] https://plato.stanford.edu/archives/fall2014/entries/intenti...
[3] https://www.sjsu.edu/writingcenter/docs/handouts/Garden%20Pa...
> In this light, the activity of programming becomes less an act of issuing commands and more an act of communication. The computer is, after all, an obedient but uncomprehending servant; it will execute whatever precise instructions we provide. But our colleagues, our future selves, and the broader community of readers are not so easily satisfied. They demand clarity, intention, and narrative. A program, then, should be structured not merely for execution, but for reading—its logic unfolding in a manner that mirrors the way one might naturally explain the solution to another person.
> This shift in perspective has practical consequences. When we write with exposition in mind, we are compelled to confront ambiguities that might otherwise remain hidden. Vague assumptions must be made explicit; convoluted steps must be reorganized into simpler, more digestible ideas. The discipline of explaining a program often leads to improvements in the program itself, since confusion in the prose is frequently a symptom of confusion in the underlying design.
Fascinating technology. I would not be able to immediately tell this was AI-generated; these models can, in some cases, produce text that doesn't set off alarm bells. As an avid reader and writer I'm not really sure what to make of it. I don't want to consume AI-generated art or literature because it's completely beside the point, but in the future will we even be able to tell? How do we even know if anyone around us is real? Could they just be sufficiently advanced LLMs, fooling us? Am I the only human in the matrix?
But of course, writing as good as a grad student's (just without the delightfully idiosyncratic style of a particular person) is still very impressive, so your concerns remain valid.
> ...the activity of programming becomes less an act of issuing commands and more an act of communication
directly contradicts:
> The computer is, after all, an obedient but uncomprehending servant...
If programming becomes "an act of communication", how can an "uncomprehending servant" make heads or tails of what I'm telling it? I get that the two aren't strictly contradictory, but bridging them would require at least a throwaway sentence, and the passage never supplies one.
> When we write with exposition in mind, we are compelled to confront ambiguities that might otherwise remain hidden.
I'm being a bit nitpicky, but this is a non sequitur; we aren't necessarily required to confront any ambiguities, even when we're trying very hard to be expository. The counterexamples I'm thinking of at the moment are contrived (amnesia, my four-year-old niece trying to tell a story, etc.), but I mainly take issue with the word "compelled."
> its logic unfolding in a manner that mirrors the way one might naturally explain the solution to another person
People explain things in all kinds of weird, circuitous ways, so while this (like all AI-generated output) seems interesting prima facie, it's actually kind of a dud when you think about it for more than 5 seconds.
> Vague assumptions must be made explicit; convoluted steps must be reorganized into simpler, more digestible ideas.
and
> ...ambiguities that might otherwise remain hidden...
directly contradicts:
> ...whatever precise instructions we provide
It seems like the computer can somehow encode "ambiguities" and "vague assumptions" as "precise instructions." How, exactly, does that work? (Spoiler: it doesn't; it's gibberish.) On the other hand, if you read Knuth's first few paragraphs, he clearly has a point in mind; I'd even say he's being a bit wordy, but he never equivocates. In fact, by the fourth paragraph, he's almost giddy with excitement.