There's a flaw in the Milli Vanilli argument. The band had no input into their songs. They 'performed' them by lip-syncing on stage, but all of the music and lyrics were someone else's. Milli Vanilli had no part in the creative process.
That's not technically true of AI content. There's some tiny little seed of a creative starting point, in the form of the prompt the AI needs. When someone makes something with Claude or Nano Banana, it's based on their idea, with their prompt, and with their taste deciding whether the output is an acceptable artefact of what they wanted to make. I don't think you can just disregard that. They might not have wielded the IDE or camera or whatever, and you might believe that prompting and selecting which output you like has no value, but you can't claim there's no input or creativity required from the author. There is.
Ah yes, the “tiny little seed” defense — because if I hum three notes and Quincy Jones writes the symphony, clearly we co-composed it.
Sure, prompting involves taste and direction. So does ordering at a restaurant. But if I tell the chef “spicy, but make it fusion” and then Instagram the plate as my culinary creation, I’m not suddenly Gordon Ramsay.
Nobody’s saying there’s zero input. We’re saying input isn’t authorship. A seed isn’t a forest — and picking your favorite output isn’t the same as growing it.
Maybe the arguing is really over whether it's higher-status to enjoy longform content, or to criticize it for not being more efficient? By identifying the argument, I've revealed it as silly, and clearly proven myself to be higher status than either side. The arguing may stop now. You're welcome.
I read it all, and found myself engaged throughout. Not to say it was all riveting; some spots were certainly drier than others, but it felt 'real'. Maybe they did use AI (I somehow doubt that, given the content), but even if they did, they went over everything in a way that retained a voice that felt authentic.
I hate that many of the articles I read now feel like they make the same half-hearted attempt at grabbing your attention without ever actually saying clearly what they mean.
As for the content: I had actually just been told by management this past week that I need to become AI 'fluent' as part of future performance evaluations, and I have been deeply conflicted about it. I do think AI has value to add, but I don't think it's something that should be forced, so this article resonated with me.
It's a long read, and not for everyone, but I recommend it as a way of hearing another human's opinion and deciding for yourself whether it has value.
I hear this, and FWIW, if there aren't very specific things being asked of you, using AI as a Stack Overflow replacement, as the OP admits to doing, is as "AI fluent" as anything else in my book.
Also, Aislopica. He missed the opportunity to say Aislopica Fables.
The author has made the correct call. There's a pretty deep irony that all the top-level comments at the time of this writing are about how the article is too long. It's quite clearly not trying to succinctly convince you of a point, it's meant to be a piece of genuinely human writing, and enjoyed (or not!) on the basis of that.
All other top level arguments offer AI summaries that miss all of the interesting, nuanced, wide-reaching topics about AI and its impact on our humanity, and complain it was too long to read.
Truly a gem of irony.
Reminds me of literature lessons in high school, where the teacher would explain why a given book was exceptionally important while for me it was exceptionally boring, but I had to take part in this theatre where I needed to pretend the book was indeed flawless.
The true gem of irony is that the author could really benefit from an LLM which could review his text before publishing. It's not 1920 where people read everything they have access to multiple times over and over because text on its own is rare. It's 2026 and before I engage with your work, you need to convince me that it's not slop.
I don't agree that every piece of text should do that. For fiction, I watch movies and read books without reading the synopsis or back cover. Being surprised has its own enjoyment factor.
But this article's first section starts by saying whom the article is for (thus implying the content), and the section ends with: "Pull up a chair and endure yet another goddamn article about generative AI". I think it's pretty direct.
Apart from that, content-wise, a preliminary abstract is nice to have. I do like how the author provides a table of contents.
Over sixteen thousand words about how the author doesn’t really use language models very much but might in the future
But thanks for saving the rest of us. This is why I read the comments first.
It feels like half the people here do not read or write in their free time, which would be understandable if this were not primarily a site for software engineers who write (sorta) as a job
But after reading this comment section... I mean if enjoying well written prose counts as enjoying craft and artistry I guess I do then? Damn.
This is not prose, it is exposition. It is perfectly valid to critique any expository essay, especially one of this length, for its density (or lack thereof) of substantive information.
A person writing an essay on their own site doesn't need to have the information density of a bus timetable.
But this seemed like it bridged the gap between prose and an expository essay: it was doing both.
Putting prose in an essay means there are more valid criticisms of a piece of writing, not fewer. If somebody is breakdancing and reciting the periodic table at the same time it’s ok if somebody notices if they skipped the lanthanides and actinides.
I’m a fan of blending the two! It’s just really really hard to do both well at the same time. My most recent example is Malcolm Harris’ history of Palo Alto, it is incredibly well-done.
It’s a far more difficult way to accomplish either goal, because one reader will see it and think “this is a sixteen thousand word essay that says very little” while another will see it and think “what a wonderful story,” and there’s nobody to adjudicate who is correct.
Like I posted “this is sixteen thousand words about how the author doesn’t really use language models but might one day” and some folks’ rebuttal is that they enjoyed reading it. Those are two completely unrelated things! It’s like if folks saw the cover of The Hobbit and thought “Hell yeah!” and then when they read “there and back again” thought “whoever wrote that was being unnecessarily reductive”
I mostly skimmed it. It’s entirely feasible that the author buried a confession about getting away with manslaughter or whatever, and that I missed it somewhere in a few sentences in the middle of that novella. It does begin with several paragraphs essentially telling you not to read the post, and it has a lot of completely unnecessary exposition (for example, the section on Luddites).
Edit: I want to point out that I went over the post with my own eyeballs and brain
So you don't have to:
"you don’t have to embrace a trend, tool, or narrative simply because others say you should — especially if it doesn’t resonate with you or align with your values"
An important new twist to add to the great AI versus NO AI discussion.
Every time I check this comment section, this sentence jumps out at me again. You "had to" ask an LLM. You "had to".
>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.
Perhaps, very ironically, they did "have to."
Genuine human writing can be great; this isn't it.