> [...]
> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.
> [...]
> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.
> [...]
> No pop-ups. No blinking corners. Just content, clear and immediate.
It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.
And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.
In this particular case the linked article is definitely AI generated.
The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.
They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, are an LLM.
Humans deviate a lot more than this: they use run-on sentences or lose the thread in their writing.
This blog, however, reads like every other post on LinkedIn: semi-professional tone, with a strong "you and me" hook to most posts.
I encourage everyone to make an LLM-generated blog. Don't post the articles anywhere, just generate one, to get a feel for how these things write.
Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.
Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.
https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...
It feels like you're trying for a lazy gotcha, but the actual point here is something like "AI models often generate writing with specific noticeable characteristics that make it obviously AI output, and TFA is an instance of such writing, and this should be called out when possible"
The OP is a blog post. You’re talking about blog-post writing. Maybe you just don’t like their style?
It’s also true llm second drafts are a thing.
And it’s true both can ‘record scratch’ you right out of attention.
As is the now-present trend among readers to be impatient and quickly bored.
And this criticism of writing style (for my part, this article is perfectly readable): what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.
There've been stylistic fads before LLMs that were just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.
Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.
Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.
Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.
> explicitly saying it was either LLM generated or the person's style was influenced by consuming LLM content
We might disagree here, but if we're strict they did not say "either/or", especially not explicitly. They raised two possibilities, but didn't exclude others.
> there's no reason to believe the style came from LLMs
They say "might" and "plausibly". I think there's no belief there until you assume it.
And even if: It's not unlikely that a contemporary author's mind is influenced by the prevalent LLM style. We are influenced by what we read. This has been happening to everyone for ages, without anyone questioning the agency of writers. There's nothing wrong with suggesting that could be the case here. It's entirely human.
I know it's easy for one's mind to jump to conclusions, but I am not a fan of taking that as far as accusing someone of "dehumanizing" others. Such an escalation should ideally cause a pause and a think, before pressing submit.
The whole corpus is in there, but the standard style is what it's tuned for.
And the people I read were better at not putting in unnecessary, completely made-up facts or illogical implications.
As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.
For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)
As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.
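That grep pass can be as simple as counting each candidate connective. A minimal sketch (the draft text and word list here are purely illustrative; in practice you'd point it at your own file):

```shell
# Count how often each linking word appears in a draft.
# Sample text is inline for the sketch; swap in `cat draft.txt` for real use.
draft='Thus we begin. Thus we continue. However, thus it ends.'
for w in thus however moreover; do
  # -o prints each match on its own line, -w matches whole words,
  # -i ignores case; wc -l then counts the matches.
  n=$(( $(printf '%s\n' "$draft" | grep -o -i -w "$w" | wc -l) ))
  echo "$w: $n"
done
```

If one word's count stands out, that's the tic worth rephrasing.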
Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.
(Not that I claim to be a particularly good writer.)
[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine
Which of course doesn't connect to the rest of the article contents, because the AI doesn't have any intention in its writing.
The consultants apparently had the bot load and fed it an immediate prompt which greeted the user. This was happening on every page load. Bad consultants, bad bot.
This sums up everything driving the tech sector right now. From execs at big tech to nobodies on X.
EDIT: If I think about the nature of it: the visibility fight is the decreasing attention amid increasing channels and noise, so visibility tactics go to extremes. And the fear of looking behind comes from previous tech cycles and the thought of what if you had missed those. Maybe those with the most fear are the ones who did.
It's always been like this. I used to build websites in the 90s and it was exactly like that. It was also horrible. People who had no tech background whatsoever making decisions on which tech to use (PHP vs ASP vs ColdFusion, remember those?); overpaying agencies to make HTML "templates" that had to have round corners everywhere. Etc.
Not everything's great today, but it's a little less bad I think.
I was skeptical but it gets a 68 NPS from users, even if we do get the occasional "why are you investing in AI I hate it" coming through the feedback channel.
As ever, the issue is "what problem are you solving". If it's that you want more people to put their hand up and talk to you/order something, chatbots seem like a bad solution. If it's that you have a ton of complex docs that people have to read in order to implement and use your product, it's not the solution but it's probably part of a solution.
Obviously it's just a script embedded in the page, so it has no actual place in the design. The effect, especially on mobile, is this dance of starting to read a page, having it obscured by annoying popups, and trying (and failing) to close the popup with the hidden 12x12-pixel X button.
Just like the entire ads market, it’s all forgery to drive up clicks so owners can say to the clients that there is interaction.
Don’t get me started on the recent YouTube ads on iPad that place a banner on top of the video, hiding the subtitles; closing it is behind a menu that requires you to be a brain surgeon to interact with, instead of clicking the ad itself. I currently have 15 tabs in Safari from ads I inadvertently clicked.
Your clients seem to have got what they wanted, or at least someone who has learned to write like one.
I'm not witch-hunting, there are just a lot of witches.
Tbh the whole smolweb concept by this person seemed kinda weird from the moment I discovered it was a thing. It doesn't seem to really be a thing, but the person is really trying to convince you that it is.
FedEx now has a voice bot when you call, and it's kind of good and fast. I mean faster than navigating their website. It picks up right after some boilerplate. It can understand me.
With website chatbots we could see similar leaps if they're done well and have access to CRM/ERP systems etc. so they can actually help you.
Back in the day, websites could just put up an animated "under construction" gif.