Note that on Kagi, you can click "Report this page as AI-generated" [1]. Unfortunately though, my last report from January is still "under review" :/
Unfortunately many believe they can, and it is impossible to disprove. So now real people need to avoid certain styles, because a lot of other people have decided those are "LLM clues": bullet points, em dashes, certain common English phrases or words (e.g. delve, vibrant, additionally) [0].
Basically you need to sprinkle in subtle mistakes, or lower the quality of your written communication, to avoid accusations that will sidetrack whatever you're writing into a "you're a witch" argument. Ironically, LLM accusations are now a sign of high-quality written work.
[0] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Essentially zero people use emoji to create a bulleted list. Nobody unintentionally cites fake legal precedents or non-existent events, articles, or papers. Even the "it's not X, it's Y" structure, in the presence of other suspicious style/tone cues, signals LLM text.
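Both of those tells are mechanical enough to grep for. A rough sketch (the emoji ranges and the phrase regex are my own approximations, nothing exhaustive):

    import re

    # Illustrative patterns only: a subset of emoji ranges, and a loose
    # approximation of the "it's not X, it's Y" construction.
    EMOJI_BULLET = re.compile(r"^\s*[\u2700-\u27BF\U0001F300-\U0001FAFF]\s+\S")
    NOT_X_BUT_Y = re.compile(r"\bit[’']?s not\b.{1,60}?[,;]\s*it[’']?s\b", re.IGNORECASE)

    def tell_count(text: str) -> int:
        """Count emoji-bulleted lines plus 'it's not X, it's Y' constructions."""
        bullets = sum(bool(EMOJI_BULLET.match(line)) for line in text.splitlines())
        return bullets + len(NOT_X_BUT_Y.findall(text))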
Ask an LLM to read your project specs and add a section headed "Performance Optimizations" to see an example of this.
Another is a certain punchy and sensationalist style that does not change throughout a longer piece of writing.
E.g., "The Strait of Hormuz: Chokepoint or Opportunity?"
I suppose my high school essays were not. Apologies, but those are lost.
I wonder where some of this comes from. Another one is 'real unlock'; it's not a phrasing I recall being common.
https://trends.google.com/explore?q=real%2520unlock&date=all...
I haven't seen this yet, but I guess the only reason I haven't done it is because it never crossed my mind.
One easy tell I have found is non-breaking spaces. They tend to get littered through passages of text for no reason.
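Checking for that one is trivial; U+00A0 is the usual culprit, and the narrow variant U+202F shows up too:

    # Scan a passage for non-breaking spaces (U+00A0, U+202F).
    NBSP_CHARS = {"\u00a0", "\u202f"}

    def nbsp_positions(text: str) -> list[int]:
        """Return the indices of non-breaking spaces in the text."""
        return [i for i, ch in enumerate(text) if ch in NBSP_CHARS]

    sample = "The result\u00a0was\u00a0surprisingly\u00a0robust."
    print(nbsp_positions(sample))  # [10, 14, 27]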
> It’s the fake drama. Punchy sentences. Contrast. And then? A banal payoff.
It's great because it's a double-decker of annoying marketing copy style and nonsensical content.
I think people will be able to detect the lowest-user-effort version of LLM text pretty reliably after a while (i.e. what you describe; many people have a good sense of LLM clues). But there's probably a *ton* of LLM text going undetected where the instructions included "throw a few errors in", "don't use bullet points or em dashes", "don't do the `it's not this, it's that` thing".
And then those changes will get built into ChatGPT's main instructions, and in a few months people will start to pick up on other indicators, and then slightly smarter/more motivated users will give new instructions to hide their LLM usage... (or everyone stops caring, which is an outcome I find hard to wrap my head around)
So judge the content on its merit irrespective of its source.
Staccato (too many short sentences with periods) is also a telltale for me. Most humans prefer longer sentences with more varied punctuation; I, for example, am a sucker for run-on sentences.
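That intuition is measurable: low mean sentence length together with low variance is the staccato pattern. A deliberately naive sketch (splitting on terminal punctuation, which is crude but enough to illustrate):

    import re
    import statistics

    def sentence_length_stats(text: str) -> tuple[float, float]:
        """Mean and standard deviation of sentence lengths, in words."""
        sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return (float(lengths[0]) if lengths else 0.0, 0.0)
        return statistics.mean(lengths), statistics.stdev(lengths)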
If one measures perplexity (how likely text is under a given language model), text that was common in the training set will score as very likely. But you can easily create better models.
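For the record, perplexity is just the exponentiated mean negative log-likelihood. A minimal sketch with Hugging Face transformers, using GPT-2 purely as an example scoring model (the absolute numbers depend heavily on which model does the scoring):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """exp(mean cross-entropy per token) of the text under the model."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood
        return torch.exp(loss).item()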
Citation needed. The LLM accusations come from the specific cadence they use. You can remove all em dashes from a piece of text, and it is still clear when something is LLM-written.
Can they be prompted to be less obvious? Sure, but hardly anyone does that.
It's more "The Core Insight", "The Key Takeaway", etc. than it is about emdashes.
Incidentally, the only people annoyed about "witch-hunts" tend to be those who are unable to recognise cadence in the written word.
However, reasoning models adding a random typo to seem less automated still do not hide the fairly repeatable quantization artifacts from the training process. For LLMs, it is rather trivial to find where people originally scraped the data from if they still have annotated training metadata.
Finally, LLM output is usually clear once one abandons the trap of thinking "I think the author meant [this/that]" and recognizes that the work's tone reads like a fake author had a stroke [0]. =3
As far as how I / other people do it: there are some obvious styles that reek of LLMs; I think it's ChatGPT.
There’s a very common structure of “nice post, the X to Y is real. miscellaneous praise — blah blah blah. Also curious about how you asjkldfljaksd?”
From today:
This comment is almost certainly AI-generated: https://news.ycombinator.com/item?id=47658796
And I'm suspicious of this one too - https://news.ycombinator.com/item?id=47660070 - reads just a bit too glazebot-9000 to believe it's written by a person.
I think the better question to ask is: what are your goals? Is it to prevent AI spam, or to discourage people from copy-pasting AI? Those are two very different problems: in the case of AI spam you look for patterns of usage (i.e., unusually high interaction from a single IP, timing patterns around when things are read and when the response comes in), and in the other case it all comes down to cultural norms.
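The spam-side checks are straightforward to sketch; the event fields ("ip", "read_at", "replied_at") and both thresholds below are hypothetical placeholders, not anyone's real filter:

    from collections import Counter

    def flag_suspicious(events: list[dict]) -> set[str]:
        """Flag IPs with unusually high volume or implausibly fast replies."""
        volume = Counter(e["ip"] for e in events)
        flagged = {ip for ip, n in volume.items() if n > 100}  # volume cutoff: a guess
        for e in events:
            # Replying seconds after first loading the page suggests automation.
            if (e["replied_at"] - e["read_at"]).total_seconds() < 5:
                flagged.add(e["ip"])
        return flagged

(read_at and replied_at are assumed to be datetime objects.)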
Stylistic tells like 'delve' and bullet formatting are just RLHF training artifacts. They're already shifting between model versions: compare GPT-4 to GPT-4o output and the word frequency distributions changed noticeably.
Long term, the only thing with real theoretical legs is watermarking at generation time, but that needs provider buy-in, and it slightly hurts output quality, so adoption has been basically nonexistent.
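For anyone curious what that looks like: the best-known scheme (Kirchenbauer et al., 2023) biases sampling toward a pseudorandom "green list" seeded by the previous token, and detection is a z-test on the green-token fraction. A toy sketch with made-up constants:

    import hashlib
    import numpy as np

    VOCAB, GREEN_FRACTION, DELTA = 50_000, 0.5, 2.0  # illustrative values

    def green_list(prev_token: int) -> np.ndarray:
        """Pseudorandom boolean mask over the vocab, seeded by the previous token."""
        seed = int.from_bytes(hashlib.sha256(str(prev_token).encode()).digest()[:8], "big")
        return np.random.default_rng(seed).random(VOCAB) < GREEN_FRACTION

    def watermarked_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
        """Nudge green tokens up before sampling."""
        return logits + DELTA * green_list(prev_token)

    def detect(tokens: list[int]) -> float:
        """z-score of the green-token count; large values imply a watermark."""
        hits = sum(green_list(p)[t] for p, t in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        return (hits - n * GREEN_FRACTION) / (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5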
If the text is full of punchy three word phrases or nonsense GenAI images then that's an obvious sign. But so is if the other person has some revolutionary project with great results but they can't really explain why their solution works where presumably many failed in the past (or it's a word salad, or some lengthy writing that doesn't show any signs of getting you to an "aha, that's some great insight" moment).
A good sign is also if the author had something interesting going before 2022 and didn't fall into the earliest low-quality LLM waves. Unfortunately, some genuinely talented people have nowadays started using LLMs to turbocharge their output while leaving some quality on the table, so I don't really know. I'm becoming a lot more sceptical of the Internet, to be honest.
This is an artifact of the default LLM writing style, cross-poisoned through training on model outputs, not a "universal" property.
I asked an LLM to rewrite this to make it nicer and got the following. I'd flag the first because I don't usually hear "majority of your interactions" in conversation, but I might miss it. The second will probably get by me. As for the third, I never say "considerably easier" unless I'm trying to sound artificially posh.
1. It becomes much more noticeable when the majority of your interactions are with non-native English speakers.
2. It tends to stand out more when most of the people you interact with speak English as a second language.
3. It's considerably easier to identify when most of your interactions involve people whose primary language isn't English.
To me, it often feels like the text version of the uncanny valley.
But again, that's just "feels", I don't have proof or anything.
For humans I think it just comes down to interacting with LLMs enough to realize their quirks, but that's not really fool-proof.
There are a couple of tells, like em dashes and similar patterns, but you should be able to suppress those with even a simple prompt.
Specific language tells, such as: unusual punctuation, including em dashes and semicolons; hedged, safe statements, but not always; and text that showcases certain words such as “delve”.
Here’s the kicker. If you happen to include any of these words or symbols in your post they’ll stop reading and simply comment “AI slop”. This adds even less to the conversation than the parent, who may well be using an LLM to correct their second or third language and have a valid point to make.