I don't think this is accurate. AI has a flavour or tone we all know, but it could have generated factually plausible statements (that you could not diagnose in this test) or plausible text.
I could not tell the real from fake music at all.
I support (and pay for) Kagi, but wasn't overly impressed here. At worst I think it might give people too much confidence. Wikipedia has a great guideline on spotting AI text and I think the game here should integrate and reflect its contents: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Not sure if it's in my head (haven't done a blind test or anything) but all AI music I've heard has painfully bad drum acoustics (very clicky). Seems like the most telltale marker, although I would love to make a "spot the AI song" game to prove myself right/wrong.
- AI slop is trivially factually wrong, and frequently overconfident.
- AI slop is verbose.
But, as you note, IRL this is not usually the case. It might have been true in the GPT-3.5 or early GPT-4 days, but things have moved on. GPT-5.1 Pro can be laconic and is rarely factually wrong.
The best way to identify AI slop text is by its use of special and nonstandard characters. A human would usually write "Gd2O3" for gadolinium oxide, whereas an AI defaults to "Gd₂O₃". ChatGPT also loves the non-breaking hyphen (U+2011), whereas humans typically use the standard hyphen-minus character (U+002D). There's more along these lines. The issue is that the bots are too scrupulously correct in the characters they use.
As for music, it can be very tough to distinguish. Interestingly, there are some genres of music that are entirely beyond the ability of AI to replicate.
I made it a point to learn to type the em dash—only to have it stolen by the bots; it's forced me to become reacquainted with my long lost friend, the semicolon.
But I was referring to the special hyphen that the AIs frequently use today, and which is a hallmark of AI generated text, as it's not on regular keyboards and difficult to access: https://en.wikipedia.org/wiki/Wikipedia:Non-breaking_hyphen
They're also fond of this apostrophe: ’
Whereas almost every human uses: '
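The character heuristic described above is easy to turn into a quick script. A minimal sketch (the character list and function name are my own for illustration, not from any standard tool; note that word processors' "smart quotes" also produce curly apostrophes, so this flags a hint, not proof):

```python
# Characters that commonly appear in LLM output but are awkward to type
# on a standard keyboard. Illustrative and non-exhaustive.
SUSPECT_CHARS = {
    "\u2011": "non-breaking hyphen",
    "\u2019": "curly apostrophe",
    "\u2082": "subscript two",
    "\u2083": "subscript three",
}

def suspect_characters(text: str) -> dict:
    """Count occurrences of each 'suspect' character in the text."""
    counts = {}
    for ch, name in SUSPECT_CHARS.items():
        n = text.count(ch)
        if n:
            counts[name] = n
    return counts

print(suspect_characters("Gd\u2082O\u2083 is gadolinium oxide"))
# → {'subscript two': 1, 'subscript three': 1}
```

A human typing "Gd2O3" with a plain hyphen-minus and straight apostrophes would come back with an empty dict.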
I kneel Hideo Kojima. You saw this all coming: https://youtu.be/PnnP4sA80D8
Except for the huge amounts of already-generated slop that, combined with SEO, pops up in search results
> "Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers"
Sounds interesting, what are some of those genres?
It simply doesn't get it. This sort of thing probably wasn't in its training data.
The really interesting thing is that when I upload something like that track, and tell it to compose something similar, it usually gives me an error and refunds my credits.
Also, and this is far more mainstream, both Suno and ElevenLabs are totally incapable of generating anything like, e.g., Darkthrone's "Transylvanian Hunger." Music that is intentionally unpolished is anathema to them.
I could go on. There are lots. I think that they understand melody and harmony, but they don't understand atmosphere, just in general...
This website strikes me as merely a marketing gimmick.
Perhaps this is just a sign for you to listen to more (human) music is all!
(minor spoiler)
The text accompanying an image of a painting:
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation. Meindert Hobbema. The Avenue at Middelharnis (1689, National Gallery, London)
I don't mind that you're selling an AI product if it's good but at least put some humanity on the marketing side.
If veracity matters, use authoritative sources. Nothing has really changed about the skills needed for media literacy.
So having a good heuristic for identifying a broad category of non-authoritative sources would be useful, then?
>If you can't tell the difference, and it's entertainment stop worrying about it.
At the end of the day it's a philosophical/existential choice. Not everyone would step into the awesome-life-simulator where you can't tell the difference. On similar grounds one might decide on principle to consume only human-made media, be a part of the dynamical system that is real human culture.
We're meant to assume correct sentences were written by humans and that AI adds glaring factual errors. I don't think it is possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.
Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.
Oversimplifying generative AI identification risks overconfidence that makes you even easier to fool.
Loosely related anecdote: A few months ago I showed an illustration of an extinct (bizarre looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain that "Yes, I know this is not a photo. This animal is long extinct and this is what we think it looked like so a person drew it. No one is trying to fool you."
There's a lot of anti-AI sentiment in the art world (not news), but real artists are now actively accused of using AI and getting kicked off Reddit or whatever. That tells me there is going to be zero market for 100% human-created art, not the other way around.
This sounds to me like the message is "poor fakes are generated, and everything else is genuine", which I think would be a very counterproductive message, even now.
> Correct! Well done, detective!
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
> Albert Pinkham Ryder, Seacoast in Moonlight (1890, the Phillips Collection, Washington)
The image is not photography; I guess technically it's a photograph of a painting, but it's still confusing text.
> Bees collect pollen from flowers and make honey. They also drive tiny cars to get from flower to flower!
The explanation given is that it’s not factually correct, therefore it’s AI slop. Maybe I didn’t pay enough attention to the instructions, but aren’t humans also capable of creating text that is not factually correct, at times not out of ignorance but for artistic or humorous purposes? This example sounds like something that would be written by a child with an active imagination, not the kind of “seems plausible but is actually false” slop that LLMs come up with.
>Fake stuff made by computers that tries to look like it was made by real people. It's everywhere online!
Tricking people is not what makes it slop. Being low quality is what makes it slop. This is a dangerous definition as it could mean that anything AI generated could be considered slop, even if it was higher quality than regular things.
But you can take what AI generates, refine it, change it or use only parts of it, fact-check it, etc. Now it's still AI-generated, but not "slop".
AI can do 90% of your work, but the other 90% is still your job if you want someone else to care about it.
I started on "Level 1" and got 2 things wrong (both false positives if it matters) and instead of feeling like I learned anything, I felt as though I was set up to fail because the image prompt was missing sufficient context or the text prompt was too simple to be human. Either I was dumb or the game was dumb.
Maybe I'm just too old and 8-11 year-old kids wouldn't be so easily discouraged, but I'd recommend:
1. Focus on one member of the "slop syndicate" at a time.
2. Show some examples (evidence) before beginning the evaluation.
First of all, there are only 27 "slop" image examples but 200 real ones - a very bad ratio. And almost all the real examples are just dated photographs, paintings, or photos of old books - there are genuinely 0 (not joking) modern photos or pieces of digital artwork. Also, multiple "slop" image examples were actual screenshots of the ChatGPT interface, or clearly cropped screenshots.
Text is even worse - they somehow present it as if LLMs cannot write factually correct or simple text.
I genuinely believe that they should take this down immediately and do a major rework, because at this stage it will only do harm. It might teach the children or adults who complete this that AI can never write factually correct text or create very realistic-looking photos (good luck with Nano Banana Pro).
P.S. To see how bad it is, just scrape https://slopdetective.kagi.com/data/images/not_slop/{file} from image_001.webp to 200 and slop/image_001.webp to 027.
Also see https://slopdetective.kagi.com/data/text/slop/l3_lines.json and https://slopdetective.kagi.com/data/text/not_slop/l3_lines.j... for real vs LLM-written text.
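The image scrape described above can be sketched like this, using only the standard library (the filename pattern follows the one given for the image endpoints; `image_urls` is my own helper name, and the actual download loop is left commented out as a sketch rather than a verified crawl):

```python
import urllib.request

BASE = "https://slopdetective.kagi.com/data/images"

def image_urls(kind: str, count: int) -> list[str]:
    """Build zero-padded image URLs for one class ('not_slop' or 'slop')."""
    return [f"{BASE}/{kind}/image_{i:03d}.webp" for i in range(1, count + 1)]

# 200 real images vs. 27 "slop" images, per the ratio noted above.
urls = image_urls("not_slop", 200) + image_urls("slop", 27)
print(len(urls))  # → 227

# Uncomment to actually fetch them (prefix the class to avoid name collisions):
# for url in urls:
#     local_name = url.removeprefix(BASE + "/").replace("/", "_")
#     urllib.request.urlretrieve(url, local_name)
```

Counting the two directories is enough to see the class imbalance without downloading anything.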
https://www.astralcodexten.com/p/ai-art-turing-test
Though maybe these are not examples of "slop" but instead good use of AI?
I think you gotta start with a definition of what AI slop is and why it matters. Most of what LLMs generate is not obviously incorrect.
>This was actually AI-generated slop! Repeats 'water is wet' multiple times.
I didn't know writing "water is wet" repeatedly was enough to de-humanize you.
>In many situations, it could be argued that grass may sometimes appear to have a greenish quality, though this might not always be the case.
>This was actually AI-generated slop! Won't commit to 'grass is green' and uses uncertain words.
What? Not all grass is green.
Fun times ahead.
hey kids, learn about ai slop by reading this guide to ai slop written by ai and full of ai slop mistakes. sheesh
https://arxiv.org/abs/2510.15061
Also somewhat tangentially relevant video: https://www.youtube.com/watch?v=Tsp2bC0Db8o