FWIW, I found the “medium” ones hardest. Most of the “hard” ones have dead giveaways in the form of either punctuation or common AI text rhythms.
Some were hard but spottable after re-reading the answers a good 10 times... ahah.
Some were hard though, yeah (at least if you're not looking longer than 5-10 seconds). Btw, it seemed more logical to me to just see a green/red card when you click, i.e. right choice or wrong choice. Getting red for the correct answer confused me a bit (but this might just be me).
This time around I didn't necessarily prompt the models to be adversarial - I didn't ask them to try to fool the reader. But I gave them contextual info - something to the effect of "you're a user posting on Hacker News".
Yeah, there are some very obvious tells, but the most capable models are very good at writing like a human.
Especially when the human responses to the Reddit or HN prompts were presumably written after reading the content of the article or post, while the model is simply going off the title.