There’s something hilariously poetic about a ~2,500 year old fable being relevant today, because of AI.
"South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city.
The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting down a road intersection"
To cry wolf is to claim there’s a wolf in one place when it’s actually elsewhere. The AI photo claimed there was a wolf at a certain intersection when it was actually elsewhere.
In fact, “crying wolf” is doubly appropriate here, because it meant disturbing an operation that was actually looking for a wolf.
This is misdirection while there actually is a wolf.
Similar, but different.
That's not pedantic, that's the meaning of the idiom.
But have you considered that the criteria arose organically, as opposed to being engineered top-down to account for edge cases such as this? I think in practice the term can probably apply to any instance where you might consider the longer-term reputation of an individual or group that is separate from the response team.
Basically you've decided the two things must be mutually exclusive but haven't provided any reasoning or precedent for that constraint.
¹ Following the pronoun variant used in the fine article here.
Original comment was clever and subsequent commenters were uninteresting to me. In this case, I only saw it because I’m on my phone which doesn’t have Chrome extensions. Turns out I’d already blocked them.
Don't even link me to the comment about how this has always been a complaint on HN, it's boring and it isn't the "gotcha" that you think it is.
Did they? The article says their intent is unclear.
> Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.
They don't have total surveillance, so they also rely on public information:
- a citizen posts information about the wolf's location: a picture!
- the authorities adapt their search based on that picture
Where is the incompetence here?
And you'll be shocked what the kids have been doing with databases and API calls
As with most important advances (plastics, nuclear power, diesel engines, synthetic fertilizers, computers, the internet), good and bad things came out of it.
It is like saying that plastics screw up everything they touch, for example when a plastic part replaces a more durable metal part, without realizing that plastics are everywhere in our lives, often with no suitable replacement material.
> Guns primary purpose is to kill
Willfully diverting limited public service resources that might otherwise be assigned to saving someone's life or health?
Practically a social DoS
This is an accurate criticism of the boy in the fable, if... an unnecessary way to express the idea.
I don't care enough to bother finding out, but seems like the BBC could have done some more journalism, if they were so inclined.
The thing is, there's basically no reason to create this photo other than to mislead the authorities. It's purposefully blurry and not aesthetically pleasing. I cannot come up with any plausible artistic intent.
This could have happened without AI. Imagine if the police were trying to catch a serial killer, and I posted on Twitter that I saw him in a small town in Idaho or wherever, not because I had any real information but because I thought it would be amusing to create chaos. Maybe I'd create a bunch of sock puppet accounts with correlated sightings. At no point would I explicitly make a false police report, but the fake posts would get noticed all the same.
Is this illegal? I have no idea, I'm not a lawyer—but it feels like the sort of thing you'd want to have laws against. I'm not sure whether you'd run into first amendment issues in the United States.
If it was true and police saw it but didn’t act, the fallout for them could be much worse depending on the outcome.
With the info presented in the article, it sounds like the cops jumped to conclusions, got publicly embarrassed and are now going after him to either save face or get revenge (depending on how credulous you are of LEO).
The only reason you are seeing this right now is because it has AI in the title.
Hypothetically, if a hacking tool was released that let non-technical people hack into sensitive databases, and then a journalist wrote the headline "local man hacks IRS", without any mention of the tool, wouldn't that be a bit irresponsible, to purposely leave that information out?
Photoshop? I don't think you need much skill.
Because we're talking about the ease of Photoshopping a wolf into a scene, I think it's also worth pointing out that floating objects are a lot easier to work with than grounded objects, since cast shadows and bounce lighting are less of an issue. Having said that, it would still require some basic skill to achieve the WTC image which I think you're discounting. You'd need a working knowledge of layers, masks, and the lasso tool, which already would have placed it out of reach for most people at the time. Online resources were much more scarce, so I wouldn't be surprised if this guy was a hobbyist photographer or graphic designer. It definitely wouldn't have been achievable in a few minutes for the average person, and doing the same thing with a wolf would have been far more difficult, and well outside the realm of possibility for anyone who wasn't an expert.
I guess we're not going to agree on just how far that bar has fallen. Learning Photoshop as a teen got me my first job. The only reason I had one at all was because most people couldn't do a very convincing job of it. Now even my mom, a person who struggles to open her email, can do a better photoshop than me.
The original argument was about whether these tools had accelerated fraud and misuse, or whether not much had changed because people could just do the same thing with photoshop before. You and others were playing down the impact of AI. It sounds like we now both agree they have accelerated fraud and misuse.
> And your grandkids will be better at certain things than you too. That's just what progress in tech is.
You're misunderstanding my argument. I was pointing out that a large shift has happened that has enabled deception where it was not possible before, not lamenting the loss of a job that I don't do anymore. I have nothing against the progress of technology. But I do think we should think carefully about how we implement it.
Have you used Photoshop before? You come across as commenting on something you don't understand.
It’s a crime of opportunity¹, one where you have the idea and act on it on a whim. No opportunity, no crime, and the technology provided the opportunity.
So yes, the technology used matters.
http://web.archive.org/web/20250201051019/https://www.ojp.go...
We need to learn to adapt what we post, and what we see and believe in photos, to avoid arrest. Especially so in the AI era, because generating these images, and these pranks, has become increasingly easy for anyone to do with no skills and minimal time.
I think the part I find most fascinating, though, is that it's not clear if he took this picture to the police, actively wasting their time, or if he just posted it and they found it and mistakenly took it as truth. I have no insight into SK law, but to me it seems unfair if they were the ones who treated this picture as evidence when it was never meant to be taken seriously.
If Tesla (insert any car manufacturer you hate) ran over a kid I'd like to see the title say it, instead of "Tesla fined for violating traffic laws."
[1] waiting for some example where foolish policemen were outsmarted with simple tricks /s
In the gun debate, there's something called "Weapon Instrumentality Effect"
Sure, a little more involved than the two-second AI prompt, but a three-minute job for the lulz photoshoppers.
There are significantly more people able to type a few words into a prompt than people who can use an image editor fast and convincingly and would be inclined to waste their time on this kind of fake.
But would you? People grumble about $0.99 for an app they’ll use everyday, I doubt paying even $5 (and waiting for a result!) for a fake image to mislead police is high on anyone’s list.
Making this image was likely fast and free. It’s a crime of opportunity.
And there are literally billions of everyone else.
Do you not see that the amount of fake images has exploded with free access and ease of use? That’s what a tool does. It’s silly to argue that generative AI doesn’t make a difference in the proliferation of fake images, just like it’d be silly to argue that digital photography on a small multi-purpose device that is always with you doesn’t make people take more pictures.
What I actually said couldn't be any clearer, and it's rather silly to twist my words into a strawman you can argue against.
I very much disagree, since you went on to make your whole point with an unrelated matter and apparently I misunderstood your point. Maybe you don’t know how to make your point clearer, but that isn’t the same as it being impossible to be clearer.
> and it's rather silly to twist my words
There was no twisting intended, and if I misconstrued your point I’d appreciate the correction (i.e. clarification).
Specifically: If you do agree that access to generative AI increases the proliferation of fake images (do you? I’m really asking. Sounds like you might), then what exactly is your objection to the original point?
I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale, mind you. Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as? My goodness
No, that is not the question. I mean, maybe it’s the question you are asking, but no one else is.
> I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you.
That is not the point. The argument is simple: easier and cheaper access to a tool makes more people use the tool more often. Manual image editing is harder and takes longer than typing words into a box; thus, with AI, more people do it, more often, and with less thought.
If you have the idea to manually edit a wolf into a street, you’ll first have to go to your computer or tablet, have a bunch of skills, and spend time doing it. You have plenty of opportunity to say “fuck it, I’ll do something else”. Most people drop at that point because they can’t be bothered.
With generative AI, you can be so drunk you can barely stand, sitting on a portable toilet at a concert, haphazardly type a few words and get the result, immediately and for free.
Do you not see the difference between those two?
We can go further back: you could do image manipulation on film, before digital was a thing. But few people knew how, or had access to the necessary chemicals and dark room. Do you not think the ease of access and digital tools increased the number of people doing it?
> Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as?
No, no one believed that and no one is making that argument and I think you know that.
To answer your question, relative ease is a function, in part, of one's skills and resources, so it's certainly a reasonable claim to make, but it will differ from person to person.
Did Orwell teach anything? What will they do with the next Visitors' spaceship photo?
I don't understand; shouldn't they have let him go if the idea is that they still roam in the wild? Why force him back into a zoo?
The zoo provides a controlled environment needed to restore the species.
EDIT: typo/word ordering
Our local children's museum is part of a network of sites working to restore red wolf [1] populations. Every few years they get new wolves as the coordinators move young wolves around to optimize mating pairs.
You could adjust the firmware of a wildlife tag to start transmitting location every 10 minutes when the animal leaves a geo-fence.
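The geo-fence idea can be sketched in a few lines. This is purely a hypothetical illustration: the coordinates, radius, and interval values below are made up for the example and don't come from any real tag firmware.

```python
import math

# Illustrative constants (not from any real tracking system)
FENCE_CENTER = (36.3504, 127.3845)  # (lat, lon) near Daejeon, for illustration
FENCE_RADIUS_M = 2_000              # 2 km "home range" around the zoo
NORMAL_INTERVAL_S = 6 * 3600        # report every 6 hours inside the fence
ALERT_INTERVAL_S = 10 * 60          # report every 10 minutes once outside

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(h))

def report_interval(position):
    """Choose the transmission interval based on the geo-fence."""
    outside = haversine_m(FENCE_CENTER, position) > FENCE_RADIUS_M
    return ALERT_INTERVAL_S if outside else NORMAL_INTERVAL_S
```

In practice, duty-cycled tags trade battery life against fix frequency, so only switching to the fast interval after a fence breach is the usual compromise.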
They are also not implanted in the birds, but are relatively large "backpacks" or leg tags.
He would have been arrested even if the image wasn't AI.
The title and article are very...tabloid-y
Needs to be supported by smartphones, of course.
“Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700; £5,000)”
Somewhat harsher than the UK at least, where “wasting police time” would only get you six months or around a £2,500 fine.
AI is plagiarism—full stop—nothing more, nothing less.
Of course, this point could have been made without sarcasm (and AI tells for parody)—I’m aware—but that would remove a certain… texture from the argument. And where, exactly, is the fun in that?
If it helps, imagine the text more as a work of art than an instruction manual. Art matters.
- Click on the timestamp for the comment which will take you to the comment page
- Then you can click the flag button
What if another citizen forwarded the image to the police, not knowing it was AI generated? Should it have been ignored because it was not made by the sender? Should it have been ignored because it was forwarded from a public post?
[1] https://www.thehindu.com/news/national/fir-against-reporter-...