It’s not, but they could fix the issue by raising the flagging threshold for Musk-related posts.
There is. But it seems like it’s only for [dead], not [flagged].
His hypocrisy, his position on the main-character-syndrome-to-narcissism spectrum, him getting a kick out of trolling everyone, or him having straight up psychopathy: whatever it is, I find I no longer care.
This may be giving him too much credit; the only thing we actually know is that he thinks being accused of being a pedophile is bad. We know this because he's done it to several people, and flips his shit when it happens to him or his platform. He doesn't actually seem to care about pedophiles or pedophilia, given his ongoing relationships with people he's accused.
If he's only operating on the impact of the words, and ignoring the existence of an observable testable shared reality behind the words, then yes, accusations (either direction) are more damaging in his mind than being seen to support or oppose whatever.
Which is, ironically, a reason to *oppose* absolute freedom of speech: when words have power beyond their connection to reality, the justifications fall short. But like I said, I no longer care whether his inconsistency is simple hypocrisy or something more complex.
> Which is, ironically, a reason to oppose absolute freedom of speech...
Since the former theory of mind can't explain the latter behavior, I guess it's wrong then, right?
Edit: to the bots downvoting me - prove me wrong. Prove either of the above statements wrong.
11M+ X accounts were suspended for CSE violations in 2023 (vs 2.3M on Twitter in 2022).
X has recently made the penalty for prompting for CSAM the same as uploading it.
You could find this out yourself very easily.
Recent evidence and behaviors trump past behavior.
Seems like a pointless and foolish product design error for X/Grok to publish arbitrary image generation results under its own name. How could you expect that to go anything but poorly?
We can expect the same level of institutional breakdown with regards to various types of harassment, misappropriation, libel, and even manufactured revenge porn from AI.
I keep coming to the same conclusion about X that they reached in the '80s masterpiece WarGames.
From what I saw the 'undressing' problem was the tip of the iceberg of crazy things people have asked Grok to do.
It may be a failure of imagination on my part, but I can't imagine a bot limited to style transfer or replacing faces with corresponding emoji would cause total chaos.
Even if someone used that kind of thing with a picture from an open-casket funeral, it would get tuts rather than chaos.
> From what I saw the 'undressing' problem was the tip of the iceberg of crazy things people have asked Grok to do.
Indeed. I mean, how out of touch does one have to be to look at Twitter and think "yes, this place will benefit from photorealistic image editing driven purely by freeform natural language, nothing could go wrong"?
Why is X still on the app stores?
It is all a well-defined implicit caste hierarchy at this point and anyone with enough net worth and a willingness to publicly fellate the orange dong gets a protected spot on the 2nd tier of the pyramid.
I’m not against people using AI to generate a fantasy image for their own needs. I guess in a way it’s like what people imagine in their own heads anyway. But I do think it’s problematic when you share those publicly, because it can damage others’ reputations, and because it makes social media hostile to groups of people who are targeted with misogynist or racist deepfakes. It may seem like a small problem, but the final effect is that the digital public square becomes a space only for identity groups that aren’t harassed.
The problem is that doing this would get me banned. Shouldn't using Grok in this way get you banned similarly?
That's fundamentally different to "You can make this thing if you're fairly skilled and - for some kinds of images - have specialist tools."
Yes, you should be banned for undressing people without consent and posting it on a busy social media site.
Do you find the two equally objectionable?
If a big-boobed stick figure labeled "<coworker name>" were posted on your social media a lot, such that people could clearly tell who it referred to, there would be a case for harassment, but also you'd probably just get fired anyway.
Where is the actual problem?
Is it that it's realistic? Or that the behavior of the person creating it is harassing?
This is pretty straightforward.
That's fundamentally different to "You can make this thing if you're fairly skilled and - for some kinds of images - have specialist tools."
Yes, you should be banned for undressing adults and kids without consent and posting it on a busy social media site.
There’s a frantic effort to claim Section 230 protection, but that doesn’t protect you from the consequences of posting content all by yourself on the site you own and control.
Which, in this case, is Twitter itself, no?
> The users who are creating text prompts are just writing words.
With highly specific intentions. It's not as if Grok is curing cancer. Perhaps it's worth throwing away this minor distinction and considering the problem holistically.
Intentions to pull the CSAM out of the server full of CSAM that twitter is running.
Yes, you are making the flailing argument that the operators of the CSAM site desperately want to establish as the false but dominant narrative.
If you have a database full of CSAM, and investigators write queries with specific intentions, and results show that there is CSAM in your database: you have a database full of CSAM. Now substitute 'model' for 'database.'
An investigator does not _create novel child porn_ in doing a query.
You're making a fallacious argument.
And a prompt, without being aided and abetted by twitter, doesn't "create novel child porn" either. A prompt is essentially searching the space, and in the model operated by twitter it's yielding CSAM which is then being distributed to the world.
If twitter were operating in good faith, even if this was the fault of its customers, it would shut the CSAM generator operation down until it could get a handle on the rampant criminal activity on its platform.
If Adobe had a service where you could e-mail them "Please generate and post CSAM for me" and, in response, their backend service did it and posted it, that's a totally different story than the user doing it themselves in Photoshop. Come on. We all know about tech products here, and we can all make this distinction.
Grok's interface is not "draw this pixel here, draw this pixel there." It's "Draw this child without clothing." Or "Draw this child in a bikini." Totally different.
For 1, crowbars are generally available but knives and guns are heavily regulated in the vast majority of the world, even though both are used for murder as well as legitimate applications.
For 2, things get even more complicated. E.g., if my router is hacked and participates in a botnet, I am generally not liable, but if I rent out my house and the tenant turns it into a weed farm, I am liable.
Liability is placed where it minimises perceived societal cost. Emphasis on perceived.
What is worse for society: limiting information access for millions of people, or allowing CSAM, harassment, and shaming?
It is much more difficult to put similar safeguards into Photoshop.
You are in the 1400s, before the printing press was invented. Surely the printing press will also reduce the friction of distributing unethical material like CP.
What is the appropriate thing to do here to ensure justice? Penalise the authors? Penalise the distributors? Penalise the factory? Penalise the technology itself?
If I ask the maintainers of curl to hack something and they do it, then they are culpable (and possibly me as well).
Using Photoshop to do something doesn’t make Adobe complicit because Adobe isn’t involved in what you’re using Photoshop for. I suppose they could involve themselves, if you’d prefer that.
You don’t understand how scale and accessibility matter? That having easy cheap access to something makes it so there is more of it?
You don’t understand that because any talentless hack can generate child and revenge porn on a whim, they will do it instead of having time to cool off and think about their actions?
You asked one specific question, but then responded with something unrelated to the three people (so far) who have replied.
On the other hand, if you bought a car that had a “Mad Max” self driving mode that drives erratically and causes accidents, yes, you are still responsible as the driver for putting your car into “Mad Max” mode. But the manufacturer of the car is also responsible for negligence in creating this dangerous mode that need not exist.
There is a meaningful distinction between a tool that can be used for illegal purposes and a tool that is created specifically to enable or encourage illegal purposes.
Where can we realistically draw the line? Preventing distribution of this sort of shit is impossible, anyone can run their own generator. CSAM is already banned pretty much everywhere, and making money off it certainly is, but somehow Musk is getting away with distributing it at a massive scale. Is it because it's fake? And can we even tell whether it's still fake? Do we ban profiting from fake porn? Do we ban computing? Do we ban unregulated access to generative AI?
X/Grok is an attractive obvious target because it's so heinous and widespread, but putting the axe on them won't make much of a difference.
It's because law is slow and right now the US government is completely stalled out in terms of performing its job (thanks in part to Musk himself). Things will eventually catch up but it's simply the wild west for the next few years.
If this isn't illegal, then sure, government intervention will be required, laws will have to be amended, etc. Until that happens, what are realistic options? Shaming the perps? A bit of hacktivism?
Government includes the courts in America. And prosecutors are part of the executive branch in the US.
Like, if I sell a gun and you go and shoot someone I'm not necessarily responsible. Okay, makes sense.
But if I run a shooting range and I give you zero training and don't even bother to put up walls, and someone gets shot, then I probably am responsible.
That might mean something like Grok cannot realistically run at scale. I say good riddance and who cares.
As is, Musk probably isn't going to get confronted by this current DoJ. The state courts may try to take this up, but they have less reach than the federal courts. Other countries' courts may take action and even ban X.
>what are realistic options? Shaming the perps? A bit of hacktivism?
Those can happen. I don't know how much it moves the needle, but those will be inevitable reactions. The only way out for the American people would be to mass boycott X over this, but our political activism has been fairly weak. Especially for software.
But I think mostly Musk just acts like no laws apply to him - regulations, property lines, fines, responsibility to anyone else.
I would argue it's a symptom of rich people knowing the rules don't apply to them.
Arguably the rules never really have applied to them, but now they don't even bother to pretend they do.
> Every few seconds, Grok is continuing to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbots’ publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits and in various levels of undress were published by Grok in under five minutes, analysis of posts show.
ChatGPT and Gemini also do this: https://x.com/Marky146/status/2009743512942579911?s=20
One of the many reasons I prefer Claude is that it doesn't even generate images.
No other company would touch this sort of thing - they’d be unable to make any money, their payment providers would ban them, their banks would run away.
This is a great example of why "CSAM" is a terrible term and why CP was/is better. If you generate pornographic images of children using an AI tool it is by definition not CSAM, as no children were sexually assaulted. But it is still CP.
Also, what changed? Over the past 20 years, even hosting stuff like this, or any pornography whatsoever, would get you pulled from every app store and shut down by any payment provider. Now it’s just totally fine? To me that’s a massive change, decided entirely by Elon Musk.
Generating pictures of a real child naked is assault. Imagine finding naked photos of yourself as a child being passed around online. It's extremely unpleasant, and it's assault.
If you're arguing that generating a "fake child" is somehow significantly different, and you want to split hairs over the CSAM/CP term in that specific case, it's not a great take, to be honest. People understand CSAM; actually verifying whether it's a "real" child or not is not really relevant.
It's entirely relevant. Is the law protecting victims or banning depictions?
If you try to do the latter, you'll run head first into the decades-long debate that is the obscenity test in the US. The former, meanwhile, is made as a way to make sure people aren't hurt. It's not too dissimilar to freedom of speech vs slander.
Both. When there's plausible deniability, it slows down all investigations.
> If you try to do the latter, you'll run head first into the decades-long debate that is the obscenity test in the US. The former, meanwhile, is made as a way to make sure people aren't hurt. It's not too dissimilar to freedom of speech vs slander.
There's a world outside the US, a world of various nations which don't care about US legal rulings, and which are various degrees of willing-to-happy to ban US services.
Cool, I'm all for everyone else banning X. But sadly it's a US company subject to US laws.
I'm just explaining why anyone in the US who would take legal action may have trouble without making the above distinction.
Definitely a core weakness of the Constitution. One that assumed a lot of good faith in its people.
A lawyer and a judge are discussing a case, using the terminology CSAM, and need to argue about the legal distinction between the child being real or not. What help is it in this situation to use CP vs CSAM in that moment? I don't really think it changes things at all. In both cases the lawyer and judge would still need to clarify for everyone that the person is "presumably" not real. So an acronym change on this point to me is still not a great take. It's regressive, not progressive.
Yes, and it's a lawyer's job to split hairs. Upthread was talking about legal action, so being able to distinguish the terms changes how you'd attack the issue.
> What help is it in this situation to use CP vs CSAM in that moment? I don't really think it changes things at all.
I just explained it.
You're free to have your own colloquial opinion on the matter. But if you want to discuss law, you need to understand the history of the topic, especially one as controversial as this. These are probably all tired talking points from before we were born, so while they may seem novel or insignificant to us, it's language that has made or broken cases in the past. Cases that will be used as precedent.
>So an acronym change on this point to me is still not a great take. It's regressive, not progressive.
I don't really care about the acronym. I'm not a lawyer. A duck is a duck to me.
I'm just explaining why in this legal context the wording does matter. Maybe it shouldn't, but that's not my call.
The children aspect just makes a bad thing even worse, and thankfully seems to get some (though not enough, IMO) people to realize it.
If there's a business operating for profit (and Twitter is, ostensibly) and their tool posts pictures of me undressed, then I am going to have a problem with it.
And I'm just some dude. It probably means a lot more for women who are celebrities.
"It's inevitable" isn't an excuse for bad corporate or personal behavior involving technology. Taken to its logical conclusion, we're all going to die, so it's just executing on the inevitable when someone is murdered.
In the USA ... a company can declare bankruptcy and shed its debts and liabilities, while a person cannot shed most debt after declaring bankruptcy. [0] [1] US politicians favor companies over the people.
I personally support new corporate laws similar to California's three-strikes law: instead of allowing companies to budget for fines, the CEO and executives would go to jail, and the corporation would be broken up, after habitually breaking the same laws.
[0] https://hls.harvard.edu/today/expert-explains-how-companies-...
Society can create disincentives, but not cures.
The point of telling Grok to comment on a thread by a woman with an image of her with her clothes off, on all fours, and covered in what appears to be semen is to hurt her. It is an act of domination. She can either leave the platform or be forced to endure a process that repeatedly makes her into a literal sex object as she uses it. Discussing something related to your professional work? Doesn't matter. There's now an image in the thread of this shit.
This is rape culture. There is no other word for it.
Gender theorists have been studying this very question for decades. But you'll regularly find this community shitting on that entire field of study, even though I'm not sure it has ever been more relevant than it is today.
Might send a good message about consent ya know?
Obviously we can't just - poof - make people not child molesters or not murderers. But that doesn't mean we should sit on our asses and do nothing.
You can of course pay for Grok if you like, but that just buys you bigger quota (up to 50 videos a day is free), not new capabilities or less censorship.
Warning: it's quite gross
[0] https://www.cps.gov.uk/prosecution-guidance/indecent-and-pro... ("downloading an image from a website onto a computer screen: R v Smith; R v Jayson [2003] 1 Cr. App. R. 13")
I also scrolled through the public Grok feed, and the AI-generated bikini pics were mostly OnlyFans creators asking their own fans to generate these pictures (or sometimes generating them themselves).
You know this but somehow are rationalizing this game changing fact away.
Yes, people can draw and photoshop things. But it takes time, skill, dedication, etc. This time cost is load-bearing in how society deals with its tools, for the same reason that, at the extreme, kitchen knives have different regulations than nuclear weapons.
It is also trivially easy for Grok to censor this usage for the vast majority of offenders, by using the same LLM technology they already have to classify content created by their own tools. Yes, it could get jailbroken, but that requires skill, time, dedication, etc., and it can be rapidly patched, greatly mitigating the scale of abuse.
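A minimal sketch of what such a server-side gate could look like, in Python. Everything here is hypothetical: none of these names are real xAI/Grok APIs, and both classifiers are placeholder stand-ins for real moderation models.

    from typing import Callable, Optional

    # Illustrative cutoff; a real system would tune this against a labeled eval set.
    BLOCK_THRESHOLD = 0.5

    def classify_prompt(prompt: str) -> float:
        """Stand-in for a text classifier scoring how abusive the request is."""
        banned = ("undress", "remove her clothes", "no clothes")
        return 1.0 if any(term in prompt.lower() for term in banned) else 0.0

    def classify_image(image_bytes: bytes) -> float:
        """Stand-in for an image-level NSFW/abuse classifier scoring the output."""
        return 0.0  # placeholder: a real model would score the actual pixels

    def handle_generation(prompt: str, generate: Callable[[str], bytes]) -> Optional[bytes]:
        """Gate both the request and the generated output before anything is posted."""
        if classify_prompt(prompt) >= BLOCK_THRESHOLD:
            return None  # refuse before spending compute on generation
        image = generate(prompt)
        if classify_image(image) >= BLOCK_THRESHOLD:
            return None  # refuse to publish; route to abuse review instead
        return image

The second check is what blunts jailbreaks: even if a prompt slips past the text filter, the output still has to get past the image classifier before it's published.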
The scale of effect and the barrier to entry. Both are orders of magnitude easier and faster. It would take hours of patience and work to create even one convincing fake using Photoshop, once you had spent the time and money to learn the tool and acquire it. This creates a natural, large moat around the creation process. With Grok it takes a minute at most, with no effort or energy needed.
And then there is the ease of distribution to a wide audience: X/Grok handles that for you by automatically giving you an audience of millions.
It’s like with guns. Why prevent selling weapons to violent offenders when they could just build their own guns from high quality steel, a precision drill, and a good CNC machine? Scale and barrier to entry are real blockers for a problem to mostly solve itself. And sometimes a 99% solution is better than no solution.
It's not obvious to me that this is your position. What safeguards do you propose as an alternative to those discussed in the article?
But I'm not sure if the tool itself should be banned, as some people seem to be suggesting. There are content creators on the platform that do use NSFW image generation capabilities in a consensual and legitimate fashion.
But for NSFW work it dominates. It’s clearly deliberate.
I would say lots of ways. And that's probably why I have a few knives, and zero atomic bombs.
If an individual invented a tool that could generate such pictures, he'd be arrested immediately. A company does it, and it's just a whoopsie. And most people don't find this strange.
I think this is an important question to ask despite the subject matter because the subject matter makes it easy for authorities to scream, "think of the children you degenerate!" while they take away your freedoms.
I think Musk is happy to pander to and profit from degeneracy, especially by screaming, "it's freedom of speech!" I would bet the money in my pocket that his intent is that he knows this stuff makes him more money than if he censored it. But he will of course pretend it's about 1A freedoms.
We are going to be in some serious fucking trouble if we can't tackle these issues of scale implied by modern information technology without resorting to disingenuous (or simply naive) appeals to these absurd equivalences as justification for each new insane escalation.
When you use a service like Grok now, the service is the one using the tool (Grok model) to generate it, and thus the service is producing CSAM. This would also apply if you paid someone to use Photoshop to produce CSAM: they would be breaking the law in doing so.
This is setting aside the issue of twitter actually distributing the CSAM.
Last I checked, Photoshop doesn't have an "undress this person" button. "A person could do a bad thing at a very low rate, so what's wrong with automating it so that bad things can be done millions of times faster?" Like, seriously? Is that a real question?
But also, I don't get what your argument is anyway. A person doing it manually still typically runs into CSAM or revenge-porn laws, or other similar harassment issues. All of which should be leveled directly at these AI tools, particularly those that lack even an attempt at safeguards.
This could be easily fixed by sending the generated images through private Grok DMs or something, but that would harm the bottom line. Maybe they will do that eventually, once they have milked enough subscriptions from the "advertising".
It could easily be solved by basic age verification.
The CSAM stuff, though, needs to be filtered and fixed, as it breaks laws, and luckily I'm not aware of anything that would make it legal.