However, I think that for Europe the standard sexual-content moderation (even in text chat) is way over the top. I know the US is very prudish, but here most people aren't.
If you mention something erotic to a mainstream AI it will immediately shut down, which is super annoying because it blocks using it for such discussion topics at all. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be selectable by the user, not hard-coded to the most restrictive standard. Similar to the way I can switch off SafeSearch in Google.
However CSAM generation should obviously be blocked and it's very illegal here too.
You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that doesn't guarantee a pleasant experience.
> It feels a bit like foreign morals are being forced upon us.
Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.
If you think people here think that models should enable CSAM you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, then it attracts government regulation.
> If you think people here think that models should enable CSAM you're out of your mind.
Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.
> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.
If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.
The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."
And yesterday.
Not by this article, for sure.
"The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute.
Still, users have prompted Grok to digitally remove clothing from photos — mostly of women — so the subjects appeared to be wearing only underwear or bikinis."
Not possible.
To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."
Also… I think they probably could solve this. AI image analysis is a thing. AI that estimates age from an image has been a thing for ages. It's not like the idea of throwing the entire internet's worth of images at a training session just to make a single "allowed/forbidden" filter is even ridiculous compared to the scale of all the other things going on right now.
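To make the idea concrete, here's a minimal sketch of what such a filter could look like: a pretrained vision backbone with a binary "allowed/forbidden" head, applied as a gate on every generated image. The model choice, labels, and threshold are my own assumptions for illustration, not anything xAI or anyone else actually ships, and the head would obviously need fine-tuning on labeled data before it did anything useful.

```python
# Hypothetical post-generation safety gate: a binary classifier over images.
# The backbone is pretrained; the 2-class head is untrained here and would
# need fine-tuning on labeled "allowed"/"forbidden" examples before use.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # index 0 = allowed, 1 = forbidden
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_allowed(path: str, threshold: float = 0.99) -> bool:
    """Only let an image through if the classifier is confident it's allowed."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    return probs[0].item() >= threshold
```

The point isn't that this exact thing solves the problem; it's that gating outputs with a dedicated classifier is routine machinery compared to training the generative model itself.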
No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.
If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.
Not in Europe it hasn't, and definitely not image generation specifically, where it seems to be filling the same role as clipart, stock photos, and style transfer that can be done in other ways.
Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far around the economy, only with weird toys like this one currently causing drama.
> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
I wish I could've shown this kind of message to people 3.5 years ago, or even 2 years ago, saying that AI will never take over because we can always just switch it off.
Mind you, two years ago I did, and they still didn't like it.
Things that cannot happen will not happen. "AI" (aka LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.
This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. It's a novelty at best, something people try a couple times and then forget about.
Grok is a novelty, but that's Grok.
And they know that eventually people will just learn to accept it.
Yes, GenAI content is cheap.
But a business whose output is identical to everyone else's, because everyone is using the same models to solve the same problems, has no USP and no signal to customers to say why they're different.
The meme a while back about OpenAI having no moat? That's just as true for businesses depending on any public AI tool. If you can't find something that AI fails at, and also show this off to potential customers, then your business is just a lottery ticket with extra steps.
I think businesses assume the output of AI can be the same as with their current workflow, just with the benefit of cutting their workforce, so all upside and no downside.
I also suspect that a lot of businesses (at least the biggest ones) are looking into hosting their own LLM infrastructure rather than depending on third-party services, but even if not, there are plenty of "indispensable" services that businesses rely on already. Look at AWS.
For context, the top 5 HN links as of this comment contain one attributed (https://xeiaso.net/notes/2026/year-linux-desktop/, characters page discloses Stable Diffusion usage) and one likely (https://www.madebywindmill.com/tempi/blog/hbfs-bpm/, high-context unattributed image with no TinEye results) AI-generated image.
But plenty enough people do want them. Grok is meeting demand.
"Many individuals" != democratic majority.
To argue otherwise is to claim that the ~1% of the population who are into this are going to sway the governments or the people they represent.
What the former want is not illegal. So the fact that they are a minority is irrelevant. Minorities have rights too.
If we're talking about genuine CSAM, that's very different and not even limited to undressing.
Why would you think I was talking about anything else?
Also, "subset" != "very different"
> What the former want is not illegal. So the fact that they are a minority is irrelevant. Minorities have rights too.
This is newsworthy because non-consensual undressing of images of a minor, even by an AI, already passes the requisite threshold in law and by broad social agreement.
This is not a protected minority.
Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict liability crime, so do you support prosecuting whatever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.
What if Photoshop is provided as a web service? This is analogous to running image generation as a service. In both cases the provider takes input from the user (in one case a textual description, in the other case a sequence of mouse events) and generates an image with an automated process, without specific intentional human input from the provider.
Note that in this case using them to produce CSAM was against the terms of service, so the business was tricked into producing CSAM.
And there are other automated services that could be used for CSAM generation, for example automated photo booths. Should their operators be held liable if someone uses them to produce CSAM?
I anticipate there will already be case law/precedent showing the shape of what is allowed/forbidden, and most of us won't know the legal jargon necessary to understand the answer.
Or answers, plural, because laws vary by jurisdiction.
Most of us here are likely to be worse at drawing such boundaries than an LLM. LLMs can pass at least one of the bar exams; most of us probably cannot.
I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement make it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and they are not showing they are fixing it (even if that means some degradation in the capability of the system/service), then they will be charged.
One of the strengths of the western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not Code. This makes it flexible enough to deal with new situations, and this is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.
Collectively, probably more. Grok? Not unless you count each frame of a video, I think.
> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.
If the threshold is one in a billion… well, the risk is from adversarial outcomes, so you can't just toss a billion attempts at it and see what pops out. But a billion images: if it's anything like Stable Diffusion you can stop early, and my experiments with SD suggest the energy cost even for a full generation is only $0.0001/image*, so a billion is merely $100k.
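Just to show the back-of-envelope math, using the ~$0.0001/image energy figure from my blog post linked below (that per-image figure is the assumption everything else hangs on):

```python
# Back-of-envelope check of the "$100k for a billion images" claim above.
cost_per_image = 1e-4          # USD of energy per full SD generation (assumed)
images = 1_000_000_000         # one billion test generations
print(f"${cost_per_image * images:,.0f}")   # -> $100,000
```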
Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there; then all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.
> That may precisely be the point of this tbh.
Perhaps. But I don't think we need that excuse if this was the goal, and I am not convinced this is the goal in the EU for other reasons besides.
* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html
It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.
> Not possible.
Note that the description of the accusation earlier in the article is:
> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.
It may be impossible to perfectly regulate what content the model can create, but it is quite practical for the Grok product to enforce consent of the user whose content is being operated on before content can be generated based on it and, after the content is generated, before it can be viewed by or distributed to anyone else.
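Roughly, the product-level gate I mean looks like this. It's a hypothetical sketch, not anything Grok actually implements; the registry and function names are invented for illustration, and the two checks correspond to "before generation" and "before anyone else can see it":

```python
# Hypothetical consent gate at the product layer, independent of the model.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Maps (subject_user_id, source_media_id) -> consent granted
    grants: dict = field(default_factory=dict)

    def grant(self, subject_id: str, media_id: str) -> None:
        self.grants[(subject_id, media_id)] = True

    def has_consent(self, subject_id: str, media_id: str) -> bool:
        return self.grants.get((subject_id, media_id), False)

def generate_edit(registry: ConsentRegistry, subject_id: str, media_id: str, prompt: str):
    # Gate 1: refuse to generate at all without consent from the pictured user.
    if not registry.has_consent(subject_id, media_id):
        raise PermissionError("Subject has not consented to edits of this media.")
    # ... call the image model here ...
    return f"edited({media_id}, {prompt})"

def publish(registry: ConsentRegistry, subject_id: str, media_id: str, artifact):
    # Gate 2: re-check before the result can be viewed by or distributed to anyone else.
    if not registry.has_consent(subject_id, media_id):
        raise PermissionError("Consent missing or revoked; artifact cannot be distributed.")
    return artifact
```

None of this constrains what the model can produce; it constrains what the product will do with other people's media, which is the part the DSA complaint is actually about.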
No, because it cannot even ID that user.
AI image editors attached to social media networks, designed in a way that allows producing AI edits (including, but not limited to, nonconsensual intimate images and child pornography) of other users' media without consent, are not a national defense issue. And even to the extent that AI arguably is a national defense issue, those particular applications can be curtailed entirely by a nation without any adverse impact on national defense.
You can distort any issue by zooming out to orbital level and ignoring the salient details.
I don't think the ability for citizens to make deep fake porn of whoever they want is the same as a country not investing in practical defensive applications of AI.
You don't have the right to act in violation of the law merely because it's the only way to make a buck.
Sometimes it is. Sometimes "democracy" isn't just a buzzword.
X.com has been blocked by poorer nations than France (specifically, Brazil) for not following local law.
And if you want to change the law to allow the business, go for it. But until then, we must follow the law.