If you produce a product that causes harm, and there are steps that could be taken to prevent that harm, you should be held responsible for it. Before the Trump administration dropped the Boeing case, Boeing was going to be held liable for design defects in its Max planes that caused crashes. The government wasn't going after Boeing because a plane crashed, but because Boeing did not take adequate steps to prevent that from happening.
This is wholly unrealistic. Any product can be used to cause harm and there are always steps that could be taken to prevent that. E.g. ceasing sales. But that would often do more harm than it prevents.
OK. A different and better question.
The problem is whether it would be considered reasonable to avoid harm to the mental wellbeing of bikinified persons at the cost of harm to all the users enjoying a service supported by bikinification earnings.
You can't go after a company that makes kitchen knives if those are used to cause harm, because there's nothing reasonable it could have done to prevent that harm, and there's a legitimate use case for knives in cooking.
In this case, my understanding is other companies (OpenAI and Anthropic) have done more to limit harm, whereas XAI hasn’t.
But in this case it's pretty easy: other model providers have in fact limited harm better than Grok. So you don't even need an arbitrary standard; just do it as well as competitors.
How's that any different?
So, no different to the standard that X should be held to.
The practical difference is simply that now it is happening far more frequently.
If you believe in democracy, and the rule of law, and citizenship, then the responsibility obviously lies with people who create and publish pictures, not the makers of tools.
Think about it. You can use a phone camera to produce illegal pictures. What kind of world would we live in if Apple were required to run an AI filter on your pictures to determine whether they comply with the law?
A different question is if X actually hosts generated pictures that are illegal in the UK. In that case, X acts as a publisher, and you can sue them along with the creator for removal.
The power of the AI tools is so great in comparison to a non-AI image editor that there's probably debate on who -- the user, or the operator of the AI -- is creating the image.
Compute power is irrelevant. What's relevant in law is who is causing the generation, and that's obviously the operator.
Photoshop in the 90s was the former, Grok is the latter.
If the answer was Yes, these Govt. complaints would claim so. They don't.
The Govt's problem is imagery it calls 'barely legal'. I.e. "legal but we wish it wasn't." https://www.theguardian.com/society/2025/aug/03/uk-pornograp...
Ironic that the minister who spearheaded that awful bill as Tech Minister (Peter Kyle) is now acting as the government spokesperson for this debacle as Business Minister. The UK needs someone who knows how tech and business work to tackle this, and that's not Peter Kyle.
A platform suspension in the UK should have been swift, with clear terms for how X can be reinstated. As much as it appears Musk is doubling down on letting Grok produce CSAM as some form of free speech, the UK government should treat it as a limited breach or bug that the vendor needs to resolve, whilst taking action to block the site causing harm until they've fixed it.
Letting X and Grok continue to do harm, and get free PR, is just the worst case scenario for all involved.
Peter Kyle was in opposition until July 2024, so how could he have spearheaded it?
I'm referring to the Act's powers to compel companies to actually do things, as in my original comment. I don't know exactly when the various parts that would constitute that came into effect, but for the purposes of my post I'm going on Peter Kyle's own website's dated reference to holding companies accountable.
"As of the 24th July 2025, platforms now have a legal duty to protect children"
https://www.peterkyle.co.uk/blog/2025/07/25/online-safety-ac...
I don't understand why people are taking issue with that. Peter Kyle is the minister who delivered the measures from the bill that a lot of people are angry about, and this latest issue on X is just another red flag that the bill is poorly worded and poorly thought out, putting more emphasis on ID age checks for citizens than on actually stopping any abuse. Peter Kyle is the one who called out objections to the bill as being on the "side of predators". Peter Kyle is now the one, despite having moved department, who is somehow commenting on this issue.
Totally happy to call out the Tories, and the prior ministers who worked on the Bill/Act, but Kyle implemented it, made reckless comments about it, and is now trying to look proactive about an issue that the Act covers but is being so ineffectively applied to.
The bill is designed to protect children from extreme sexual violence, extreme pornography and CSAM.
Not to protect adults from bikinification.
It is working as designed.
Sometimes people just dig themselves into a hole and start going off the deep end. Why did it take until 1944 for someone to try to blow up Hitler?
It does not empower platform suspension for bikinification.
And there's as yet no substantiation of your claim that Grok produces CSAM.
Is there actually a significant number of problematic sexualised AI images of men on X that the title fails to mention? If not, the follow-up question would be: what are you actually complaining about, exactly?
Women are sexualised far more often than men. Would you be more comfortable if this fact were made invisible?
I think the solution is not to disallow the titles, but to comment on them and draw attention to the sexism in the article.
Submission titles should be the original article titles, as long as those aren't problematic.
I agree this is problematic, but I am inclined to see it as an opportunity to discuss the problem and illustrate how widespread it is. We can also mention real issues, such as that about half of all domestic abuse victims in the UK are male (if you count emotional abuse; otherwise it's still 40%), that more than half of rapes in the US are of men (because of the huge number of prison rapes), etc.
Sexual abuse towards men is as prevalent as it is towards women.
Besides, who is going to decide when images of people are sexualized enough? Are images of Elon Musk in a bikini all right because he's not a woman or a child?
Didn't he consent?
https://www.digitaltrends.com/computing/googles-gemini-deeme...