25 points by chrisjj 7 hours ago | 8 comments
  • dunhuang_nomad 4 hours ago
    This move makes perfect sense to me. I think people are a bit too online-pilled to think about this as they would any other product.

    If you produce a product that causes harm, and there are steps that could be taken to prevent that harm, you should be held responsible for it. Before the Trump admin dropped the Boeing case, Boeing was going to be held liable for design defects in its Max planes that caused crashes. The government wasn’t going after Boeing bc a plane crashed, but bc Boeing did not take adequate steps to prevent that from happening.

    • chrisjj 3 hours ago
      > If you produce a product that causes harm, and there are steps that could be taken to prevent that harm, you should be held responsible for it.

      This is wholly unrealistic. Any product can be used to cause harm and there are always steps that could be taken to prevent that. E.g. ceasing sales. But that would often do more harm than it prevents.

      • chrisjj an hour ago
        > The question is did the manufacturer do everything reasonable to limit the harm caused?

        OK. A different and better question.

        The problem is whether it would be considered reasonable to avoid harm to the mental wellbeing of bikinified persons at the cost of harming all the users who enjoy a service supported by bikinification earnings.

      • dunhuang_nomad 3 hours ago
        I appreciate the pushback. I’m reading your argument as saying that every product can be used to cause harm, and I agree with that take. The question is did the manufacturer do everything reasonable to limit the harm caused?

        You can’t go after a company that makes kitchen knives if they’re used to cause harm, bc there’s nothing reasonable the company could have done to prevent that harm, and there’s a legitimate use case for knives in cooking.

        In this case, my understanding is that other companies (OpenAI and Anthropic) have done more to limit harm, whereas xAI hasn’t.

        • Nasrudith 3 hours ago
          Personally I can't help but think that 'reasonable' is a dangerous legal standard due to its unpredictability, subjectivity and assumed values and knowledge. Is it reasonable to put powdered aluminum and iron oxide into paint? What about when the paint is going onto a Zeppelin? Oh wait, those are thermite's ingredients. Oops. Is it reasonable for the paint seller to be held liable for selling paint with common reagents?
          • dunhuang_nomad 2 hours ago
            Things aren’t black and white, and that’s why we have humans and the law. There’s no clear definition of what probable cause means for search warrants, but does its "subjectivity" mean we should have no searches at all?

            But in this case it’s pretty easy: other model providers have in fact limited harm better than Grok. So you don’t even need some arbitrary standard, just do it as well as your competitors.

    • moralestapia 3 hours ago
      I can use Photoshop to create a sexualized image of someone irl.

      How's that any different?

      • roryirvine an hour ago
        If you were to provide Photoshop as a Service on a sufficiently large scale, you would also be expected to take all reasonable measures to prevent it being used to disseminate CSAM and other abusive material.

        So, no different to the standard that X should be held to.

      • edgineer 2 hours ago
        And people do do this, and have been making crazy and creepy pictures online since the internet's inception. It's never been that much of an issue until now.
      • chrisjj 3 hours ago
        In UK law, it isn't.

        The practical difference is simply that now it is happening far more frequently.

  • leobg 4 hours ago
    So I guess in the 90s they would’ve sued Adobe for not putting spyware into Photoshop?

    If you believe in democracy, and the rule of law, and citizenship, then the responsibility obviously lies with people who create and publish pictures, not the makers of tools.

    Think about it. You can use a phone camera to produce illegal pictures. What kind of world would we live in if Apple were required to run an AI filter on your pics to determine whether they comply with the law?

    A different question is whether X actually hosts generated pictures that are illegal in the UK. In that case, X acts as a publisher, and you can sue them, along with the creator, for removal.

    • Symbiote 4 hours ago
      Photoshop has had algorithms to detect and prevent the editing of images of currency since the late 1990s or so.

      The power of the AI tools is so great in comparison to a non-AI image editor that there's probably debate on who -- the user, or the operator of the AI -- is creating the image.

      https://en.wikipedia.org/wiki/EURion_constellation

      • chrisjj 3 hours ago
        > The power of the AI tools is so great in comparison to a non-AI image editor that there's probably debate on who -- the user, or the operator of the AI -- is creating the image.

        Compute power is irrelevant. What's relevant in law is who is causing the generation, and that's obviously the operator.

    • graemep 4 hours ago
      There is a big difference between running spyware on things running locally and monitoring how people use a service running on your own computers. The former means you have to exfiltrate data; the latter is monitoring data you already have.

      Photoshop in the 90s was the former; Grok is the latter.

    • chrisjj 3 hours ago
      > A different question is whether X actually hosts generated pictures that are illegal in the UK.

      If the answer were yes, these Govt. complaints would claim so. They don't.

      The Govt's problem is imagery it calls 'barely legal', i.e. "legal but we wish it wasn't". https://www.theguardian.com/society/2025/aug/03/uk-pornograp...

    • SteveMqz 4 hours ago
      Apple does run software for detecting CSAM on pictures users store to the cloud.
      • chrisjj 3 hours ago
        That's to ensure Apple compliance, not user compliance.
  • Festro 6 hours ago
    This further exposes just how pointless and ill-thought-out the Online Safety Act was in the UK. It does nothing to actually limit harm at the source, or to empower the UK's public bodies to take immediate action.

    Ironic that the minister who spearheaded that awful bill as Tech Minister (Peter Kyle) is now the government's spokesperson for this debacle as Business Minister. The UK needs someone who knows how tech and business work to tackle this, and that's not Peter Kyle.

    A platform suspension in the UK should have been swift, with clear terms for how X can be reinstated. As much as it appears Musk is doubling down on letting Grok produce CSAM as some form of free speech, the UK government should treat it as a limited breach or bug that the vendor needs to resolve, whilst taking action to block the site causing harm until they've fixed it.

    Letting X and Grok continue to do harm, and get free PR, is just the worst case scenario for all involved.

    • roryirvine 5 hours ago
      The draft Online Safety Bill was first published in 2021, was substantially re-written and re-introduced in early 2022, was amended over the course of the next 18 months, and eventually passed into law as the Online Safety Act in October 2023.

      Peter Kyle was in opposition until July 2024, so how could he have spearheaded it?

      • Festro 5 hours ago
        Because he implemented it under his tenure, in July 2025. He didn't come up with it; he spearheaded its actual implementation. Sorry if that wasn't clear.
        • tonyedgecombe 5 hours ago
          The first conviction under the bill was in March 2024, so that makes no sense.
          • Festro 5 hours ago
            Why would it make no sense? Like many bills/acts, it came into effect in stages. You're referring to new laws/crimes that came into effect in January 2024.

            In my original comment I was referring to the Act's powers to compel companies to actually do things. I don't know exactly when the various parts that would constitute that came into effect, but for the point of my post I'm going by the dated reference on Peter Kyle's own website to holding companies accountable.

            "As of the 24th July 2025, platforms now have a legal duty to protect children"

            https://www.peterkyle.co.uk/blog/2025/07/25/online-safety-ac...

            I don't understand why people are taking issue with that. Peter Kyle is the minister who delivered the measures from the bill that a lot of people are angry about, and this latest issue on X is just another red flag that the bill is poorly worded and poorly thought out, putting too much emphasis on ID age checks for citizens rather than actually stopping any abuse. Peter Kyle is the one who called out objections to the bill as being on the "side of predators". Peter Kyle is now the one, despite having moved department, who is somehow commenting on this issue.

            Totally happy to call out the Tories and the prior ministers who worked on the Bill/Act, but Kyle implemented it, made reckless comments about it, and is now trying to look proactive about an issue the Act covers but applies to so ineffectively.

            • chrisjj 2 hours ago
              > this latest issue on X is just another red flag that the bill is poorly worded and poorly thought out, putting too much emphasis on ID age checks for citizens rather than actually stopping any abuse.

              The bill is designed to protect children from extreme sexual violence, extreme pornography and CSAM.

              Not to protect adults from bikinification.

              It is working as designed.

            • blitzar 4 hours ago
              Partisan politics has rotted people's brains. I wonder if it is by design, to lower people's critical thinking skills, or if it is just a fringe benefit of the tribalism it creates.
              • PearlRiver 3 hours ago
                Reminds me of something emperor Trump said. "I can shoot somebody on Fifth Avenue and they will still vote for me".

                Sometimes people just dig themselves into a hole and they start going off the deep end. Why did it take until 1944 for someone to blow up Hitler?

    • chrisjj 4 hours ago
      The OSA very much does empower action, e.g. against images of extreme sexual violence and extreme pornography.

      It does not empower platform suspension for bikinification.

      And there's as yet no substantiation of your claim that Grok produces CSAM.

  • bmacho 4 hours ago
    Flagged for the title implying men have no rights. That's totally uncalled for and I hope such submission titles are not allowed here.
    • jraph 4 hours ago
      I don't think this title implies this. The title says "There were sexualised AI images of women and children, and the UK threatens X over this". What more than this do you read in this title?

      Is there actually a significant number of problematic sexualised AI images of men on X that the title fails to mention? If not, the follow-up question would be: what are you actually complaining about, exactly?

      Women are often sexualised, way more than men. Would it be more comfortable for you if this fact were invisibilized?

    • saaaaaam 2 hours ago
      The title is the title of the article published by the Guardian. It has not been editorialised by the person submitting the article. If you have an issue with the title of the article, flagging the submission is not hugely useful. Email the editor.
    • graemep 4 hours ago
      I agree with the underlying point about the social bias it reflects (which I have experienced myself), but the title here is (as usual) just the article's, so The Guardian is to blame rather than HN.

      I think the solution is not to disallow the titles, but to comment on them and draw attention to the sexism in the article.

      • bmacho 4 hours ago
        The solution is absolutely to disallow offending titles. The same principle that would make HN moderators take down a "Kill all the Jews" title from the front page should apply to this one too.

        Submission titles should be the original article titles, as long as those aren't problematic.

        • graemep 3 hours ago
          It's not as offensive.

          I agree this is problematic, but I am inclined to see it as an opportunity to discuss the problem and illustrate how widespread it is. We can also mention real issues, such as that about half of all domestic abuse victims in the UK are male (if you count emotional abuse; otherwise it's still 40%), that more than half of rapes in the US are of men (because of the huge number of prison rapes), etc.

    • chrisjj 2 hours ago
      The HN title is true to the source and should not be flagged.
    • moralestapia 3 hours ago
      Agree 100% and thanks for bringing this up.

      Sexual abuse towards men is as prevalent as it is towards women.

    • NedF 3 hours ago
      [dead]
  • thw_9a83c 3 hours ago
    I think this is a lost cause. Even if the mainstream services are blocked or forced to comply, there will always be hundreds of lesser-known tools and services offering the same features. At this point, nobody has the power to close this can of worms.

    Besides, who is going to decide when people's images are sexualized enough? Are images of Elon Musk in a bikini alright because he's not a woman or a child?

    • chrisjj 2 hours ago
      > Are images of Elon Musk in a bikini alright because he's not a woman or a child?

      Didn't he consent?

  • 0xy 5 hours ago
    This is a purely political move to censor dissent by a government that polls like a minor party and is slated for electoral wipeout at the next election. If it were not, they'd issue the same threats to Gemini and ChatGPT.

    https://www.digitaltrends.com/computing/googles-gemini-deeme...

  • ulfw 3 hours ago
    Don't threaten. Do it.

    Indonesia has. Malaysia has. Why not you?

    https://www.bbc.com/news/articles/cg7y10xm4x2o

    • chrisjj 2 hours ago
      > Indonesia has. Malaysia has.

      They banned porn sites too.

      > Why not you?

      The UK Govt has no power to ban it, since it is legal.

      • ulfw an hour ago
        So do many US states.
        • chrisjj an hour ago
          So perhaps ask why those US states allow it (Grok+X).
  • AniseAbyss 4 hours ago
    [dead]