105 points by vikaveri 5 hours ago | 7 comments
  • pogue5 hours ago
    Finally, someone is taking action against the CSAM machine operating seemingly without penalty.
    • chrisjj4 hours ago
      I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/
      • mortarion4 hours ago
        CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.

        No abuse of a real minor is needed.

        • logicchains2 hours ago
          You don't see a huge difference between abusing a child (and recording it) vs drawing/creating an image of a child in a sexual situation? Do you believe they should have the same legal treatment? In Japan for instance the latter is legal.
          • ffsm8 36 minutes ago
            He made no judgement in his comment; he just observed the fact that the term CSAM - in at least the specified jurisdiction - applies to generated pictures of teenagers, whether real people were subjected to harm or not.

            I suspect none of us are lawyers with enough legal knowledge of the French law to know the specifics of this case

            • yafinder16 minutes ago
              This comment is part of a chain that starts with a very judgemental comment, and it is a reply to a response challenging that original one. You don't need legal knowledge of French law to want to distinguish real child abuse from imaginary abuse. One can give arguments for why the latter is also bad, but that is not an automatic judgment, it should not depend on the laws of a particular country, and I, for one, am deeply shocked that some could think it's the same crime of the same severity.
        • worthless-trash4 hours ago
          As good as Australia's little boobie laws.
        • chrisjj3 hours ago
          > CSAM does not have a universal definition.

          Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.

          > In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.

          No corroboration found on web. Quite the contrary, in fact:

          "Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

          https://rm.coe.int/factsheet-sweden-the-protection-of-childr...

          > If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.

          > No abuse of a real minor is needed.

          Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."

          Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.

          • lava_pidgeon3 hours ago
            " Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "

            Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some US defaultism, where Americans assume their definition was universal?

            • chrisjj2 hours ago
              > Are you from Sweden?

              No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.

              > Why do you think the definition was clear across the world and not changed "before AI"?

              I didn't say it was clear. I said there was no disagreement.

              And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.

              • lava_pidgeon2 hours ago
                "No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."

                So you can't speak Swedish, yet you think you grasped the Swedish legal definition?

                " I didn't say it was clear. I said there was no disagreement. "

                Sorry, there are lots of different judicial definitions of CSAM in different countries, each with different edge cases and different ways of handling them. I very much doubt it; there is disagreement.

                But my guess about your post is that an American has to learn, once again, that there is a world outside of the US with different rules and different languages.

                • chrisjj30 minutes ago
                  > So you cant speak Swedish, yet you think you grasped the Swedish law definition?

                  I guess you didn't read the doc. It is in English.

                  I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.

          • rented_mule2 hours ago
            > Even the Google "AI" knows better than that. CSAM "is [...]"

            Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.

            Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.

            Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.

            • chrisjj22 minutes ago
              Thanks. For a moment I slipped and fell for the "AI" con trick :)
          • fmbb2 hours ago
            > - in any current law.

            It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).

            The holder was not convicted but that is beside the point about the material.

            • chrisjj14 minutes ago
              > It has been since at least 2012 here in Sweden. That case went to our highest court

              This one?

              "Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"

              https://bleedingcool.com/comics/swedish-supreme-court-exoner...

              It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.

              > and they decided a manga drawing was CSAM

              No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.

          • lawn2 hours ago
            In Swedish:

            https://www.regeringen.se/contentassets/5f881006d4d346b199ca...

            > Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.

            Which, translated, means that the child does not have to take part in any sexual act, and that indeed undressing a child using AI could be CSAM.

            I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produces by Grok are CSAM by Swedish standards.

          • tokai2 hours ago
            "Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

            Because that is up to the courts to interpret. You can't use your common-law experience to interpret the law in other countries.

      • moolcool an hour ago
        Are you implying that it's not abuse to "undress" a child using AI?

        You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.

      • secretsatan4 hours ago
        It doesn't mention grok?
        • chrisjj3 hours ago
          Sure does. Twice. E.g.

          "Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok."

          • mfru an hour ago
            CTRL-F "grok": 0/0 found
            • lawn43 minutes ago
              I found 8 mentions.
  • techblueberry3 hours ago
    I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
    • rsynnott2 hours ago
      > what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?

      You would be _amazed_ at the things that people commit to email and similar.

      Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...

    • afavour2 hours ago
      It was known that Grok was generating these images long before any action was taken. I imagine they'll be looking for internal communications on what they were doing, or deciding not to do, during that time.
    • Mordisquitos2 hours ago
      What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.

      What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'

      • cwillu an hour ago
        Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
        • pirates an hour ago
          They could shut it off out of a sense of decency and respect, wtf kind of defense is this?
    • moolcool an hour ago
      Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?
  • robtherobber5 hours ago
    > The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.

    I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating communication with the public that funds your existence in different terms. The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

    • nonethewiser10 minutes ago
      >I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms

      I think we are getting very close to the EU's own great firewall.

      There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?

      - fine harvesting mechanism? Keep as-is.

      - true user protection? Blacklist.

    • Mordisquitos2 hours ago
      I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged-in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
    • valar_m2 hours ago
      >The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

      Who decides what communication is in the interest of the public at large? The Trump administration?

      • robtherobber37 minutes ago
        You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.

        I suppose the answer, if we're serious about it, is somewhat more nuanced.

        To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.

        That aside - there are two separate problems that often get conflated when we talk about these platforms:

        - one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;

        - the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.

        A potential middle position could be to use commercial social platforms as secondary distribution rather than as the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).

        Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.

    • spacecadet3 hours ago
      This. What a joke. I'm still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.
  • vessenes3 hours ago
    Interesting. This is basically the second enforcement action on speech / images that France has taken - the first was Pavel Durov @ Telegram. He eventually made changes to Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.

    I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.

    LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.

    • tokai2 hours ago
      In what world is generating CSAM a speech issue? It's really doing a disservice to actual free speech issues to frame it as such.
      • logicchains2 hours ago
        The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
        • cwillu an hour ago
          If libeling real people is a harm to those people, then altering photos of real children is certainly also a harm to those children.
          • whamlastxmas6 minutes ago
            I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do)

            Libel must be an assertion that is not true. Photoshopping or AIing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this is true?", which is perfectly legal.

        • tokai2 hours ago
          That's not what we are discussing here. Even less so when a lot of the material here consists of edits of real pictures.
        • duckbilled23 minutes ago
          [dead]
    • StopDisinfo910 2 hours ago
      Very different charges however.

      Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.

      X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.

      Note that both are directly related to direct violations of data safety law or to association with separate criminal activities; neither is about speech.

    • derrida3 hours ago
      I wouldn't equate the two.

      There's someone who was being held responsible for what was in encrypted chats.

      Then there's someone who published depictions of sexual abuse and minors.

      Worlds apart.

      • cbeach3 hours ago
        Unlike with Clinton, Gates et al., there is ZERO evidence that Musk ever visited the island, although he was invited by Epstein.

        If you're going to make serious accusations like that you're going to need to provide some evidence.

        • techblueberry3 hours ago
          In November 2012, Epstein sent Musk an email asking “how many people will you be for the heli to island”.

          “Probably just Talulah and me. What day/night will be the wildest party on your island?” Musk replied, in an apparent reference to his former wife Talulah Riley.

          https://www.theguardian.com/technology/2026/jan/30/elon-musk...

          I think there's just as much evidence Clinton did as Musk. Gates on the other hand.

          • antonymoose2 hours ago
            To my knowledge Musk asked to go but never actually went. Clinton, however, went a dozen or so times with Epstein on his private jet?

            Has the latest release changed that narrative?

            • whamlastxmas4 minutes ago
              Additionally, Clinton is listed several times on the Lolita Express flight logs; Elon never was.

              Elon didn't ask to go; he was invited multiple times.

            • orwin an hour ago
              Yes. He went at least once in 2012, then asked to go again in 2013 and Epstein refused.
              • rsynnott an hour ago
                Oof.
            • lawn2 hours ago
              Musk did ask to go after Epstein was sentenced.
        • rsynnott2 hours ago
          ... Eh? This isn't about Musk's association with Epstein, it's about his CSAM generating magic robot (and also some other alleged dodgy practices around the GDPR etc).
    • logicchains2 hours ago
      >but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard

      Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.

      • moolcool38 minutes ago
        I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".
      • AureliusMA an hour ago
        This is precisely the point of the comment you are replying to: a balance has to be found and enforced.
    • btreecat3 hours ago
      >but I do really like a heterogeneous cultural situation

      Why isn't that a major red flag exactly?

  • afavour3 hours ago
    I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France are the only ones doing this.

    (it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)

    • rsynnott2 hours ago
      > If anything it should be an embarrassment that France are the only ones doing this.

      As mentioned in the article, the UK's ICO and the EC are also investigating.

      France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.

    • cbeach3 hours ago
      > when notified, doing nothing about it

      When notified, he immediately:

        * "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo 
      
        * locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
      
      Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
      • afavour3 hours ago
        You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:

        https://www.bbc.com/news/articles/c98p1r4e6m8o

        > Have the other AI companies followed suit? They were also allowing users to undress real people

        No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.

        • bonesss2 hours ago
          The part of X's reaction to their own publishing that I'm most looking forward to seeing in slow motion in the courts and the press is their attempt at agency laundering by having their LLM generate an apology in the first person.

          Sorry I broke the law. Oops for reals tho.

      • derrida3 hours ago
        The other LLMs probably don't have the training data in the first place.
      • techblueberry3 hours ago
        [flagged]
    • gulfofamerica3 hours ago
      [dead]
  • pu_pe3 hours ago
    I suppose those are SpaceX's offices now that they merged.
    • omnimus3 hours ago
      So France is raiding offices of US military contractor?
      • mkjs2 hours ago
        How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?

        The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.

      • hermanzegerman an hour ago
        I know it's hard for you to grasp, but in France, French law and jurisdiction apply, not those of the United States.
      • fanatic2pope an hour ago
        Even if it is, being affiliated with the US military doesn't make you immune to local laws.

        https://www.the-independent.com/news/world/americas/crime/us...

  • Altern4tiveAcc an hour ago
    > Prosecutors say they are now investigating whether X has broken the law across multiple areas.

    This step could have come before a police raid.

    This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

    • moolcool an hour ago
      > This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

      The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

    • aaomidi an hour ago
      Lmao, they literally made a broadly accessible CSAM maker.