173 points by pavel_lishin 8 hours ago | 18 comments
  • wnevets 7 hours ago
    Saying cisgender is bad on Twitter but CSAM is not. Very weird.
    • bayarearefugee 6 hours ago
      Not remotely surprising, nor sadly is the fact that hacker news flag-killed this story.
      • labrador 6 hours ago
        It's not Hacker News. It's Musk fans on HN. The article is flagged (anyone can do it) but not dead. My reasonable comment elsewhere in this thread was also flagged by Musk fans but it's still alive.
        • kccoder 2 hours ago
          > it’s not hacker news…

          It’s not, but they could fix the issue by raising the flagging threshold for Musk-related posts.

        • mdhb 4 hours ago
          The mods absolutely endorse it though, so in that sense it very much is them. They tend to be extremely dishonest and evasive when confronted directly about it, but anyone who has an account here can see with their own lying eyes that this happens multiple times a day, every day, and it's simply not plausible that it's anything other than something they support.
          • immibis 4 hours ago
            The purpose of a system is what it does. If the system did something different from its purpose, they would change it. I'm sure it's also intentional there's no vouch button for posts. This will change once every high quality post is flagged to death.
            • latexr 4 hours ago
              > there's no vouch button for posts.

              There is. But seems like it’s only for [dead], not [flagged].

              • rootusrootus 2 hours ago
                I suspect dead usually means shadow ban, at least for comments, and vouch is a way to selectively show, through community support, a high-value comment from an otherwise abusive user. Whereas flagged is overt, already applies to just that one comment, and vouching in that case wouldn't really make logical sense. Unless we want people to be able to wage flag wars.
            • therobots927 2 hours ago
              Maybe it’s time for a flag strike
    • pesus 7 hours ago
      Makes complete sense when you view it through the lens of Musk's opinions.
      • ben_w 6 hours ago
        Even given Musk's opinions, he clearly understands that the general public doesn't want kids getting hurt. He demonstrated this by saying Trump was in the Epstein files, by repeatedly saying the UK government isn't doing enough to stop child abuse, and by opining about a UK civil war.

        His hypocrisy, his position on the main-character-syndrome-to-narcissism spectrum, him getting a kick out of trolling everyone, or him having straight up psychopathy: whatever it is, I find I no longer care.

        • nullocator 6 hours ago
          > he clearly understands that the general public doesn't want kids getting hurt

          This may be giving him too much credit; the only thing we actually know is he thinks being accused of being a pedophile is bad. We know this because he's done it to several people, and flips his shit when it happens to him or his platform. He doesn't actually seem to care about pedophiles or pedophilia, given his ongoing relationships with people he's accused.

          • ben_w 6 hours ago
            Mm. Took me a moment to see your point there, but I think you're right.

            If he's only operating on the impact of the words, and ignoring the existence of an observable testable shared reality behind the words, then yes, accusations (either direction) are more damaging in his mind than being seen to support or oppose whatever.

            Which is, ironically, a reason to *oppose* absolute freedom of speech: when words have power beyond their connection to reality, the justifications fall short. But like I said, I don't care if his inconsistency is simple hypocrisy or something more complex, not any more.

            • parineum 5 hours ago
              > If he's only operating on the impact of the words..

              > Which is, ironically, a reason to oppose absolute freedom of speech...

              Since the former theory of mind can't explain the latter behavior, I guess it's wrong then, right?

              • ben_w 5 hours ago
                Please elaborate. Note especially that people on the internet loudly disagree about whether Musk's behaviour supports or suppresses freedom of expression, and I have no way to guess your position without spending a lot of time diving into your comment history (a superficial glance didn't disambiguate).
    • Zigurd 7 hours ago
      Fits the pattern. One of the two F-words will get your post flagged faster than the other.
      • moogly 7 hours ago
        Full Self Driving?
    • ekjhgkejhgk 7 hours ago
      Concerning.
    • therobots927 7 hours ago
      Well given Musk’s extensive connections to the Epstein network, and his hatred for his trans daughter, I wouldn’t say it’s “weird” in the sense that it’s unexpected.

      Edit: to the bots downvoting me - prove me wrong. Prove either of the above statements wrong.

    • nailer 5 hours ago
      CSAM is absolutely not OK on X and Musk has stated so explicitly and repeatedly.
      • array_key_first 4 hours ago
        Doesn't matter what he says, or what anyone says actually. His actions demonstrate it is okay, and, since he is the CEO of X and undoubtedly aware of these issues, we have no choice but to conclude he supports CSAM on X.
        • nailer 4 hours ago
          When Elon took over X in 2022 he declared CSAM the number 1 priority.

          11M+ X accounts were suspended for CSE violations in 2023 (vs 2.3M on Twitter in 2022).

          X has recently made the penalty for prompting for CSAM the same as uploading it.

          You could find this out yourself very easily.

          • array_key_first 4 hours ago
            This has not meaningfully prevented CSAM generated by Grok. There are simple and trivial ways to stop it outright, including just shutting down Grok. Nobody is doing this, because they don't want to.
          • kccoder 2 hours ago
            You can't trust anything Elon says, nor take it as factual or indicative of his desires.

            Recent evidence and behaviors trump past behavior.

          • immibis 4 hours ago
            And then he gave everyone a bot that makes CSAM. You could find this out for yourself very easily.
      • cmxch 3 hours ago
        Yet it seems to be fine for BlueSky, where their first priority is to create a hermetically sealed opinion chamber at scale, then pay attention to the law.
  • kylecordes 7 hours ago
    Obviously anybody can post gross things by running an image generation/editing tool locally and publishing the results. People then mostly blame the poster under whose name it appears.

    Seems like a pointless and foolish product design error for X/grok to publish arbitrary image generation results under its own name. How could you expect that to go anything but poorly?

    • gizmo686 7 hours ago
      It's not just a matter of publishing it under its own name. It also massively reduced the friction to do so compared to needing to run the image through an external tool and upload the result. That friction would greatly reduce the number of people who do it.
      • scoofy 6 hours ago
        In the US, it used to be that if you made credible threats against people you could/would be prosecuted. Social media made it so common that no district attorney goes to the trouble of actually finding and prosecuting people for doing this.

        We can expect the same level of institutional breakdown with regards to various types of harassment, misappropriation, libel, and even manufactured revenge porn from AI.

      • nickthegreek 4 hours ago
        It's even worse, as the requestor doesn't vet and approve the image. That seems to have removed editorial control from the requestor. This bot could also mess with users who are not trying to do bad thing X, but the black box bot decides to throw in some off-putting stuff and then also associates your name with it.

        I keep coming to the same conclusion with X as they did in the '80s masterpiece WarGames.

  • int32_64 7 hours ago
    Is there any way to provide a service where an image manipulation bot is mentioned in social media replies and it doesn't lead to total chaos?

    From what I saw the 'undressing' problem was the tip of the iceberg of crazy things people have asked Grok to do.

    • ben_w 7 hours ago
      > Is there any way to provide a service where an image manipulation bot is mentioned in social media replies and it doesn't lead to total chaos?

      It may be a failure of imagination on my part, but I can't imagine a bot limited to style transfer or replacing faces with corresponding emoji would cause total chaos.

      Even if someone used that kind of thing with a picture from an open-casket funeral, it would get tuts rather than chaos.

      > From what I saw the 'undressing' problem was the tip of the iceberg of crazy things people have asked Grok to do.

      Indeed. I mean, how out of touch does one have to be to look at Twitter and think "yes, this place will benefit from photorealistic image editing driven purely by freeform natural language, nothing could go wrong"?

  • option 7 hours ago
    This very clearly violates both Apple's App Store and Google's Play Store rules.

    Why is X still on the app stores?

    • drooby 7 hours ago
      Special treatment for big players.

      I have seen full blown porn on instagram too. Ads. Porn ads. They look exactly like porn ads on porn websites.

      • Liskni_si 6 hours ago
        I've seen porn in Google's own Chrome too.
    • netsharc 7 hours ago
      "Dear Email Recipient, I am a South African Billionaire with friends in powerful places..."
      • bayarearefugee 6 hours ago
        I agree with the sentiment, but nobody even needs to make these sorts of threats or asks anymore.

        It is all a well-defined implicit caste hierarchy at this point and anyone with enough net worth and a willingness to publicly fellate the orange dong gets a protected spot on the 2nd tier of the pyramid.

  • SilverElfin 8 hours ago
    It also made it harder to track. One way to see what Grok is doing is to look at the Grok account’s replies. So you can see the image it generates in response to someone - for example - undressing a woman who appears in a photo. You can then go visit THAT thread to see what the exchange with Grok was, which often would show a long series of lewd images. A few days ago, nearly the ENTIRE stream of the Grok account’s replies at any moment were deepfakes without consent. Mostly in response to women’s posts, but sometimes to generate racist attacks.

    I’m not against people using AI to generate a fantasy image for their own needs. I guess in a way it’s like what people imagine in their own heads anyways. But I do think it is problematic when you share those publicly because it can damage others’ reputation, and because it makes social media hostile to some groups of people who are targeted with misogynist or racist deepfakes. It may seem like a small problem but the actual final effect is that the digital public square becomes a space only for identity groups that aren’t harassed.

    • themafia 7 hours ago
      Well, Grok just automates what you can do by hand, and there's not much to stop me from drawing out these same types of images manually if I want.

      The problem is that doing this would get me banned. Shouldn't using Grok in this way get you banned similarly?

      • TheOtherHobbes 7 hours ago
        Automation makes it easy for everyone to do it, on demand.

        That's fundamentally different to "You can make this thing if you're fairly skilled and - for some kinds of images - have specialist tools."

        Yes, you should be banned for undressing people without consent and posting it on a busy social media site.

        • themafia 7 hours ago
          Why would I need to be skilled? Isn't the issue the content not the quality?
          • buellerbueller 7 hours ago
            The quality is absolutely part of the issue. Imagine the difference between a nude stick figure labeled your mom, and a photorealistic, explicit deepfake of your mom.

            Do you find the two equally objectionable?

            • XorNot 7 hours ago
              Well also in context the stick figure could still constitute sexual harassment.

              If a big-boobed stick figure with a label saying "<coworker name>" was being posted on your social media a lot, such that people could clearly interpret who you were talking about, there would be a case for harassment, but also you'd probably just get fired anyway.

              • latexr 6 hours ago
                Yes, but in that case everyone would understand the image is a crude depiction of someone (judging the poster) and not a real photograph (judging and embarrassing the target).
                • themafia 4 hours ago
                  Well, if we just guarantee that we put "AI Generated" at the bottom of those images, it will be clear they're not real photographs, and then this problem disappears?
                  • latexr 4 hours ago
                    It's impossible to guarantee that. As soon as you add that message, someone will build a tool to remove it. That's exactly what happened with OpenAI's Sora.
                    • themafia 2 hours ago
                      You're avoiding the question. Assume there is a technical solution that makes these generated images always obviously generated.

                      Where is the actual problem?

                      Is it that it's realistic? Or that the behavior of the person creating it is harassing?

                      This is pretty straightforward.

                      • latexr an hour ago
                        [dead]
      • TheOtherHobbes 7 hours ago
        Automation makes it easy for everyone to do it, on demand.

        That's fundamentally different to "You can make this thing if you're fairly skilled and - for some kinds of images - have specialist tools."

        Yes, you should be banned for undressing adults and kids without consent and posting it on a busy social media site.

      • mullingitover 7 hours ago
        Why? The people creating and operating the CSAM/revenge porn/deepfakes creation and distribution platform are the ones who are culpable. The users who are creating text prompts are just writing words.

        There's a frantic effort to claim Section 230 protection, but that doesn't protect you from the consequences of posting content all by yourself on the site you own and control.

        • themafia 4 hours ago
          > the CSAM/revenge porn/depfakes creation and distribution platform

          Which, in this case, is Twitter itself, no?

          > The users who are creating text prompts are just writing words.

          With highly specific intentions. It's not as if grok is curing cancer. Perhaps it's worth throwing away this minor distinction and considering the problem holistically.

          • mullingitover 4 hours ago
            > With highly specific intentions

            Intentions to pull the CSAM out of the server full of CSAM that twitter is running.

            Yes, you are making the flailing argument that the operators of the CSAM site desperately want to establish as the false but dominant narrative.

            If you have a database full of CSAM, and investigators write queries with specific intentions, and results show that there is CSAM in your database: you have a database full of CSAM. Now substitute 'model' for 'database.'

            • themafia 2 hours ago
              Grok enables their behavior.

              An investigator does not _create novel child porn_ in doing a query.

              You're making a fallacious argument.

              • mullingitover an hour ago
                > An investigator does not _create novel child porn_ in doing a query.

                And a prompt, without being aided and abetted by twitter, doesn't "create novel child porn" either. A prompt is essentially searching the space, and in the model operated by twitter it's yielding CSAM which is then being distributed to the world.

                If twitter were operating in good faith, even if this was the fault of its customers, it would shut the CSAM generator operation down until it could get a handle on the rampant criminal activity on its platform.

    • chrisjj 7 hours ago
      [flagged]
      • SilverElfin 4 hours ago
        Yea - like if you have someone depicted in some negative or compromising way.
  • ryandrake 7 hours ago
    The constant comparisons with Photoshop are so disingenuous. We all know what the difference is.

    If Adobe had a service where you could e-mail them "Please generate and post CSAM for me" and in response, their backend service did it and posted it, that's a totally different story than the user doing it themself in Photoshop. Come on. We all know about tech products here, and we can all make this distinction.

    Grok's interface is not "draw this pixel here, draw this pixel there." It's "Draw this child without clothing." Or "Draw this child in a bikini." Totally different.

    • 7952 7 hours ago
      And the service was designed by Grok, hosted by Grok, and you interact with it through systems controlled by Grok; at a surface level Grok makes the decisions and Grok makes the output. And it is quite possible that Grok knew that illegal image creation was possible. 99.9% of the work to make those images is within Grok.
      • inkysigma 6 hours ago
        Not to be too pedantic, but I think you mean Grok with a k. Groq with a q is a separate AI hardware company.
        • 7952 5 hours ago
          Thanks, changed that.
    • sebasv_ 5 hours ago
      I see at least 2 axes here:
      1. Should access to a tool be restricted if it is used for malice?
      2. Is a company complicit if its automated service is being used for malice?

      For 1, crowbars are generally available but knives and guns are heavily regulated in the vast majority of the world, even though both are used for murder as well as legitimate applications.

      For 2, things get even more complicated. E.g. if my router is hacked and participates in a botnet I am generally not liable, but if I rent out my house and the tenant turns it into a weed farm I am liable.

      Liability is placed where it minimises perceived societal cost. Emphasis on perceived.

      What is worse for society: limiting information access for millions of people, or allowing CSAM, harassment and shaming?

    • simianwords 7 hours ago
      how is it different? i don't get it.
      • inkysigma 6 hours ago
        It's the frictionless aspect of it. It requires basically no user effort to do some serious harassment. I would say there's some spectrum of effort that impacts who is liable, along with a cost/benefit analysis of possible safeguards. If users were required to give paragraph-long jailbreaks to achieve this and xAI had implemented ML filters, then I think there could be a more reasonable case that xAI wasn't being completely negligent here. Instead, it looks like almost no effort was put into restricting Grok from doing something ridiculous. The cost here is restricting AI image generation, which isn't necessarily that much of a burden on society.

        It is also much more difficult to put similar safeguards into Photoshop.

        • simianwords 6 hours ago
          i think you have a point but consider this hypothetical situation.

          you are in the 1400s, before the printing press was invented. surely the printing press can also reduce the friction to distribute unethical stuff like CP.

          what is the appropriate thing to do here to ensure justice? penalise the authors? penalise the distributors? penalise the factory? penalise the technology itself?

          • hypeatei 6 hours ago
            Photocopiers are mandated by law to refuse copying currency. Would you say that's a restriction of your free speech or too burdensome on the technology itself?
      • nkrisc 6 hours ago
        If curl is used by hackers in illegal activity, culpability falls on the hackers, not the maintainers of curl.

        If I ask the maintainers of curl to hack something and they do it, then they are culpable (and possibly me as well).

        Using Photoshop to do something doesn’t make Adobe complicit because Adobe isn’t involved in what you’re using Photoshop for. I suppose they could involve themselves, if you’d prefer that.

        • simianwords 6 hours ago
          so why is the culpability on grok?
          • immibis 4 hours ago
            Because Grok posts child porn, which is illegal. Section 230 doesn't apply, since the child porn is clearly posted by Grok.
      • latexr 6 hours ago
        You don’t understand the difference between typing “draw a giraffe in a tuxedo in the style of MC Escher” into a text box and getting an image in a few seconds, versus the skill and time necessary to do it in an image manipulation program?

        You don’t understand how scale and accessibility matter? That having easy cheap access to something makes it so there is more of it?

        You don’t understand that because any talentless hack can generate child and revenge porn on a whim, they will do it instead of having time to cool off and think about their actions?

        • simianwords 6 hours ago
          yes but the onus is on the person calling grok and not grok.
          • latexr 6 hours ago
            So, is it that you don’t understand how the two differ (which is what you originally claimed), or that you disagree about who is responsible (which the person you replied to hasn’t specified)?

            You asked one specific question, but then responded with something unrelated to the three people (so far) who have replied.

          • nickthegreek 4 hours ago
            why do you think that?
      • dpark 6 hours ago
        You could drive your car erratically and cause accidents, and it would be your fault. The fact that Honda or whoever made your car is irrelevant. Clearly you as the driver are solely responsible for your negligence in this case.

        On the other hand, if you bought a car that had a “Mad Max” self driving mode that drives erratically and causes accidents, yes, you are still responsible as the driver for putting your car into “Mad Max” mode. But the manufacturer of the car is also responsible for negligence in creating this dangerous mode that need not exist.

        There is a meaningful distinction between a tool that can be used for illegal purposes and a tool that is created specifically to enable or encourage illegal purposes.

  • elric 7 hours ago
    Wasn't this entirely predictable and inevitable? The genie is out of the bottle.

    Where can we realistically draw the line? Preventing distribution of this sort of shit is impossible; anyone can run their own generator. CSAM is already banned pretty much everywhere, and making money off it certainly is, but somehow Musk is getting away with distributing it at a massive scale. Is it because it's fake? And can we even tell whether it's still fake? Do we ban profiting from fake porn? Do we ban computing? Do we ban unregulated access to generative AI?

    X/Grok is an attractive obvious target because it's so heinous and widespread, but putting the axe on them won't make much of a difference.

    • Jordan-117 7 hours ago
      How about we start at "not enabling users to directly generate nonconsensual porn of other users using your platform and then posting it as a reply to their content"?
    • johnnyanmac 7 hours ago
      >CSAM is already banned pretty much everywhere, and making money off it certainly is, but somehow Musk is getting away with distributing it at a massive scale. Is it because it's fake?

      It's because law is slow and right now the US government is completely stalled out in terms of performing its job (thanks in part to Musk himself). Things will eventually catch up but it's simply the wild west for the next few years.

      • elric 6 hours ago
        Is government intervention even necessary? IANAL, and I don't know shit about US law, but if this crap on X is illegal, then surely the courts can handle this and ban/fine/jail the responsible parties?

        If this isn't illegal, then sure, government intervention will be required, laws will have to be amended, etc. Until that happens, what are realistic options? Shaming the perps? A bit of hacktivism?

        • pseudalopex 3 hours ago
          > Is government intervention even necessary? IANAL, and I don't know shit about US law, but if this crap on X is illegal, then surely the courts can handle this and ban/fine/jail the responsible parties?

          Government includes the courts in America. And prosecutors are part of the executive branch in the US.

        • array_key_first 4 hours ago
          The argument is that X themselves are responsible, because they make it extremely easy and do the service for you. It's different.

          Like, if I sell a gun and you go and shoot someone I'm not necessarily responsible. Okay, makes sense.

          But if I run a shooting range and I give you zero training and don't even bother to put up walls, and someone gets shot, then I probably am responsible.

          That might mean something like Grok cannot realistically run at scale. I say good riddance and who cares.

        • johnnyanmac 6 hours ago
          The federal courts in the US are theoretically non-partisan. But 2025 has shown that the Department of Justice has functioned as Trump's personal legal counsel. That's even happening as we speak with the kerfuffle over the central bank (another non-partisan organization that Trump is desperately trying to make partisan).

          As is, Musk probably isn't going to get confronted by this current DoJ. The state courts may try to take this up, but it has less reach than the federal courts. Other country's courts may take action and even ban X.

          >what are realistic options? Shaming the perps? A bit of hacktivism?

          Those can happen. I don't know how much it moves the needle, but those will be inevitable reactions. The only way out for the American people would be to mass boycott X over this, but our political activism has been fairly weak. Especially for software.

    • asplake 7 hours ago
      What AI can generate, AI can detect. It is well within the power of the social media companies to deal with this stuff. It’s not crazy to hope that hitting X has a meaningful effect not only on them, but also the others.
  • porridgeraisin 6 hours ago
    There is a real reason though. Limiting it to verified users is the easiest way to have KYC on everyone generating images. That way they can respond to legal requests with the KYC of the account that asked Grok to undress a minor.
  • api 7 hours ago
    What happens if you ask it to undress Elon Musk? Not saying someone with a Xhitter account to burn should do this, but not not saying they should do it.
    • skrebbel 6 hours ago
      I'm only on Twitter once every 2 months but last time I checked the place was absolutely overflowing with images of Musk in a bikini.
    • cmxch 3 hours ago
      Or to give Keir Starmer a Borat-esque outfit?
  • labrador 7 hours ago
    [flagged]
    • nemomarx 7 hours ago
      Grok generates photorealistic imagery of young girls too, so I don't think the anime distinction is the main one here. He might think AI-generated photos aren't as real, yeah.

      But I think mostly musk just acts like all laws don't apply to him - regulations, property lines, fines, responsibility to anyone else.

    • bayarearefugee 7 hours ago
      > It's a symptom of rich people thinking the rules don't apply to them.

      I would argue it's a symptom of rich people knowing the rules don't apply to them.

      Arguably the rules never really have applied to them, but now they don't even bother to pretend they do.

      • labrador 7 hours ago
        You're right of course. Thanks for the correction.
  • bediger4000 7 hours ago
    [flagged]
    • RankingMember 7 hours ago
      I know you're joking but I have seen genuine comments like this!
    • mullingitover 7 hours ago
      You’re going to be excited to learn that there are private for-profit prisons in many of the jurisdictions where X executives and employees are committing the felony of creating and distributing CSAM and explicit deepfakes.
    • miltonlost 7 hours ago
      How does a "market-based solution" work for ending Grok's peddling of a child pornography and revenge porn generator?
    • i80and 7 hours ago
      [flagged]
    • buellerbueller 7 hours ago
      [flagged]
  • charcircuit 7 hours ago
    [flagged]
    • tantalor 7 hours ago
      Bad faith comment, intentionally misunderstanding the scope and nature of the problem.
  • nailer 7 hours ago
    From Wired's original article, at https://archive.is/https://www.wired.com/story/grok-is-pushi...

    > Every few seconds, Grok is continuing to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbots’ publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits and in various levels of undress were published by Grok in under five minutes, analysis of posts show.

    ChatGPT and Gemini also do this: https://x.com/Marky146/status/2009743512942579911?s=20

    • hairofadog 6 hours ago
      Real question as I don't use ChatGPT or Gemini: They publish images of women in bikinis or underwear in response to user prompts? Where do they publish them? I'm looking at Gemini and I don't see any sort of social aspect to it. I just tried the prompt "picture of a dog" and I don't see any way that another person could see it unless I decided to publish it myself.
      • plagiarist 6 hours ago
        For this particular one it seems to be that you @grok under a posted image with a request for modifications and that account posts the modified image as a reply.
        • hairofadog 6 hours ago
          Right, that seems to me like an important distinction. Other people in this thread have said things like "Well, you can draw people in a bikini with a pencil without their permission! Should we ban pencils, too!?" Honestly, if someone wants to be weird and draw bikini pictures of journalists they don't like AND KEEP IT TO THEMSELVES, then whatever, I guess. That's not what this is. Grok is creating the images. Grok is publishing the images. Grok is harassing the subjects of the images by posting them in their replies. Neither ChatGPT, Gemini, nor pencils are doing that. (And that doesn't even get into the CSAM aspect.)

          One of the many reasons I prefer Claude is that it doesn't even generate images.

    • nailer 5 hours ago
      The moderation here is surprising. I'm not really bothered by it, but in case it's unclear: in the comment above I'm just adding context, not endorsing or disavowing bikinis.
    • skywhopper 6 hours ago
      Do they post those images to Twitter under their corporate accounts?
    • samesamebut 7 hours ago
      [flagged]
  • standardUser 7 hours ago
    How long can we keep trying to put a finger in this particular dike? In five years, most people will be able to run a local LLM capable of whatever nefarious purposes they choose.
    • taurath 7 hours ago
      Is it not illegal defamation to have gen-AI post a deepfake in public? Photoshop existed before. Is it not illegal to post CSAM, regardless of where it comes from?

      No other company would touch this sort of thing - they’d be unable to make any money, their payment providers would ban them, their banks would run away.

      • bigstrat20037 hours ago
        > Is it not illegal to post CSAM, regardless of where it comes from?

        This is a great example of why "CSAM" is a terrible term and why CP was/is better. If you generate pornographic images of children using an AI tool it is by definition not CSAM, as no children were sexually assaulted. But it is still CP.

        • 7 hours ago
          undefined
        • taurath6 hours ago
          Fine, call it what you want, but CP of or appearing to be of real children is illegal. There’s some grey area for drawn stuff it seems, but at a certain point there IS a line.

          Also, what changed? Over the past 20 years, even hosting stuff like this, or any pornography whatsoever, would get you pulled from every app store and shut down by every payment provider. Now it's just totally fine? To me that's a massive change, decided entirely by Elon Musk.

        • 7 hours ago
          undefined
        • twosdai7 hours ago
          > no children were sexually assaulted

          Generating naked pictures of a real child is assault. Imagine finding naked photos of yourself as a child being passed around online. It's extremely unpleasant, and it's assault.

          If you're arguing that generating a "fake child" is somehow significantly different, and that you want to split hairs over the CSAM/CP term in that specific case: it's not a great take, to be honest. People understand CSAM; actually verifying whether it's a "real" child or not is not really relevant.

          • johnnyanmac6 hours ago
            >actually verifying if its a "real" child or not, is not really relevant.

            It's entirely relevant. Is the law protecting victims or banning depictions?

            If you try to do the latter, you'll run head first into the decades long debate that is the obscenity test in the US. The former, meanwhile, is made as a way to make sure people aren't hurt. It's not too dissimilar to freedom of speech vs slander.

            • ben_w6 hours ago
              > Is the law protecting victims or banning depictions?

              Both. When there's plausible deniability, it slows down all investigations.

              > If you try to do the latter, you'll run head first into the decades long debate that is the obscenity test in the US. The former, meanwhile, is made as a way to make sure people aren't hurt. It's not too dissimilar to freedom of speech vs slander.

              There's a world outside the US, a world of various nations which don't care about US legal rulings, and which are various degrees of willing-to-happy to ban US services.

              • johnnyanmac5 hours ago
                >There's a world outside the US

                Cool, I'm all for everyone else banning X. But sadly it's a US company subject to US laws.

                I'm just explaining why anyone in the US who would take legal action may have trouble without making the above distinction

                Definitely a core weakness of the Constitution. One that assumed a lot of good faith in its people.

            • twosdai5 hours ago
              It, the difference between calling child pornographic content CP vs CSAM, is splitting hairs. Call it CSAM; it's the modern term. Don't try to create a divide on terminology due to an edge case in some legal code interpretations. It doesn't really help, in my opinion, and is not a worthwhile argument. I understand where you are coming from on a technicality, but the current definition "fits" well enough. So why make it an issue? As an example, consider the following theoretical case:

              A lawyer and a judge are discussing a case, using the term CSAM, and need to argue about the legality of, or the issue of, the child being real or not. What help is it in that moment to use CP vs CSAM? I don't really think it changes anything. In both cases the lawyer and judge would still need to clarify for everyone that, presumably, the person is not real. So an acronym change on this point is still not a great take. It's regressive, not progressive.

              • johnnyanmac5 hours ago
                >It, the difference between calling child pornographic content cp vs CSAM, is splitting hairs.

                Yes, and it's a lawyer's job to split hairs. Upthread was talking about legal action, so being able to distinguish the terms changes how you'd attack the issue.

                > What help is it in this situation to use CP vs CSAM in that moment. I dont really think it changes things at all.

                I just explained it.

                You're free to have your own colloquial opinion on the matter. But if you want to discuss law you need to understand the history of the topic, especially one as controversial as this. These are probably all tired talking points from before we were born, so while it may seem novel or insignificant to us, this is language that has made or broken cases in the past. Cases that will be used as precedent.

                >So an acronym change on this point to me is still not a great take. Its regressive, not progressive.

                I don't really care about the acronym. I'm not a lawyer. A duck is a duck to me.

                I'm just explaining why in this legal context the wording does matter. Maybe it shouldn't, but that's not my call.

          • XorNot6 hours ago
            It's also irrelevant to some extent: manipulating someone's likeness without their consent is also antisocial, in many jurisdictions illegal, and doing so in a sexualized way making it even more illegal.

            The children aspect just makes a bad thing even worse, and seems to thankfully get some (though not enough, IMO) people to realize it.

    • riffraff7 hours ago
      I guess the difference is that one can point to the ToS and say "look we said no deepfakes" and block you if you upload a deepfake produced locally, but not if you use the built-in deepfake generator.
    • lenerdenator7 hours ago
      That's their business.

      If there's a business operating for profit (and Twitter is, ostensibly) and their tool posts pictures of me undressed, then I am going to have a problem with it.

      And I'm just some dude. It probably means a lot more for women who are celebrities.

      "It's inevitable" isn't an excuse for bad corporate or personal behavior involving technology. Taken to its logical conclusion, we're all going to die, so it's just executing on the inevitable when someone is murdered.

      • standardUser7 hours ago
        The only excuses for bad corporate behavior are bad corporate laws and weak enforcement.
        • yndoendo6 hours ago
          How do you get proper laws passed when the politicians are bought and paid for by the same corporations?

          In the USA ... a company can declare bankruptcy and shed its debts and liabilities, while a person cannot shed most debt after declaring bankruptcy. [0] [1] USA politicians favor companies over people.

          I personally support new corporate laws similar to California's three-strikes law. Instead of allowing companies to budget for fines, the CEO and executives would go to jail, with the corporation being broken up after habitually breaking the same laws.

          [0] https://hls.harvard.edu/today/expert-explains-how-companies-...

          [1] https://thenewpress.org/books/unjust-debts/

        • lenerdenator7 hours ago
          That's often because of regulatory capture.
    • badgersnake7 hours ago
      And it’ll still quite rightly be illegal.
    • buellerbueller7 hours ago
      So, because someone could hypothetically abuse their own child, we should stop trying to thwart child trafficking? Is that your line of argumentation, because if not, I don't understand what you are saying.
      • standardUser7 hours ago
        I am saying that half-assed measures that rely on the imaginary goodwill of megacorps are getting us nowhere, fast.
        • freejazz7 hours ago
          They don't need to have good will, they just need to not do the bad thing.
          • standardUser4 hours ago
            Explain why any for-profit enterprise would ever take any action that wasn't in its own interest, unless compelled by law.
    • dmitrygr7 hours ago
      [flagged]
  • dpc0505057 hours ago
    Another question that should be asked is what culturally drives people to want to create lewd content of children and what should we change so that it stops happening? Obviously platforms should have safeguard against child porn and misogynistic defamation, but as a society we also need cultural changes so that people don't become pedophiles and sexists and that the ones that do get help with their glaring issues.
    • johnnyanmac7 hours ago
      It's like asking "what draws people to murder?" At some level, on the scale of billions of humans, there are simply going to be morally corrupt people, be it from clinical sickness or local conditioning. We can't "save" every person in this regard.

      Society can create disincentives, but not cures.

    • UncleMeat5 hours ago
      A very large portion of the harassment via these images is very very obviously motivated by humiliating people.

      The point of telling grok to comment on a thread by a woman with an image of her with her clothes off, on all fours, and covered in what appears to be semen is to hurt her. It is an act of domination. She can either leave the platform or be forced to endure a process that repeatedly makes her into a literal sex object as she uses it. Discussing something related to your professional work? Doesn't matter. There's now an image in the thread of this shit.

      This is rape culture. There is no other word for it.

      Gender theorists have been studying this very question for decades. But you'll regularly find this community shitting on that entire field of study even though I'm not sure if it is has ever been more relevant than it is today.

    • 79527 hours ago
      Why do you think it is culture specifically?
    • BrenBarn7 hours ago
      I don't think we're ever going to be able to eliminate various sorts of harmful desires and urges.
    • XorNot7 hours ago
      Well I'd say one obvious solution would be to punish a company which deploys a tool to enable turn key sexual harassment at scale.

      Might send a good message about consent ya know?

      • rstupek5 hours ago
        I think it's naïve to believe punishing a company will have any effect on the situation.
        • array_key_first4 hours ago
          It will tangibly lead to less CSAM on the internet, so yes, it will have an effect.

          Obviously we can't just - poof - make people not child molesters or not murderers. But that doesn't mean we should sit on our asses and do nothing.

  • kgrizel7 hours ago
    The same old misleading headline yet again. Grok continues to be free; whatever that X message said, it has so far had no impact on the Grok web and mobile apps. They just switched off the part of their integration with X that let people generate images.

    You can of course pay for Grok if you like, but that just buys you bigger quota (up to 50 videos a day is free), not new capabilities or less censorship.

  • tantalor7 hours ago
    For anyone unaware/uninformed, take a look at https://www.reddit.com/r/grok

    Warning: it's quite gross

  • porphyra7 hours ago
    Obviously I think that AI generated undressing pictures of people, especially minors, is bad and there should be safeguards against that. But how is it different from other tools like doing it manually with photoshop? Also it has been shown that many other tools like ChatGPT and Gemini/Nanobanana can also do it with sufficiently creative prompting.

    I also did scroll through the public grok feed and the AI generated bikini pics were mostly Onlyfans creators requesting their own fans to generate these pictures (or sometimes generating them themselves).

    • cptaj7 hours ago
      You know the answer to this but I'll just say it: it's different in that it requires no skill and can be done by anyone, instantaneously, at scale.

      You know this but somehow are rationalizing this game changing fact away.

      Yes, people can draw and photoshop things. But it takes time, skill, dedication, etc. That time cost is load-bearing in how society deals with the tools it has, for the same reason that, at the extreme, kitchen knives have different regulations than nuclear weapons.

      It is also trivially easy for Grok to block this usage for the vast majority of offenders, by using the same LLM technology they already have to classify content created by their own tools. Yes, it could get jailbroken, but that requires skill, time, dedication, etc., and it can be rapidly patched, greatly mitigating the scale of abuse.
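      A classify-before-posting gate like the one described above is straightforward to sketch. A minimal, hypothetical version in Python follows; the classifier here is a keyword stub standing in for a real vision/moderation model, and all names are illustrative, not any actual platform API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationResult:
    """Outcome of running generated content through a classifier."""
    flagged: bool
    category: Optional[str] = None


def classify(image_description: str) -> ModerationResult:
    # Stub classifier: a real deployment would send the generated image
    # itself to a moderation model. Keyword matching on the description
    # merely stands in for that model's output here.
    blocked_categories = ("minor", "undress", "nude")
    lowered = image_description.lower()
    for category in blocked_categories:
        if category in lowered:
            return ModerationResult(flagged=True, category=category)
    return ModerationResult(flagged=False)


def publish_if_safe(image_description: str) -> str:
    """Gate publication: flagged content never reaches the reply thread."""
    result = classify(image_description)
    if result.flagged:
        return f"blocked ({result.category})"
    return "published"
```

      The point of the sketch is architectural rather than the stub itself: the gate sits between generation and posting, so even a successful jailbreak of the generator still has to get past a separately patched classifier before anything is published.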

    • Y-bar7 hours ago
      > But how is it different from other tools like doing it manually with photoshop?

      The scale of effect and the barrier to entry. Both are orders of magnitude easier and faster. It would take hours of patience and work to create even one convincing fake using Photoshop, after you had spent the time and money to acquire the tool and learn it. This creates a natural, large moat around the creation process. With Grok it takes a minute at most, with no effort or energy needed.

      And then there is the ease of distribution to a wide audience; X/Grok handles that for you by automatically giving you an audience of millions.

      It’s like with guns. Why prevent selling weapons to violent offenders when they could just build their own guns from high quality steel, a precision drill, and a good CNC machine? Scale and barrier to entry are real blockers for a problem to mostly solve itself. And sometimes a 99% solution is better than no solution.

      • porphyra6 hours ago
        This thing with guns was a legitimate argument for banning or regulating 3D printers a few years ago though and I'm glad that we didn't end up with restrictions on that front. With affordable desktop CNC machines capable of making metal parts coming soon, I hope those won't be subject to too many restrictions also.
    • Permit7 hours ago
      > Obviously I think that AI generated undressing pictures of people, especially minors, is bad and there should be safeguards against that.

      It's not obvious to me that this is your position. What safeguards do you propose as an alternative to those discussed in the article?

      • porphyra7 hours ago
        I am for moderation and strong penalties for users that use it in that manner. Anyone who uses grok to generate an undressing image of someone without their consent within 5 seconds should probably go to jail or whatever the penalty is for someone spending 5 hours to create revenge porn with photoshop.

        But I'm not sure if the tool itself should be banned, as some people seem to be suggesting. There are content creators on the platform that do use NSFW image generation capabilities in a consensual and legitimate fashion.

        • Zigurd7 hours ago
          Photoshop is a productivity tool, and the pricing supports that assertion.
    • kranke1557 hours ago
      Grok is much, much less censored, on purpose. I work in image editing, and outside of very few people, hardly anyone uses Grok for professional work. Nano Banana Pro is used for the most part.

      But for NSFW work it dominates. It’s clearly deliberate.

    • Jordan-1177 hours ago
      The Photoshop equivalent would be "an Adobe artist does the photoshop for you and then somehow emails it directly to your target and everyone who follows them."
    • array_key_first4 hours ago
      How is the atomic bomb different than me going to a foreign country and manually stabbing 500,000 people in the throat?

      I would say lots of ways. And that's probably why I have a few knives, and zero atomic bombs.

    • drawfloat7 hours ago
      Drawing indecent photos of children with Photoshop is also illegal in lots of countries and any company creating them for profit would be liable.
      • ekjhgkejhgk7 hours ago
        Exactly. And the fact that companies do it with impunity is another hint that we're living in late stage capitalism.

        If an individual invented a tool that can generate such pictures, he'd be arrested immediately. A company does it, it's just a woopsie. And most people don't find this strange.

    • Waterluvian7 hours ago
      I think intent probably matters and that this gets into the "you know it when you see it" definition realm where we debate the balance between freedom of speech and security of person. ie. just how easy Photoshop, a VCR, a DVD burner app, etc. makes it for you to crime and how much are they handholding you towards criming?

      I think this is an important question to ask despite the subject matter because the subject matter makes it easy for authorities to scream, "think of the children you degenerate!" while they take away your freedoms.

      I think Musk is happy to pander to and profit from degeneracy, especially by screaming, "it's freedom of speech!" I would bet the money in my pocket that his intent is that he knows this stuff makes him more money than if he censored it. But he will of course pretend it's about 1A freedoms.

    • cosmic_cheese7 hours ago
      Friction/barrier to entry is the biggest difference. People generally didn't do things like that before due to a combination of it being a colossal waste of time and most not having the requisite skills (or will and patience to acquire said skills). When all it takes is @mentioning a bot, that friction is eliminated.
    • 6 hours ago
      undefined
    • caconym_6 hours ago
      How is having cameras on every street corner that identify you based on your face and height and weight and gait and the clothes you're wearing and anything you're carrying, or the car you're driving by its license plate and make and model and color and tires/rims and any visible damage, accessories, etcetera, and taking all these data points and loading them into a database that cross-correlates them with your credit bureau data and bank records and purchase history and social media and other online activity and literally every single other scrap of available data everywhere, and builds a map of everything about you and everywhere you ever go and everything you do and have ever done, makes it trivially queryable by any law enforcement officer in the country with or without a valid reason, retains it all in perpetuity, and does all this for every single person in the country without consent or a warrant issued by a judge, different from a police department assigning an officer to tail you if you are suspected of being involved in a crime?

      We are going to be in some serious fucking trouble if we can't tackle these issues of scale implied by modern information technology without resorting to disingenuous (or simply naive) appeals to these absurd equivalences as justification for each new insane escalation.

    • ImPostingOnHN6 hours ago
      If you generate CSAM, whether using LLMs, photoshop, or any other tool, you are breaking the law. This would apply if you could somehow run Grok locally.

      When you use a service like Grok now, the service is the one using the tool (Grok model) to generate it, and thus the service is producing CSAM. This would also apply if you paid someone to use Photoshop to produce CSAM: they would be breaking the law in doing so.

      This is setting aside the issue of twitter actually distributing the CSAM.

    • kllrnohj7 hours ago
      > But how is it different from other tools like doing it manually with photoshop?

      Last I checked, Photoshop doesn't have an "undress this person" button. "A person could do a bad thing at a very low rate, so what's wrong with automating it so bad things can be done millions of times faster?" Like, seriously? Is that a real question?

      But also I don't get what your argument is, anyway. A person doing it manually still typically runs into CSAM or revenge porn laws or other similar harassment issues. All of which should be leveraged directly at these AI tools, particularly those that lack even an attempt at safeguards.

    • XorNot7 hours ago
      For one liability: the person doing the Photoshop is the one liable for it, and it was never okay to do this without consent.
    • etchalon7 hours ago
      "Technically, anyone can make napalm at home. What's wrong with Walmart selling it?"
    • pphysch7 hours ago
      The obvious problem is that Grok is also distributing the illegal images.

      This could be easily fixed by sending the generated images through private Grok DMs or something, but that would harm the bottom line. Maybe they will do that eventually, once they have milked enough subscriptions from the "advertising".

      • simianwords7 hours ago
        Why does it matter if grok is advertising or you are advertising? In reality there's no difference. It's just a tool you can invoke.
    • glemion437 hours ago
      It circumvents basic laws in plenty of countries by giving kids access to these tools.

      That part could easily be solved by basic age verification.

      The CSAM stuff, though, needs to be filtered and fixed, as it breaks laws, and I'm not aware of anything that would make it legal, luckily.