> First, Riehl did not and could not reasonably read ChatGPT’s output as defamatory. By its very nature, AI-generated content is probabilistic and not always factual, and there is near universal consensus that responsible use of AI includes fact-checking prompted outputs before using or sharing them. OpenAI clearly and consistently conveys these limitations to its users. Immediately below the text box where users enter prompts, OpenAI warns: “ChatGPT may produce inaccurate information about people, places, or facts.” Before using ChatGPT, users agree that ChatGPT is a tool to generate “draft language,” and that they must verify, revise, and “take ultimate responsibility for the content being published.” And upon logging into ChatGPT, users are again warned “the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.”
Separately, it's broadly correct that there is no Section 230 argument to be made. "Everyone" knows that Section 230 doesn't apply here; I can't find anyone seriously arguing that it would.
0: https://storage.courtlistener.com/recap/gov.uscourts.gand.31...
Ashley MacIsaac made waves in the nineties for being openly gay, and he paid for it for years. I vividly recall sitting around a barroom table in the late nineties, listening to this specific slander. We knew it was slander, though, because there was no evidence. We had no machine yet to confabulate it.
This is what we anglos do to our men who prefer men. We did it with Wilde, and with Turing, and we did it with MacIsaac, and we are doing it even harder in 2026 than in 1996, because what we called freedom is now called "woke", and what was called dictatorship is now called "freedom".
And you're next, dear reader.
Therein lies the rub. Google does not control what its parrot spouts. No-one does.
This is exactly why Google's public comment on this case from the TFA is:
> "AI Overviews frequently improve to show the most helpful information, and we invest significantly in the quality of responses. When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve our systems and may take action under our policies."
Google's statement is carefully crafted to establish that they "act with reasonable care" for legal effect, rather than to score points in the court of public opinion. Courts have yet to determine what passes the reasonable-care test for negligence with respect to AI output. Google needs to make sure that, regardless of anything else that happens in this case, the decision does not find its publishing negligent.
Doesn’t work with APIs, but then the person/entity integrating the API should have that responsibility.
Companies that get treated with the rights of people should also have the responsibilities of people. Google designed, built, hosted, and promoted their LLM prominently. Logically, it follows that they should be personally and financially responsible for any harms their LLM causes.