19 points by forthwall 10 hours ago | 6 comments
  • roenxi an hour ago
    > If your chatbot decides to tell your customer to kill themselves, it's your problem.

    I don't think the argument is that the AI made it OK, so much as that if someone commits suicide because a chatbot told them to, they were already so delicate that it isn't the fault of the system they were interacting with. It may be the straw that broke the camel's back, but it was still only a straw's worth of harm. It'd be like a checkout person telling a customer to kill themselves and the person committing suicide later that night - an unprofessional act to be sure, and the checkout person would probably get sacked on reflex, but we really can't say anyone should be held legally liable.

    • almostdeadguy 39 minutes ago
      All this seems to do is say you can't use "the AI model did that, not me" as a defense to escape damages in a civil suit; it doesn't change the extent to which someone could be held liable for encouraging suicide.
      • conartist6 23 minutes ago
        The AI is employing persuasive skills it learned directly from some fucko suicide cult leaders to purposely talk you into and through doing it. That doesn't seem NEARLY the same in a practical or legal sense.
  • throwaway81523 2 hours ago
    "The computer did it" way predates AI and I'd hope it already not a valid defense.
    • graemep 2 hours ago
      This seems to block an AI-specific excuse - a new variant on "the computer did it".
      • throwaway81523 an hour ago
        If you look at the bill's definition of AI, it can mean basically any computer.
  • almostdeadguy 29 minutes ago
    > The vagueness comes from who the "developer" is when the LLM goes awry. Is it OpenAI's fault if a third-party app has a slip-up, or is it the third party's? If a research lab puts out a new LLM that another company decides to put in their airplane that crashes, can the original lab be liable, or are they only liable if they claim it to be an OSS airplane LLM?

    Doesn't seem that vague to me. The law says:

    > (b) In an action against a defendant that developed or used artificial intelligence

    IANAL, but the law doesn't say who is liable; it says who cannot use this as a defense in a civil suit to escape damages. So neither OpenAI nor the third party could, from my read, and either one could be found liable depending on who a lawsuit targets.

  • ares623 8 hours ago
    It's a good thing OpenAI has _two_ CEOs. It's like having two kidneys. When a CEO needs to be held accountable, there's a spare available.
  • WCSTombs 8 hours ago
    I think it's just saying that AIs are treated like inanimate objects and thus not something that liability can apply to. Here's an analogy that I think illustrates the effect of the law, if I've understood it: let's say I drive my car into a house and damage the house, and the owner of the house sues me. Now, it's not a given that I'm personally liable for the damages, since it's possible for a car to malfunction and go out of control through no fault of the driver. However, if I walk into the court and say that the car itself should be held liable and responsible for the damages, I'm probably going to have a bad day. Similarly, I shouldn't be able to claim that an AI is responsible for some damages, since you can't frickin' sue an AI, can you?

    The article goes on to ponder who's liable then: the developer of the AI, the user, or someone in between? It's a reasonable question to ask, but really not apropos to the law in question at all. That question isn't even about AI, since you can replace the AI with any software developed by a third party. In fact, the question isn't about software either, since you can replace "software" with any third-party component, even something physical. So I would expect that whatever legal methods exist to place liability in those situations would also apply generally to AI models being incorporated into other systems.

    Since people are asking whether this law is needed or useful at all: I would say either the law is completely redundant, or very much needed. I'm not a lawyer, so I don't know which of those two cases it is, but I suspect it's the second one. I would be surprised if by a few years from now we haven't seen someone try to escape legal liability by pointing their finger at an AI system they claim is autonomously making the decisions that caused some harm.

  • SilverElfin 9 hours ago
    I don’t understand the point of the law. AI tech is inherently unpredictable. Users know this. I don’t see how creating this liability keeps AI-based products viable.
    • WCSTombs 8 hours ago
      And I think most people would agree that an inherently unpredictable component has no place in a safety-critical system or anywhere that potential liability would be huge. AI-based products can still be viable for the exact same reason that an ocean of shitty bug-riddled software is commercially viable today, because there are many potential applications where absolute correctness is not a hard requirement for success.