I don't think the argument is that the AI made it OK, so much as that if someone commits suicide because a chatbot told them to, they were already so fragile that the fault doesn't lie with the system they were interacting with. It may have been the straw that broke the camel's back, but it was still only a straw's worth of harm. It'd be like a checkout person telling a customer to kill themselves and the customer committing suicide later that night - an unprofessional act to be sure, and the checkout person would probably get sacked on the spot, but we really can't say anyone should be held legally liable.
Doesn't seem that vague to me. The law says:
> (b) In an action against a defendant that developed or used artificial intelligence
IANAL, but the law doesn't say who is liable; it says who cannot use the AI's involvement as a defense to escape damages in a civil suit. From my read, neither OpenAI nor the third party could, and either one could be found liable depending on who a lawsuit targets.
The article goes on to ponder who's liable then: the developer of the AI, the user, or someone in between? It's a reasonable question to ask, but not really germane to the law in question at all. That question isn't even about AI, since you can replace the AI with any software developed by a third party. In fact, the question isn't about software either, since you can replace "software" with any third-party component, even something physical. So I would expect that whatever legal methods exist to place liability in those situations would also apply to AI models being incorporated into other systems.
Since people are asking whether this law is needed or useful at all: I would say the law is either completely redundant or very much needed. I'm not a lawyer, so I don't know which of the two it is, but I suspect it's the second. I would be surprised if, a few years from now, we haven't seen someone try to escape legal liability by pointing the finger at an AI system that they claim autonomously made the decisions that caused the harm.