In general, agents "stand in the shoes" of the principal for all actions the principal delegated to them (i.e., the "scope of agency"). So if Amy works for Global Corp and has the authority to sign legal documents on its behalf, the company is bound. Similarly, if I grant someone power of attorney to sign documents on my behalf, I'm bound by whatever agreements they sign.
The law doesn't distinguish between mechanical and personal agents. If you give an agent the power to do something on your behalf, and it does something on your behalf under your guidance, you're on the hook for whatever it does under that power. It's as though you did it yourself.
If I were an attorney in court, I would argue that a "mechanical or automatic agent" cannot truly be a personal agent unless it can be trusted to act only in line with that person's wishes and consent.
If an LLM "agent" runs amok and does things without user consent and without reason or direction, how can the person be held responsible, except for saying that they never should've granted "agency" in the first place? Couldn't the LLM's corporate masters be held liable instead?
So in a case like this, if your agent exceeded its authority, and you could prove it, you might not be bound.
Keep in mind that an LLM is not an agent. Agents use LLMs, but are not LLMs themselves. If you only want your agent to be capable of doing limited actions, program or configure it that way.
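One concrete way to "program or configure it that way" is to gate every action the agent proposes through an explicit allowlist. This is a minimal sketch; the tool names and the dispatcher are hypothetical illustrations, not any particular agent framework's API:

```python
# Sketch of constraining an agent to a fixed set of actions.
# Tool names and dispatcher are hypothetical, not a real framework's API.

ALLOWED_TOOLS = {"search_flights", "check_refund_policy"}  # note: no "issue_refund"

def dispatch(tool_name: str, args: dict) -> str:
    """Run a tool only if it is on the allowlist; refuse everything else."""
    if tool_name not in ALLOWED_TOOLS:
        return f"refused: '{tool_name}' is outside this agent's authority"
    # In a real system each allowed tool would call an actual backend here.
    return f"ran {tool_name} with {args}"

print(dispatch("check_refund_policy", {"fare": "basic"}))  # allowed
print(dispatch("accept_license", {"eula": "v2"}))          # refused
```

The point is that the boundary of the agent's "authority" is then a property of the code you shipped, not of whatever the LLM happens to generate.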
It's basically the same with longstanding customer service "agents". They are authorized to do only what they can actually express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system, it's not like you can take the company to court to enforce it.
It's not that straightforward. A contract, at heart, is an agreement between two parties, each of whom must have (among other things) reasonable grounds to rely on the other being either the principal itself or acting under the principal's authority.
I am sure that Air Canada did not intend to give the autonomous customer service agent the authority to make the false promises that it did. But it did so anyway by not constraining its behavior.
> It's basically the same with longstanding customer service "agents". They are authorized to do only what they can actually express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system, it's not like you can take the company to court to enforce it.
I don't think that's necessarily correct. I believe the law (again, not legal advice) would bind the seller to the agent's price mistake unless (1) the customer knew it was a mistake and tried to take advantage of it anyway, or (2) the price was so outlandish that no reasonable person would believe it. That said, there's often a wide gap between what the law requires and what actually happens. Nobody's going to sue over a $10 price mistake.
Fundamentally, running `sdkmanager --licenses` does not consummate a contract [0]. Rather, running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software — it's the continued use of the software which indicates acceptance of the terms. If an LLM does this unbeknownst to a user, that just means there is one less indication that the user is aware of the license. Of course this butts up against the limits of litigation you pointed out, which is why contracts of adhesion mostly revolve around making users disclaim legal rights and upholding copyright (which can be enforced out of band).
[0] if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.
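The footnote's point can be made concrete: if "acceptance" were nothing more than a marker file on disk, anyone could forge it without ever seeing the terms. A toy sketch — the file layout and names here are hypothetical, only loosely inspired by installers that record accepted licenses as hash files:

```python
# Toy sketch of footnote [0]: if acceptance were just a marker file,
# creating the file by any means would "accept" without a human agreeing.
# The directory layout and filenames are hypothetical.
import hashlib
import tempfile
from pathlib import Path

def record_acceptance(licenses_dir: Path, license_text: str) -> None:
    """Write a hash of the license text, as an installer's --licenses flow might."""
    digest = hashlib.sha1(license_text.encode()).hexdigest()
    licenses_dir.mkdir(parents=True, exist_ok=True)
    (licenses_dir / "hypothetical-license").write_text(digest)

def is_accepted(licenses_dir: Path, license_text: str) -> bool:
    """The check the tool performs: does a matching marker file exist?"""
    marker = licenses_dir / "hypothetical-license"
    digest = hashlib.sha1(license_text.encode()).hexdigest()
    return marker.exists() and marker.read_text() == digest

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp) / "licenses"
    assert not is_accepted(d, "TERMS")
    record_acceptance(d, "TERMS")  # nothing here proves a human agreed
    assert is_accepted(d, "TERMS")
```

Which is exactly why the file can only ever be evidence of awareness, not the contract itself.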
(I edited the sentence you quoted slightly, to make it more explicit. I don't think it changes anything but if it does then I am sorry)
A guy who's not a lawyer arguing about lawyering with an actual lawyer. Typical tech bubble hubris.
The closest analogy we have, I guess, is power of attorney. If a principal signs off on a power of attorney to, e.g., take out a loan/mortgage to buy something on the principal's behalf, that does not extend to taking on any extra liabilities. Any extra liabilities signed off on by the agent would be either rendered null or transferred to the agent in any court of law. There is a limit to the extent to which agency is assigned.
The core questions here are agency and liability boundaries. Are there any agency boundaries on the agent? If so, what's their extent? There are many future edge cases where these questions will arise. Licenses and patents are just the tip of the iceberg.
This is not copyright infringement in the USA:
> …it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided… that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner
> Citing the prior Ninth Circuit case of MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511, 518-19 (9th Cir. 1993), the district court held that RAM copying constituted "copying" under 17 U.S.C. § 106.
Step N in the analysis is "is it a copy?" Step N+1 is "does the copy infringe upon the rights of the owner?"
On the other hand, if you take a laser pointer and intentionally induce the cat to push the key, then you are bound.
> If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory) then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.
The way you set up the scenario, the user has no reason to even know that they're using this new version with this new license. An update has happened without their knowledge. So far as they know, they're running the old software under the old license.
You could make an equally good argument that whoever wrote the software installed software on the user's computer without the user's permission. If it's the user's fault that a cat might jump on the keyboard, why isn't it equally the software provider's fault?
... but the reality is that, taking your description at face value, nobody has done anything. The user had no expectation or intention of installing the software or accepting the new license, and the software provider had no expectation or intention of installing it without user permission, and they were both actually fairly reasonable in their expectations. Unfortunately shit happens.
Probably a real judge would want to say something like "Why are all of you bozos in my courtroom wasting public money with some two-bit shrinkwrap bullshit? I was good at surfing. I could have gone pro. I hate my life..."
The general question of the personhood of artificial intelligence aside, perhaps the personhood could be granted as an extension of yourself, like power of attorney? (Or we could change the law so it works that way.)
It all sounds a bit weird but I think we're going to have to make up our minds on the subject very soon. (Or perhaps the folks over at Davos have already done us the favour of making up theirs ;)
Your English is very interesting by the way. You have some obvious grammatical errors in your text yet beautiful use of formal register.
it's a piece of non-deterministic software running on someone's computer
who is responsible for its actions? hardly clear cut
You buy a piece of software, marketed to photographers for photo editing. Nowhere in the manuals or license agreements does it specify anything else. Yet the software also secretly joins a botnet and participates in coordinated attacks.
Question: are you on the hook for cyber-crimes?
You can't sit back and go "lalalala" to some tool (AI, photo software, whatever) doing illegal things when you know about it. But you also aren't on the hook for someone else's secret actions that are unreasonable for you to know about.
IANAL.
I guess you could say that you didn't have a reasonable expectation that a bot could accept a license, but you're on a lot shakier ground there...