40 points | by niyikiza 6 hours ago | 18 comments
  • JohnFen6 hours ago
    > “The AI hallucinated. I never asked it to do that.”

    > That’s the defense. And here’s the problem: it’s often hard to refute with confidence.

    Why is it necessary to refute it at all? It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.

    • nerdsniper5 hours ago
      The distinction some people are making is between copy/pasting text vs. agentic action. Generally, mistakes in "work product" (output from ChatGPT that a human then files with a court, etc.) are not forgiven, because if you signed the document, you own its content. Versus some vendor-provided AI agent which simply takes action on its own that a "reasonable person" would not have expected it to. Often we forgive those kinds of software bloopers.
      • Wobbles422 hours ago
        "Agentic action" is just running a script. All that's different is now people are deploying scripts that they don't understand and can't predict the outcome of.

        It's negligence, pure and simple. The only reason we're having this discussion is that a trillion dollars was spent writing said scripts.

      • ori_b4 hours ago
        If you put a brick on the accelerator of a car and hop out, you don't get to say "I wasn't even in the car when it hit the pedestrian".
        • Shalomboy4 hours ago
          This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.
          • jacquesm3 hours ago
            It is very much true for dogs in that case: (1) it is your dog (2) it is your car (3) it is your responsibility to make sure your car can not be started by your dog (4) the pedestrian has a reasonable expectation that a vehicle that is parked without a person in it has been made safe to the point that it will not suddenly start to move without an operator in it and dogs don't qualify.

            You'd lose that lawsuit in a heartbeat.

            • direwolf202 hours ago
              What if your car was parked in a normal way such that a reasonable person would not expect a dog to be able to start it, but the dog did several things that no reasonable person would expect and started it anyway?
              • jacquesm2 hours ago
                You can 'what if' this until the cows come home but you are responsible, period.

                I don't know what kind of driver's education you get where you live, but where I live and have lived, one of the basics is that you know how to park and lock your vehicle safely, and that includes removing the ignition key (assuming your car has one) and setting the parking brake. You aim the wheels at the kerb (if there is one) when you're on an incline. And if you're driving a stick shift you set the gear to neutral (in some countries they will teach you to set the gear to 1st or reverse, for various reasons).

                We also have road worthiness assessments that ensure that all these systems work as advertised. You could let a pack of dogs loose in my car in any external circumstance and they would not be able to move it, though I'd hate to clean up the interior afterwards.

                • direwolf202 hours ago
                  I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted to drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy right, but I swear I'm not lying, the neighbor caught it on camera.

                  Who's liable?

                  I think this would be a freak accident. Nobody would be liable.

                  • jacquesm2 hours ago
                    > I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted to drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy right, but I swear I'm not lying, the neighbor caught it on camera.

                    > Who's liable?

                    You are. It's still your dog. If you replaced the dog with a child the case would be identical (but more plausible). This is really not as interesting as you think it is. The claim that you have a sentient dog is going to be laughed out of court, and your neighbor will be in the dock together with you for attempting to mislead the court with your AI-generated footage. See, two can play at that.

                    When you make such ridiculously contrived examples turnaround is fair play.

                  • gamblor95641 minutes ago
                    You would not be guilty of a crime, because that requires intent.

                    But you would be liable for civil damages, because that does not. There are multiple theories under which to establish liability, but most likely this would be treated as negligence.

          • b00ty4breakfast10 minutes ago
            I'm dubious, do you have any examples of this happening?
          • victorbjorklund4 hours ago
            I don’t know where you’re from, but at least in Sweden you have strict liability for anything your dog does
          • ori_b4 hours ago
            In the USA, at least, it seems pet owners are liable for any harm their pets do.
          • cess113 hours ago
            Legally, in a lot of jurisdictions, a dog is just your property. What it does, you did, usually with presumed intent or strict liability.
            • gowld3 hours ago
              What if you planted a bush that attracted a bat that bit a child?
              • b00ty4breakfast8 minutes ago
                what if my auntie had wheels, would she be a wagon?
              • Muromec3 hours ago
                What if you have an email in your inbox warning you that 1) this specific bush attracts bats, and 2) there were in fact bats seen near your bush, and 3) bats were observed almost biting a child before? And you also have "how do I fuck up them kids by planting a bush that attracts bats" in your browser history. It's a spectrum, you know.
              • dragonwriter3 hours ago
                Well, if it was a bush known to also attract children, it was on your property, and the child was in fact attracted by it and also on your property, and the presence of the bush created the danger of bat bites, the principle of “attractive nuisance” is in play.
          • freejazz4 hours ago
            Prima facie negligence = liability
      • observationist4 hours ago
        To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense - if you can prove that it behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue, that huge and trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision. If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools; complex, complicated, unpredictable tools that need skillful and careful use.

        There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.

        https://wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww.bitnik.or...

        They bought some ecstasy, a Hungarian passport, and random other items from Agora.

        >The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.

        In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.

        • dragonwriter3 hours ago
          > To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime.

          For most crimes, this is circular, because whether a crime occurred depends on whether a person did the requisite act of the crime with the requisite mental state. A crime is not an objective thing independent of an actor that you can determine happened as a result of a tool and then conclude guilt for based on tool use.

          And for many crimes, recklessness or negligence as mental states are not sufficient for the crime to have occurred.

        • b00ty4breakfast4 hours ago
          That darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side effect of some other, legal action they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services.

          If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate and someone gets hit in the face, I would rightfully be held liable for that.

        • cess113 hours ago
          "computer culpability"

          That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind, what you understood or should have understood.

          A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

          • Muromec2 hours ago
            >A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.

            We as a society, for our own convenience, can choose to believe that an LLM does have a mind and can understand the results of its actions. The second part doesn't really follow. Can you even hurt an LLM in a way that is equivalent to murdering a person? Evicting it off my computer isn't necessarily a crime.

            It would be good news if the answer were yes, because then we just need to find a converter of camel amounts to dollar amounts and we are all good.

            Can LLM perceive time in a way that allows imposing an equivalent of jail time? Is the LLM I'm running on my computer the same personality as the one running on yours and should I also shut down mine when yours acted up? Do we even need the punishment aspect of it and not just rehabilitation, repentance and retraining?

            • Wobbles422 hours ago
              The only hallucination here is the idea that a giant equation is a mind.
              • Muromec2 hours ago
                It's only a hallucination if you are the only one seeing it. Otherwise the line between that, a social construct and a religious belief is a bit blurry.
          • observationist3 hours ago
            Yeah - I'm pretty sure, technically, that current AI isn't conscious in any meaningful way, and even the agentic scaffolding and systems put together lack any persistent, meaningful notion of "mind", especially in a legal sense. There are some newer architectures and experiments with the subjective modeling and "wiring" that I'd consider solid evidence of structural consciousness, but for now, AI is a tool. It also looks like we can make tools arbitrarily intelligent and competent, and we can extend the capabilities to superhuman time scales, so I think the law needs to come up with an explicit precedent for "This person is the user of the tool which did the bad thing" - it could be negligent, reckless, deliberate, or malicious, but I don't think there's any credibility to the idea that "the AI did it!"

            At worst, you would confer liability to the platform, in the case of some sort of blatant misrepresentation of capabilities or features, but absolutely none of the products or models currently available withstand any rational scrutiny into whether they are conscious or not. They at most can undergo a "flash" of subjective experience, decoupled from any coherent sequence or persistent phenomenon.

            We need research and legitimate, scientific, rational definitions for agency and consciousness and subjective experience, because there will come a point where such software becomes available, and it not only presents novel legal questions, but incredible moral and ethical questions as well. Accidentally oopsing a torment nexus into existence with residents possessed of superhuman capabilities sounds like a great way to spark off the first global interspecies war. Well, at least since the Great Emu War. If we lost to the emus, we'll have no chance against our digital offspring.

            A good lawyer will probably get away with "the AI did it, it wasn't me!" before we get good AI law, though. It's too new and mysterious and opaque to normal people.

      • kazinator3 hours ago
        That's the same thing. You signed off on the agent doing things on your behalf; you are responsible.

        If you gave a loaded gun to a five year old, would "five-year-old did it" be a valid excuse?

        • Wobbles422 hours ago
          If the five year old was a product resulting from trillions of dollars in investments, and the marketability of that product required people to be able to hand guns to that five year old without liability, then we would at least be having that discussion.

          Purely organically of course.

      • niyikiza4 hours ago
        > if you signed the document, you own its content. Versus some vendor-provided AI Agent which simply takes action on its own

        Yeah, that's exactly the approach I think we should adopt for AI agent tool calls as well: cryptographically signed, task-scoped "warrants" that remain traceable even across multi-agent delegation chains.

        • Muromec3 hours ago
          Why does it need cryptography even? If you gave the agent a token to interact with your bank account, then you gave it permission. If you want to limit the amount it is allowed to send and the list of recipients, put a filter between the account and the agent that enforces it. If you want money to be sent only against an invoice, let the filter check that an invoice reference is provided by the agent. If you did none of that and the platform that runs the agents didn't accept the liability, it's on you. Setting up the filters and engineering the prompts is on you too.

          Now if you did all of that, but made a bug in implementing the filter, then you at least tried and weren't negligent, but it's still on you.
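
          Something like this is all I mean by a filter -- a minimal sketch with made-up names and limits, not any real bank API:

              # Hypothetical policy filter sitting between the agent and the bank API.
              # The agent's token only reaches this filter; the filter holds the real credentials.
              ALLOWED_RECIPIENTS = {"ACME-SUPPLIES-GMBH", "OFFICE-RENT-LLC"}
              MAX_AMOUNT_EUR = 5_000

              def allow_transfer(request: dict) -> bool:
                  """Reject anything the human never meant to authorize."""
                  if request["recipient"] not in ALLOWED_RECIPIENTS:
                      return False
                  if request["amount_eur"] > MAX_AMOUNT_EUR:
                      return False
                  # Only pay against an invoice the agent can actually reference.
                  if not request.get("invoice_ref"):
                      return False
                  return True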

          • niyikiza2 hours ago
            Tokens + filters work for single-agent, single-hop calls. Gets murky when orchestrators spawn sub-agents that spawn tools. Any one of them can hallucinate or get prompt-injected. We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.
            • Muromecan hour ago
              >We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.

              Ah, I get it. So the token can be downscoped before it's passed on, like the pledge thing, so a sub-agent doesn't exceed the scope of its parent. I have a feeling it's like cryptography in general -- you take one problem and reduce it to a key management problem.

              In a more practical sense, if the non-deterministic layer decides what the reduced scope should be, all delegations can become "Allow: *" in the most pathological case, right? Or like the Play Store, where a shady calculator app can have permission to read your messages. Somebody has to review those and flag excessive grants.

              • niyikizaan hour ago
                Right, the non-deterministic layer can't be the one deciding scope. That's the human's job at the root.

                The LLM can request a narrower scope, but attenuation is monotonic and enforced cryptographically. You can't sign a delegation that exceeds what you were granted. TTL too: the warrant can't outlive its parent.

                So yes, key management. But the pathological "Allow: *" has to originate from a human who signed it. That's the receipt you're left holding.
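
                Roughly, the invariant every hop has to satisfy looks like this -- a sketch with hypothetical field names, not the actual spec:

                    from dataclasses import dataclass

                    @dataclass(frozen=True)
                    class Warrant:
                        scope: frozenset[str]   # e.g. {"read:reports", "send:invoice_email"}
                        expires_at: float       # unix timestamp
                        # The real artifact would also carry a signature binding
                        # (scope, expires_at, parent hash, holder key); omitted here.

                    def attenuate(parent: Warrant, scope: frozenset[str], expires_at: float) -> Warrant:
                        """A child warrant may only narrow its parent: subset scope, earlier expiry."""
                        if not scope <= parent.scope:
                            raise ValueError("child scope exceeds parent scope")
                        if expires_at > parent.expires_at:
                            raise ValueError("child warrant would outlive its parent")
                        return Warrant(scope, expires_at)

                    def verify_chain(chain: list[Warrant]) -> bool:
                        """Walk root -> leaf and confirm every hop only ever narrowed."""
                        return all(
                            child.scope <= parent.scope and child.expires_at <= parent.expires_at
                            for parent, child in zip(chain, chain[1:])
                        )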

                You're poking at the right edges though. UX for scope definition and revocation propagation are what we're working through now. We're building this at tenuo.dev if you want to dig into the spec or poke holes.

          • Wobbles422 hours ago
            How can you give an agent a token without cryptography being involved?
            • Muromecan hour ago
              Not every access token is a (public) key or a signed object. It may be, but it doesn't have to be. It's not state of the art, but it's also not unheard of to use a pre-shared secret with no cryptography involved and to rely on presenting the secret itself with each request. Cookie sessions are often like that.
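
              Toy illustration of the difference (not production auth code; a real signed token would likely use public-key signatures rather than a shared HMAC key):

                  import hashlib
                  import hmac

                  # Pre-shared secret: the caller presents the secret itself with every request.
                  PRESHARED = "s3cret-session-token"

                  def check_bearer(presented: str) -> bool:
                      return hmac.compare_digest(presented, PRESHARED)

                  # Signed object: the caller presents a message plus a signature over it;
                  # the signing key itself never travels with the request.
                  SIGNING_KEY = b"server-side-signing-key"

                  def check_signed(message: bytes, signature_hex: str) -> bool:
                      expected = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
                      return hmac.compare_digest(signature_hex, expected)
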
        • embedding-shape3 hours ago
          Kind of like https://github.com/cursor/agent-trace but cryptographically signed?

          > Agent Trace is an open specification for tracking AI-generated code. It provides a vendor-neutral format for recording AI contributions alongside human authorship in version-controlled codebases.

          • niyikiza3 hours ago
            Similar space, different scope/approach. Tenuo warrants track who authorized what across delegation chains (human to agent, agent to sub-agent, sub-agent to tool) with cryptographic proof & PoP at each hop. Trace tracks provenance; warrants track authorization flow. Both are open specs. I could see them complementing each other.
      • jacquesm3 hours ago
        If you signed the document you are responsible for its content; you are most likely not the owner of it.
    • ibejoeb4 hours ago
      Yeah. Legal will need to catch up to deal with some things, surely, but the basic principles for this particular scenario aren't that novel. If you're a professional and have an employee acting under your license, there's already liability. There is no warrant concept (not that I can think of right now, at least) that will obviate the need to check the work and carry professional liability insurance. There will always be negligence and bad actors.

      The new and interesting part is that while we have incentives and deterrents to keep our human agents doing the right thing, there isn't really an analog to check the non-human agent. We don't have robot prison yet.

    • imiric2 hours ago
      That's quickly becoming difficult to determine.

      The workflow of starting dozens or hundreds of "agents" that work autonomously is starting to gain traction. The goal of people who work like this is to completely automate software development. At some point they want to be able to give the tool an arbitrary task, presumably one that benefits them in some way, and have it build, deploy, and use software to complete it. When millions of people are doing this, and the layers of indirection grow in complexity, how do you trace the result back to a human? Can we say that a human was really responsible for it?

      Maybe this seems simple today, but the challenges this technology forces on society are numerous, and we're far from ready for it.

      • niyikiza2 hours ago
        This is the problem we're working on.

        When orchestrators spawn sub-agents that spawn tools, there's no artifact showing how authority flowed through the chain.

        Warrants are a primitive for this: signed authorization that attenuates at each hop. Each delegation is signed, scope can only narrow, and the full chain is verifiable at the end. Doesn't matter how many layers deep.

      • Wobbles422 hours ago
        Translation:

        People want to use a tool and not be liable for the result.

        People not wanting to be liable for their actions is not new. AI hasn't changed anything here, it's just a new lame excuse.

    • salawat6 hours ago
      Except for the fact that that very accountability sink is relied on by senior management/CxO's the world over. The only difference is that before AI, it was the middle manager's fault. We didn't tell anyone to break the law. We just put in place incentive structures that require it, and play coy, then let anticipatory obedience do the rest. Bingo. Accountability severed. You can't prove I said it in a court of law, and skeevy shit gets done because some poor bloke down the ladder is afraid of getting fired if he doesn't pull out all the stops to meet productivity quotas.

      AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.

      • pixl974 hours ago
        Hence why many life-and-death things require licensing and compliance, and tend to come with very long paper trails.

        The software world has been very allergic to getting anywhere near the vicinity of a system like that.

        • salawat3 hours ago
          Did I give the impression that the phenomenon was unique to software? Hell, Boeing was a shining example of the principle in action with the 737 MAX. Don't get much more "people live and die by us, and we know it (but management set up the culture and incentives to make a deathtrap anyway)." No one to blame, of course. These things just happen.

          Licensure alone doesn't solve all these ills. And for that matter, once regulatory capture happens, it has a tendency to make things worse due to consolidation pressure.

      • Muromec2 hours ago
        >AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.

        AI is worse in that regard, because, although you can't explain why it does so, you can point a finger at it, say "we told you so" and provide the receipts of repeated warnings that the thing has a tendency of doing such things.

    • doctorpangloss5 hours ago
      Wait till you find out about “pedal confusion.”
    • NedFan hour ago
      [dead]
    • niyikiza6 hours ago
      You're right, they should be responsible. The problem is proving it. "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

      And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action was executed three hops away from the human? The article argues for receipts that make "I didn't authorize that" a verifiable claim.

      • bulatb5 hours ago
        There's nothing to prove. Responsibility means you accept the consequences for its actions, whatever they are. You own the benefit? You own the risk.

        If you don't want to be responsible for what a tool that might do anything at all might do, don't use the tool.

        The other option is admitting that you don't accept responsibility, not looking for a way to be "responsible" but not accountable.

        • tossandthrow5 hours ago
          Sounds good in theory, doesn't work in reality.

          Had it worked then we would have seen many more CEOs in prison.

          • walt_grata5 hours ago
            A few edge cases where it doesn't work doesn't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases.
          • Muromec2 hours ago
            CEOs are like cars and immigrants. Both kill people all the time, but we choose to believe they are net positive to society, look the other way and try to put symbolic band aids here and there.

            The same may or may not happen with AI. We can bite the bullet and say it's fine that it sometimes happens. We can ban the entire thing too if we feel the tradeoff isn't worth it.

            • direwolf202 hours ago
              You're not doing any favors to your hirability with those first two sentences.
              • Muromec2 hours ago
                The market is almighty, but it's all-merciful as well, and thankfully, not all-knowing.
          • freejazz4 hours ago
            This isn't a legal argument and these conversations are so tiring because everyone here is insistent upon drawing legal conclusions from these nonsense conversations.
          • bulatb4 hours ago
            We're talking about different things. To take responsibility is to volunteer to accept accountability without a fight.

            In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability.

            The CEOs who don't accept accountability were lying when they said they were responsible.

          • NoMoreNicksLeft5 hours ago
            The veil of liability is built into statute, and it's no accident.

            No such magic forcefield exists for you, though.

      • LeifCarrotson5 hours ago
        > "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

        No, it's trivial: "So you admit you uploaded confidential information to the unpredictable tool with wide capabilities?"

        > Who's accountable when the action was executed three hops away from the human?

        The human is accountable.

        • pixl974 hours ago
          As the saying goes

          ----

          A computer can never be held accountable

          Therefore a computer must never make a management decision

          • direwolf202 hours ago
            That's from when companies were accountable for their results and needed to push the accountability onto a person to deter bad results. You couldn't let a computer make a decision because the computer can't be deterred by accountability.

            Now companies are all about doing bad all the time, they know they're doing it, and need to avoid any individual being accountable for it. Computers are the perfect tool to make decisions without obvious accountability.

        • Muromec2 hours ago
          >The human is accountable.

          That's an orthodoxy. It holds for now (in theory and most of the time), but it's just an opinion, like a lot of other things.

          Who is accountable when we have a recession or when people can't afford whatever we strongly believe should be affordable? The system, the government, the market, late stage capitalism or whatever. Not a person that actually goes to jail.

          If the value proposition becomes attractive, we can choose to believe that the human is not in fact accountable here, but the electric shaitan is. We just didn't pray well enough, but we did our best, really. What else can we expect?

        • gowld3 hours ago
          What if you carried a stack of papers between buildings on a windy day, and the papers blew away?
          • bigfishrunning2 hours ago
            You should have put the papers in a briefcase or a bag. You are responsible.
      • phoe-krk5 hours ago
        > "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.

        If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".

        Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.

        • Muromec2 hours ago
          >Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.

          What if I hire you (instead of an LLM) to summarize the reports and you decide to email the competitors? What if we work in an industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but it's the first time you emailed competitors? What if you are a schizophrenic who heard God's voice telling you to do so, and it's the first episode you ever had?

        • im3w1l5 hours ago
          > otherwise the concept of responsibility loses all value.

          Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics are no longer applicable to the way we work, it might be time to discard them.

          • phoe-krk4 hours ago
            A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.

            It's scary that a nuclear exit starts looking like an enticing option when confronted with that.

            • direwolf202 hours ago
              I saw some people saying the internet, particularly brainrot social media, has made everyone mentally twelve years old. It feels like it could be true.

              Twelve-year-olds aren't capable of dealing with responsibility or consequence.

            • Muromec2 hours ago
              >A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.

              That value proposition depends entirely on whether there is also an upside to all of that. Do you actually need truth, meaning, responsibility and consequences while you are tripping on acid? Do you even need to be alive and have a physical organic body for that? What if Ikari Gendo was actually right and everyone else are assholes who don't let him be with his wife.

            • im3w1l3 hours ago
              Ultimately the goal is to have a system that prevents mistakes as much as possible and adapts and self-corrects when they do happen. Even with science we acknowledge that mistakes happen and people draw incorrect conclusions, but the goal is to make that a temporary state that is fixed as more information comes in.

              I'm not claiming to have all the answers about how to achieve that, but I am fairly certain punishment is not a necessary part of it.

      • QuadmasterXLII5 hours ago
        This doesn't seem conceptually different from running

            [ $[ $RANDOM % 6] = 0 ] && rm -rf / || echo "Click"
        
        on your employer's production server, and the liability doesn't seem murky in either case
        • staticassertion5 hours ago
          What if you wrote something more like:

              # terrible code, never use this
              import os

              def cleanup(dir):
                  os.system(f"rm -rf {dir}")


              def main():
                  work_dir = os.environ["WORK_DIR"]
                  cleanup(work_dir)
          
          and then due to a misconfiguration "$WORK_DIR" was truncated to be just "/"?

          At what point is it negligent?

          • direwolf205 hours ago
            This is not hypothetical. Steam and Bumblebee did it.
            • extraduder_ire5 hours ago
              That was the result of an additional space in the path passed to rm, IIRC.

              Though rm /$TARGET where $TARGET is blank is a common enough footgun that --preserve-root exists and is default.

            • a_t483 hours ago
              Bungie, too, in a similar way.
      • groby_b5 hours ago
        "And when sub-agents or third-party tools are involved, liability gets even murkier."

        It really doesn't. That falls straight on Governance, Risk, and Compliance. Ultimately, CISO, CFO, CEO are in the line of fire.

        The article's argument happens in a vacuum of facts. The fact that a security engineer doesn't know that is depressing, but not surprising.

        • Muromec2 hours ago
          >The fact that a security engineer doesn't know that is depressing, but not surprising.

          That's a very subtle guinea pig joke right there.

      • freejazz4 hours ago
        The burden of substantiating a defense is upon the defendant and no one else.
      • groby_b5 hours ago
        "Our tooling was defective" is not, in general, a defence against liability. Part of a companys obligations is to ensure all its processes stay within lawful lanes.

        "Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."

        One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three-month retention period for AI queries by an analyst is not an option.

        SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.

  • kazinator3 hours ago
    This article is well-written insanity.

    No amount of detailed logging makes "the AI did it" a valid excuse.

    It's just a tool.

    It's like blaming a loose bolt in a Boeing 737 on "screwdriver did it".

    • kulahanan hour ago
      If you tighten a bolt using a breaker bar and it fails unexpectedly, it really is the screwdriver’s fault though. Nobody can know if the bolt truly has 70 pounds of torque applied. Maybe it broke at 62, and will fail in 3 flights.
  • stronglikedan3 hours ago
    If one of my reports came to me with that defense, I'd write them up twice. Once for whatever they did wrong, and once for insulting my intelligence and wasting my time with that "defense".

    On the contrary, if they just owned up to it, chances are I wouldn't even write them up once.

  • onoesworkacct3 hours ago
    IMO everyone is missing the point of this thing. It's not an auth system or security boundary, it doesn't provide any security guarantees whatsoever, it doesn't do anything. The entire point is to cover a company's derriere should their agentic security apparatus (or lack thereof) fail to prevent malicious prompt injection etc.

    This way, they can avoid being legally blamed for stuff-ups and instead scapegoat some hapless employee :-) using cryptographic evidence the employee "authorized" whatever action was taken

  • amelius2 hours ago
    We should demand money back if an LLM hallucinates. And they should be liable.
  • RobotToaster5 hours ago
    If an employee does something during his employment, even if he wasn't told directly to do it, the company can be held vicariously liable. How is this any different?
    • apercu5 hours ago
      I agree with you, but you can’t jail a gen-AI model, which I guess is where the difference lies?
      • LeifCarrotson5 hours ago
        "The company can be held vicariously liable" means that in this analogy, the company represents the human who used AI inappropriately, and the employee represents the AI model that did something it wasn't directly told to do.
      • phailhaus5 hours ago
        Nobody tries to jail Microsoft Word, they jail the person using it.
        • gorjusborg4 hours ago
          Nobody tries to jail the automobile when it hits a pedestrian while on cruise control. The driver is responsible for knowing the limits of the tool and adjusting accordingly.
  • 0xTJ4 hours ago
    Why would that be any better of a defense than "that preschooler said that I should do it"? People are responsible for their work.
  • tboyd475 hours ago
    Anytime someone gives you unverified information, they're asking you to become their guinea pig.
  • Wobbles422 hours ago
    I don't understand what has changed here.

    AI is just branding. At the end of the day it's still just people using computer software to do stuff.

    There is going to be a person who did a thing at the end of the day -- either whoever wrote the software or whoever used the tool.

    The fact that software inexplicably got unreliable when we started stamping "AI" on the box shouldn't really change anything.

  • andrewflnr4 hours ago
    How does the old proverb go?

    > A computer must never make a management decision, because a computer cannot be held accountable.

  • booleandilemma2 hours ago
    It's the same thing as saying "It's not my fault, it's a bug in the spreadsheet". The human is ultimately responsible. Heck, it's the same thing as someone saying "the computer made a mistake". Computers don't make mistakes. People do. The hallucination defense is not a defense at all and this is a non-issue. There's nothing to talk about here.

    And if a professional wants to delegate their job to non-deterministic software they're not professionals at all. This overreliance on LLMs is going to have long-term consequences for society.

    • Wobbles42an hour ago
      If the LLMs don't start producing positive revenue soon, the consequences for society will be rather short term.
  • noitpmeder4 hours ago
    This is some absolute BS. In the current day and age you are 1000% responsible for the externalities of your use of AI.

    Read the terms and conditions of your model provider. The document you signed, regardless of whether you read or considered it, explicitly prevents any negative consequences from being passed back to the AI provider.

    Unless you have something equally explicit, e.g. "we do not guarantee any particular outcome from the use of our service" (it probably needs to be significantly more explicit than that, IANAL), all responsibility ends up with the entity that itself, or through its agents, foists unreliable AI decisions on downstream users.

    Remember, you SIGNED THE AGREEMENT with the AI company that explicitly says its outputs are unreliable!!

    And if you DO have some watertight T&C that absolves you of any responsibility for your AI-backed service, then I hope either a) your users explicitly realize what they are signing up for, or b) once a user is significantly burned by your service and you try to hide behind this excuse, you lose all your business.

    • ceejayoz4 hours ago
      T&Cs aren't ironclad.

      One in which you sell yourself into slavery, for example, would be illegal in the US.

      All those "we take no responsibility for the [valet parking|rocks falling off our truck|exploding bottles]" disclaimers are largely attempts to dissuade people from trying.

      As an example, NY bans liability waivers at paid pools, gyms, etc. The gym will still have you sign one! But they have no enforcement teeth beyond people assuming they're valid. https://codes.findlaw.com/ny/general-obligations-law/gob-sec...

      • noitpmeder4 hours ago
        So I can pass on contract breaches caused by bugs in software I maintain, due to hallucinations by the AI that I used to write the software?? Absolutely no way.

        "But the AI wrote the bug."

        Who cares? It could be you, your relative, your boss, your underling, your counterpart in India, ... Your company provided some reasonable guarantee of service (whether explicitly enumerated in a contract or not) and you cannot just blindly pass the buck.

        Sure, after you've settled your claim with the user, maybe TRY to go after the upstream provider, but good luck.

        (Extreme example) -- If your company produces a pacemaker dependent on AWS/GCP/... and everyone dies as soon as cloudflare has a routing outage that cascades to your provider, oh boy YOU are fucked, not cloudflare or your hosting provider.

        • ceejayoz4 hours ago
          More than one person/organization can be liable at once.
          • noitpmeder4 hours ago
            The point of signing contracts is you explicitly set expectations for service, and explicitly assign liability. You can't just reverse that and try to pass the blame.

            Sure, if someone from GCP shows up at your business and breaks your leg or burns down your building, you can go after them, as it's outside the reasonable expectation of the business agreement you signed.

            But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.

            Sure they might give you free credits for the outage, but that's just to stop you from switching to a competitor, not any explicit acknowledgement they are on the hook for your lost business opportunity.

            • ceejayoz4 hours ago
              > The point of signing contracts is you explicitly set expectations for service, and explicitly assign liability.

              Sure, but not all liability can be reassigned; I linked a concrete example of this.

              > But you better believe they will never be legally responsible for damages caused by outages of their service beyond what is reasonable, and you better believe "reasonable outage" in this case is explicitly enumerated in the contract you or your company explicitly agreed to.

              Yes, on this we agree. It'd have to be something egregious enough to amount to intentional negligence.

          • freejazz4 hours ago
            "Can" isn't the same as "is"
  • bitwize4 hours ago
    aka the Shaggy Defense for the 2020s.
  • thedudeabides53 hours ago
    a machine can never be held accountable

    but the person who turned it on can

    simple as

  • rpodraza4 hours ago
    What problem is this guy trying to solve? Sorry, but in the end, someone's gonna have to be responsible and it's not gonna be a computer program. Someone approved the program's use; it's no different to any other software. If you know the agent can make mistakes then you need to verify everything manually, simple as.
    • pixl974 hours ago
      While we're a long way off from the day science fiction becomes fact, the world is going to shit itself if a self actionable AI bootstraps and causes havoc.
  • gamblor9564 hours ago
    It's not a legal defense at all.

    Licensed professionals are required to review their work product. It doesn't matter if the tools they use mess up--the human is required to fix any mistakes made by their tools. In the example given by the blog, the financial analyst is either required to professionally review their work product or is junior enough that someone else is required to review it. If they don't, they can be held strictly liable for any financial losses.

    However, this blog post isn't about AI Hallucinations. It's about the AI doing something else separate from the output.

    And that's not a defense either. The law already assigns liability in situations like this: the user will be held liable (or more correctly: their employer, for whom the user is acting as an agent). If they want to go after the AI tooling vendor (i.e., an indemnification action), the courts will happily let them do so after any plaintiffs are made whole (or as part of an impleader action).

  • freejazz4 hours ago
    What a stupid article from someone that has no idea when liability attaches.

    It is the burden of a defendant to establish their defense. A defendant can't just say "I didn't do it". They need to show they did not do it. In this (stupid) hypothetical, the defendant would need to show the AI acted on its own, without prompting from anyone, in particular, themselves.