68 points by vaxman 7 hours ago | 10 comments
  • bradley13 6 hours ago
    Really, it's more about the police not doing their job. Face recognition pointed her out, the police saw she had a rap sheet, and therefore they didn't check further.

    She apparently could not afford a lawyer, who would have pointed out that she was provably at home (transactions, etc.) at the time the crime was committed in another state.

    Really, it's not specifically AI's fault, though it made the error easier.

    • mft_ 6 hours ago
      Quite; AI contributed to a (criminally?) inept and negligent "justice" system ruining an innocent woman's life.

      The AI was akin to an unreliable eyewitness in this case, although people's trust in the AI's judgement may have been higher than in a human eyewitness?

    • throwaway439080 5 hours ago
      Yes and no. I think the interesting thing about this story is how it's been presented: AI as a scapegoat for incompetence.

      The police made an inexcusable mistake out of carelessness. They simply couldn't be bothered to spend five minutes fact-checking the facial recognition match, and it caused catastrophic harm to an innocent woman.

      And what's the headline? "AI did this". It's a new and exciting way for people to shirk accountability for their actions. We're already seeing it in the reporting on the Iranian school bombed by the United States: blame AI for selecting the target, and not the humans in the loop who failed to do the most basic due diligence.

    • tim333 3 hours ago
      They don't even look that alike: the fraudster on CCTV appears about a decade younger than Angela Lipps.
    • underlipton 6 hours ago
      You shouldn't have to have a lawyer to get something this basic entered into the record. Rule of law that can't even get that right is useless, which is part of why so many people have less, or zero, faith in it today.
    • santoshalper 6 hours ago
      I still wouldn't let AI off the hook here. Every link in the chain has to be accountable for fuckups. You don't get to pass it along to the supposed "human in the loop" when you fail spectacularly. That's how we end up with shitty "almost works" AI.
      • mft_ 6 hours ago
        Sure, the AI contributed, but it was far less responsible overall than the humans in this case.

        Don't let the AI system off the hook by all means, but by focusing on it to this extent, the narrative ignores (deliberately?) the hugely negligent actions of the police et al involved.

        • alcomatt 4 hours ago
          AI, or more precisely the way it is being sold to us, is the most responsible factor here. People by nature are lazy and will take shortcuts given an opportunity. AI is the ultimate shortcut these days, a "mental crutch" that the majority of people using it are leaning on. Humans just did what they always do: be lazy. AI should never have been used for processes with this level of life-altering impact, because what happened here was bound to happen.
          • Hizonner 3 hours ago
            Nobody's "selling it" as more reliable than it is. People are assuming it's more reliable than it is.

            > People by nature are lazy and will take shortcuts given an opportunity.

            So, um, the fact that humans are behaving incompetently means we should shift the responsibility onto a machine?

            Suppose a human had looked at some crappy surveillance video from hundreds of miles away, and told the primary investigator "that looks like it could be her; you might want to check it out". Would that human be the most responsible person in the chain? The moron who took that as gospel and actually made an arrest has no agency at all here?

            Come on, a facial recognition match? Facial recognition probably shouldn't be used because it's bad when it works, but everybody with a functioning synapse knows that facial recognition is going to get lots of false hits.

        • jjj123 6 hours ago
          I agree, but I think the broader point here is that any automated system is a way to offload accountability. And it will be used for that without a doubt no matter how “good” the officers or human processes are.

          So it’s still reasonable to be skeptical of (or outright reject) the use of the technology in systems that can ruin or end people’s lives.

    • mkoubaa 6 hours ago
      Give them a hammer and everything becomes a nail
      • righthand 6 hours ago
        There's no better comparison for cops with technology than chimps with a gun.
  • hyperhello 6 hours ago
    In Oregon the courts just ruled that since defendants weren't provided a public defender within a certain amount of time, their cases were voided. There was an outcry, of course. But the ruling was sound: the pain had to be pushed to the part of the system that was failing. An honest system does not allow things like this; the accused either needs a competent advocate, or the case is void.
  • gnabgib 6 hours ago
    Discussion (730 points, 2 days ago, 379 comments) https://news.ycombinator.com/item?id=47356968
  • mulosolitario 3 hours ago
    AI gets it wrong 40%, 50%, even 70% of the time. Nonetheless Anthropic's Claude has been used (behind Palantir) by the most moral army in the world to decide who to kill in Gaza or where to drop the bombs in Teheran. AI "solves" the problem of accountability because it can fabricate all the "legitimate targets" you need. So now you can drop a bomb, kill 10 children, and claim it is moral because AI said they are terrorists.
  • rectang 6 hours ago
    My takeaway from the huge discussion thread yesterday was that the big divide among HN commenters is whether or not purveyors of AI tech have any responsibility to account for automation bias in their users.

    https://en.wikipedia.org/wiki/Automation_bias

    > Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.

    In other words, if it is foreseeable that the tool will be misused, what does that mean for the toolmaker?

  • OutOfHere 6 hours ago
    Those deploying AI where it can affect individuals must ensure that the UI always prominently shows the failure rate.

    For example, if a person's face is matched to an ID, the UI must show not just the match percentage (which is very misleading) but also, contextually, the odds of getting it wrong. Say there are 7 IDs whose faces are at least a 95% feature match to the thief: the odds of getting it wrong are at least 6 out of 7, meaning the chance of an accurate classification is just 1 in 7, about 14%, at best!
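
    A minimal sketch of that base-rate arithmetic in Python (the 95% threshold and the 7-candidate pool are the comment's illustrative numbers, not parameters of any real system; the function name is made up for illustration):

      # If several enrolled IDs all clear the same similarity threshold,
      # then picking any one of them is correct at most 1/N of the time,
      # no matter how impressive the per-face "match percentage" looks.
      def best_case_accuracy(candidates_above_threshold: int) -> float:
          # Upper bound on P(flagged ID is the right person); it also assumes
          # the true person is even enrolled in the database, which they may not be.
          return 1.0 / candidates_above_threshold

      # The comment's example: 7 IDs all score >= 95% against the CCTV face.
      print(best_case_accuracy(7))  # ~0.14 -> at best a 14% chance, despite "95% match"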

  • mvrckhckr 6 hours ago
    AI is a tool. It is humans who abdicate their responsibility (and thinking).
    • rectang 6 hours ago
      Howitzers are also tools, but we don't let just anyone own and operate them.
    • wat10000 6 hours ago
      Computers often serve as a tool for the avoidance of responsibility.
  • righthand 6 hours ago
    I’m sure the cops got a slap on the wrist and their lives are fine. ACAB.
  • mannanj 6 hours ago
    Humans kill people, not AI.
  • odshoifsdhfs 6 hours ago
    But have they tried the latest models? I understand this was from October last year, but Opus 4.6 is night and day. I wasn't a believer, but this latest model changed everything: it hasn't sent any innocent person to jail yet and identified all my neighborhood creeps 100%.

    /s