She apparently could not afford a lawyer, who would have pointed out that she was provably at home (transactions, etc.) at the time the crime was committed in another state.
Really, it's not specifically the AI's fault, though it made the error easier.
The AI was akin to an unreliable eyewitness in this case, although people's trust in the AI's judgement may have been higher than it would be in a human eyewitness.
The police made an inexcusable mistake out of carelessness. They simply couldn't be bothered to spend five minutes fact-checking the facial recognition match, and it caused catastrophic harm to an innocent woman.
And what's the headline? "AI did this". It's a new and exciting way for people to shirk accountability for their actions. We're already seeing it in the reporting on the Iranian school bombed by the United States: blame AI for selecting the target, and not the humans in the loop who failed to do the most basic due diligence.
By all means, don't let the AI system off the hook, but by focusing on it to this extent, the narrative ignores (deliberately?) the hugely negligent actions of the police et al. involved.
> People by nature are lazy and will take shortcuts given an opportunity.
So, um, the fact that humans are behaving incompetently means we should shift the responsibility onto a machine?
Suppose a human had looked at some crappy surveillance video from hundreds of miles away, and told the primary investigator "that looks like it could be her; you might want to check it out". Would that human be the most responsible person in the chain? The moron who took that as gospel and actually made an arrest has no agency at all here?
Come on, a facial recognition match? Facial recognition probably shouldn't be used because it's bad when it works, but everybody with a functioning synapse knows that facial recognition is going to get lots of false hits.
So it’s still reasonable to be skeptical of (or outright reject) the use of the technology in systems that can ruin or end people’s lives.
https://en.wikipedia.org/wiki/Automation_bias
> Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.
In other words, if it is foreseeable that the tool will be misused, what does that mean for the toolmaker?
For example, if a person's face is matched to an ID, the UI must show not just the match percentage (which is very misleading on its own) but also, in context, the odds of getting it wrong. Say there are 7 IDs whose faces are at least a 95% feature match to the thief: the odds of getting it wrong are at least 6 out of 7, meaning the chance of an accurate classification is just 14% at best!
/s
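Sarcastic delivery aside, the base-rate arithmetic in that comment is real. A minimal Python sketch of it (the function name is mine, and it assumes at most one true match is among the candidates who clear the threshold):

```python
def top_match_accuracy(num_candidates_above_threshold: int) -> float:
    """Best-case chance that picking one candidate from those above the
    match threshold identifies the right person, assuming at most one
    true match is among them. The match percentage itself tells you
    nothing about how many other people also cleared the bar."""
    if num_candidates_above_threshold < 1:
        raise ValueError("need at least one candidate above the threshold")
    return 1.0 / num_candidates_above_threshold

# Seven IDs all scoring >= 95% feature match, as in the example above:
accuracy = top_match_accuracy(7)
print(f"best-case accuracy: {accuracy:.0%}")  # best-case accuracy: 14%
```

The point of the sketch is that a high per-face score is compatible with a low probability of having the right person, which is exactly the gap a naive "95% match" UI hides.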