24 points by Brajeshwar 4 hours ago | 9 comments
  • bsenftner an hour ago
    Former author of one of the top 5 facial recognition servers in the world for multiple years running, here's what's going on: the industry has solved this issue, but the potential clients are seeking the lowest bidder and picking the newer companies, the nepotistically created, well-connected firms that are not really players, and those companies have terrible implementations. This is not a case of the technology not being there yet; we solved all these racial bias issues 10 years ago. But new companies, with new training sets and new ML engineers who do not know any of the industry's history, are now landing contracts with terrible-quality models but well-connected sales channels.
    • graemep 12 minutes ago
      So just like the rest of government IT then.
  • blitzar 2 hours ago
    > the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”

    Technology has no doubt moved on a lot; however, a lazy Google literature search turns up studies finding the opposite (and with order-of-magnitude errors) as recently as 2020:

    > these algorithms were found to be between 10 and 100 times more likely to misidentify a Black or East Asian face than a white face

    https://jolt.law.harvard.edu/digest/why-racial-bias-is-preva...

    • krisoft an hour ago
      Given that these are machine learning algorithms, their performance will very much depend on the training dataset. So it is probably not (just) that “technology has moved on a lot”, but that the engineers working on it curated new training sets. It is not entirely unreasonable to think that they too read the paper you are talking about and took measures to correct for the effect.
      • stuaxo 8 minutes ago
        Maybe, or there might be qualities of, say, contrast and of the physical cameras themselves that build this in.
      • blitzar an hour ago
        Then, in theory, the dataset can be changed to make model error rates "fair" for all intersections of race, gender, age, etc.
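        A minimal sketch of what that per-intersection measurement could look like, in Python; the records, group labels, and field names here are entirely hypothetical, not anything from the article or the deployed system:

          from collections import defaultdict

          # Hypothetical labelled evaluation records:
          # (group, was_on_watchlist, flagged_by_system)
          records = [
              ("black male", True, True),
              ("white female", True, False),
              ("white male", False, True),
              ("asian male", True, True),
              # ... thousands more sightings in a real evaluation set
          ]

          counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
          for group, on_list, flagged in records:
              key = ("tp" if flagged else "fn") if on_list else ("fp" if flagged else "tn")
              counts[group][key] += 1

          # "Fair" would mean these rates coming out roughly equal per group.
          for group, c in sorted(counts.items()):
              positives = c["tp"] + c["fn"]
              tpr = c["tp"] / positives if positives else float("nan")
              print(f"{group}: {tpr:.0%} of watch-list sightings correctly identified")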
  • OJFord 2 hours ago
    This is actually more (socially/ethically/philosophically) interesting than one might assume from the headline: it's not false positives; it's that the system is more effective (correctly identifying that someone is on a watch-list) for one group than for another within a protected characteristic.

    So essentially they're pausing the use of it because it works too well for group A / not well enough for group B, potentially leading to disproportionate (albeit correct) arrests of group A.

    • metalman 2 hours ago
      Absolutely impossible to condone further structural bias against a minority and just ignore the free "white pass" built into the software, and especially troubling that it passes white women the most. The only possible action is to reject and disable any system with a racial bias, and investigate how such a thing happened, with a very pointed look for intent on the part of the vendors, who would then qualify for being housed in one of His Majesty's facilities for persons such as these.
      • edgyquant 2 hours ago
        If it's not falsely identifying people, I don't see a problem at all. If it's identifying criminals, every criminal should be caught.
        • almostjazz 17 minutes ago
          If you start with hypothetical demographic groups A and B that are for all intents and purposes exactly identical, but you implement a system such that if A commits a crime they have a 10% chance of being caught and if B commits a crime they have a 50% chance of being caught, you will achieve the following:

          1. More short-term crime prevention than a system catching 10% of A's crimes and 10% of B's crimes (good!)

          2. Enforce a societal belief that A is intrinsically better than B (bad!)

          3. Disproportionately burden children, families, and communities in B relative to A, causing them to indeed perform worse in everything than those in A (bad!)

          4. As a result of 2 & 3, it is not a stretch to say this simply causes B to commit more actual crime (potentially negating point 1 entirely)

          If you believe that crime enforcement is not for the sake of vengeance but instead something done to improve the well-being, safety, and happiness of citizens, you may see that inequality=bad just as crime=bad. How best to solve this trolley problem is complicated, but it's important that people are aware that it is complicated before firing off an answer.
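          To make the tension between point 1 and points 2-4 concrete, here is a small simulation (numbers hypothetical, not from the article) of two equally offending groups caught at 10% and 50% respectively:

            import random

            random.seed(0)
            CRIMES_PER_GROUP = 10_000            # A and B offend identically
            catch_rate = {"A": 0.10, "B": 0.50}  # unequal enforcement

            caught = {
                g: sum(random.random() < p for _ in range(CRIMES_PER_GROUP))
                for g, p in catch_rate.items()
            }
            print(caught)  # roughly {'A': 1000, 'B': 5000}

            # ~6000 total interdictions vs ~2000 under a uniform 10% rate:
            # the short-term gain of point 1. But B now appears in arrest
            # statistics five times as often for identical behaviour, which
            # is the raw material for points 2 and 3.
            print(sum(caught.values()), "vs", int(2 * CRIMES_PER_GROUP * 0.10))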

        • metalman 31 minutes ago
          It is FALSELY unidentifying people, which makes the hardware, software, sales, and implementation of the whole system a criminal enterprise, which it is. Kudos to the police for rejecting this racist, bigoted, unjust, criminal software implementation.
        • Joker_vD 2 hours ago
          See, what you've said is precisely "structural bias against a minority", or "systemic injustice". Then again, the elites are, technically, a minority as well, and we all know how well letting their crimes slide works out for the rest of society.
  • ap99 3 hours ago
    > more likely to correctly identify men than women.

    > more likely to correctly identify black participants than participants from other ethnic groups.

    > AI surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.

    I wonder if they're more worried about putting too many men in prison or too many black people.

    • graemep 7 minutes ago
      They are concerned about a higher rate of false positives (and therefore a higher rate of incorrect arrests, etc.) for white people (and probably Asians, etc.) and women. This is also discriminatory.

      People forget equality law runs both ways. It is illegal to discriminate against men, whites, or heterosexuals just as it is to discriminate against women, non-whites, or gays.

    • xenocratus 2 hours ago
      Neither, they're worried about bad rep.
  • pingou 3 hours ago
    If the suspect is Black, the software should automatically return zero matches in 30% of cases. Problem solved.
  • ghusto 2 hours ago
    > the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.

    I am genuinely unsure what's going on.

    My understanding of the article is that the system is problematic because it is more likely to correctly identify black people than "other ethnic groups". Is that right?

    • defrost 2 hours ago
      It's problematic for use in Essex as it works best for a small minority of the Essex population and has a much higher error rate for a typical sample of the Essex community.

      Addendum: Essex ethnicity breakdown (2021): 85.1% White British · 5.2% Other White · 3.7% Asian · 2.5% Black · 2.4% Mixed · 1.1% Other.

      from: https://en.wikipedia.org/wiki/Essex

      i.e., most accurate (however accurate that is) for the men of the 2.5% of the region's population,

      not so accurate for the remaining 98.75% of the region's population.
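      The arithmetic behind those two figures, assuming men are roughly half of the Black population share:

        # Assumption: the best-served cohort is black men, taken here as
        # roughly half of the 2.5% Black share of Essex (2021 census).
        black_share = 0.025
        black_men = black_share / 2      # 1.25% of the population
        everyone_else = 1 - black_men    # 98.75%

        print(f"best served: {black_men:.2%}, rest: {everyone_else:.2%}")
        # -> best served: 1.25%, rest: 98.75%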

    • OJFord 2 hours ago
      Essentially (with made-up numbers): 100 men on a high street, 4 of whom are on a watch-list, 2 of those black. Both black guys get identified; only one of the others does.

      Ditto men vs. women, mutatis mutandis.

      • graemep 6 minutes ago
        The problem is that a likely outcome is that they will arrest two white men who are not the ones on the watch-list. That is discriminatory, at least if it keeps happening, so that you get a higher rate of wrongful arrests for one group.
      • edgyquant 2 hours ago
        So it should be improved, but it sounds like it's just catching criminals who need to be caught, no?
        • bondarchuk 2 hours ago
          • ap99 39 minutes ago
            Having lived in large urban areas my entire adult life and watched how different cultures behave, I can say there are in fact differences.

            Ignoring the color of someone's skin: the person who routinely litters, breaks small rules, breaks large rules, ignores customs, flouts laws, is not deferential to authority, etc. Do you think they'll be more or less likely to end up in prison?

  • bloqs 2 hours ago
    Correlation does not indicate causation
  • gib444 3 hours ago
    Alternative headlines:

    Essex police, well aware of all the issues before using it, pause use until expected bad publicity dies down

    Or

    Essex police chosen as the force to take some flak for the issues while other forces steam ahead