Calls for Amazon to stop selling its facial recognition technology

via Flickr © IBM Research (CC BY-ND 2.0)

  • "Rekognition" system allegedly biased against women and people of colour
  • Test misidentified 28 Members of US Congress as convicted criminals; 38 per cent of the false matches were people of colour
  • Unacceptable levels of inaccuracy and lack of regulation a recipe for potential disaster
  • But moves afoot to draw up standards and governmental regulatory oversight

Serendipity strikes again! Just as TelecomTV publishes a story about the disbandment of Google's Advanced Technology External Advisory Council (ATEAC), which collapsed in part over concerns about the accuracy and use of facial recognition technology, comes news that 25 noted and influential AI scientists have written to Amazon calling on the company to stop selling its "Rekognition" facial recognition technology to police forces and other law enforcement agencies because, they allege, the system is biased against women and people of colour.

The signatories to the letter say facial recognition is immature and insecure, and that the artificial intelligence behind such systems carries within it the inherent (and probably subconscious) biases and prejudices of the people who developed and wrote it. Opinion is hardening that the hurried deployment of facial recognition technology in the US requires immediate regulation and oversight by a properly appointed government body. There is also great concern that the technology is being used secretly and that, in many cases, the public have no idea they are being spied upon and profiled.

Amazon promotes and sells Rekognition via Amazon Web Services, its cloud-computing arm, and when it introduced the technology the company was bullish, quickly and loudly announcing that among the first to adopt Rekognition were the Orlando Police Department in Florida and the Washington County Sheriff's Office in Oregon. These days Amazon is rather more coy and reticent about who is buying its "deep learning image analysis" product, and where.

On its website Amazon Web Services describes Rekognition as "a highly accurate facial analysis and facial recognition system used to detect, analyse, and compare faces for a wide variety of use cases, including user verification, cataloging, people counting, and public safety".

However, back in January a study conducted by researchers at the Massachusetts Institute of Technology (MIT) found that Rekognition is unreliable when it comes to determining gender and other physical attributes among some ethnicities. The research indicates that in experiments conducted throughout 2018, Rekognition misidentified photos of women as men 19 per cent of the time, and photos of darker-skinned women as men 31 per cent of the time. This compared very badly indeed with Microsoft's facial recognition software, which misclassified the same attributes just 1.5 per cent of the time. Amazon says the MIT findings are inaccurate and misleading.
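The kind of audit MIT ran boils down to tallying error rates separately for each demographic group and comparing them. A minimal sketch of that tallying, using invented illustrative records rather than the actual study data (the group names and counts below are chosen only to reproduce the headline percentages), might look like this:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.

    Returns the fraction of misclassified samples per group.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented data: 100 samples per group, with error counts matching the
# percentages reported in the article (19% and 31%).
records = (
    [("women overall", "woman", "woman")] * 81
    + [("women overall", "woman", "man")] * 19
    + [("darker-skinned women", "woman", "woman")] * 69
    + [("darker-skinned women", "woman", "man")] * 31
)
rates = error_rates_by_group(records)
# rates["women overall"] -> 0.19, rates["darker-skinned women"] -> 0.31
```

The point of such a disaggregated audit is precisely that an overall accuracy figure can look respectable while one subgroup's error rate is far worse than another's.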

Rekognition has history and form where alleged bias is concerned. In a humdinger of a test undertaken by the American Civil Liberties Union (ACLU) in the summer of 2018, Rekognition was loaded with 25,000 mugshots from a "public source" and set to comparing them with official photos of members of Congress. The Amazon product misidentified 28 Congressmen and Congresswomen as convicted criminals, and 38 per cent of those incorrect identifications were members of Congress of colour.

Again, Amazon disputes the results of the test, but in all conscience how can such a massive level of misidentification augur well for the public when Rekognition is used by law enforcement agencies and the system is not subject to overarching governmental control?

Increasing need for regulation and oversight

Amazon says it is continually striving to refine and improve Rekognition, not just in-house but also by providing significant funding to individual academic research projects, and adds that it is "interested" in the design and implementation of a "standardised" test for both facial analysis and facial recognition.

It also claims to be keen to co-operate with federal regulators on guidance for the use of the technology. Nonetheless, after the massive rate of misidentification that characterised the ACLU test, legislators have taken up the cudgels and the Senate is looking at a Bill that would limit the collection and tracking of facial data without consent.

Among academics, regulators and some companies, a consensus on basic standards to be applied to facial recognition is beginning to emerge. They include the provisions that:

  • The technology should always be used lawfully, and that includes all legislation that protects civil rights.
  • When law enforcement agencies use facial recognition technology, decisions made in regard to arrests, arraignment and trials must be reviewed by more than one human being of high enough rank to determine if action should be taken. The level of confidence that facial recognition technology is accurate must be at least 99 per cent.
  • Law enforcement agencies must be required by law to be 100 per cent transparent about the deployment, scale and use of the technology and be subject to regulatory oversight and inspection of facial recognition systems.
  • Highly visible public notices telling the public that video surveillance and facial recognition are being used in public and commercial settings must always be in place.
  • All law enforcement agencies must obtain a court order when using facial recognition to identify and track specific individuals.
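The 99 per cent confidence provision above is not an arbitrary detail: the number of candidate "matches" a system flags depends entirely on where the similarity cutoff is set. A hypothetical sketch, with invented similarity scores standing in for whatever a real system would return for pairs of photos, shows how much the flagged set shrinks between a lax cutoff and a strict one:

```python
def matches_above(scores, threshold):
    """Return the candidate pairs whose similarity score meets the threshold."""
    return [(pair, score) for pair, score in scores if score >= threshold]

# Invented scores for (probe photo, mugshot) pairs -- illustrative only.
scores = [
    ("probe-A vs mugshot 1", 99.2),  # plausibly a genuine match
    ("probe-B vs mugshot 7", 85.4),  # plausible false match
    ("probe-C vs mugshot 3", 81.0),  # plausible false match
    ("probe-D vs mugshot 9", 62.5),
]

lax = matches_above(scores, 80.0)     # three pairs flagged at a lax cutoff
strict = matches_above(scores, 99.0)  # only one pair survives a 99% cutoff
```

A system run at a permissive threshold will surface many more false matches, which is why the emerging standards tie law-enforcement use to a high confidence floor and to human review of whatever the software flags.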

Well, it's a start, but given that there are as yet no laws or standards in place to ensure that facial recognition technology is used in ways that will not infringe civil liberties and other legal rights, for now such technology is Big Brother's wet dream come true.
