May 13, 2020

The Future of Facial Recognition: What Can We Expect, and Should We Be Worried?

In London, the average commuter is recorded around 300 times a day. CCTV cameras were introduced primarily to enforce the law and protect citizens, and across the UK it is estimated that there is one camera for every 11 people. Their presence has never felt especially imposing, and some may even find constant surveillance reassuring.

The situation is a little different in China. For Black Mirror fans who loved the “Nosedive” episode, China’s Social Credit System will feel eerily familiar. With the largest video surveillance network in the world, China uses its CCTV cameras to capture jaywalkers, neighbours arguing and more, combining the footage with shared online data to compute each citizen’s social credit score.

Citizens with a high social credit score enjoy real benefits: they can skip hospital waiting lists, secure a better education for their children, and rent a car without paying a deposit. Failure to comply with the system, however, can lead to a citizen being blacklisted as an “Untrustworthy Person”. Citizens with a low score are unable to leave the country or make luxury purchases, and anyone who phones them hears a warning about their low credit score, along with encouragement to remind them to repay their debts.

Although you may think the drastic measures China takes to control its citizens seem miles away, you’d be wrong. More recently, crime prevention in the UK has gone a step further, with facial recognition tools built for police forces to help identify suspects. And although some of these tools are still in their early stages, one company has managed to go above and beyond expectations, building software that boasts a success rate of 75%.

That company is Clearview AI, and it is already selling its facial recognition tool to federal authorities across the US. It keeps a deliberately low profile: anyone trying to locate its headquarters will find that Clearview AI’s webpage lists a fake Manhattan address, and its founder, the Australian technical prodigy Hoan Ton-That, is nowhere to be found on LinkedIn. Nor are any of his employees.

If that doesn’t already sound sketchy, consider how the Clearview AI facial recognition tool differs from the ones UK police forces are currently using. Whereas most recognition tools identify people by searching through mugshots, Clearview AI possesses a colossal database of photos scraped from social media platforms, newspapers, employment sites and more. And whereas most facial recognition tools only work if the suspect is directly facing a camera, Clearview’s software can identify a partially covered face in seconds. The image is converted into a mathematical representation of the suspect’s facial geometry, which is then compared against millions of online images.

Source: https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
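Clearview’s exact pipeline is proprietary, but the general technique described above, converting a face into a numeric embedding and searching a database by distance, is well documented. Below is a minimal sketch using the open-source face_recognition library; the file names, the toy database and the 0.6 threshold are illustrative assumptions, not details of Clearview’s system.

```python
# Minimal sketch of embedding-based face search, assuming the open-source
# face_recognition library. Illustrative only; Clearview's pipeline is proprietary.
import face_recognition

# Hypothetical database: embeddings precomputed from scraped photos.
known_images = ["gym_selfie.jpg", "profile_photo.jpg"]  # made-up file names
known_encodings = []
for path in known_images:
    image = face_recognition.load_image_file(path)
    # One 128-dimensional vector per face found in the image.
    known_encodings.extend(face_recognition.face_encodings(image))

# Encode the probe image (e.g. a CCTV still) the same way.
probe = face_recognition.load_image_file("cctv_still.jpg")
probe_encodings = face_recognition.face_encodings(probe)

if probe_encodings:
    # Compare the first detected face against every stored embedding.
    distances = face_recognition.face_distance(known_encodings, probe_encodings[0])
    for i, d in enumerate(distances):
        # 0.6 is the library's conventional match threshold, not a universal constant.
        verdict = "match" if d < 0.6 else "no match"
        print(f"candidate {i}: distance {d:.3f} -> {verdict}")
```

At the scale of a database with billions of photos, the linear scan above would be replaced by an approximate nearest-neighbour index, but the matching principle is the same.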

Its success stories are astonishing. In one case, a paedophile was arrested after the software located him in the background of someone’s gym selfie. And that’s not all: during the pandemic, Clearview AI plans to offer its tool as the US reopens its economy, so that people who may have come into contact with the virus can be tracked and identified quickly.

The problem is: where is all this data being stored? Government agencies, law enforcement bodies and other companies using the software must sign a non-disclosure agreement, and are therefore unable to reveal much about how the facial recognition tool works. Clearview’s FAQ sheet currently states that the company is “adding hundreds of millions of new faces [to its software] every month”.

Once your image has been matched, a user can find out your address, who you spend time with, how much you earn at work, and much, much more. You can only begin to imagine the dangers Clearview’s facial recognition tool would pose if it were to fall into the wrong hands.

Not only are people stripped of their right to privacy, but facial recognition tools have in some cases proved disastrous, misidentifying black men and women and reinforcing ongoing racial bias behind the façade of seemingly flawless algorithms. Although Clearview AI claims a 98.6% accuracy rate, its precision has never been audited through the National Institute of Standards and Technology’s Face Recognition Vendor Test.

In her TED Talk, “How I’m fighting bias in algorithms”, computer scientist Joy Buolamwini describes having to put on a white mask just to be detected by some facial recognition tools. Her research found an error rate as low as 0.8% when these tools identified white men, against a staggering 34.7% when identifying black women. The conclusion is fairly obvious: facial recognition tools designed by one demographic are likely to work best for that very same demographic.
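To make the arithmetic behind those headline numbers concrete, here is a hypothetical evaluation harness: the records below are made up purely to mirror that kind of disparity, and the only real logic is the per-group error-rate calculation that audits like Buolamwini’s report.

```python
# Hypothetical per-group error-rate breakdown. The records are invented to
# mirror the kind of disparity Gender Shades reported, not real audit data.
from collections import defaultdict

# Each record: (demographic group, was the system's prediction correct?)
results = [
    ("white men", True), ("white men", True), ("white men", True),
    ("black women", False), ("black women", True), ("black women", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# An aggregate accuracy figure hides exactly this breakdown.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} over {totals[group]} samples")
```

A single headline accuracy number, like the 98.6% Clearview claims, says nothing about how the errors are distributed across groups; only a breakdown like this reveals it.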

Terrified? I certainly am. But there are ways to push back. In May 2018, the EU introduced the GDPR, a regulation restricting how companies can collect and store an individual’s personal data. Additionally, Tim Berners-Lee is building a web infrastructure called Solid, which will give people their own personal pods of data that they can choose to share with, or stop sharing with, other companies.
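Solid has its own specifications and client libraries, but the core idea, data that stays under the owner’s control and is readable only while an explicit, revocable grant is in place, fits in a few lines. The class below is a purely illustrative toy, not Solid’s actual protocol or API.

```python
# Toy illustration of the "personal data pod" idea: the user holds the data
# and grants or revokes access per company. Not Solid's real protocol or API.
class DataPod:
    def __init__(self, owner: str):
        self.owner = owner
        self._data: dict[str, str] = {}
        self._grants: set[str] = set()

    def store(self, key: str, value: str) -> None:
        self._data[key] = value

    def grant(self, company: str) -> None:
        self._grants.add(company)

    def revoke(self, company: str) -> None:
        self._grants.discard(company)

    def read(self, company: str, key: str) -> str:
        # Access succeeds only while the owner's grant is in place.
        if company not in self._grants:
            raise PermissionError(f"{company} has no access to {self.owner}'s pod")
        return self._data[key]

pod = DataPod("alice")
pod.store("email", "alice@example.com")
pod.grant("news-site.example")
print(pod.read("news-site.example", "email"))  # allowed while the grant holds
pod.revoke("news-site.example")
# pod.read("news-site.example", "email")  # would now raise PermissionError
```

The crucial difference from today’s model is the direction of control: the company queries the user’s store on the user’s terms, rather than keeping its own copy forever.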

With developments like these, the future is looking brighter. But it’s always better to be careful, and to make sure you know exactly where your personal information is going...