Have you ever walked up to someone at a cocktail party (remember those?) whom you thought you knew, started talking, only to realize that wasn’t who you thought it was? Chances are you have. And that alone illustrates a major challenge for a burgeoning field of artificial intelligence: facial recognition.
On one hand, lots of people use facial recognition technology several times a day when they unlock their phones. It’s fast and easy, and it works nearly all of the time. But there are other uses that aren’t nearly as reliable. A good example of where the technology is problematic is when it’s used to identify people in crowds. This is the reverse of what happens on your phone, where the software compares your face with one that’s known and stored in the phone’s memory. Picking people out of crowds is much harder.
In fact, the problem of false positives in facial recognition is well known, and it’s one reason that the use of such software in body-worn cameras is specifically banned in the George Floyd Justice in Policing Act of 2021 (H.R. 1280). The bill has been passed by the House of Representatives, but it languishes in the Senate because of what may be an insoluble partisan divide. As a result, a patchwork of agencies uses facial recognition technology in a variety of ways, with little consistency or guidance.
What’s happening instead is that some makers of facial recognition software have made it a policy to control who can buy their products and how they can be used. For example, Amazon has announced that it will continue indefinitely its ban on sales of its facial recognition software to law enforcement agencies.
Meanwhile, U.S. states and localities are making their own rules. Virginia, for example, placed limits on the use of facial recognition software so strict that the entire Washington, DC, region was forced to stop using its system. Under the Virginia law, each law enforcement agency must receive advance permission from the General Assembly for each use, and because of that, the Washington Council of Governments couldn’t continue using the technology.
The Washington region gained some notoriety when its facial recognition system was used to identify protesters in Lafayette Square, across from the White House, after then-President Donald Trump decided to use the area for a photo op. The states of California and Illinois, and some localities in California, have similarly placed restrictions on the use of facial recognition.
The problem with facial recognition, and the reason for the pushback, is that the AI software behind it is being asked to do a nearly impossible job. There’s a significant difference between your phone recognizing you and a security camera being used to recognize potentially millions of people. When your iPhone registers your face with Face ID, it uses an infrared projector to paint your face with thousands of dots in a grid pattern, and it uses that pattern to build a detailed 3D image of your face. That image is stored on the phone, and it’s probably the only face the phone ever needs to recognize.
When an AI system is trying to compare faces in a crowd, it’s comparing them to millions of flat, low-resolution images of people. In some cases, those are mug shots of people who’ve been arrested. Sometimes those photos are from social media or other sources. There is no 3D image.
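To make that distinction concrete, here’s a minimal sketch in Python. It assumes faces have already been reduced to embedding vectors by some recognition model; the vectors, names, gallery size, and threshold are all hypothetical. The point is structural: verification (your phone) compares a probe against one trusted, high-quality template, while identification (the crowd scenario) searches an enormous gallery and always produces a best match, whether or not the person is actually in it.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 check, Face ID style: is this the single enrolled face?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict) -> tuple:
    """1:N search, crowd-scanning style: returns the best-scoring candidate
    even when the person isn't in the gallery at all."""
    return max(
        ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical 128-dimensional embeddings standing in for real model output.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)  # a face that is NOT in the gallery

name, score = identify(probe, gallery)
print(f"Best match: {name} at similarity {score:.3f}")  # a confident-looking wrong answer
```

Even with purely random vectors, identify() happily names a “best match.” The only safeguard is where you set the similarity threshold, and judging that is exactly where an untrained operator can go wrong.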
To make matters worse, a lot of people look alike, and facial recognition simply doesn’t have enough information in its databases to register fine differences between them. This is like the problem you had at that cocktail party.
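The scale of that look-alike problem is easy to underestimate, so a back-of-the-envelope calculation helps. The figures below are illustrative assumptions, not measurements of any real product: even a false-match rate that sounds impressively low turns into thousands of wrong hits when multiplied across a large database.

```python
# Illustrative figures only -- not vendor specifications.
false_match_rate = 0.001      # assume a 0.1% chance of wrongly matching any one gallery face
gallery_size = 3_000_000      # assume a large mug-shot database

# Expected number of wrong hits for a single probe face pulled from a crowd.
expected_false_hits = false_match_rate * gallery_size

# Probability that at least one innocent person gets flagged.
p_at_least_one = 1 - (1 - false_match_rate) ** gallery_size

print(f"Expected false matches per search: {expected_false_hits:,.0f}")  # 3,000
print(f"Chance of at least one false match: {p_at_least_one:.6f}")       # effectively 1.0
```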
Add to these difficulties the fact that facial recognition works best on Caucasian males. It doesn’t work as well on women or on people who aren’t white. There is speculation that this is because the people who develop such software are predominantly white males.
Complicating matters is the use of facial recognition by hundreds of law enforcement agencies whose investigators have little or no training in how to use the technology. One company, Clearview.ai, has been marketing heavily to law enforcement, and its product may be the dominant software in use by those agencies. The company has a database of over 3 billion facial images gathered from publicly available sources such as social media and news sites.
The problem with using such images is that they’re frequently not representative of the person’s actual appearance, and even when they are, they may not contain enough data to support an accurate identification. Yet some law enforcement agencies treat facial recognition matches based on those images as definitive, resulting in arrests of innocent people.
For its part, Clearview says it’s working to train investigators in the ethical use of facial recognition, as well as in how to gauge the level of accuracy involved. But there’s nothing to prevent misguided action by law enforcement based on inept use. And that’s the reason for the laws banning facial recognition, and for the calls for federal legislation setting standards.