Article in Cyber Defense Magazine, January 2019
— Ned Hayes, General Manager, SureID


Facial recognition, one of the most popular methods of biometric enrollment and customized marketing, will bring us to ultra-surveillance, targeted assassinations and Black Mirror-style oversight. At least, this is what critics of the technology would have you believe. Yet we don’t see such dystopian outcomes in commercial authentication and identity verification today. So why are the critics so worried, and what can security professionals do to address their concerns?

By 2024, the market for facial recognition applications and related biometric functions is expected to grow at a 20% compound annual rate to almost $15.4 billion. Already, almost 245 million video surveillance systems have been deployed worldwide, and that number is growing. Video facial recognition technology isn’t going away.

Yet as the technology keeps expanding to new segments and use cases, ethical concerns have not settled down – in fact, they’ve proliferated. Early concerns focused on the surveillance itself: should human beings be watched 24/7? As the use of CCTV data in criminal investigations proved to have value, though, these concerns have grown to focus on the data that video stream provides and the inferences that can be drawn from that data.

Machine learning and new predictive techniques, when used to analyze a video stream, can produce findings well beyond facial identity. They can infer emotional state, religious affiliation, class, race, gender and health status. In addition, machine learning methods can estimate where someone is going (travel trajectory), where they came from (national origin), how much they earn (through clothing analysis), diseases they have (through analysis of the vocal tract), and much more.

Yet like all technology, these techniques are imperfect. They don’t always recognize faces accurately: false positives and false negatives happen. Some algorithms get confused if you wear a hat or sunglasses, grow a beard, or cut your hair differently.
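To make these failure modes concrete, here is a minimal sketch assuming a typical embedding-plus-threshold matcher; the vectors and threshold below are invented for illustration, not drawn from any real system. With a single fixed threshold, the real user in a hat can be rejected while a lookalike slips through.

```python
# Hypothetical sketch: face matchers commonly reduce each face to an
# embedding vector and accept a match when similarity clears a threshold.
# All vectors and the threshold here are invented for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

enrolled = [0.9, 0.1, 0.4]         # stored embedding of the real user
same_person_hat = [0.7, 0.3, 0.5]  # same user, wearing a hat
lookalike = [0.88, 0.12, 0.42]     # a different but similar-looking person

THRESHOLD = 0.97  # raising it causes false rejects; lowering it, false accepts

for label, probe in [("same person in a hat", same_person_hat),
                     ("lookalike", lookalike)]:
    score = cosine_similarity(enrolled, probe)
    print(f"{label}: score={score:.3f}, match={score >= THRESHOLD}")
# The hat-wearer scores ~0.954 (a false reject); the lookalike scores
# ~0.999 (a false accept). Both errors come from the same threshold.
```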

Even worse, the training data used to develop many early facial recognition algorithms consisted mostly of Caucasian faces, so the resulting systems recognize people of African and Asian descent less accurately. This can produce biased conclusions, even if the intent was never to build a biased system.

And biometrics themselves are not foolproof. Some facial recognition systems can be “hacked” with dolls, masks and false faces. Recently, Philip Bontrager, a researcher at NYU, revealed a “DeepMasterPrint”: a synthetic fingerprint that combines the characteristics of many fingers into one master print capable of logging into devices secured with only a single-fingerprint authentication routine. A single finger on a pad or a single face seen by a camera should be insufficient to grant access. Biometrics are hackable, and over time attackers will find more exploits that take advantage of known and unknown weaknesses.

So the critics of biometrically based recognition and authentication have a right to be concerned about the weaknesses of an early technology that is only now beginning to mature, yet is already broadly deployed in critical infrastructure and identity verification scenarios.

Two recent developments magnify the importance of these concerns and are changing the game for everyone who relies on biometrics. And this time, artificial intelligence researchers, activists, lawmakers and many of the largest technology companies are expressing concern as well.

These two developments, happening simultaneously, are:

  1. Machine Learning in Real Time: The advent of machine learning techniques that can draw inferences from video data very quickly and deliver them in near real time, with conclusions that look convincing (especially to untrained observers). The technology is fast, and its output is persuasive.
  2. Autonomous System Integration: The joining of these video surveillance conclusions with autonomous systems, so a conclusion from a facial recognition system can lead an autonomous system to take immediate action, with no human involvement required.

How might this be used? Let’s look at an example. Today’s autonomous systems can already take action. When you walk into a room, a home system camera could recognize you and set the lights (or music) to your preferred setting. Alexa can order products for you, a car can drive itself, or a building can lock its doors.
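As a rough illustration of how little code such wiring takes, here is a hedged sketch; the camera hook, recognizer, and preference table below are hypothetical stand-ins, not any vendor’s API.

```python
# Hypothetical sketch: a home automation loop that acts on a face match.
# recognize_face() and on_camera_frame() are invented stand-ins for a real
# camera pipeline; no specific product exposes exactly these calls.

PREFERENCES = {
    "alice": {"lights": "warm", "music": "jazz"},
    "bob": {"lights": "bright", "music": "off"},
}

def recognize_face(frame):
    """Stand-in for a face recognition model: returns (name, confidence)."""
    return "alice", 0.93  # placeholder result for the demo

def on_camera_frame(frame):
    name, confidence = recognize_face(frame)
    if confidence >= 0.90 and name in PREFERENCES:  # act only on a confident match
        prefs = PREFERENCES[name]
        print(f"Setting lights to {prefs['lights']} and music to {prefs['music']}")

on_camera_frame(frame=None)  # demo call with a dummy frame
```

Note that nothing in this loop asks a human before acting; the same pattern, attached to a door lock instead of a light switch, is what worries the critics.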

Understandably, then, activists and others are concerned that we will give these systems power over human life and agency. What if Alexa calls the police on your son? What if a system takes lethal action based on a false recognition? What if the door-locking mechanism also incapacitates an intruder? What if it incapacitates a legal resident because of a false face match?

These scenarios, to a limited extent, have already happened. Last March, a self-driving Uber car with “human recognition algorithms” built into its video system failed to recognize a pedestrian and killed her. News reports already indicate that the government of China is using such techniques to track minority populations and to assign risk scores to citizens, without those citizens’ knowledge or apparent consent.

Activists point to this use of surveillance and facial analysis technology as an example of how trust degrades in a society and how specific attitudes might be tracked by unscrupulous players, even in democratic societies with a free press and freedom of movement.

Businesses also see their reputations – and their stock prices – suffer from unethical activity. More than one company has discovered that when it violates the trust of partners or customers, business collapses overnight.

However, some moves are afoot to protect us against bad actors. This month, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge, which asks companies not to provide facial AI for autonomous weapons and not to sell to law enforcement unless explicit laws are passed to protect people.

Microsoft last week said that facial recognition married to autonomous systems carries significant risks and proposed rules to combat the threat. Research group AI Now, which includes AI researchers from Google and other companies, issued a similar call.

The problem the Safe Face Pledge is trying to address is that autonomous systems don’t truly have agency: if a system takes action, there is no one to hold accountable. An autonomous system doesn’t lose its job, get charged with a felony, or get a reprimand placed in its personnel file. This is a problem of accountability: who is ultimately responsible?

IT professionals and security experts now find themselves in the often-uncomfortable position of pondering the philosophical implications of deploying technology and mediating between the needs of a business and the need to act ethically. Fortunately, there are some simple steps you can take to walk this tightrope.

Use multiple safeguards for protection

Three distinct cautionary actions can protect your systems against charges of bias or overreach:

  1. Use multiple biometrics: Don’t rely on one low-fidelity biometric for high-security authentication. Enroll multiple fingerprints via a high-fidelity enrollment mechanism like a certified FBI channeler, not a single smartphone scanner. Even better, use a multi-factor biometric solution that combines multiple fingerprints, face+fingerprint, or voice+face+fingerprint.
  2. Take the Safe Face Pledge: It’s worth checking out the Safe Face Pledge (safefacepledge.org) to understand the implications of marrying facial recognition to autonomous systems (even a door lock can be autonomous!) and to prevent risks to your employer or the larger population. Ensure you’ve educated your business decision-makers on the evident problems with proliferating this technology without safeguards.
  3. Put a Human in the Loop: Be very cautious about allowing an autonomous system to take action based solely on facial recognition. This technology is, in many regards, still in its infancy and can’t be fully trusted. Always put a human being in the loop; a person needs to be accountable for decisions that have an impact on your business. (The sketch after this list shows how the first and third safeguards can work together.)
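Here is a minimal sketch of how the first and third safeguards might combine in practice; the thresholds, score values, and queue_for_human_review function are illustrative assumptions, not any vendor’s implementation.

```python
# Hypothetical sketch: multi-factor biometric authorization with a
# human-in-the-loop fallback. Thresholds and matcher scores are invented.

from dataclasses import dataclass

@dataclass
class MatchResult:
    face_score: float         # 0.0-1.0 similarity from the face matcher
    fingerprint_score: float  # 0.0-1.0 similarity from the fingerprint matcher

FACE_THRESHOLD = 0.95
FINGERPRINT_THRESHOLD = 0.95

def queue_for_human_review(result: MatchResult) -> None:
    """Stand-in for escalating an ambiguous match to a human operator."""
    print(f"Escalating to human review: {result}")

def authorize(result: MatchResult) -> bool:
    face_ok = result.face_score >= FACE_THRESHOLD
    finger_ok = result.fingerprint_score >= FINGERPRINT_THRESHOLD

    if face_ok and finger_ok:
        return True                     # two independent factors agree
    if face_ok or finger_ok:
        queue_for_human_review(result)  # one factor passed: a person decides
    return False                        # never auto-grant on a partial match

# Example: a strong face match alone does not open the door.
print(authorize(MatchResult(face_score=0.97, fingerprint_score=0.60)))  # False
```

The design choice to surface partial matches to a person, rather than silently granting or denying access, supplies exactly the accountability the Safe Face Pledge asks for: someone is responsible for the final decision.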

With these protections in place, it’s possible to deliver the clear differentiator of real-time facial recognition and autonomous technology to accelerate your business, while continuing to protect that business and build trust with partners and customers.