Biometric bias and how to prevent it [Q&A]
From Dr. Mohamed Lazzouni and Betanews
This article first appeared on betanews.
As we move away from passwords to other forms of authentication, increasing reliance is being placed on technologies like biometrics.
But there's growing evidence that this technology could be flawed, with facial recognition exhibiting higher error rates for people with darker skin, for example. We spoke to Dr. Mohamed Lazzouni, CTO of Aware, about the ethical issue of bias in biometrics and what needs to be done to prevent it.
BN: In recent months, there’s been extensive coverage of facial recognition systems being biased and inherently flawed. Do you think this is a fair assessment of the technology as a whole?
ML: Biometrics can be a major force for good, and it is unfortunate that most, if not all, of the negativity we see is due to the use of subpar technology. The most recent example is Rite Aid, which the FTC in late December banned from using facial recognition technology in its US retail locations for five years. The ban came after it was discovered that Rite Aid's technology, used to identify shoplifters, had generated thousands of false positives between 2012 and 2020, disproportionately impacting women and people of color.
Fortunately, other biometric technologies are far more accurate and able to overcome this bias, making biometrics good for business and, moreover, for society. Today, the top 150 biometric algorithms are over 99 percent accurate across a variety of demographics. The result is bias-proofed systems delivering 'close to perfect' performance, with miss rates averaging a mere 0.1 percent. These types of biometrics are the most reliable and accurate forms of identity verification in the world.
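Claims of demographic parity like these can be checked empirically by auditing a matcher's error rates per group. As a minimal illustrative sketch (the audit log, group labels and metric below are hypothetical stand-ins, not data from any real system), an organization might compare per-group false non-match rates:

```python
# Illustrative bias audit: compare a face matcher's false non-match
# rate (FNMR) across demographic groups. All data here is synthetic.
from collections import defaultdict

def false_non_match_rates(results):
    """results: iterable of (group, is_genuine, accepted) tuples.
    Returns per-group FNMR: the share of genuine (same-person) pairs
    that the matcher wrongly rejected."""
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for group, is_genuine, accepted in results:
        if is_genuine:
            genuine[group] += 1
            if not accepted:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

# Hypothetical audit log: (demographic group, genuine pair?, accepted?)
log = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, True),
]
rates = false_non_match_rates(log)
# A simple parity check: the gap between the worst- and best-served groups.
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)
```

A large gap between groups, as in this toy log, is exactly the kind of disparity the Rite Aid case exposed; a well-trained algorithm should keep it near zero.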
BN: Are there differences in the way various facial recognition systems work? Are some more or less prone to errors?
ML: There are key differences in the way various facial recognition systems work. In recent years, there have been tremendous advances in facial recognition in the areas of deep learning, the availability of massive amounts of data for algorithm training, and extensive testing. A facial algorithm can only be effective and accurate if it is trained on the most extensive data sets in the world, including all genders, nationalities and minorities.
In a commercial setting, for instance, accuracy is vital for anyone accessing a financial services app with their face or checking in their luggage at an airport. Speed and convenience are also critical. The best performing algorithms will be those that can deliver the holy grail: eliminating racial, gender and other biases to achieve optimal demographic parity, while also keeping the customer experience top of mind.
BN: What needs to happen in order to make these systems more universally accurate?
ML: Ongoing training on diverse datasets is critical in order to continue teaching AI-based algorithms and improving their performance. However, the rise of deepfakes is creating the need to augment this bulletproof accuracy with something more: liveness detection, a technique in which an algorithm securely detects whether the source of a biometric sample is a fake representation or a live human being.
Deepfakes can deliver incredibly convincing images and video hoaxes — essentially stitching anyone in the world into an image or video in which they never actually participated. Face swapping is one common example of this. Liveness detection works on the basic premise that any deepfake generator creates artifacts and patterns which are distinctly different from natural human interactions and physiological attributes. There are some exciting advances happening in liveness detection. Intel, for example, recently rolled out a real-time deepfake detector able to determine whether a video’s subject is real by ascertaining whether there is blood flow to the face.
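The blood-flow premise behind detectors like Intel's can be illustrated with a toy sketch. The signals below are synthetic and real systems are far more sophisticated; this only demonstrates the underlying idea that a live face's skin color fluctuates periodically with the pulse, while a synthesized face typically lacks that rhythm:

```python
# Toy sketch of photoplethysmography-style liveness checking: look for
# pulse-like periodicity in a face's color signal over time. Signals
# are synthetic for illustration; real detectors are far more complex.
import numpy as np

def pulse_band_fraction(signal, fps=30.0):
    """Fraction of the signal's spectral power in the 0.7-3 Hz band
    (roughly 42-180 beats per minute, plausible human heart rates)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return spectrum[band].sum() / spectrum.sum()

rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                     # 10 seconds at 30 fps
# "Live" signal: a 1.2 Hz (72 bpm) pulse buried in sensor noise.
live = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(300)
# "Fake" signal: noise with no pulse-like rhythm.
fake = 0.3 * rng.standard_normal(300)

print(pulse_band_fraction(live))  # most power sits in the pulse band
print(pulse_band_fraction(fake))  # power is spread across frequencies
```

Thresholding a score like this is one crude way to flag the artifacts and missing physiological attributes mentioned above; production systems combine many such cues.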
BN: What are some guidelines organizations should consider as they evaluate the implementation of facial recognition systems?
ML: First and foremost, make sure you select an algorithm with a proven track record of achieving optimal demographic parity. The rankings regularly released by the National Institute of Standards and Technology (NIST), widely considered the gold standard in cybersecurity, are borne out of a highly rigorous process and serve as excellent validation.
Second, be certain that your algorithm can deliver accurate results without compromising speed and convenience for users. As biometrics are increasingly used in applications involving people's money (online investing and online wagering, for example, where even seconds are of the essence), maintaining an excellent customer experience is critical. In commercial settings, offering biometrics as the primary form of authentication can give many organizations a distinct competitive advantage. In fact, according to one of our recent surveys, more than half of consumers would rather sign up for a new product or service using biometrics, and this easy verification makes them more likely to continue using the product or service.
In addition, there are a number of ethical considerations when deploying biometrics in a physical commercial setting. Rite Aid allegedly did not inform customers of its use of facial recognition, and providing notice to customers about the use of this technology at any given location is essential.
As organizations roll out biometric authentication, they should make every effort to educate users by offering clear opt-in/opt-out procedures, giving everyone complete control over how their biometric data gets used (or not). If a person does not wish to provide his or her biometric data, organizations should always offer an alternative means of verification. In the vast majority of cases, the desire for convenience will win out and most people will choose the biometric method. An excellent case in point: airports around the world have found that, by using biometrics, they can board flights in a fraction of the time it takes using standard identification documents, and passengers greatly appreciate the faster boarding.
Finally, when implementing biometrics, organizations must stay fully transparent about how the data gets collected, transferred for processing and retained, and how and when it is discarded. Specifically, organizations will want to highlight for end users the added protections put in place and the extent (if any) to which biometric data is shared with third parties.