NIST is working on the third revision of SP 800-63, which used to be called the Electronic Authentication Guideline and has now been renamed the Digital Identity Guidelines. An important change in the current draft of the third revision is a much expanded scope for biometrics. The following are comments by Pomcor on that aspect of the new guidelines, and more specifically on Section 5.2.3 of Part B; we have sent them to NIST in response to a call for public comments.
The draft is right in recommending the use of presentation attack detection (PAD). We think it should go further and make PAD mandatory right away, rather than deferring the requirement to a future edition, as a note in the draft anticipates.
But the draft only considers PAD performed at the sensor. In some modalities, such as fingerprint verification, PAD can indeed only be performed at the sensor. In other modalities, however, such as face, eye, iris or voice biometrics, PAD can be performed, and is commonly performed today, by the remote, or central, verifier.
For example, in face verification, liveness verification with replay detection can be performed by asking the subject to read a random sequence of digits and using lip-reading techniques to verify that the challenge sequence has been read. Similarly, in voice verification, liveness can be verified with replay detection by asking the subject to read random prompted text and using speech recognition techniques to verify that the challenge text is the one being read.
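For concreteness, here is a minimal sketch of the server side of such a challenge-response flow for face verification. Only the challenge generation is real code; `lip_read_digits` and `match_face` are hypothetical placeholders for a lip-reading model and a face matcher, which an actual deployment would have to supply.

```python
import secrets

CHALLENGE_LENGTH = 6

def issue_challenge() -> str:
    """Generate a fresh random digit sequence for the subject to read."""
    return "".join(secrets.choice("0123456789") for _ in range(CHALLENGE_LENGTH))

def lip_read_digits(video) -> str:
    """Placeholder for a lip-reading model that transcribes the digits
    the subject mouths in the video."""
    raise NotImplementedError("substitute a real lip-reading model")

def match_face(video, reference_template) -> bool:
    """Placeholder for a face matcher that compares the face in the
    video against the enrolled reference template."""
    raise NotImplementedError("substitute a real face matcher")

def verify_submission(video, challenge: str, reference_template) -> bool:
    """Accept the subject only if the video is both live and a match."""
    # Replay detection: the digits read on camera must match the freshly
    # issued challenge, so a recording of an earlier session fails.
    if lip_read_digits(video) != challenge:
        return False
    # Biometric matching against the enrolled reference.
    return match_face(video, reference_template)
```

Because the challenge is generated fresh for each session, a replayed video from an earlier session cannot answer it, which is what gives the remote verifier its PAD capability.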
Biometric verification with presentation attack detection by the remote verifier provides a key security benefit: it is the only remote verification technique that is not vulnerable to malware or physical tampering attacks against the user device where the sensor is located. (Update: It is actually vulnerable to malware. See my own comment below.)
There is another issue with Section 5.2.3. When biometric matching is performed by the verifier, Section 5.2.3 requires the use of biometric verification techniques discussed in ISO/IEC 24745 and variously known as revocable biometrics, biometric template protection, renewable biometrics, cancelable biometrics, biometric key generation, biometric cryptosystems, fuzzy extractors, fuzzy vaults, etc. In those techniques the verifier combines a biometric authentication sample with auxiliary, or helper, data derived from an enrollment sample and random bits to generate a biometric key. Error correction techniques are used to produce the same key from varying but genuine samples. The consistently generated biometric key can then be verified, e.g., against a hash of the key.
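For readers unfamiliar with these schemes, here is a minimal sketch of the code-offset construction that underlies many of them, using a toy repetition code for error correction. A production scheme would use a stronger code, both for better error tolerance and because a repetition code leaks more through the helper data; the bit-string representation of samples is also an assumption, as real modalities require a feature extractor.

```python
import hashlib
import secrets

R = 5  # repetition factor: tolerates up to 2 flipped bits per key bit

def _encode(key_bits):
    """Repetition code: repeat each key bit R times."""
    return [b for b in key_bits for _ in range(R)]

def _decode(code_bits):
    """Majority vote over each block of R bits."""
    return [int(sum(code_bits[i:i + R]) > R // 2)
            for i in range(0, len(code_bits), R)]

def enroll(sample_bits):
    """Derive (helper data, key hash) from an enrollment sample."""
    key = [secrets.randbelow(2) for _ in range(len(sample_bits) // R)]
    codeword = _encode(key)
    # Helper data: the codeword masked by the biometric bits.
    helper = [c ^ s for c, s in zip(codeword, sample_bits)]
    key_hash = hashlib.sha256(bytes(key)).hexdigest()
    return helper, key_hash

def verify(sample_bits, helper, key_hash):
    """Regenerate the key from a fresh sample and check it against the hash."""
    noisy_codeword = [h ^ s for h, s in zip(helper, sample_bits)]
    key = _decode(noisy_codeword)  # error correction absorbs sample noise
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash
```

If a fresh sample differs from the enrollment sample in at most two bits per five-bit block, the majority vote recovers the same key and the hash check succeeds, while the verifier never stores the biometric sample itself, only the helper data and the hash.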
Revocable biometric techniques provide important security and privacy benefits in some use cases, because the auxiliary data, if captured by an adversary, provides no useful biometric information. Thus biometric information is safe against an adversary who breaches a user database that contains such auxiliary data. But revocable biometric techniques are not applicable, or provide only limited benefits, in other use cases. They are not applicable if the match is performed against a database of existing, non-revocable biometric data, such as a Department of Motor Vehicles (DMV) database, or against an image in a photo ID presented by the claimant. They do provide a benefit if the match is performed against biometric verification data in a rich credential, by protecting the subject’s biometric information against an adversary who captures the credential; but the benefit is not as great as when the match is performed against a database, because the risk that they mitigate is not as dire. Rich credentials are not stored in a database, so an adversary who goes after biometric information in rich credentials must capture them one at a time, instead of capturing them all at once by breaching a database.
Moreover, as pointed out in Section 5.2.3 itself, the availability of revocable biometric techniques is limited. Verification techniques for hot modalities such as face and voice are evolving rapidly. Revocable biometric techniques have been proposed in academia for those modalities but do not seem to have kept up with the latest improvements. Mandating their use for remote matching would prevent Federal Government agencies from using the state-of-the-art techniques for face and voice verification with remote presentation attack detection that are commonly used today in the private sector.
I am commenting on my own post to correct a mistake. The post says that biometric verification with presentation attack detection by a remote verifier “is not vulnerable to malware or physical tampering attacks against the user device where the sensor is located”. I still believe that it is not vulnerable to physical capture of the user’s device and subsequent tampering to extract data from the device. But it is vulnerable to a malware-mediated man-in-the-middle attack.

The attack is as follows. The victim and the adversary each have a computing device, which they use to access a remote, or central, verifier across the Internet. The verifier identifies subjects by asking each subject to submit, e.g., a video stream in which the subject performs actions in response to a challenge. The verifier matches the face that appears in the video against a reference facial image or template, and verifies that the subject is performing the actions specified by the challenge, as part of presentation attack detection.

The adversary wants to impersonate the victim while accessing the verifier from the adversary’s own device. To that end, the adversary attacks the victim’s device and succeeds in exploiting a vulnerability to install malware that gains full control of the device. The malware patiently waits until the victim tries to prove his/her identity to the verifier. When that happens, the victim’s malware-controlled device makes no connection to the verifier, but makes the victim believe that a connection has been made and a biometric identification is taking place, by displaying what the victim would expect to see if that were the case. The malware notifies the adversary’s device that the victim is trying to identify him/herself to the verifier. This automatically causes the adversary’s device to connect to the verifier and initiate a biometric identification. The verifier sends a challenge to the adversary’s device, which forwards it to the victim’s device. The victim’s device displays the challenge to the victim, who performs the actions specified by the challenge in front of the camera. The malware relays the resulting video stream to the adversary’s device, which forwards it to the verifier for a successful impersonation of the victim.
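To make the message flow concrete, here is a minimal simulation of the relay in Python. The classes, the dictionary standing in for a video stream, and the string comparison standing in for face matching are all illustrative assumptions, not features of any real system.

```python
import secrets

def make_challenge() -> str:
    """The verifier's fresh challenge, e.g., digits for the subject to read."""
    return "".join(secrets.choice("0123456789") for _ in range(6))

class Verifier:
    def start_session(self) -> str:
        self.challenge = make_challenge()
        return self.challenge

    def check(self, video: dict) -> bool:
        # PAD plus matching: the actions in the video must answer this
        # session's challenge, and the face must be the enrollee's.
        return (video["digits_read"] == self.challenge
                and video["face"] == "victim")

class VictimDevice:
    """Malware-controlled: shows the victim a fake verification screen,
    records the victim performing the challenge, and hands the live
    video to the adversary instead of the verifier."""
    def perform_challenge(self, challenge: str) -> dict:
        # The victim, believing the session is genuine, reads the
        # digits on camera; the malware captures the live video.
        return {"digits_read": challenge, "face": "victim"}

class AdversaryDevice:
    def impersonate(self, verifier: Verifier, victim: VictimDevice) -> bool:
        challenge = verifier.start_session()         # session opened as "victim"
        video = victim.perform_challenge(challenge)  # challenge relayed via malware
        return verifier.check(video)                 # relayed video passes PAD

# The relay succeeds: the video is live and answers the fresh challenge,
# yet it is the adversary's session that gets authenticated.
assert AdversaryDevice().impersonate(Verifier(), VictimDevice())
```

The point the simulation makes is that challenge freshness defeats replay but not relay: the video really is live and really does answer the challenge, so the verifier has no way to tell that it arrived through the adversary's session.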