NIST is working on the third revision of SP 800-63, formerly called the Electronic Authentication Guideline and now renamed the Digital Identity Guidelines. An important change in the current draft of the third revision is a much expanded scope for biometrics. The following are the comments that Pomcor has sent to NIST, in response to a call for public comments, on that aspect of the new guidelines, and more specifically on Section 5.2.3 of Part B.
The draft is right to recommend the use of presentation attack detection (PAD). We think it should go further and make PAD a mandatory requirement right away, rather than deferring the requirement to a future edition as stated in a note.
The draft, however, only considers PAD performed at the sensor. In some modalities, such as fingerprint verification, PAD can indeed only be performed at the sensor. But in others, such as face, eye, iris, or voice biometrics, PAD can be performed, and is commonly performed today, by the remote, or central, verifier.
For example, in face verification, liveness can be verified with replay detection by asking the subject to read a random sequence of digits and using lip-reading techniques to verify that the challenge sequence has been read. Similarly, in voice verification, liveness can be verified with replay detection by asking the subject to read random prompted text and using speech recognition techniques to verify that the challenge text is the one being read.
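The server side of such a challenge-response liveness check can be sketched as follows. This is a minimal illustration, not a description of any particular product: the `verify_liveness` function and the challenge format are hypothetical, and the lip-reading or speech-recognition step that produces `recognized_digits` is assumed to happen elsewhere.

```python
import secrets
import time

CHALLENGE_TTL = 30  # seconds a challenge remains valid (assumed policy)

def new_digit_challenge(n_digits=6):
    """Generate a random digit challenge and record when it was issued."""
    digits = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return {"digits": digits, "issued_at": time.time(), "used": False}

def verify_liveness(challenge, recognized_digits, now=None):
    """Check that the digits recognized from the subject's video or audio
    (via lip reading or speech recognition, not modeled here) match the
    challenge, and that the challenge is fresh and has not been consumed.
    Freshness and single use are what defeat replay of a recording."""
    now = time.time() if now is None else now
    if challenge["used"]:
        return False  # replay of a response to an already-consumed challenge
    if now - challenge["issued_at"] > CHALLENGE_TTL:
        return False  # stale challenge
    challenge["used"] = True  # single use
    return recognized_digits == challenge["digits"]
```

Because the challenge is random, short-lived, and single-use, an adversary cannot reuse a previously recorded response; the recognition step ties the response to the live subject.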
Biometric verification with presentation attack detection by the remote verifier provides a key security benefit: it is the only remote verification technique that is not vulnerable to malware or physical tampering attacks against the user device where the sensor is located.
There is another issue with Section 5.2.3. When biometric matching is performed by the verifier, Section 5.2.3 requires the use of biometric verification techniques discussed in ISO/IEC 24745 and variously known as revocable biometrics, biometric template protection, renewable biometrics, cancelable biometrics, biometric key generation, biometric cryptosystems, fuzzy extractors, fuzzy vaults, etc. In those techniques the verifier combines a biometric authentication sample with auxiliary, or helper, data derived from an enrollment sample and random bits to generate a biometric key. Error correction techniques are used to produce the same key from varying but genuine samples. The consistently generated biometric key can then be verified, e.g., against a biometric hash.
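As a toy illustration of one such technique, the code-offset fuzzy extractor, the sketch below uses a 3-bit repetition code as the error-correcting code and pretends the biometric sample is already a bit string. Real schemes use stronger codes (e.g., BCH) and real feature extraction; the function names here are ours, not from ISO/IEC 24745.

```python
import hashlib
import secrets

REP = 3  # repetition factor; real schemes use stronger error-correcting codes

def _encode(key_bits):
    """Repetition-code encode: repeat each key bit REP times."""
    return [b for b in key_bits for _ in range(REP)]

def _decode(code_bits):
    """Majority-vote decode: corrects up to 1 flipped bit per REP-bit group."""
    return [1 if sum(code_bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(code_bits), REP)]

def enroll(biometric_bits):
    """Pick a random key; helper data = codeword XOR enrollment sample.
    The helper data alone reveals no useful biometric information, and
    only a hash of the key needs to be stored for verification."""
    key = [secrets.randbelow(2) for _ in range(len(biometric_bits) // REP)]
    helper = [c ^ b for c, b in zip(_encode(key), biometric_bits)]
    key_hash = hashlib.sha256(bytes(key)).hexdigest()
    return helper, key_hash

def verify(biometric_bits, helper, key_hash):
    """Re-derive the key from a fresh, noisy sample plus the helper data;
    majority voting absorbs small sample-to-sample variation."""
    key = _decode([h ^ b for h, b in zip(helper, biometric_bits)])
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash
```

A genuine sample that differs from the enrollment sample in a few bits still yields the same key, while the stored helper data and key hash, if breached, do not expose the biometric itself; revoking the credential is just a matter of re-enrolling with fresh random bits.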
Revocable biometric techniques provide important security and privacy benefits in some use cases, because the auxiliary data, if captured by an adversary, provides no useful biometric information to the adversary. Thus biometric information is safe against an adversary who breaches a user database that contains such auxiliary data. But revocable biometric techniques are not applicable or provide limited benefits in other use cases. They are not applicable if the match is performed against a database of existing, non-revocable biometric data, such as a Department of Motor Vehicles (DMV) database, or against an image in a photo ID presented by the claimant. They do provide a benefit if the match is performed against biometric verification data in a rich credential, by protecting the subject’s biometric information against an adversary who captures the credential; but the benefit is not as great as if the match is performed against a database, because the risk that they mitigate is not as dire. Rich credentials are not stored in a database, so an adversary who goes after biometric information in rich credentials must capture them one at a time, instead of capturing them all at once by breaching a database.
Moreover, as pointed out in Section 5.2.3 itself, availability of revocable biometric techniques is limited. Verification techniques for hot modalities such as face and voice are evolving rapidly. Revocable biometric techniques have been proposed in academia for those modalities but do not seem to have kept up with the latest improvements. Mandating their use for remote matching would prevent Federal Government agencies from using state-of-the-art techniques for face and voice verification with remote presentation attack detection that are commonly used today in the private sector.