Invited Talk at the University of Utah

I’ve been so busy that I haven’t had time to write for more than three months, which is a pity because things have been happening and there is much to report. I’m trying to catch up today.

The first thing to report is that Prof. Gopalakrishnan of the University of Utah invited Karen Lewison and me to give a joint talk at the University, on May 29. We talked about the need to replace TLS, which I’ve discussed earlier on this blog. The slides can be found at the usual location for papers and presentations, linked at the bottom of each page of this web site.

The University of Utah has a renowned School of Computing and it was quite stimulating to meet with faculty and discuss research after the talk. We were happy to discover common research interests, and we have been exploring the possibility of doing joint research work with Profs. Ganesh Gopalakrishnan, Sneha Kasera, and Tammy Denning; we are thrilled that the prospects look promising.

We also had papers accepted at the forthcoming M2MSec workshop and the forthcoming GlobalPlatform TEE conference; I will report on those in the next two posts.

Derived Credentials in a Trusted Execution Environment (TEE)

In the previous post I discussed the storage of derived credentials (Federal credentials carried in a mobile device instead of a PIV/CAC card) in a software token, i.e. in a cryptographic module implemented entirely in software, whose contents are stored in ordinary flash memory. In this post I will discuss the storage of derived credentials in a Trusted Execution Environment (TEE).

Malware Attacks

As discussed in the previous post and in a technical report, it is possible to protect derived credentials stored in ordinary flash storage by encrypting them under a high entropy key-wrapping key kept in a secure back-end, which the mobile device retrieves by authenticating to the back-end with a key pair regenerated from a protocredential and an activation passcode.

This provides effective protection against an adversary who captures the device while the software token is not active, preventing the adversary from extracting or using the credentials. But it does not provide protection against malware running on the device while the legitimate user is using the device. Such malware can carry out the following attacks:

  1. It can use the derived credentials, by issuing instructions to the software token after it has been activated by the legitimate user.
  2. It can read the plaintext derived credentials from the flash storage after the software token has been activated, and transmit them to the adversary responsible for the malware, who can then use them at will on a different machine.
  3. It can capture the activation passcode by phishing or intercepting it. In a phishing attack, malware prompts the user for the passcode while masquerading as legitimate code that needs the passcode, such as token activation code. In an interception attack, malware gets the passcode after it has been obtained from the user by legitimate code.

The first of these attacks may be impossible to prevent once privileged malware is running on the mobile device without the user being aware of it. But the second and third attacks can be prevented using a TEE as we shall see below; and preventing them is important because they are more damaging than the first attack.

The second attack, extracting the credentials and sending them to the adversary, is more damaging than the first because it cannot be stopped by recovering or wiping the stolen device. Use of an authentication or signature private key cannot be stopped until the associated certificate is revoked and relying parties become aware of the revocation. Correspondents should avoid sending messages encrypted under a symmetric key wrapped by a “key management” public key after becoming aware that the key management certificate has been revoked. But there is no time limit for using a key management private key to decrypt earlier messages that the adversary may have previously captured or may capture in the future, e.g. by breaching the security of an MS Exchange server containing older encrypted messages.

The third attack is even more damaging for several reasons. First, it enables the first two attacks, because once it has the passcode, malware can activate the software token and use and extract the plaintext derived credentials. Second, if the adversary captures the device after using malware to obtain the passcode, the adversary can use the device, or install more comprehensive malware that is able to extract the credentials. Third, the passcode may be independently exploitable because it may be used for other purposes.

A TEE has security features that make it possible to prevent the second and third attacks.

Features of a TEE

A TEE is a computing environment provided by a secure OS running on the same processor as a normal OS. One or more trusted applications (TAs) run under the secure OS. A hardware bus architecture ensures that a portion of the flash storage can only be accessed by the secure OS. Both OSes can access the touchscreen, but a security indicator lets the user know when the screen is controlled by the secure OS and the user interface can be trusted. GlobalPlatform is developing TEE specifications, including a Trusted User Interface API specification, which can be downloaded from the GlobalPlatform site. TEEs are provided by ARM Cortex-A processors, where the hardware security extensions that support the TEE are known as TrustZone. A TA running in a TEE can be used to implement a cryptographic module in which derived credentials can be stored and used.

Using a TEE to Protect Derived Credentials

Derived credentials stored and used in a cryptographic module implemented within a TEE can be protected against the second malware attack discussed above by making their private keys unextractable from the cryptographic module. The ability to mark private keys as being unextractable is a typical feature of cryptographic modules. The PKCS #11 cryptographic module API, for example, allows private keys to be made non-extractable by setting the value of their CKA_EXTRACTABLE attribute to CK_FALSE. The forthcoming TEE Functional API, mentioned in the TEE white paper, will no doubt allow private keys stored in a cryptographic module within a TEE to be made non-extractable as well.
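
For concreteness, here is a minimal sketch of generating a non-extractable private key through PKCS #11, assuming the python-pkcs11 package; the module path, token label, PIN, and key parameters are all placeholder choices of mine.

```python
import pkcs11

# Placeholders: module path, token label, and PIN depend on deployment.
lib = pkcs11.lib('/path/to/pkcs11-module.so')
token = lib.get_token(token_label='demo-token')

with token.open(user_pin='1234', rw=True) as session:
    # The private-key template marks the key sensitive and
    # non-extractable, i.e. CKA_EXTRACTABLE = CK_FALSE.
    public, private = session.generate_keypair(
        pkcs11.KeyType.RSA, 2048,
        store=True,
        private_template={
            pkcs11.Attribute.SENSITIVE: True,
            pkcs11.Attribute.EXTRACTABLE: False,
        })
```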

Furthermore, derived credentials stored in a cryptographic module within a TEE can be protected against the third malware attack using the Trusted User Interface feature of the TEE. The passcode can be prompted for by the TA that implements the cryptographic module, and the user can be instructed to only enter the passcode when a Security Indicator shows that the touchscreen is controlled by the Secure OS of the TEE. The passcode is then protected against phishing and interception by malware, assuming that all TAs can be trusted and that the secure OS is not infected by malware. The latter assumption is motivated by the fact that the secure OS is simpler than the normal OS and presents a much smaller attack surface.

Virtual Tamper Resistance

Using the same processor and a portion of the same storage for the secure OS as for the normal OS has important benefits. It provides greater performance for the secure OS than would typically be achieved by a secondary processor located in a secure element, and it saves the cost of the secure element. On the other hand, it means that a TEE is not expected to provide much, if any, tamper resistance. Indeed, the TEE Secure Element API, available at the GlobalPlatform site, is concerned with using a TEE and a secure element together, with the TEE providing a Trusted User Interface and the secure element providing tamper resistance.

(BTW, some secure elements do provide serious tamper resistance, but tamper resistance is never absolute. A fascinating description of the elaborate anti-tampering countermeasures in a family of Infineon chips, and how they were defeated by an attacker with no insider knowledge, can be found in an 80-minute video demonstration—broken down into ten eight-minute segments—presented at Black Hat 2010.)

But the lack of tamper resistance in a TEE can be remedied using the same technique that I described in the previous post as a solution to the problem of protecting derived credentials stored in a software token. Encrypting the derived credentials under a high entropy key-wrapping key, kept in a secure back-end and retrieved by authenticating to the back-end with a key pair regenerated from a protocredential and an activation passcode, can be viewed as a form of cloud-based virtual tamper resistance.
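
As a sketch of what this looks like, the wrapping and unwrapping steps might be implemented as follows, with AES-GCM from the Python cryptography package standing in for an unspecified cipher and the retrieval of the key-wrapping key from the back-end abstracted away:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_credentials(credentials: bytes) -> tuple[bytes, bytes, bytes]:
    # The key-wrapping key is high-entropy and lives in the secure
    # back-end; only the ciphertext stays on the device.
    kwk = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(kwk).encrypt(nonce, credentials, None)
    return kwk, nonce, ciphertext

def unwrap_credentials(kwk: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # Runs on the device after the key-wrapping key has been retrieved
    # from the back-end by authenticating with the regenerated key pair.
    return AESGCM(kwk).decrypt(nonce, ciphertext, None)
```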

Combining such virtual tamper resistance with the TEE Trusted User Interface feature would make it possible to implement a cryptographic module that would protect both the derived credentials and their activation passcode.

Protecting Derived Credentials without Secure Hardware in Mobile Devices

NIST has recently released drafts of two documents with thoughts and guidelines related to the deployment of derived credentials,

  • NISTIR 7981, Mobile, PIV, and Authentication, and
  • SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials,

and requested comments on the drafts by April 21. We have just sent our comments and we encourage you to send yours.

Derived credentials are credentials that are derived from those in a Personal Identity Verification (PIV) card or Common Access Card (CAC) and carried in a mobile device instead of the card. (A CAC card is a PIV card issued by the Department of Defense.) The Electronic Authentication Guideline, SP 800-63, defines a derived credential more broadly as:

A credential issued based on proof of possession and control of a token associated with a previously issued credential, so as not to duplicate the identity proofing process.

A PIV/CAC card may carry a PIV authentication credential, a digital signature credential, a current key management credential and up to 20 retired key management credentials, each credential consisting of a private key and an associated certificate that contains the corresponding public key. The digital signature private key is used for signing email messages, and the key management keys for decrypting symmetric keys used to encrypt email messages. The retired key management keys are needed to decrypt old messages that have been saved encrypted. The PIV authentication credential is mandatory for all users, while the digital signature credential and the current key management credential are mandatory for users who have government email accounts.

A mobile device may similarly carry an authentication credential, a digital signature credential, and current and retired key management credentials. Although this is not fully spelled out in the NIST documents, the current and retired key management private keys in the mobile device should be able to decrypt the same email messages as those in the card, and therefore should be the same as those in the card, except that we see no need to limit the number of retired key management private keys to 20 in the mobile device. The key management private keys should be downloaded to the mobile device from the escrow server that should already be in use today to recover from the loss of a PIV/CAC card containing those keys. On the other hand the authentication and digital signature key pairs should be generated in the mobile device, and therefore should be different from those in the card.

In a puzzling statement, SP 800-157 insists that only an authentication credential can be considered a “derived PIV credential”:

While the PIV Card may be used as the basis for issuing other types of derived credentials, the issuance of these other credentials is outside the scope of this document. Only derived credentials issued in accordance with this document are considered to be Derived PIV credentials.

Nevertheless, SP 800-157 discusses details related to the storage of digital signature and key management credentials in mobile devices in informative appendix A and normative appendix B.

Software Tokens

The NIST documents provide guidelines regarding the lifecycle of derived credentials, their linkage to the lifecycle of the PIV/CAC card, their certificate policies and cryptographic specifications, and the storage of derived credentials in several kinds of hardware cryptographic modules, which the documents refer to as hardware tokens, including microSD tokens, UICC tokens, USB tokens, and embedded hardware tokens. But the most interesting, and controversial, aspect of the documents concerns the storage of derived credentials in software tokens, i.e. in cryptographic modules implemented entirely in software.

Being able to store derived credentials in software tokens would mean being able to use any mobile device to carry derived credentials. This would have many benefits:

  1. Federal agencies would have the flexibility to use any mobile devices they want.
  2. Federal agencies would be able to use inexpensive devices that would not have to be equipped with special hardware for secure storage of derived credentials. This would save taxpayer money and allow agencies to do more with their IT budgets.
  3. Mobile authentication and secure email solutions used by the Federal Government would be affordable and could be broadly used in the private sector.

The third benefit would have huge implications. Today, the requirement to use PIV/CAC cards means that different IT solutions must be developed for the government and for the private sector. IT solutions specifically developed for the government are expensive, while private sector solutions too often rely on passwords instead of cryptographic credentials. Using the same solutions for the government and the private sector would lower costs and increase security.

Security

But there is a problem. The implementation of software tokens hinted at in the NIST documents is not secure.

NISTIR 7981 describes a software token as follows:

Rather than using specialized hardware to store and use PIV keys, this approach stores the keys in flash memory on the mobile device protected by a PIN or password. Authentication operations are done in software provided by the application accessing the IT system, or the mobile OS.

And SP 800-157 adds the following:

For software implementations (LOA-3) of Derived PIV Credentials, a password-based mechanism shall be used to perform cryptographic operations with the private key corresponding to the Derived PIV Credential. The password shall meet the requirements of an LOA-2 memorized secret token as specified in Table 6, Token Requirements per Assurance Level, in [SP800-63].

Taken together, these two paragraphs seem to suggest that the derived credentials should be stored in ordinary flash memory storage encrypted under a data encryption key derived from a PIN or password satisfying certain requirements. What requirements would ensure sufficient security?

Smart phones are frequently stolen, therefore we must assume that an adversary will be able to capture the mobile device. After capturing the device the adversary can immediately place it in a metallic box or other Faraday cage to prevent a remote wipe. The contents of the flash memory storage may be protected by the OS, but in many Android devices the OS can be replaced, or rooted, with instructions for doing so provided by Google or the manufacturer. OS protection may be more effective in some iOS devices, but since a software token does not provide any tamper resistance by definition, we must assume that the adversary will be able to extract the encrypted credentials.

Having done so, the adversary can mount an offline password guessing attack, testing each password guess by deriving a data encryption key from the password, decrypting the credentials, and checking whether the resulting plaintext contains well-formed credentials. To carry out the password guessing attack, the adversary can use a botnet. Botnets with tens of thousands of computers can be easily rented by the day or by the hour; they are usually programmed to launch DDoS attacks, but can easily be reprogrammed to carry out password cracking attacks instead.

The adversary has at least a few hours to run the attack before the authentication and digital signature certificates are revoked and the revocation becomes visible to relying parties; and there is no time limit for decrypting the key management keys and using them to decrypt previously obtained encrypted email messages.
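
To make the attack concrete, here is a sketch of the guessing loop, assuming for illustration that the data encryption key is derived with PBKDF2 and that the credentials are encrypted with AES-GCM; the NIST documents specify neither the KDF nor the cipher:

```python
import hashlib
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def crack(candidates, salt, nonce, ciphertext):
    # Each guess is tested locally; no verifier throttles the rate.
    for password in candidates:
        key = hashlib.pbkdf2_hmac('sha256', password.encode(), salt,
                                  100_000, dklen=32)
        try:
            # With an AEAD cipher a wrong key fails authentication
            # outright; with a plain cipher the attacker would instead
            # check whether the plaintext contains well-formed credentials.
            return password, AESGCM(key).decrypt(nonce, ciphertext, None)
        except InvalidTag:
            continue
    return None
```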

To resist such an attack, the PIN or password would need to have at least 64 bits of entropy. According to Table A.1 of the Electronic Authentication Guideline (SP 800-63), a user-chosen password must have more than 40 characters chosen appropriately from a 94-character alphabet to achieve 64 bits of entropy. Entering such a password on the touchscreen keyboard of a smart phone is clearly unfeasible.
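
A quick calculation with the per-character entropy estimates of Appendix A of SP 800-63, as I read them, confirms the point:

```python
def sp800_63_entropy_bits(length: int, composition_bonus: bool = True) -> float:
    # Per-character estimates for user-chosen passwords, per SP 800-63
    # Appendix A: 4 bits for the first character, 2 bits each for
    # characters 2-8, 1.5 bits each for characters 9-20, 1 bit each
    # thereafter, plus a 6-bit bonus for composition rules that force
    # use of the full 94-character alphabet.
    bits = 0.0
    for i in range(1, length + 1):
        if i == 1:
            bits += 4
        elif i <= 8:
            bits += 2
        elif i <= 20:
            bits += 1.5
        else:
            bits += 1
    return bits + (6 if composition_bonus else 0)

print(sp800_63_entropy_bits(40))  # 62.0 -- still short of 64 bits
print(sp800_63_entropy_bits(43))  # 65.0 -- more than 40 characters needed
```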

SP 800-157 calls instead for a password that meets the requirements of an LOA-2 memorized secret token as specified in Table 6 of SP 800-63, which are as follows:

The memorized secret may be a randomly generated PIN consisting of 6 or more digits, a user generated string consisting of 8 or more characters chosen from an alphabet of 90 or more characters, or a secret with equivalent entropy.

The equivalent entropy is only 20 bits. Why does Table 6 require so little entropy? Because it is not concerned with resisting an offline guessing attack against a password that is used to derive a data encryption key. It is instead concerned with resisting an online guessing attack against a password that is used for authentication, where password guesses can only be tested by attempting to authenticate to a verifier who throttles the rate of failed authentication attempts. In Table 6, the quoted requirement on the memorized secret token is coupled with the following requirement on the verifier:

The Verifier shall implement a throttling mechanism that effectively limits the number of failed authentication attempts an Attacker can make on the Subscriber’s account to 100 or fewer in any 30-day period.

and the necessity of the coupling is emphasized in Section 8.2.3 as follows:

When using a token that produces low entropy token Authenticators, it is necessary to implement controls at the Verifier to protect against online guessing attacks. An explicit requirement for such tokens is given in Table 6: the Verifier shall effectively limit online Attackers to 100 failed attempts on a single account in any 30 day period.

Twenty bits is not sufficient entropy for encrypting derived credentials, and requiring a password with sufficient entropy is not a feasible proposition.

Solutions

But the problem has solutions. It is possible to provide effective protection for derived credentials in a software token.

One solution is to encrypt the derived credentials under a high-entropy key that is stored in a secure back-end and retrieved when the user activates the software token. The problem then becomes how to retrieve the high-entropy key from the back-end. To do so securely, the mobile device must authenticate to the back-end using a device-authentication credential stored in the mobile device, which seems to bring us back to square one. However, there is a difference between the device-authentication credential and the derived credentials stored in the token: the device-authentication credential is only needed for the specific purpose of authenticating the device to the back-end and retrieving the high-entropy key. This makes it possible to use as device-authentication credential a credential regenerated on demand from a PIN or password supplied by the user to activate the token and a protocredential stored in the device, in a way that deprives an attacker who captures the device of any information that would make it possible to test guesses of the PIN or password offline.

The device-authentication credential can consist, for example, of a DSA key pair whose public key is registered with the back-end, coupled with a handle that refers to a device record where the back-end stores a hash of the registered public key. In that case the protocredential consists of the device record handle, the DSA domain parameters, which are (p,q,g) with the notations of the DSS, and a random high-entropy salt. To regenerate the DSA key pair, a key derivation function is used to compute an intermediate key-pair regeneration key (KPRK) from the activation PIN or password and the salt, then the DSA private and public keys are computed as specified in Appendix B.1.1 of the DSS, substituting the KPRK for the random string returned_bits produced by a random number generator.
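
Here is a sketch of the regeneration step, with PBKDF2 and an arbitrary iteration count standing in for the unspecified key derivation function; the rest follows Appendix B.1.1 of the DSS:

```python
import hashlib

def regenerate_dsa_key_pair(passcode: str, salt: bytes,
                            p: int, q: int, g: int) -> tuple[int, int]:
    # Derive the intermediate key-pair regeneration key (KPRK) from the
    # activation passcode and the high-entropy salt of the protocredential.
    n = q.bit_length() + 64                  # N + 64 bits, per B.1.1
    kprk = hashlib.pbkdf2_hmac('sha256', passcode.encode(), salt,
                               100_000, dklen=(n + 7) // 8)
    # Appendix B.1.1 of the DSS, with the KPRK substituted for
    # returned_bits: c = returned_bits mod (q - 1), x = c + 1.
    c = int.from_bytes(kprk, 'big')
    x = (c % (q - 1)) + 1                    # private key
    y = pow(g, x, p)                         # public key
    return x, y
```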

To authenticate to the back-end and retrieve the high-entropy key, the mobile device establishes a TLS connection to the back-end, over which it sends the device record handle, the DSA public key, and a signature computed with the DSA private key on a challenge derived from the TLS master secret. (Update—April 24, 2014: The material used to derive the challenge must also include the TLS server certificate of the back-end, due to a recently reported UKS vulnerability of TLS. See footnote 2 of the technical report.) The DSA public and private keys are deleted after authentication, and the back-end keeps the public key confidential. An adversary who is able to capture the device and extract the protocredential has no means of testing guesses of the PIN or password other than regenerating the DSA key pair and attempting online authentication to the back-end, which locks the device record after a small number of consecutive failed authentication attempts that specify the handle of the record.

An example of a derived credentials architecture that uses this solution can be found in a technical report.

Other solutions are possible as well. The device-authentication credential itself could serve as a derived credential, as we proposed earlier; SSO can then be achieved by sharing login sessions, as described in Section 7.5 of another technical report. And I’m sure other solutions can be found.

Other Topics

There are several other topics related to derived credentials that deserve discussion, including the pros and cons of storing credentials in a Trusted Execution Environment (TEE), whether biometrics should be used for token activation, and whether derived credentials should be used for physical access. I will leave those topics for future posts.

Update (April 10, 2014). A post discussing the storage of derived credentials in a TEE is now available.

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove tokens, Idemix anonymous credentials, or credentials based on group signatures), is the fact that they are not supported by TLS. We noticed this when we looked at privacy-enhancing credentials in the context of NSTIC, and we proposed an architecture for the NSTIC ecosystem that included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over satellite links due to the additional roundtrips and the transmission of certificate chains during the handshake. Client and attribute certificates, when used, are sent in the clear. And there has been a long list of TLS vulnerabilities, some of which have not been addressed, while others are addressed in TLS versions and extensions that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites supported TLS 1.1, which dates back to April 2006, only 20.7% supported TLS 1.2, which dates back to August 2008, and only 30.6% had server-side protection against the BEAST attack, which requires either TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due to the age of the protocol and the large number of versions and extensions that it has accumulated during its long life. Changing the configuration of a TLS implementation to protect against vulnerabilities without shutting out a large portion of the user base is a complex task that IT personnel are no doubt loath to tackle.

So perhaps it is time to restart from scratch, designing a new transport layer security protocol — actually, two of them, one for connections and the other for datagrams — that will incorporate the lessons learned from TLS — and DTLS — while discarding the heavy baggage of old code and backward compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits of redesigning transport layer security for the military, because the military in particular should be very much interested in better transport layer security protocols. The military should be interested in better performance over satellite and radio links, for obvious reasons. It should be interested in increased security, because so much is at stake in the security of military networks. And I would argue that it should also be interested in increased privacy, because what is viewed as privacy on the Internet may be viewed as resistance to traffic analysis in military networks.

Surveillance and Internet Identity

Last week I attended IIW 17, the 17th meeting of the Internet Identity Workshop, which is held twice a year in Mountain View, California. As usual it was a great opportunity to exchange ideas and meet people, with its unconference format, its many sessions, its rotating demos, its wide space for discussions, and its two free dinners with free drinks.

For me, however, it was tinged with sadness, because of what has happened since the first IIW I attended, IIW 12, in May 2011. IIW 12 was the first IIW after the launch of NSTIC. IIW 17 was the first IIW after Snowden.

The NSTIC Strategy Document, released in April 2011 with a preface signed by President Obama, repeatedly emphasized the goal of enhancing privacy as a key element of the “vision” and “guiding principles” of NSTIC. The document explicitly stated that the Identity Ecosystem will use privacy-enhancing technology and policies to inhibit the ability of service providers to link an individual’s transactions, thus ensuring that no one service provider can gain a complete picture of an individual’s life in cyberspace. At the time, Facebook Connect was threatening to inject Facebook as a middleman in all or most Internet activities, and I was happy to see that the US Government seemingly wanted to prevent such a massive invasion of privacy; I even convened a session at IIW 12 proposing a technique for achieving the privacy goals of NSTIC in the short term. Little did I know that the government was busy building a massive surveillance apparatus that would give the government a complete picture of an individual’s life in cyberspace, by means including bulk collection of data from service providers.

The Internet, given to the world by the US Department of Defense, was a world-wide forum for free-flowing, spontaneous exchange of ideas. Now the NSA, part of the same Department of Defense, has taken that away. People know that they are being tracked and identified when they post an anonymous comment. People know that their conversations are being recorded. Therefore people must think twice about what they say.

I don’t know if Congress will be able to rein in the NSA. It should be clear that spying on US citizens is unconstitutional, but some politicians think that it is the NSA’s job to spy on everybody else on the planet. They don’t seem to consider or care that, if the US Government insists on a God-given right to spy on everybody else, other countries or regions may develop their own national or regional networks, separated from the US Internet by an air gap.

Fortunately, the technical community has reacted strongly against the NSA’s attacks on Internet privacy. And thanks to Snowden’s revelations, many of the attack techniques are known. It may therefore be possible to protect Internet privacy by technical means.

Coming back to the subject of the workshop, Internet Identity, I would argue that the first thing to do to protect Internet privacy is to get rid of the pernicious technology variously known as third-party login, social login or federated login. To be precise, I am referring to authentication techniques where the user authenticates to a third-party identity provider, which then provides identity and/or attribute information to a relying party, using a protocol such as OAuth or OpenID Connect. (These are the techniques in Group 2 of the taxonomy proposed in the paper Privacy Postures of Authentication Technologies.)

The only intrinsic advantage of federated login is that it allows the identity provider to collect vast amounts of information about the user, since the identity provider learns not only the user’s identity and/or attributes, but also what relying parties the user logs in to. The identity provider uses the information to sell ads that target the user accurately. We now know that the information is also shared with the government, which makes it available to thousands of analysts and IT personnel who use it for legal or illegal government or personal purposes.

There are no other intrinsic advantages to federated login.

The government and the identity providers argue that federated login is more secure than direct authentication to the relying party with username and password, but the opposite is true.

Security is supposedly increased because federated login reduces password reuse. But password reuse will not be substantially reduced unless a large majority of world-wide web sites force their users to use federated login with one of a small number of global identity providers such as Google or Facebook, something that will hopefully not come to pass.

Security is also supposedly increased because a large identity provider supposedly does a better job of protecting the user’s password. But I don’t know why a large identity provider would provide better protection against hackers, since large companies are not known to provide great security. And I do know that a password entrusted to a large identity provider may become available to thousands of employees of the government, of government contractors, and of the identity provider. And the capture of a password used at an identity provider, which provides access to multiple web sites, is more damaging to the user than the capture of a password used at a single web site.

There is an alternative to authenticating to a web site with username and password that provides both security and privacy: namely, authentication with a cryptographic key pair automatically generated on the user’s machine when the user registers with the site. The site stores the hash of the public key component of the key pair in its database, and uses it to locate the user’s account when the user visits the site again and demonstrates knowledge of the private key component.
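
Here is a minimal sketch of the scheme, using an ECDSA key pair generated with the Python cryptography package; the curve and the challenge format are illustrative choices, not part of the proposal:

```python
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key pair is generated on the user's machine and the
# site stores only a hash of the public key as the account locator.
private_key = ec.generate_private_key(ec.SECP256R1())
spki = private_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)
account_locator = hashlib.sha256(spki).hexdigest()

# Returning visit: the site sends a fresh challenge, and the client
# demonstrates knowledge of the private key by signing it. The site
# recomputes the locator from the presented public key to find the
# account, then verifies the signature (verify() raises on failure).
challenge = b'unpredictable server-chosen nonce'
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, challenge,
                                ec.ECDSA(hashes.SHA256()))
```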

Another claimed advantage of federated login is that the user can register at a new site with a single click if logged in to the identity provider, any personal data required by the site being provided by the identity provider. This is a real advantage, but not an intrinsic one. The same benefit could be easily obtained by storing the personal data in the browser, and specifying a protocol by which the browser would supply selected personal data items to a web site upon demand by the site and approval by the user. Such a protocol would be much simpler than any of the federated login protocols and would provide more security and more privacy.

Yet another claimed advantage of federated login is that the identity provider could provide the relying party with a user’s identity and/or attributes verified by an identity proofing procedure; however, such verified identity and/or attributes could equally well be provided by a certificate authority using a public key certificate (or by multiple authorities providing a combination of a certificate binding a public key to an identity and one or more certificates binding the identity to various attributes), without the certificate authority having to be informed of what relying parties the certificate is submitted to.

It is sometimes argued (cf. the NSTIC 101 session at last week’s IIW) that using public key cryptography for authentication would be expensive and would require the user to carry a separate dongle or smartcard for every credential. This is not true. There is no need for special hardware to store a cryptographic credential, and if special hardware is desired for some reason, there is no need to use different pieces of hardware for different credentials.

Two sessions at IIW 17 gave me hope that Internet privacy is not a lost cause.

One of them was convened by Tim Bray of Google to report on the comments he received in response to a blog post arguing to developers that they should use federated login rather than login with username and password. The comments, which he referred to as a “bloodbath,” showed that neither developers nor end-users like federated login. I hope that such pushback will eventually force companies like Google to give up on federated login.

The other one was convened by Kazue Sako of NEC to discuss anonymous credentials and their possible uses. The room was overflowing and the level of engagement of the audience was high, showing that technical people are interested in privacy-enhancing authentication technologies even if large companies are not.

Feedback on the Paper on Privacy Postures of Authentication Technologies

Many thanks to everyone who provided feedback on the paper on privacy postures of authentication technologies, which was announced in the previous blog post. The paper was discussed on the Identity Commons mailing list and we also received feedback at the ID360 conference, where we presented the paper, and at IIW 16, where we showed a poster summarizing the paper. In this post I will recap the feedback that we have received and the revisions that we have made to the paper based on that feedback.

Steven Carmody pointed out that SWITCH, the Swiss counterpart of the InCommon federation, has developed an extension of Shibboleth called uApprove that allows the identity or attribute provider to ask the user for consent before disclosing attributes to the relying party. Ken Klingenstein told us that the Scalable Privacy NSTIC pilot is developing a privacy manager that will let the user choose what attributes will be disclosed to the relying party by the Shibboleth identity provider. We have added references to these Shibboleth extensions to Section 4.2 of the paper.

The original paper explained that, although a U-Prove token does not provide multishow unlinkability, the user may obtain multiple tokens from the issuer, and present different tokens to different relying parties. Christian Paquin said that a U-Prove credential is defined as a batch of such tokens, created simultaneously by an efficient parallel procedure. We have added this definition of a U-Prove credential to Section 4.3.

Christian Paquin also pointed out that a U-Prove token is a mathematical concept that can be embodied in a variety of technologies. He sent me a link to the WS-Trust embodiment, which was used in CardSpace. We have explained this and included the link in Section 4.3.

Tom Jones said that what we call anonymity is called pseudonymity by others. In fact, column 9, labeled “Anonymity”, covers both pseudonymity, as provided, e.g., by an Idemix pseudonym or an uncertified key pair or a combination of a user ID and a password when the user ID is freely chosen by the user, and full anonymity, as provided when a relying party learns only attributes that do not uniquely identify the user. I think it is not unreasonable to view anonymity (the service provider does not learn the user’s “name”) as encompassing pseudonymity (the service provider learns a pseudonym instead of the “real name”).

Nat Sakimura provided a lot of feedback, for which we are grateful. He said that Google and Yahoo implemented OpenID Pairwise Pseudonymous Identifiers (PPID), i.e. different identifiers for the same user provided to different relying parties, before ICAM specified its OpenID profile. We have noted this in Section 4.2 of the revised paper and changed the label of row 8 to “OpenID (without PPID)”.

He also said that OpenID Connect supports an ephemeral identifier, which provides anonymity. I was able to find a discussion of an ephemeral identifier in the archives of the OpenID Connect mailing list, but no mention of it in any of the OpenID Connect specifications; so ephemeral identifiers may be added in the future, but they are not there yet.

Nat also argued that OpenID Connect provides multishow unlinkability by different parties and by the same party. I disagree, however. The Subject Identifier in the ID Token makes OpenID Connect authentication events linkable. Furthermore, OpenID Connect is built on top of OAuth, whose purpose is to provide the relying party with access to resources owned by the user by means of an access token. In a typical use case the relying party gets access to the user’s account at a social network such as Facebook, Twitter or Google+. It is unlikely that two relying parties who share information cannot determine that they are both accessing the same account, or that a relying party cannot determine that it has accessed the same account on two different occasions.

Nat said that OpenID Connect can be used for two-party authentication using a “Self-Issued OpenID Provider”. We have added a checkmark to row 11, column 1 of the table to indicate this, and an explanation to Section 4.2.

He also said that OpenID Connect provides group 4 functionality by allowing the relying party to obtain attributes from “distributed attribute providers”. We have mentioned this in Section 4.4 of the revised version of the paper.

Finally, Nat said:

Just by reading the paper, I was not very clear what is the requirement for Issue-show unlinkability. By issuance, I imagine it means the credential issuance. I suppose then it means that the credential verifier (in ISO 29115 | ITU-T X.1254 sense) cannot tell which credential was used though it can attest that the user has a valid credential. Is that correct? If so, much of the technology in group 2 should have n/a in the column because they are independent of the actual authentication itself. They could very well use anonymous authentication or partially anonymous authentication (ISO 29191).

The technologies in group 2 are recursive authentication technologies. The relying party directs the browser to the identity or attribute provider, which recursively authenticates the user and provides a bearer credential to the relying party based on the result of the inner authentication. In all generality there may be multiple inner authentications, as the identity or attribute provider may require multiple credentials. So the authentication process may consist of a tree of nested authentications, with internal nodes of the tree involving group 2 technologies, and leaf nodes other technologies. However, rows 5-11 (group 2) are only concerned with the usual case where the user authenticates to the identity or attribute provider as a returning user with a user ID and a password or some other form of two-party authentication; we have now made that clear in Section 4.2 of the revised paper. In that case there is no issue-show unlinkability.

We have also made a couple of other improvements to the paper, motivated in part by the feedback:

  • We have replaced the word possession with the word ownership in the definition of closed-loop authentication (Section 2), so that it now reads: authentication is closed-loop when the credential authority that issues or registers a credential is later responsible for verifying ownership of the credential at authentication time. The motivation for this change is that, in group 2, the credential is the information that the identity or attribute provider has about the user, and is thus kept by the identity or attribute provider rather than by the user.
  • We have added a distinction between two forms of multishow unlinkability, a strong form that holds even if the credential authority colludes and shares information with the relying parties, and a weak form that holds only if there is no such collusion. The technologies in group 2 that provide multishow unlinkability provide the weak form, whereas Idemix anonymous credentials provide the strong form.

Comparing the Privacy Features of Eighteen Authentication Technologies

This blog post motivates and elaborates on the paper Privacy Postures of Authentication Technologies, which we presented at the recent ID360 conference.

There is a great variety of user authentication technologies, and some of them are very different from each other. Consider, for example, one-time passwords, OAuth, Idemix, and ICAM’s Backend Attribute Exchange: any two of them have little in common.

Different authentication technologies have been developed by different communities, which have created their own vocabularies to describe them. Furthermore, some of the technologies are extremely complex: U-Prove and Idemix are based on mathematical theories that may be impenetrable to non-specialists; and OpenID Connect, which is an extension of OAuth, adds seven specifications to a large number of OAuth specifications. As a result, it is difficult to compare authentication technologies to each other.

This is unfortunate because decision makers in corporations and governments need to decide what technologies or combinations of technologies should replace passwords, which have been rendered even more inadequate by the shift from traditional personal computers to smart phones and tablets. Decision makers need to evaluate and compare the security, usability, deployability, interoperability and, last but not least, privacy, provided by the very large number of very different authentication technologies that are competing in the marketplace of technology innovations.

But all these technologies are trying to do the same thing: authenticate the user. So it should be possible to develop a common conceptual framework that makes it possible to describe them in functional terms without getting lost in the details, to compare their features, and to evaluate their adequacy to different use cases.

The paper that we presented at the recent ID360 conference can be viewed as a step in that direction. It focuses on privacy, an aspect of authentication technology which I think is in need of particular attention. It surveys eighteen technologies, including: four flavors of passwords and one-time passwords; the old Microsoft Passport (of historical interest); the browser SSO profile of SAML; Shibboleth; OpenID; the ICAM profile of OpenID; OAuth; OpenID Connect; uncertified key pairs; public key certificates; structured certificates; Idemix pseudonyms; Idemix anonymous credentials; U-Prove tokens; and ICAM’s Backend Attribute Exchange.

The paper classifies the technologies along four different dimensions or facets, and builds a matrix indicating which of the technologies provide seven privacy features: unobservability by an identity or attribute provider; free choice of identity or attribute provider; anonymity; selective disclosure; issue-show unlinkability; multishow unlinkability by different parties; and multishow unlinkability by the same party. I will not try to recap the details here; instead I will elaborate on observations made in the paper regarding privacy enhancements that have been used to improve the privacy postures of some closed-loop authentication technologies.

Privacy Enhancements for Closed-Loop Authentication

One of the classification facets that the paper considers for authentication technologies is the distinction between closed-loop and open-loop authentication, which I discussed in an earlier post. Closed-loop authentication means that the credential authority that issues or registers a credential is later responsible for verifying possession of the credential at authentication time. Closed-loop authentication may involve two parties, or may use a third-party as a credential authority, which is usually referred to as an identity provider. Examples of third-party closed-loop authentication technologies include the browser SSO profile of SAML, Shibboleth, OpenID, OAuth, and OpenID Connect.

I’ve pointed out before that third-party closed-loop authentication lacks unobservability by the identity provider. Most third-party closed-loop authentication technologies also lack anonymity and multishow unlinkability. However, some of them implement privacy enhancements that provide anonymity and a form of multishow unlinkability. There are two such enhancements, suitable for two different use cases.

The first enhancement consists of omitting the user identifier that the identity provider usually conveys to the relying party. The credential authority is then an attribute provider rather than an identity provider: it conveys attributes that do not necessarily identify the user. This enhancement provides anonymity, and multishow unlinkability assuming no collusion between the attribute provider and the relying parties. It is useful when the purpose of authentication is to verify that the user is entitled to access a service without necessarily having an account with the service provider. This functionality is provided by Shibboleth, which can be used, e.g., to allow a student enrolled in one educational institution to access the library services of another institution without having an account at that other institution.

The core OpenID 2.0 specification specifies how an identity provider conveys an identifier to a relying party. Extensions of the protocol such as the Simple Registration Extension specify methods by which the identity provider can convey user attributes in addition to the user identifier; and the core specification hints that the identifier could be omitted when extensions are used. It would be interesting to know whether any OpenID server or client implementations allow the identifier to be omitted. Any comments?

The second enhancement consists of requiring the identity provider to convey different identifiers for the same user to different relying parties. The identity provider can meet the requirement without allocating large amounts of storage by computing a user identifier specific to a relying party as a cryptographic hash of a generic user identifier and an identifier of the relying party such as a URL. This privacy enhancement is required by the ICAM profile of OpenID. It achieves user anonymity and multishow unlinkability by different parties assuming no collusion between the identity provider and the relying parties; but not multishow unlinkability by the same party. It is useful for returning user authentication.
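
A minimal sketch of the computation; the separator and the exact input encoding are my own illustrative choices, as the ICAM profile does not prescribe them:

```python
import hashlib

def pairwise_identifier(generic_user_id: str, rp_url: str) -> str:
    # Identifier specific to a relying party, computed as a hash of a
    # generic user identifier and the relying party's URL; the separator
    # merely prevents ambiguous concatenation.
    return hashlib.sha256(f'{generic_user_id}|{rp_url}'.encode()).hexdigest()

# The same user gets unlinkable identifiers at different relying parties:
print(pairwise_identifier('user-12345', 'https://rp1.example.com'))
print(pairwise_identifier('user-12345', 'https://rp2.example.com'))
```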

Two Methods of Cryptographic Single Sign-On on Mobile Devices

This is the sixth and last post of a series discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

To conclude this series I am going to discuss briefly two methods of single sign-on (SSO) described in the paper, one based on data protection, the other on shared login sessions.

SSO Based on Data Protection

Section 5 of the paper explains how the multifactor closed-loop authentication method described in the third and fourth posts of the series provides an effective mechanism for protecting data stored in a mobile device against an adversary who captures the device. The data is encrypted under a data encryption key that is entrusted to a key storage service. To retrieve the key, the user provides a PIN and/or a biometric that are used to regenerate an uncertified key pair, which is used to authenticate to the storage service.

An adversary who captures the device needs the PIN and/or the biometric sample to regenerate the key pair, and cannot mount an offline attack to guess the PIN or to guess a biometric key derived from the biometric sample; so the adversary cannot authenticate to the key storage service, and cannot retrieve the key. For additional security the data encryption key can be cryptographically split into several portions entrusted to different storage services. Furthermore a protokey can be entrusted to those services instead of the data encryption key, the key being then derived from the protokey and the same non-stored secrets that are used to regenerate the authentication key pair, as described in Section 5.4.
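
As an illustration of the splitting step, here is a minimal n-of-n XOR-based sharing sketch; the paper does not prescribe a particular splitting scheme, so this is just one possibility:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    # n-of-n XOR sharing: every portion is needed to reconstruct the key,
    # and any n-1 portions are statistically independent of it.
    portions = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for portion in portions:
        last = bytes(a ^ b for a, b in zip(last, portion))
    return portions + [last]

def combine_key(portions: list[bytes]) -> bytes:
    out = bytes(len(portions[0]))
    for portion in portions:
        out = bytes(a ^ b for a, b in zip(out, portion))
    return out
```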

This data protection mechanism can be used to protect any kind of data. In particular, it can be used to protect credentials used for open-loop authentication or one-factor closed-loop authentication to any number of mobile applications or, more precisely, to the back-ends of those applications, which may have browser-based or native front-ends. As discussed in Section 5.5, this amounts to single sign-on to those applications because, after the user enters a PIN and/or provides a biometric sample, the data encryption key retrieved from the storage service(s) can be kept in memory for a certain amount of time, making it possible to authenticate to the applications without further user intervention.

SSO Based on Shared Login Sessions

Whereas SSO based on data protection can be used for any collection of applications, SSO based on shared login sessions, described in Section 7.5, is best suited for authenticating to enterprise applications from a mobile device. A dedicated PBB in the mobile device and a VBB in the enterprise cloud are used to that purpose. The PBB contains a single protocredential shared by all the enterprise applications, which is used to regenerate an uncertified key pair, in conjunction with a PIN and/or a biometric sample supplied by the user. The VBB has access to an enterprise database that contains device and user records and where the VBB stores shared session records, as illustrated in Figure 8.

It is not difficult to share login sessions among a group of web-based applications owned by an enterprise, using a mechanism readily available on the web. Once the user has logged in to one of the web-based applications in the group, that application can set in the browser a session cookie whose scope (defined explicitly or implicitly by the domain and path attributes of the cookie) comprises the applications in the group and no others. The browser will send the cookie along with every HTTP request targeting an application in the scope of the cookie, thus authenticating the request without user intervention.
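
For example, an application could emit such a session cookie as follows, using Python's standard http.cookies module; the domain is hypothetical, and the Secure and HttpOnly attributes are added precautions rather than requirements of the scheme:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie['session'] = 'reusable-session-token'
# The Domain and Path attributes define the scope of the cookie, i.e.
# the group of applications to which the browser will present it.
cookie['session']['domain'] = '.apps.example.com'
cookie['session']['path'] = '/'
cookie['session']['secure'] = True      # only sent over HTTPS
cookie['session']['httponly'] = True    # not readable by page scripts
print(cookie.output())  # the Set-Cookie header the application emits
```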

But we want to share login sessions among a group of enterprise applications comprising applications with native front-ends in addition to web-based applications. To that purpose we use the mobile authentication architecture that I discussed in the previous post, modifying it as follows.

Recall that an authentication event in the architecture consists of a cryptographic authentication of the PBB to the VBB, followed by a secondary non-cryptographic authentication using a one-time authentication token, which plays the role of a bearer token, as illustrated in Figure 6 for the case of an application with a native front-end, and in Figure 7 for the case of a web-based application. The authentication token is only used once because of the risk of a Referer leak in the case of a web-based application. However there is no such risk in the case of an application with a native front-end.

To implement shared login sessions we replace the one-time authentication token with a pair of session tokens, a one-time session token and a reusable session token. After successful cryptographic authentication of the PBB to the VBB, the VBB creates a pair of session tokens and a shared session record containing the two tokens, and sends the two tokens to the PBB, which stores them.

A native front-end obtains a reusable session token from the PBB and uses it repeatedly to authenticate to its back-end until the back-end rejects it because the session referenced by the token has expired, whether because an expiration time in the shared session record has been reached or for some other reason. Then the native front-end sends the reusable token to the PBB asking for a replacement. If the PBB has a different reusable token, it sends it to the native front-end. If not, it prompts the user for a PIN and/or a biometric sample, regenerates the uncertified key pair, authenticates cryptographically to the VBB, obtains from the VBB a pair of session tokens pertaining to a new session, and sends the new reusable token to the native front-end.

A web-based application obtains a one-time session token from the PBB and uses it to locate a shared session record and retrieve a reusable session token, which it sets in the browser as the value of a session cookie. After the PBB sends the one-time token to the application, it erases the one-time token from its storage; and after the application uses the one-time token to retrieve the reusable token, it erases the one-time token from the shared session record. The session cookie is used to authenticate HTTP requests sent by the browser to web-based applications in the group, until one of the applications finds that the session referenced by the reusable token contained in the cookie has expired. Then that application sends the reusable token to the PBB and asks for a one-time token. If the PBB has a one-time token paired with a reusable token different from the one sent by the application, it sends the one-time token to the application. Otherwise it authenticates cryptographically to the VBB as in the case of a native front-end, obtaining a pair of fresh tokens and sending the new one-time token to the application.

Pros and Cons of the Two Methods

The method based on data protection is more flexible than the method based on shared sessions. It can be used to implement SSO for any set of applications, whether or not those applications are related to each other. By contrast, the method based on shared sessions can only be used to implement SSO for a group of related applications: the set of web-based applications in the group must be circumscribable by the scope of a cookie; and, as explained in Section 8.2.2, native front-ends of applications in the group must be signed with the same code-signing key pair in Android, or must have the same Team ID in iOS, so that the PBB can refuse requests from applications not in the group.

On the other hand, the method based on shared login sessions has performance and security advantages, as explained in Section 7.5.3. In the method based on data protection, SSO is accomplished by making cryptographic authentication transparent to the user, whereas in the method based on shared login sessions cryptographic authentication is avoided altogether; hence the performance advantage. In the method based on data protection, the data encryption key must be present in the device while the user interacts with the applications, whereas in the method based on shared login sessions the uncertified key pair is only needed when a new session is created, and can be erased after it is used; hence the security advantage.

Using Cryptographic Authentication without a Cryptographic API on iOS and Android Devices

This is the fifth of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

Everybody agrees that passwords provide very poor security for user authentication, being vulnerable to capture by phishing attacks or database breaches, or by being reused at malicious sites. Authentication using public key cryptography does not have any of these vulnerabilities, and yet, after being available for several decades, it is only used in limited contexts. As computing shifts from traditional PCs to mobile devices, everybody agrees that passwords are terribly inconvenient on touchscreen keyboards, in addition to being insecure; and yet I don’t see a rush to adopting cryptographic authentication methods on mobile devices.

What obstacles stand in the way of widespread adoption of cryptographic authentication?

One obstacle is no doubt the complexity of cryptography. Implementing cryptographic functionality is difficult even when cryptographic libraries are available. Using a cryptographic API is no trivial matter, as documented by Martin Georgiev et al. in a recent paper (reference [39] in the paper).

Another obstacle is poor support by web browsers for the deployment and use of cryptographic credentials. In particular, there are no easy-to-use standards generally supported by browser vendors for issuing cryptographic credentials to a browser and requesting the presentation by the browser of particular credentials or credentials asserting particular attributes.

In Section 7 the paper proposes an architecture for cryptographic authentication on mobile devices that addresses these two obstacles. It does that by encapsulating cryptographic authentication of a mobile device to an application back-end inside a Prover Black Box (PBB) located in the device and a Verifier Black Box (VBB) located in the cloud, as shown in figures 6 (page 48) and 7 (page 54).

The PBB may contain one or more protocredentials for multifactor closed-loop authentication, or credentials for single factor closed-loop or open-loop authentication; and it takes care of proving possession of credentials to the VBB. After a cryptographic authentication event in which the PBB proves possession of one or more credentials, the VBB creates an authentication object that records the event and contains authentication data such as the hash of a public key or attributes asserted by a public key certificate, a U-Prove token, or an Idemix anonymous credential. The authentication object is retrievable by a one-time authentication token, which the VBB passes to the PBB and the PBB passes to the application back-end via a native front-end or via the web browser. The authentication token plays the role of a bearer token in a secondary non-cryptographic authentication of the native front-end or web browser to the back-end, and allows the application back-end to retrieve the authentication data.
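
Here is a minimal Python sketch of the VBB-side bookkeeping just described. The class, method and field names are my own inventions, not an API defined in the paper, and the cryptographic proof of possession itself is elided.

    import secrets
    import time

    class VBB:
        def __init__(self):
            self.auth_objects = {}  # one-time authentication token -> object

        def record_authentication(self, auth_data):
            """Called after the PBB proves possession of one or more
            credentials. auth_data could be the hash of a public key, or
            attributes asserted by a public key certificate, a U-Prove
            token, or an Idemix anonymous credential."""
            token = secrets.token_urlsafe(32)
            self.auth_objects[token] = {"time": time.time(), "data": auth_data}
            return token  # goes to the PBB, then to the application back-end

        def retrieve_auth_data(self, token):
            """Called by the application back-end, which presents the token
            it received from the native front-end or the browser. The token
            is a one-time bearer token, so it is erased on first use."""
            entry = self.auth_objects.pop(token, None)
            return None if entry is None else entry["data"]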

In Figure 6 the native front-end of a mobile application receives the authentication token from the PBB and uses it to authenticate to the back-end of the same application, which presents it to the VBB to retrieve the authentication data.

In Figure 7, the PBB sends the token via the browser to the back-end of a web-based application, thus authenticating the browser to the back-end, which again uses the token to retrieve the authentication data from the VBB. (As a matter of terminology, we view a web-based application as having a back-end and a front-end, the back-end being its cloud portion, while the front-end consists of web pages and client-side code running in the browser.)

This architecture circumvents the two obstacles identified above to the adoption of cryptographic authentication.

The browser obstacle is avoided in Figure 6 because no browser is involved, and in Figure 7 because the browser is not involved in storing or presenting credentials, and no modification of standard browser functionality is required.

The obstacle presented by the complexity of cryptography is avoided by the encapsulation of cryptographic functionality in the PBB and the VBB and by making the PBB and the VBB accessible through non-cryptographic APIs in a manner familiar to native and web-based application developers.

In Figure 6, arrows (1) and (4) represent messages sent via the operating system of the mobile device using inter-application communication mechanisms available in iOS and Android; each message is a URL having a custom scheme, with message parameters embedded as usual in the query portion of the URL. Arrow (6) represents an HTTP POST request, and arrow (7) the corresponding response. Arrow (5) is internal to the application and can be implemented as part of a standard web API through which the native front-end accesses its back-end.
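
For concreteness, such a message could be built and parsed as sketched below. The scheme name com.example.pbb, the message name, and the parameter names are invented for illustration; the paper specifies the mechanism, not these identifiers.

    from urllib.parse import parse_qs, urlencode, urlparse

    SCHEME = "com.example.pbb"  # custom scheme registered by the PBB (invented)

    def build_message(name, params):
        # Message parameters are carried in the query portion of the URL.
        return f"{SCHEME}://{name}?{urlencode(params)}"

    def parse_message(url):
        parsed = urlparse(url)
        if parsed.scheme != SCHEME:
            raise ValueError("not a PBB message")
        return parsed.netloc, {k: v[0] for k, v in parse_qs(parsed.query).items()}

    # Example: encoding a request like arrow (1) and decoding it in the PBB.
    msg = build_message("authenticate", {"app_id": "example-app", "nonce": "12345"})
    assert parse_message(msg) == ("authenticate",
                                  {"app_id": "example-app", "nonce": "12345"})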

In Figure 7, arrow (1) represents an HTTP response that redirects the browser to a URL with a custom scheme that targets the PBB, with parameters included in the query portion of the URL; when the browser receives the response, it delivers the message to the PBB using the inter-application communication mechanism provided by the operating system. Arrow (4) represents a message sent by the PBB using the same mechanism, with scheme https; the operating system delivers it to the browser, which forwards it as an HTTP GET request to the application back-end. Arrow (5) represents an HTTP POST request, and arrow (6) the corresponding response.
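
Continuing with the same invented scheme name, arrow (1) is just an ordinary HTTP redirect whose Location is a custom-scheme URL; the sketch below builds the raw response by hand rather than through a web framework, and the parameter names are again made up.

    from urllib.parse import urlencode

    def redirect_to_pbb(challenge, callback_url):
        """Arrow (1): an HTTP response redirecting the browser to the PBB.
        The OS delivers the custom-scheme URL to the PBB as a message."""
        location = "com.example.pbb://authenticate?" + urlencode({
            "challenge": challenge,
            # Arrow (4) will target this https URL, so the OS hands the
            # PBB's reply to the browser, which sends it as an HTTP GET.
            "callback": callback_url,
        })
        return f"HTTP/1.1 302 Found\r\nLocation: {location}\r\n\r\n"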

The architecture is very flexible. It covers a wide variety of use cases, some of which are sketched out in Section 7.1.

A PBB-VBB pair may be used for returning-user authentication to one particular application. In that case the PBB contains a single credential (for one-factor authentication) or protocredential (for multifactor authentication).

Alternatively, a general purpose PBB may be made available to any mobile application that has a native front-end on the device or is accessed from the device through a browser, each application having its own VBB. In that case the PBB may contain any number of credentials or protocredentials used for closed-loop authentication, as well as credentials used for open-loop authentication.

An application may ask a general purpose PBB to prove possession of an uncertified key pair to the application’s VBB for returning-user authentication, or to the VBB of an identity/attribute provider or a social network for third-party closed-loop authentication or social login. The VBB of an identity/attribute provider delivers the user’s identity or attributes to the application back-end as authentication data upon presentation of the authentication token. The VBB of a social network may instead deliver an access token that provides limited access to the user’s account, thus allowing the application to obtain the user’s identity and attributes from the user’s profile, to issue social updates on behalf of the user, and more generally to provide an alternative user interface to the social network.

An application may also ask a general purpose PBB to demonstrate that the user has certain attributes by presenting public key certificates, U-Prove tokens or Idemix anonymous credentials to the application’s VBB in open-loop authentication.

For enterprise use, a PBB-VBB pair may be shared by a group of enterprise applications, including web-based applications and applications with native front-ends, with single sign-on based on shared login sessions. I will discuss this functionality in the next post.

A security analysis of the architecture is provided in Section 8. Among other security considerations, it discusses protection against leaks through so-called Referer headers, protection against misuse of an authentication token by its recipient to impersonate the user, a countermeasure against a form of Login CSRF, identification of the application that requests presentation of one or more credentials kept by a general purpose PBB, and countermeasures against a malicious application masquerading as a different application or as the system browser.

Strong Authentication with a Low-Entropy Biometric Key

This is the fourth of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

Biometrics are a strong form of authentication when there is assurance of liveness, i.e. assurance that the biometric sample submitted for authentication belongs to the individual seeking authentication. Assurance of liveness may be relatively easy to achieve when a biometric sample is submitted to a reader in the presence of a human operator, if the reader and the operator are trusted by the party to which the user is authenticating; but it is practically impossible to achieve for remote authentication with a reader controlled by the authenticating user. When there is no assurance of liveness, security must rely on the relative secrecy of biometric features, which is never absolute, and may be non-existent. Fingerprints, in particular, cannot be considered a secret, since you leave fingerprints on most surfaces you touch. Using a fingerprint as a login password would mean leaving sticky notes with your password everywhere you go.

In addition to these security caveats, biometric authentication raises acute privacy concerns. Online transactions authenticated with biometric features would be linkable not only to other online transactions, but also to offline activities of the user. And both online and offline transactions would become linked to the user’s identity if a biometric sample or template pertaining to the user became public knowledge or were acquired by an adversary.

Yet, in Section 3, the paper proposes a method of using biometrics for user authentication on a mobile device to an application back-end. The method addresses the above security and privacy concerns as follows:

  1. First, biometrics is not used by itself, but rather as one factor in multifactor authentication; a second, required factor is possession of a protocredential stored in the user’s device, and an optional third factor is knowledge of a passcode such as a PIN.
  2. Second, the paper suggests using an iris scan, which provides more secrecy than fingerprints. (The scan could be taken by a camera on the user’s mobile device. The paper cites the work of Hao, Anderson, and Daugman at the University of Cambridge, which achieved good results with iris scans using a near-infrared camera. I have just been told that phone cameras filter out near-infrared light, so a special camera may be needed. The Wikipedia article on iris recognition discusses the use of near-infrared vs. visible light for iris scanning.)
  3. Third, no biometric-related data is sent by the user’s device to the application back-end, either at authentication time or at enrollment time. The biometric sample is used to regenerate a key pair on the device, and the key pair is used to authenticate the device to the back-end.
  4. Fourth, neither a biometric sample nor a biometric template is stored in the user’s device. Instead, the paper proposes to use one of several methods described in the literature, cited in Section 3.2, for consistently producing a biometric key from auxiliary data and genuine but varying biometric samples. Only the auxiliary data is stored in the device, and it is deemed infeasible to recover any biometric information from the auxiliary data.

The resulting security and privacy posture is discussed in Section 4.4 of the paper.

As shown in Figure 3 (on page 22 of the paper), we combine the biometric key generation process with the key pair regeneration process of our protocredential-based authentication method. The biometric sample (the iris image in the figure) is a non-stored secret (the only one in this case), and the auxiliary data is kept in the protocredential as a non-stored-secret related parameter. The auxiliary data and the biometric sample are combined to produce the biometric key. A randomized hash of the biometric key is computed using a salt, which is also kept in the protocredential as a second non-stored-secret related parameter. The randomized hash of the biometric key is used to regenerate the key pair, in conjunction with the key-pair related parameters. The key pair regeneration process produces a DSA, ECDSA or RSA key pair as described in sections 2.6.2, 2.6.3 and 2.6.4 respectively. The public key is sent to the application back-end, and the private key is used to demonstrate possession of the credential by signing a challenge. Figure 4 (on page 23 of the paper) adds a PIN as a second non-stored secret for three-factor authentication; in that case the auxiliary data is kept encrypted in the protocredential, and decrypted by XORing the ciphertext with a randomized hash of the PIN.
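
The following Python sketch traces this data flow under stated simplifications: the biometric key derivation is replaced by a stand-in that does not tolerate variation among genuine samples (real constructions are cited in Section 3.2), an HMAC plays the role of the randomized hash, and key pair regeneration is shown DSA-style with toy parameters. All function and field names are invented.

    import hashlib
    import hmac

    def randomized_hash(data: bytes, salt: bytes) -> bytes:
        # Stand-in for the paper's randomized hash: a salt-keyed HMAC.
        return hmac.new(salt, data, hashlib.sha256).digest()

    def derive_biometric_key(sample: bytes, aux: bytes) -> bytes:
        # NOT a real biometric key derivation: a real method (Section 3.2)
        # produces the same key from varying genuine samples; this stand-in
        # only shows where the auxiliary data enters the computation.
        return hashlib.sha256(aux + sample).digest()

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def regenerate_key_pair(proto, sample: bytes, pin: str = None):
        aux = proto["auxiliary_data"]
        if pin is not None:
            # Figure 4: the auxiliary data is stored encrypted and is
            # decrypted by XORing with a randomized hash of the PIN.
            aux = xor_bytes(aux, randomized_hash(pin.encode(), proto["pin_salt"]))
        biometric_key = derive_biometric_key(sample, aux)
        seed = randomized_hash(biometric_key, proto["salt"])
        # DSA-style regeneration from the key-pair related parameters
        # (toy p, q, g below; see sections 2.6.2-2.6.4 for the real thing).
        p, q, g = proto["p"], proto["q"], proto["g"]
        x = int.from_bytes(seed, "big") % q or 1  # private key
        return x, pow(g, x, p)                    # (private, public)

    # Toy usage: p = 23, q = 11, g = 4 (g has order q modulo p).
    proto = {"auxiliary_data": b"\x01" * 32, "salt": b"\x02" * 16,
             "pin_salt": b"\x03" * 16, "p": 23, "q": 11, "g": 4}
    print(regenerate_key_pair(proto, b"iris sample bytes", pin="1234"))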

The combination of biometric key generation with our protocredential-based authentication method represents a significant improvement on biometric authentication methodology. There is an intrinsic trade-off between the consistency of a biometric key across genuine biometric samples and the entropy of the key, because the need to accommodate large enough variations among genuine biometric samples reduces the entropy of the key. In the above-mentioned paper by Hao et al., the authors are apologetic about the fact that their biometric key has only 44 bits of entropy when the auxiliary data is known. But this is not a problem in our authentication framework, for two reasons:

  1. The auxiliary data is not public. An adversary must capture the user’s device to obtain it.
  2. An adversary who captures the user’s device and obtains the auxiliary data cannot mount an offline guessing attack against the biometric key. All biometric keys produce well-formed DSA or ECDSA key pairs, and most biometric keys produce well-formed RSA key pairs. To determine whether a guessed biometric key is valid, the adversary must therefore use it to generate a key pair and use that key pair to authenticate online against the application back-end, which can limit the number of guesses to a small number. Forty-four bits of entropy is plenty if the adversary can only make, say, 10 guesses, as the back-of-the-envelope computation after this list shows.
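
To put a number on that claim, here is the arithmetic, using the figures from the text (44 bits of entropy, 10 allowed guesses):

    # Success probability of an online guessing attack against a biometric
    # key with 44 bits of entropy when only 10 authentication attempts are
    # allowed before the back-end stops accepting guesses.
    entropy_bits = 44
    allowed_guesses = 10
    p = allowed_guesses / 2**entropy_bits
    print(f"{p:.1e}")  # about 5.7e-13, i.e. less than one in a trillion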

Therefore our authentication method makes it possible to use low-entropy biometric keys without compromising security. This may enable the use of biometric modalities or techniques that otherwise would not provide sufficient security.

Nevertheless, we do not advocate the routine use of biometrics for authentication. As pointed out in Section 10, while malware running on the user’s device after an adversary has captured it cannot obtain biometric data, malware running on the device while the legitimate user is using it could obtain a biometric sample by prompting the user for one. A biometric authentication factor should only be used when exceptional security requirements demand it and exceptional security precautions are in place to protect the confidentiality of the user’s biometric features.