Invited Talk at the University of Utah

I’ve been so busy that I haven’t had time to write for more than three months, which is a pity because things have been happening and there is much to report. I’m trying to catch up today.

The first thing to report is that Prof. Gopalakrishnan of the University of Utah invited Karen Lewison and me to give a joint talk at the University on May 29. We talked about the need to replace TLS, which I’ve discussed earlier on this blog. The slides can be found at the usual location for papers and presentations, linked at the bottom of each page of this web site.

The University of Utah has a renowned School of Computing and it was quite stimulating to meet with faculty and discuss research after the talk. We were happy to discover common research interests, and we have been exploring the possibility of doing joint research work with Profs. Ganesh Gopalakrishnan, Sneha Kasera, and Tammy Denning; we are thrilled that the prospects look promising.

Other things to report include papers accepted at the forthcoming M2MSec workshop and the forthcoming GlobalPlatform TEE conference. I will report on them in the next two posts.

Patent Illustrates Five Different Problems with Software Patents

I recently looked at a patent issued last January in the area of secure messaging, US 8,625,805. It uses the term “Digital Security Bubble” (the title of the patent) to refer to a concept which, in my opinion, is no different from the concept of digital envelope or enveloped data found in RFC 5652 (Cryptographic Message Syntax) or the earlier RFC 2315 (PKCS #7). I posed a question on Ask Patents, asking what could be done to challenge the patent short of obtaining a Post Grant Review, which would cost $30,000 or more just in USPTO fees (including a $12,000 request fee and an $18,000 post-institution fee for challenging up to 15 claims, plus $250 for each additional claim being challenged beyond 15). Phoenix88 suggested submitting prior art under 37 CFR 1.501 and 35 USC 301, which requires no fee. Following his advice, I studied the patent in detail and I have submitted as prior art PKCS #7 as well as RFC 1422, an RFC related to Privacy Enhanced Mail (PEM) that PKCS #7 relies upon. If the USPTO accepts the submission, it will be entered into the patent file. In the meantime, it can be found online on the Pomcor site.

But the reason I’m writing this post is that US patent 8,625,805 can serve to identify and illustrate five different problems with software patents, some well known, some that may not have been identified before. Here are those five problems.

1. The Vocabulary Problem

Whereas disciplines such as medicine, biology, chemistry and the various branches of engineering have developed mature and well-established vocabularies for their subject matters, software engineers like to invent their own fanciful vocabulary as they go. Think for example of the invention of the term cookie by Netscape to refer to a data item that stores server state in an HTTP client.

Inventing a new term is justified when the term is applied to a new concept, or when existing terminology is inadequate. But it is deplorable when there exists adequate terminology for the same concept.

Creation of unnecessary terminology may be due to ignorance of existing terminology. But in their comments on prior art resources, which can be found on a USPTO web page, Public Knowledge, the Electronic Frontier Foundation and Engine Advocacy have said that “applicants are able to use invented terminology in order to avoid prior art.” Without judicial discovery it is not possible to tell whether the term “digital security bubble” was used in US 8,625,805 instead of “digital envelope” for the purpose of obfuscation, or simply because the inventors were not familiar with standard secure messaging terminology.

The lack of stable and broadly accepted terminology drastically reduces the ability to find relevant documents by keyword search, i.e. it reduces what is known as recall in information retrieval. The patent file of US 8,625,805 includes a Search Strategy entry (SRNT) showing that 13 out of 26 prior art search queries contained the keyword “bubble”; the useless keyword “bubble” thus took up 50% of the time and effort spent by the examiner on keyword search. (The search strategy entry can be found using the Image File Wrapper tab in the Public Patent Application Information Retrieval tool — Public PAIR — of the USPTO.)

Besides poor recall, keyword searches of software literature also suffer from the opposite problem, poor precision. This is due to the fact that some popular words are used for many different purposes. The word token, for example, has a large number of different meanings. The combination of poor recall and poor precision means that keyword search is not well suited to finding prior art relevant to software patent applications.

The USPTO has launched a Glossary Pilot that provides incentives for applicants to include a glossary section in the specification. While a glossary section may be useful for other purposes, it does not prevent or discourage the use of non-standard terminology.

2. The Search Corpus Problem

The second problem with software patents has to do with the databases that examiners use to search for prior art.

Although the Search Notes entry (SRFW) in the patent file of US 8,625,805 indicates that the examiner did some Non-Patent Literature (NPL) searching, all 26 queries documented in the Search Strategy entry (SRNT) were run on patent literature databases. Since few software patents were granted or applied for before the late nineties, documentation of fundamental software inventions, such as secure messaging, cannot be found in the patent literature.

3. Lack of “Full, Clear, Concise and Exact” Descriptions

A patent is supposed to embody a quid pro quo: the inventor gets a monopoly on the use of the invention, and in exchange discloses the invention so that the public can use it once the patent has expired. The inventor’s side of the bargain is codified in 35 USC 112(a):

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

But too often, software patents make claims without providing a “full, clear, concise and exact description” of what is being claimed. Claim 15 of US 8,625,805 is a good example. Here is the claim:

15. The system of claim 1 wherein the processor is configured to perform the encapsulation at least in part by performing a spreading function.

And here is what the specification has to say on what it means to “perform the encapsulation at least in part by performing a spreading function”:

In some embodiments (e.g., as is shown in FIG. 5), a spreading function is used to spread the encrypted symmetric keys inside the DSB (as shown in region 512), by spreading the bits of the encrypted key in a spreading function generated pattern, with the default function being a sequential block or data. The spreading function also contains the cryptographic hashed representation of the recipient usernames that are used by the server to identify the recipients of the message and to set the message waiting flag for each of them.

I don’t understand this at all, and FIG. 5 does not help. If you understand it and would like to explain it in a comment, I would appreciate it.

Lack of compliance with 35 USC 112(a) seems to be a common problem. Software engineers often complain that software patents are incomprehensible. Sometimes, software engineers do not even understand their own patents, written up by patent attorneys:

Against my better judgement, I sat in a conference room with my co-founders and a couple of patent attorneys and told them what we’d created. They took notes and created nonsensical documents that I still can’t make sense of.

A “person skilled in the art” of software is called a software engineer or a software developer. Hence patents that are incomprehensible to software engineers, by definition, do not comply with 35 USC 112(a). Unfortunately, the USPTO does not seem to be keen on enforcing compliance with 35 USC 112(a). Sometimes I wonder if examiners read the patent specification at all, or only read the claims.

4. The Means-or-Step-Plus-Function Problem for Security Claims

US 8,625,805 also illustrates a tricky problem specific to security claims. Here is claim 16:

16. The system of claim 1 wherein only a designated recipient, having a device with applicable device characteristics, is able to decrypt the message.

I believe this claim is objectionable under 35 USC 112(b) because it does not point out any subject matter. It should have been written in means-plus-function form, e.g. “the system of claim 1, further comprising a means of preventing the decryption of the message other than by a designated recipient having a device with applicable characteristics,” with “means” referring to the following portions of the specification:

At 208, a device identifier (“deviceID”) is created from captured hardware information. Examples of captured hardware information include: hard drive identifiers, motherboard identifiers, CPU identifiers, and MAC addresses for wireless, LAN, Bluetooth, and optical cards. Combinations of information pertaining to device characteristics, such as RAM, CACHE, controller cards, etc., can also be used to uniquely identify the device. Some, or all, of the captured hardware information is run through a cryptographic hash algorithm such as SHA-256, to create a unique deviceID for the device. The captured hardware information can also be used for other purposes, such as to seed cryptographic functions.

FIG. 10 illustrates an example of a process for accessing a message included inside a digital security bubble. In some embodiments, process 1000 is performed on a client device, such as Bob’s client device 114. The process begins at 1002 when a DSB is received. As one example, a DSB is received at 1002 when app 138 contacts platform 102, determines a flag associated with Bob’s account has been set, and downloads the DSB from platform 102. In such circumstances, upon receipt of the DSB, client 114 is configured to decrypt the DSB using Bob’s private key (e.g., generated by Bob at 202 in process 200).

At 1004 (i.e., assuming the decryption was successful), hardware binding parameters are checked. As one example, a determination is made as to whether device information (i.e., collected from device 114) can be used to construct an identical hash to the one included in the received DSB. If the hardware binding parameters fail the check (i.e., an attempt is being made to access Alice’s message using Bob’s keys on a device that is not Bob’s), contents of the DSB will be inaccessible, preventing the decryption of Alice’s message. If the hardware binding parameter check is successful, the device is authorized to decrypt the symmetric key (i.e., using Bob’s private key generated at 202) which can in turn be used to decrypt Alice’s message.

This is very vague, and I don’t think it qualifies as a full, clear, concise, and exact description. But the gist of it seems to be that an encrypted message is accompanied by a hash of hardware parameters of a destination device. When the message is received, an app checks whether the hash matches a hash of the hardware parameters of the device where the app is running. If the check fails, the app refuses to decrypt the message.

The point I really want to make, however, is that this method of “hardware binding” does not work. An adversary who has Bob’s private key is not prevented from decrypting the message on a device other than Bob’s device just because an app on Bob’s device is programmed to check the hash of hardware parameters. The adversary can do anything he or she wants on his or her own device. The adversary can, for example, use an app that behaves the same as the app used by Bob except that it omits the check.
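To make this concrete, here is a minimal sketch (mine, not the patent’s; all names are hypothetical) of the hardware binding check as described, with the actual decryption routine passed in as a parameter:

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class DSB:                      # hypothetical stand-in for a "digital security bubble"
        bound_device_hash: bytes    # hash of the intended device's hardware parameters
        encrypted_key: bytes        # message key, encrypted under Bob's public key

    def device_hash(hardware_info: bytes) -> bytes:
        # Hash of captured hardware information, as in step 208 of the specification.
        return hashlib.sha256(hardware_info).digest()

    def compliant_app_open(dsb: DSB, local_hardware_info: bytes, decrypt_with_private_key):
        # A compliant app refuses to proceed when the hardware binding check fails.
        if dsb.bound_device_hash != device_hash(local_hardware_info):
            raise PermissionError("hardware binding check failed")
        return decrypt_with_private_key(dsb.encrypted_key)

    def adversary_app_open(dsb: DSB, decrypt_with_private_key):
        # An adversary's app simply omits the check: possession of the private key
        # is all that is cryptographically required to recover the message key.
        return decrypt_with_private_key(dsb.encrypted_key)

Nothing in the ciphertext enforces the binding; the check is a policy of the compliant app, not a property of the cryptography.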

This illustrates an important point, specific to security claims, that I have not seen discussed before. It is practically impossible to verify that a means-or-step-plus-function claim is supported by the specification, if the function being claimed is to achieve a security goal. It may be easy to see that a claim like the above is NOT supported. But establishing that a security claim IS supported would require writing and verifying a mathematical proof that the security goal is achieved, based on a mathematical model of the system described in the specification, something which is theoretically possible but not realistically achievable today. Furthermore, the statement of the goal would have to be probabilistic, since security is rarely absolute.

This is important because allowing a security claim supported by a description of a technique that does not work does a lot of damage. Somebody else may later invent a technique that does work. Then the person who has been granted a patent on the security claim based on the technique that does not work will be able not only to prevent the person who has found the technique that works from obtaining a patent, but also to prevent everybody from using the technique that works.

5. A Loophole for Avoiding Third-Party Preissuance Submissions

The America Invents Act (AIA) has introduced Preissuance Submissions of Prior Art, which allow third parties to submit prior art to the examiners, and the USPTO is keen on crowdsourcing access to prior art. This is a good thing. But US 8,625,805 avoided third-party scrutiny because the application underwent a Prioritized, a.k.a. Track I, Examination and, like many Track I applications, was not published until granted. (The fact that US 8,625,805 was a Track I patent was noted by George White in a comment on Ask Patents.)

Prioritized Examination, which requires an additional fee of $2,000 for a small entity, has the effect of shortening the waiting time before an examiner takes up the application from years to a few months. Before AIA this was already an extremely unfair and undemocratic procedure, shortening the process for corporations and rich inventors who could afford it while lengthening it for everybody else. Now, after AIA, it can also be used as a loophole to shield those who can afford the fee from third party submissions of prior art, making it easier for them to obtain low quality and overly broad patents which they can inflict on society.

A second loophole for preventing preissuance submission is simply for the applicant to request non-publication of the application. This loophole costs nothing, but it precludes filing in foreign countries that require publication.

More generally, preissuance submission must take place after pre-grant publication, so there can be no preissuance submission if there is no pre-grant publication. The above loopholes prevent preissuance submission by eliminating pre-grant publication. But pre-grant publication may also fail to take place in the normal course of business. By default, it takes place no earlier than 18 months after the earliest benefit date. If the application does not claim the benefit of any earlier application, in an ideal world the USPTO should be able to examine it in less than 18 months, in which case there would be no pre-grant publication, and hence no possibility of preissuance submission.

Since the 18-month delay in publication and the non-publication request provision are statutory, allowing preissuance submission for all applications requires a change in the law. Since preissuance submission is essential for improving the quality of software patents, such a change is badly needed. I would suggest introducing a minimum 3-month time period between publication and allowance. A request for non-publication would hold publication in abeyance until the examiner thinks the claims are allowable; but then the application would be published and open to preissuance submission of prior art and comments for three months before actual allowance.

While waiting for legislation to be enacted, the Post-Grant Review fees should be drastically reduced, and the onerous requirements on Post-Grant Review petitions should be simplified so that it is not necessary to hire a patent lawyer to file a petition.

Conclusion

The problem of poor quality software patents is difficult and multi-faceted, and many ideas have been proposed for addressing it. Here I would just like to make a few suggestions related to the above observations.

  • Prioritized Examination should be eliminated.
  • The Post Grant Review request fee should be no greater than the Appeal Forwarding fee for a small entity ($1,000). There should be no post-institution fee and no per-claim fee, and a refund of 50% of the request fee should be given if the request is found to have merit and the review is granted.
  • Examiners of software patents should be instructed to direct at least half of their search efforts towards non-patent literature, and to document the non-patent literature queries that they run. Some searchable sources of non-patent literature have been suggested by others in comments on prior art resources. I would add the collection of IETF RFCs and Internet Drafts, which can be searched by restricting a web search to the site datatracker.ietf.org.
  • In addition to crowdsourcing the search for prior art, the USPTO should accept and encourage comments by persons skilled in the art, such as software engineers in the case of software patents, on whether specifications are comprehensible, provide a full, clear, concise, and exact description of the invention, and support the claims.
  • The specification of every patent application containing one or more means-or-step-plus-function claims should be required to contain a separate section explaining in detail how each claimed function is provided. This will not guarantee that every security goal asserted by a means-or-step-plus-function claim is achieved by the invention, but it will at least help third-party reviewers and examiners to identify unsupported claims.

The USPTO has requested comments on “The Use of Crowdsourcing and Third-Party Preissuance Submissions To Identify Relevant Prior Art,” and more generally on “ways the Office can use crowdsourcing to improve the quality of examination.” They are due by April 25. We are sending ours. Please consider sending yours.

Protecting Derived Credentials without Secure Hardware in Mobile Devices

NIST has recently released drafts of two documents with thoughts and guidelines related to the deployment of derived credentials,

  • NISTIR 7981 (Mobile, PIV, and Authentication), and
  • SP 800-157 (Guidelines for Derived Personal Identity Verification (PIV) Credentials),

and requested comments on the drafts by April 21. We have just sent our comments and we encourage you to send yours.

Derived credentials are credentials that are derived from those in a Personal Identity Verification (PIV) card or Common Access Card (CAC) and carried in a mobile device instead of the card. (A CAC card is a PIV card issued by the Department of Defense.) The Electronic Authentication Guideline, SP 800-63, defines a derived credential more broadly as:

A credential issued based on proof of possession and control of a token associated with a previously issued credential, so as not to duplicate the identity proofing process.

A PIV/CAC card may carry a PIV authentication credential, a digital signature credential, a current key management credential and up to 20 retired key management credentials, each credential consisting of a private key and an associated certificate that contains the corresponding public key. The digital signature private key is used for signing email messages, and the key management keys for decrypting symmetric keys used to encrypt email messages. The retired key management keys are needed to decrypt old messages that have been saved encrypted. The PIV authentication credential is mandatory for all users, while the digital signature credential and the current key management credential are mandatory for users who have government email accounts.

A mobile device may similarly carry an authentication credential, a digital signature credential, and current and retired key management credentials. Although this is not fully spelled out in the NIST documents, the current and retired key management private keys in the mobile device should be able to decrypt the same email messages as those in the card, and therefore should be the same as those in the card, except that we see no need to limit the number of retired key management private keys to 20 in the mobile device. The key management private keys should be downloaded to the mobile device from the escrow server that should already be in use today to recover from the loss of a PIV/CAC card containing those keys. On the other hand the authentication and digital signature key pairs should be generated in the mobile device, and therefore should be different from those in the card.

In a puzzling statement, SP 800-157 insists that only an authentication credential can be considered a “derived PIV credential”:

While the PIV Card may be used as the basis for issuing other types of derived credentials, the issuance of these other credentials is outside the scope of this document. Only derived credentials issued in accordance with this document are considered to be Derived PIV credentials.

Nevertheless, SP 800-157 discusses details related to the storage of digital signature and key management credentials in mobile devices in informative appendix A and normative appendix B.

Software Tokens

The NIST documents provide guidelines regarding the lifecycle of derived credentials, their linkage to the lifecycle of the PIV/CAC card, their certificate policies and cryptographic specifications, and the storage of derived credentials in several kinds of hardware cryptographic modules, which the documents refer to as hardware tokens, including microSD tokens, UICC tokens, USB tokens, and embedded hardware tokens. But the most interesting, and controversial, aspect of the documents concerns the storage of derived credentials in software tokens, i.e. in cryptographic modules implemented entirely in software.

Being able to store derived credentials in software tokens would mean being able to use any mobile device to carry derived credentials. This would have many benefits:

  1. Federal agencies would have the flexibility to use any mobile devices they want.
  2. Federal agencies would be able to use inexpensive devices that would not have to be equipped with special hardware for secure storage of derived credentials. This would save taxpayer money and allow agencies to do more with their IT budgets.
  3. Mobile authentication and secure email solutions used by the Federal Government would be affordable and could be broadly used in the private sector.

The third benefit would have huge implications. Today, the requirement to use PIV/CAC cards means that different IT solutions must be developed for the government and for the private sector. IT solutions specifically developed for the government are expensive, while private sector solutions too often rely on passwords instead of cryptographic credentials. Using the same solutions for the government and the private sector would lower costs and increase security.

Security

But there is a problem. The implementation of software tokens hinted at in the NIST documents is not secure.

NISTIR 7981 describes a software token as follows:

Rather than using specialized hardware to store and use PIV keys, this approach stores the keys in flash memory on the mobile device protected by a PIN or password. Authentication operations are done in software provided by the application accessing the IT system, or the mobile OS.

And SP 800-157 adds the following:

For software implementations (LOA-3) of Derived PIV Credentials, a password-based mechanism shall be used to perform cryptographic operations with the private key corresponding to the Derived PIV Credential. The password shall meet the requirements of an LOA-2 memorized secret token as specified in Table 6, Token Requirements per Assurance Level, in [SP800-63].

Taken together, these two paragraphs seem to suggest that the derived credentials should be stored in ordinary flash memory storage encrypted under a data encryption key derived from a PIN or password satisfying certain requirements. What requirements would ensure sufficient security?

Smart phones are frequently stolen, so we must assume that an adversary will be able to capture the mobile device. After capturing the device the adversary can immediately place it in a metallic box or other Faraday cage to prevent a remote wipe. The contents of the flash memory storage may be protected by the OS, but in many Android devices the OS can be replaced (by rooting the device), with instructions for doing so provided by Google or the manufacturer. OS protection may be more effective in some iOS devices, but since a software token provides no tamper resistance by definition, we must assume that the adversary will be able to extract the encrypted credentials. Having done so, the adversary can mount an offline password guessing attack, testing each password guess by deriving a data encryption key from the password, decrypting the credentials, and checking whether the resulting plaintext contains well-formed credentials. To carry out the password guessing attack, the adversary can use a botnet. Botnets with tens of thousands of computers can be easily rented by the day or by the hour. Botnets are usually programmed to launch DDOS attacks, but can be easily reprogrammed to carry out password cracking attacks instead. The adversary has at least a few hours to run the attack before the authentication and digital signature certificates are revoked and the revocation becomes visible to relying parties; and there is no time limit for decrypting the key management keys and using them to decrypt previously obtained encrypted email messages.
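Here is a minimal sketch of such an offline attack, assuming (the NIST drafts do not specify a scheme) that the credentials are encrypted with AES-GCM under a PBKDF2-derived key; with authenticated encryption, the tag check takes the place of checking for well-formed plaintext:

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    def try_password(guess: str, salt: bytes, nonce: bytes, blob: bytes):
        # Derive a candidate data encryption key from the guess, exactly as the token would.
        key = hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000, dklen=32)
        try:
            return AESGCM(key).decrypt(nonce, blob, None)  # wrong key: InvalidTag is raised
        except Exception:
            return None

    def offline_attack(wordlist, salt, nonce, blob):
        # No verifier and no throttling: every guess is tested locally, and the work
        # parallelizes trivially across the machines of a rented botnet.
        for guess in wordlist:
            plaintext = try_password(guess, salt, nonce, blob)
            if plaintext is not None:
                return guess, plaintext

At 20 bits of entropy (about a million possibilities, as discussed below), such a search completes almost immediately.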

To resist such an attack, the PIN or password would need to have at least 64 bits of entropy. According to Table A.1 of the Electronic Authentication Guideline (SP 800-63), a user-chosen password must have more than 40 characters chosen appropriately from a 94-character alphabet to achieve 64 bits of entropy. Entering such a password on the touchscreen keyboard of a smart phone is clearly unfeasible.
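For comparison, a randomly generated password drawn from a 94-character alphabet carries log2(94), or about 6.55, bits of entropy per character, so about ten random characters would suffice for 64 bits; the figure of more than 40 characters reflects NIST’s much lower entropy estimates for user-chosen passwords, which are far from random.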

SP 800-157 calls instead for a password that meets the requirements of an LOA-2 memorized secret token as specified in Table 6 of SP 800-63, which are as follows:

The memorized secret may be a randomly generated PIN consisting of 6 or more digits, a user generated string consisting of 8 or more characters chosen from an alphabet of 90 or more characters, or a secret with equivalent entropy.

The equivalent entropy is only 20 bits. Why does Table 6 require so little entropy? Because it is not concerned with resisting an offline guessing attack against a password that is used to derive a data encryption key. It is instead concerned with resisting an online guessing attack against a password that is used for authentication, where password guesses can only be tested by attempting to authenticate to a verifier who throttles the rate of failed authentication attempts. In Table 6, the quoted requirement on the memorized secret token is coupled with the following requirement on the verifier:

The Verifier shall implement a throttling mechanism that effectively limits the number of failed authentication attempts an Attacker can make on the Subscriber’s account to 100 or fewer in any 30-day period.

and the necessity of the coupling is emphasized in Section 8.2.3 as follows:

When using a token that produces low entropy token Authenticators, it is necessary to implement controls at the Verifier to protect against online guessing attacks. An explicit requirement for such tokens is given in Table 6: the Verifier shall effectively limit online Attackers to 100 failed attempts on a single account in any 30 day period.

Twenty bits is not sufficient entropy for encrypting derived credentials, and requiring a password with sufficient entropy is not a feasible proposition.

Solutions

But the problem has solutions. It is possible to provide effective protection for derived credentials in a software token.

One solution is to encrypt the derived credentials under a high-entropy key that is stored in a secure back-end and retrieved when the user activates the software token. The problem then becomes how to retrieve the high-entropy key from the back-end. To do so securely, the mobile device must authenticate to the back-end using a device-authentication credential stored in the mobile device, which seems to bring us back to square one. However, there is a difference between the device-authentication credential and the derived credentials stored in the token: the device-authentication credential is only needed for the specific purpose of authenticating the device to the back-end and retrieving the high-entropy key. This makes it possible to use as device-authentication credential a credential regenerated on demand from a PIN or password supplied by the user to activate the token and a protocredential stored in the device, in a way that deprives an attacker who captures the device of any information that would make it possible to test guesses of the PIN or password offline.

The device-authentication credential can consist, for example, of a DSA key pair whose public key is registered with the back-end, coupled with a handle that refers to a device record where the back-end stores a hash of the registered public key. In that case the protocredential consists of the device record handle, the DSA domain parameters, which are (p, q, g) in the notation of the DSS, and a random high-entropy salt. To regenerate the DSA key pair, a key derivation function is used to compute an intermediate key-pair regeneration key (KPRK) from the activation PIN or password and the salt; then the DSA private and public keys are computed as specified in Appendix B.1.1 of the DSS, substituting the KPRK for the random string returned_bits produced by a random number generator.
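As an illustration, here is a minimal sketch of the regeneration step, with PBKDF2 standing in for the key derivation function (the choice of KDF and iteration count are my assumptions):

    import hashlib

    def regenerate_dsa_key_pair(pin: str, salt: bytes, p: int, q: int, g: int):
        # Derive the key-pair regeneration key (KPRK) from the activation PIN
        # or password and the high-entropy salt stored in the protocredential.
        n = q.bit_length()
        kprk = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000,
                                   dklen=(n + 64) // 8)   # N + 64 bits, per B.1.1
        # Appendix B.1.1 of the DSS, with the KPRK substituted for returned_bits:
        c = int.from_bytes(kprk, "big")
        x = (c % (q - 1)) + 1     # private key
        y = pow(g, x, p)          # public key
        return x, y

The same PIN and salt always yield the same key pair, so nothing derived from the PIN needs to be stored in the device.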

To authenticate to the back-end and retrieve the high-entropy key, the mobile device establishes a TLS connection to the back-end, over which it sends the device record handle, the DSA public key, and a signature computed with the DSA private key on a challenge derived from the TLS master secret. (Update—April 24, 2014: The material used to derive the challenge must also include the TLS server certificate of the back-end, due to a recently reported UKS vulnerability of TLS. See footnote 2 of the technical report.) The DSA public and private keys are deleted after authentication, and the back-end keeps the public key confidential. An adversary who is able to capture the device and extract the protocredential has no means of testing guesses of the PIN or password other than regenerating the DSA key pair and attempting online authentication to the back-end, which locks the device record after a small number of consecutive failed authentication attempts that specify the handle of the record.
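On the back-end side, the throttled verification might look like the following sketch (the lockout threshold and record layout are my assumptions):

    import hashlib

    MAX_FAILED_ATTEMPTS = 5   # assumption; the record locks after a small number of failures

    def authenticate_device(handle, public_key_bytes, challenge, signature, records, dsa_verify):
        # dsa_verify stands in for an ordinary DSA signature verification routine.
        record = records.get(handle)
        if record is None or record["locked"]:
            return False
        if (hashlib.sha256(public_key_bytes).digest() == record["pubkey_hash"]
                and dsa_verify(public_key_bytes, challenge, signature)):
            record["failures"] = 0
            return True
        record["failures"] += 1
        if record["failures"] >= MAX_FAILED_ATTEMPTS:
            record["locked"] = True   # cuts off online guessing of the PIN or password
        return False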

An example of a derived credentials architecture that uses this solution can be found in a technical report.

Other solutions are possible as well. The device-authentication credential itself could serve as a derived credential, as we proposed earlier; SSO can then be achieved by sharing login sessions, as described in Section 7.5 of another technical report. And I’m sure other solutions can be found.

Other Topics

There are several other topics related to derived credentials that deserve discussion, including the pros and cons of storing credentials in a Trusted Execution Environment (TEE), whether biometrics should be used for token activation, and whether derived credentials should be used for physical access. I will leave those topics for future posts.

Update (April 10, 2014). A post discussing the storage of derived credentials in a TEE is now available.

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove tokens, Idemix anonymous credentials, or credentials based on group signatures) is that they are not supported by TLS. We noticed this when we looked at privacy-enhancing credentials in the context of NSTIC, and we proposed an architecture for the NSTIC ecosystem that included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over satellite links due to the additional roundtrips and the transmission of certificate chains during the handshake. Client and attribute certificates, when used, are sent in the clear. And there has been a long list of TLS vulnerabilities, some of which have not been addressed, while others are addressed in TLS versions and extensions that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites supported TLS 1.1, which dates back to April 2006, only 20.7% supported TLS 1.2, which dates back to August 2008, and only 30.6% had server-side protection against the BEAST attack, which requires either TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due to the age of the protocol and the large number of versions and extensions that it has accumulated during its long life. Changing the configuration of a TLS implementation to protect against vulnerabilities without shutting out a large portion of the user base is a complex task that IT personnel are no doubt loath to tackle.

So perhaps it is time to restart from scratch, designing a new transport layer security protocol — actually, two of them, one for connections and the other for datagrams — that will incorporate the lessons learned from TLS — and DTLS — while discarding the heavy baggage of old code and backward compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits of redesigning transport layer security for the military, which has particularly strong reasons to want better transport layer security protocols. The military should be interested in better performance over satellite and radio links, for obvious reasons. It should be interested in increased security, because so much is at stake in the security of military networks. And I would argue that it should also be interested in increased privacy, because what is viewed as privacy on the Internet may be viewed as resistance to traffic analysis in military networks.

Comparing the Privacy Features of Eighteen Authentication Technologies

This blog post motivates and elaborates on the paper Privacy Postures of Authentication Technologies, which we presented at the recent ID360 conference.

There is a great variety of user authentication technologies, and some of them are very different from each other. Consider, for example, one-time passwords, OAuth, Idemix, and ICAM’s Backend Attribute Exchange: any two of them have little in common.

Different authentication technologies have been developed by different communities, which have created their own vocabularies to describe them. Furthermore, some of the technologies are extremely complex: U-Prove and Idemix are based on mathematical theories that may be impenetrable to non-specialists; and OpenID Connect, which is an extension of OAuth, adds seven specifications to a large number of OAuth specifications. As a result, it is difficult to compare authentication technologies to each other.

This is unfortunate because decision makers in corporations and governments need to decide what technologies or combinations of technologies should replace passwords, which have been rendered even more inadequate by the shift from traditional personal computers to smart phones and tablets. Decision makers need to evaluate and compare the security, usability, deployability, interoperability and, last but not least, privacy, provided by the very large number of very different authentication technologies that are competing in the marketplace of technology innovations.

But all these technologies are trying to do the same thing: authenticate the user. So it should be possible to develop a common conceptual framework that makes it possible to describe them in functional terms without getting lost in the details, to compare their features, and to evaluate their adequacy to different use cases.

The paper that we presented at the recent ID360 conference can be viewed as a step in that direction. It focuses on privacy, an aspect of authentication technology which I think is in need of particular attention. It surveys eighteen technologies, including: four flavors of passwords and one-time passwords; the old Microsoft Passport (of historical interest); the browser SSO profile of SAML; Shibboleth; OpenID; the ICAM profile of OpenID; OAuth; OpenID Connect; uncertified key pairs; public key certificates; structured certificates; Idemix pseudonyms; Idemix anonymous credentials; U-Prove tokens; and ICAM’s Backend Attribute Exchange.

The paper classifies the technologies along four different dimensions or facets, and builds a matrix indicating which of the technologies provide seven privacy features: unobservability by an identity or attribute provider; free choice of identity or attribute provider; anonymity; selective disclosure; issue-show unlinkability; multishow unlinkability by different parties; and multishow unlinkability by the same party. I will not try to recap the details here; instead I will elaborate on observations made in the paper regarding privacy enhancements that have been used to improve the privacy postures of some closed-loop authentication technologies.

Privacy Enhancements for Closed-Loop Authentication

One of the classification facets that the paper considers for authentication technologies is the distinction between closed-loop and open-loop authentication, which I discussed in an earlier post. Closed-loop authentication means that the credential authority that issues or registers a credential is later responsible for verifying possession of the credential at authentication time. Closed-loop authentication may involve two parties, or may use a third-party as a credential authority, which is usually referred to as an identity provider. Examples of third-party closed-loop authentication technologies include the browser SSO profile of SAML, Shibboleth, OpenID, OAuth, and OpenID Connect.

I’ve pointed out before that third-party closed-loop authentication lacks unobservability by the identity provider. Most third-party closed-loop authentication technologies also lack anonymity and multishow unlinkability. However, some of them implement privacy enhancements that provide anonymity and a form of multishow unlinkability. There are two such enhancements, suitable for two different use cases.

The first enhancement consists of omitting the user identifier that the identity provider usually conveys to the relying party. The credential authority is then an attribute provider rather than an identity provider: it conveys attributes that do not necessarily identify the user. This enhancement provides anonymity, and multishow unlinkability assuming no collusion between the attribute provider and the relying parties. It is useful when the purpose of authentication is to verify that the user is entitled to access a service without necessarily having an account with the service provider. This functionality is provided by Shibboleth, which can be used, e.g., to allow a student enrolled in one educational institution to access the library services of another institution without having an account at that other institution.

The core OpenID 2.0 specification specifies how an identity provider conveys an identifier to a relying party. Extensions of the protocol such as the Simple Registration Extension specify methods by which the identity provider can convey user attributes in addition to the user identifier; and the core specification hints that the identifier could be omitted when extensions are used. It would be interesting to know whether any OpenID server or client implementations allow the identifier to be omitted. Any comments?

The second enhancement consists of requiring the identity provider to convey different identifiers for the same user to different relying parties. The identity provider can meet the requirement without allocating large amounts of storage by computing a user identifier specific to a relying party as a cryptographic hash of a generic user identifier and an identifier of the relying party such as a URL. This privacy enhancement is required by the ICAM profile of OpenID. It achieves user anonymity and multishow unlinkability by different parties assuming no collusion between the identity provider and the relying parties; but not multishow unlinkability by the same party. It is useful for returning user authentication.
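A minimal sketch of the computation (the profile does not mandate a particular encoding, so the hash and separator here are my assumptions):

    import hashlib

    def rp_specific_identifier(generic_user_id: str, relying_party_url: str) -> str:
        # One identifier per (user, relying party) pair, computed on the fly:
        # no per-pair storage, and relying parties cannot link users to each other.
        data = f"{generic_user_id}|{relying_party_url}".encode()
        return hashlib.sha256(data).hexdigest()

    # The same user gets different, unlinkable identifiers at different relying parties:
    #   rp_specific_identifier("alice", "https://rp1.example.com")
    #   rp_specific_identifier("alice", "https://rp2.example.com")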

Two Methods of Cryptographic Single Sign-On on Mobile Devices

This is the sixth and last post of a series discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

To conclude this series I am going to discuss briefly two methods of single sign-on (SSO) described in the paper, one based on data protection, the other on shared login sessions.

SSO Based on Data Protection

Section 5 of the paper explains how the multifactor closed-loop authentication method described in the third and fourth posts of the series provides an effective mechanism for protecting data stored in a mobile device against an adversary who captures the device. The data is encrypted under a data encryption key that is entrusted to a key storage service. To retrieve the key, the user provides a PIN and/or a biometric that are used to regenerate an uncertified key pair, which is used to authenticate to the storage service.

An adversary who captures the device needs the PIN and/or the biometric sample to regenerate the key pair, and cannot mount an offline attack to guess the PIN or to guess a biometric key derived from the biometric sample; so the adversary cannot authenticate to the key storage service, and cannot retrieve the key. For additional security the data encryption key can be cryptographically split into several portions entrusted to different storage services. Furthermore a protokey can be entrusted to those services instead of the data encryption key, the key then being derived from the protokey and the same non-stored secrets that are used to regenerate the authentication key pair as described in Section 5.4.
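As an illustration of the splitting, here is a minimal two-way XOR split (my sketch, not the paper’s construction); each share, taken by itself, is a uniformly random string that reveals nothing about the key:

    import secrets

    def split_key(key: bytes):
        share1 = secrets.token_bytes(len(key))              # uniformly random share
        share2 = bytes(a ^ b for a, b in zip(key, share1))  # key XOR share1
        return share1, share2   # entrust each share to a different storage service

    def recombine(share1: bytes, share2: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(share1, share2))  # recovers the key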

This data protection mechanism can be used to protect any kind of data. In particular, it can be used to protect credentials used for open-loop authentication or one-factor closed-loop authentication to any number of mobile applications or, more precisely, to the back-ends of those applications, which may have browser-based or native front-ends. As discussed in Section 5.5, this amounts to single sign-on to those applications because, after the user enters a PIN and/or provides a biometric sample, the data encryption key retrieved from the storage service(s) can be kept in memory for a certain amount of time, making it possible to authenticate to the applications without further user intervention.
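A sketch of the in-memory caching (the lifetime and interface are my assumptions, not the paper’s):

    import time

    class KeyCache:
        # Keep the data encryption key in memory for a limited time after token
        # activation, so that authentications need no further user intervention.
        def __init__(self, key: bytes, lifetime: float = 600.0):  # assumption: 10 minutes
            self.key, self.expires = key, time.time() + lifetime

        def get(self):
            return self.key if time.time() < self.expires else None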

SSO Based on Shared Login Sessions

Whereas SSO based on data protection can be used for any collection of applications, SSO based on shared login sessions, described in Section 7.5, is best suited for authenticating to enterprise applications from a mobile device. A dedicated PBB (Prover Black Box) in the mobile device and a VBB (Verifier Black Box) in the enterprise cloud are used for that purpose. The PBB contains a single protocredential shared by all the enterprise applications, which is used to regenerate an uncertified key pair, in conjunction with a PIN and/or a biometric sample supplied by the user. The VBB has access to an enterprise database that contains device and user records and where the VBB stores shared session records, as illustrated in Figure 8.

It is not difficult to share login sessions among a group of web-based applications owned by an enterprise, using a mechanism readily available on the web. Once the user has logged in to one of the web-based applications in the group, that application can set in the browser a session cookie whose scope (defined explicitly or implicitly by the domain and path attributes of the cookie) comprises the applications in the group and no others. The browser will send the cookie along with every HTTP request targeting an application in the scope of the cookie, thus authenticating the request without user intervention.
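For instance, the Set-Cookie header that such an application might emit at login could look like this sketch (cookie name and domain are hypothetical):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "token-issued-at-login"
    cookie["session"]["domain"] = ".apps.example.com"  # scope: the applications in the group
    cookie["session"]["path"] = "/"
    cookie["session"]["secure"] = True                 # sent over HTTPS only
    cookie["session"]["httponly"] = True               # hidden from client-side scripts
    print(cookie.output())   # the Set-Cookie header to include in the HTTP response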

But we want to share login sessions among a group of enterprise applications comprising applications with native front-ends in addition to web-based applications. For that purpose we use the mobile authentication architecture that I discussed in the previous post, modifying it as follows.

Recall that an authentication event in the architecture consists of a cryptographic authentication of the PBB to the VBB, followed by a secondary non-cryptographic authentication using a one-time authentication token, which plays the role of a bearer token, as illustrated in Figure 6 for the case of an application with a native front-end, and in Figure 7 for the case of a web-based application. The authentication token is only used once because of the risk of a Referer leak in the case of a web-based application. However there is no such risk in the case of an application with a native front-end.

To implement shared login sessions we replace the one-time authentication token with a pair of session tokens, a one-time session token and a reusable session token. After successful cryptographic authentication of the PBB to the VBB, the VBB creates a pair of session tokens and a shared session record containing the two tokens, and sends the two tokens to the PBB, which stores them.
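A minimal sketch of this step on the VBB side (the storage layer and session lifetime are my assumptions):

    import secrets, time

    SESSION_LIFETIME = 8 * 3600   # assumption: eight-hour shared sessions

    def create_shared_session(records: dict):
        # Mint the token pair and the shared session record that contains it.
        one_time = secrets.token_urlsafe(32)
        reusable = secrets.token_urlsafe(32)
        records[reusable] = {"one_time": one_time,
                             "expires": time.time() + SESSION_LIFETIME}
        return one_time, reusable   # both tokens are sent to the PBB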

A native front-end obtains a reusable session token from the PBB and uses it repeatedly to authenticate to its back-end until the back-end rejects it because the session referenced by the token has expired, whether because an expiration time in the shared session record has been reached or for some other reason. Then the native front-end sends the reusable token to the PBB asking for a replacement. If the PBB has a different reusable token, it sends it to the native front-end. If not, it prompts the user for a PIN and/or a biometric sample, regenerates the uncertified key pair, authenticates cryptographically to the VBB, obtains from the VBB a pair of session tokens pertaining to a new session, and sends the new reusable token to the native front-end.

A web-based application obtains a one-time session token from the PBB and uses it to locate a shared session record and retrieve a reusable session token, which it sets in the browser as the value of a session cookie. After the PBB sends the one-time token to the application, it erases the one-time token from its storage; and after the application uses the one-time token to retrieve the reusable token, it erases the one-time token from the shared session record. The session cookie is used to authenticate HTTP requests sent by the browser to web-based applications in the group, until one of the applications finds that the session referenced by the reusable token contained in the cookie has expired. Then that application sends the reusable token to the PBB and asks for a one-time token. If the PBB has a one-time token paired with a reusable token different from the one sent by the application, it sends the one-time token to the application. Otherwise it authenticates cryptographically to the VBB as in the case of a native front-end, obtaining a pair of fresh tokens and sending the new one-time token to the application.

Pros and Cons of the Two Methods

The method based on data protection is more flexible than the method based on shared sessions. It can be used to implement SSO for any set of applications, whether or not those applications are related to each other. By contrast, the method based on shared sessions can only be used to implement SSO for a group of related applications: the set of web-based applications in the group must be circumscribable by the scope of a cookie; and, as explained in Section 8.2.2, native front-ends of applications in the group must be signed with the same code-signing key pair in Android, or must have the same Team ID in iOS, so that the PBB can refuse requests from applications not in the group.

On the other hand, the method based on shared login sessions has performance and security advantages, as explained in Section 7.5.3. In the method based on data protection, SSO is accomplished by making cryptographic authentication transparent to the user, whereas in the method based on shared login sessions cryptographic authentication is avoided altogether; hence the performance advantage. In the method based on data protection, the data encryption key must be present in the device while the user interacts with the applications, whereas in the method based on shared login sessions the uncertified key pair is only needed when a new session is created, and can be erased after it is used; hence the security advantage.

Using Cryptographic Authentication without a Cryptographic API on iOS and Android Devices

This is the fifth of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

Everybody agrees that passwords provide very poor security for user authentication, being vulnerable to capture by phishing attacks and database breaches, and to reuse at malicious sites. Authentication using public key cryptography has none of these vulnerabilities, and yet, after being available for several decades, it is only used in limited contexts. As computing shifts from traditional PCs to mobile devices, everybody agrees that passwords are terribly inconvenient on touchscreen keyboards, in addition to being insecure; and yet I don’t see a rush to adopt cryptographic authentication methods on mobile devices.

What obstacles stand in the way of widespread adoption of cryptographic authentication?

One obstacle is no doubt the complexity of cryptography. Implementing cryptographic functionality is difficult even when cryptographic libraries are available. Using a cryptographic API is no trivial matter, as documented by Martin Georgiev et al. in a recent paper (reference [39] in the paper).

Another obstacle is poor support by web browsers for the deployment and use of cryptographic credentials. In particular, there are no easy-to-use standards generally supported by browser vendors for issuing cryptographic credentials to a browser and requesting the presentation by the browser of particular credentials or credentials asserting particular attributes.

In Section 7 the paper proposes an architecture for cryptographic authentication on mobile devices that addresses these two obstacles. It does that by encapsulating cryptographic authentication of a mobile device to an application back-end inside a Prover Black Box (PBB) located in the device and a Verifier Black Box (VBB) located in the cloud, as shown in figures 6 (page 48) and 7 (page 54).

The PBB may contain one or more protocredentials for multifactor closed-loop authentication, or credentials for single factor closed-loop or open-loop authentication; and it takes care of proving possession of credentials to the VBB. After a cryptographic authentication event in which the PBB proves possession of one or more credentials, the VBB creates an authentication object that records the event and contains authentication data such as the hash of a public key or attributes asserted by a public key certificate, a U-Prove token, or an Idemix anonymous credential. The authentication object is retrievable by a one-time authentication token, which the VBB passes to the PBB and the PBB passes to the application back-end via a native front-end or via the web browser. The authentication token plays the role of a bearer token in a secondary non-cryptographic authentication of the native front-end or web browser to the back-end, and allows the application back-end to retrieve the authentication data.

In Figure 6 the native front-end of a mobile application receives the authentication token from the PBB and uses it to authenticate to the back-end of the same application, which presents it to the VBB to retrieve the authentication data.

In Figure 7, the PBB sends the token via the browser to the back-end of a web-based application, thus authenticating the browser to the back-end, which again uses the token to retrieve the authentication data from the VBB. (As a matter of terminology, we view a web-based application as having a back-end and a front-end, the back-end being its cloud portion, while the front-end consists of web pages and client-side code running in the browser.)
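In both flows, the application back-end redeems the token with the VBB through an ordinary web API, with no cryptography in sight. A sketch of what that call might look like (the endpoint URL and JSON layout are hypothetical):

    import requests   # pip install requests

    def retrieve_authentication_data(vbb_base_url: str, authentication_token: str):
        # The one-time token acts as a bearer token: the application back-end
        # redeems it once with the VBB and receives the authentication data.
        response = requests.post(f"{vbb_base_url}/retrieve",
                                 json={"token": authentication_token},
                                 timeout=10)
        response.raise_for_status()
        return response.json()   # e.g. the hash of a public key, or certified attributes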

This architecture circumvents the two obstacles identified above to the adoption of cryptographic authentication.

The browser obstacle is avoided in Figure 6 because no browser is involved, and in Figure 7 because the browser is not involved in storing or presenting credentials, and no modification of standard browser functionality is required.

The obstacle presented by the complexity of cryptography is avoided by the encapsulation of cryptographic functionality in the PBB and the VBB and by making the PBB and the VBB accessible through non-cryptographic APIs in a manner familiar to native and web-based application developers.

In Figure 6, arrows (1) and (4) represent messages sent via the operating system of the mobile device using inter-application communication mechanisms available in iOS and Android; each message is a URL having a custom scheme, with message parameters embedded as usual in the query portion of the URL. Arrow (6) represents an HTTP POST request, and arrow (7) the corresponding response. Arrow (5) is internal to the application and can be implemented as part of a standard web API through which the native front-end accesses its back-end.
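As an illustration, the message represented by arrow (1) might be encoded as follows (scheme and parameter names are hypothetical):

    from urllib.parse import urlencode

    # A native front-end hands this URL to the operating system, which routes it
    # to the application registered for the custom scheme (here, the PBB).
    params = {"app_id": "com.example.app"}
    message_url = "com.example.pbb://authenticate?" + urlencode(params)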

In Figure 7, arrow (1) represents an HTTP response that redirects the browser to a custom scheme that targets the PBB, with parameters included in the query portion of the URL; when the browser receives the response, it forwards it to the PBB as a message, using the inter-application communication mechanism provided by the operating system. Arrow (4) represents a message sent by the PBB using the same mechanism, with scheme https; the operating system delivers it to the browser, which forwards it as an HTTP GET request to the application back-end. Arrow (5) represents an HTTP POST request, and arrow (6) the corresponding response.

The architecture is very flexible. It covers a wide variety of use cases, some of which are sketched out in Section 7.1.

A PBB-VBB pair may be used for returning-user authentication to one particular application. In that case the PBB contains a single credential (for one-factor authentication) or protocredential (for multifactor authentication).

Alternatively, a general purpose PBB may be made available to any mobile application that has a native front-end on the device or is accessed from the device through a browser, each application having its own VBB. In that case the PBB may contain any number of credentials or protocredentials used for closed-loop authentication, as well as credentials used for open-loop authentication.

An application may ask a general purpose PBB to prove possession of an uncertified key pair to the application’s VBB for returning-user authentication, or to the VBB of an identity/attribute provider or a social network for third-party closed-loop authentication or social login. The VBB of an identity/attribute provider delivers the user’s identity or attributes to the application back-end as authentication data upon presentation of the authentication token. The VBB of a social network may instead deliver an access token that provides limited access to the user’s account, thus allowing the application to obtain the user’s identity and attributes from the user’s profile, to issue social updates on behalf of the user, and more generally to provide an alternative user interface to the social network.

An application may also ask a general purpose PBB to demonstrate that the user has certain attributes by presenting public key certificates, U-Prove tokens or Idemix anonymous credentials to the application’s VBB in open-loop authentication.

For enterprise use, a PBB-VBB pair may be shared by a group of enterprise applications, including web-based applications and applications with native front-ends, with single sign-on based on shared login sessions. I will discuss this functionality in the next post.

A security analysis of the architecture is provided in Section 8. Among other security considerations, it discusses protection against leaks through so-called Referer headers, protection against misuse of an authentication token by its recipient to impersonate the user, a countermeasure against a form of Login CSRF, identification of the application that requests presentation of one or more credentials kept by a general purpose PBB, and countermeasures against a malicious application masquerading as a different application or as the system browser.

Strong Authentication with a Low-Entropy Biometric Key

This is the fourth of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

Biometrics are a strong form of authentication when there is assurance of liveness, i.e. assurance that the biometric sample submitted for authentication belongs to the individual seeking authentication. Assurance of liveness may be relatively easy to achieve when a biometric sample is submitted to a reader in the presence of a human operator, if the reader and the operator are trusted by the party to which the user is authenticating; but it is practically impossible to achieve for remote authentication with a reader controlled by the authenticating user. When there is no assurance of liveness, security must rely on the relative secrecy of biometric features, which is never absolute, and may be non-existent. Fingerprints, in particular, cannot be considered a secret, since you leave fingerprints on most surfaces you touch. Using a fingerprint as a login password would be like leaving sticky notes with your password everywhere you go.

In addition to these security caveats, biometric authentication raises acute privacy concerns. Online transactions authenticated with biometric features would be linkable not only to other online transactions, but also to offline activities of the user. And both online and offline transactions would become linked to the user’s identity if a biometric sample or template pertaining to the user became public knowledge or were acquired by an adversary.

Yet, in Section 3, the paper proposes a method of using biometrics for user authentication on a mobile device to an application back-end. The method addresses the above security and privacy concerns as follows:

  1. First, biometrics is not used by itself, but rather as one factor in multifactor authentication, another required factor being possession of a protocredential stored in the user’s device, and another optional factor being knowledge of a passcode such as a PIN.
  2. Second, the paper suggests using an iris scan, which provides more secrecy than fingerprints. (The scan could be taken by a camera on the user’s mobile device. The paper cites the work of Hao, Anderson, and Daugman at the University of Cambridge, which achieved good results with iris scans using a near-infrared camera. I have just been told that phone cameras filter out near-infrared light, so a special camera may be needed. The Wikipedia article on iris recognition discusses the use of near-infrared vs. visible light for iris scanning.)
  3. Third, no biometric-related data is sent by the user’s device to the application back-end, neither at authentication time nor at enrollment time. The biometric sample is used to regenerate a key pair on the device, and the key pair is used to authenticate the device to the back-end.
  4. Fourth, neither a biometric sample nor a biometric template is stored in the user’s device. Instead, the paper proposes to use one of several methods described in the literature, cited in Section 3.2, for consistently producing a biometric key from auxiliary data and genuine but varying biometric samples. Only the auxiliary data is stored in the device, and it is deemed infeasible to recover any biometric information from the auxiliary data.

The resulting security and privacy posture is discussed in Section 4.4 of the paper.

As shown in Figure 3 (on page 22 of the paper), we combine the biometric key generation process with the key pair regeneration process of our protocredential-based authentication method. The biometric sample (the iris image in the figure) is a non-stored secret (the only one in this case), and the auxiliary data is kept in the protocredential as a non-stored-secret related parameter. The auxiliary data and the biometric sample are combined to produce the biometric key. A randomized hash of the biometric key is computed using a salt which is also kept in the protocredential, as a second non-stored-secret related parameter. The randomized hash of the biometric key is used to regenerate the key pair, in conjunction with the key-pair related parameters. The key pair regeneration process produces a DSA, ECDSA or RSA key pair as described in sections 2.6.2, 2.6.3 and 2.6.4 respectively. The public key is sent to the application back-end, and the private key is used to demonstrate possession of the credential by signing a challenge. Figure 4 (on page 23 of the paper) adds a PIN as a second non-stored secret for three-factor authentication; in that case the auxiliary data is kept encrypted in the protocredential, and decrypted by XORing the ciphertext with a randomized hash of the PIN.
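Here is a minimal sketch of that data flow in Python. The biometric key generation step is represented by a toy XOR placeholder: real schemes from the literature are error-correcting, so that slightly varying genuine samples yield the same key, while the toy stand-in is not. The use of SHA-256 as the randomized hash is also an assumption.

```python
import hashlib

def biometric_key(aux_data: bytes, sample_features: bytes) -> bytes:
    """Toy stand-in for a biometric key generation scheme from the
    literature. Real constructions error-correct the varying sample
    against the auxiliary data; this simple XOR does not."""
    return bytes(a ^ b for a, b in zip(aux_data, sample_features))

def key_pair_seed(protocredential: dict, sample_features: bytes) -> bytes:
    """Combine the stored auxiliary data with a fresh sample to produce
    the biometric key, then compute its randomized hash with the salt
    kept in the protocredential; the result feeds the DSA/ECDSA/RSA
    key pair regeneration procedure of sections 2.6.2-2.6.4."""
    bio_key = biometric_key(protocredential["aux_data"], sample_features)
    return hashlib.sha256(protocredential["salt"] + bio_key).digest()
```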

The combination of biometric key generation with our protocredential-based authentication method represents a significant improvement on biometric authentication methodology. There is an intrinsic trade-off between the consistency of a biometric key across genuine biometric samples and the entropy of the key, because the need to accommodate large enough variations among genuine biometric samples reduces the entropy of the key. In the above-mentioned paper by Hao et al., the authors are apologetic about the fact that their biometric key has only 44 bits of entropy when the auxiliary data is known. But this is not a problem in our authentication framework, for two reasons:

  1. The auxiliary data is not public. An adversary must capture the user’s device to obtain it.
  2. An adversary who captures the user’s device and obtains the auxiliary data cannot mount an offline guessing attack against the biometric key. All biometric keys produce well-formed DSA or ECDSA key pairs, and most biometric keys produce well-formed RSA key pairs. To determine if a guessed biometric key is valid, the adversary must therefore use it to generate a key pair, and use the key pair to authenticate online against the application back-end, which will limit the number of guesses to a small number. Forty-four bits of entropy is plenty if the adversary can only make, say, 10 guesses.
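A quick back-of-the-envelope check of that claim:

```python
# Success probability of an online guessing attack against a 44-bit
# biometric key when the back-end allows only 10 authentication attempts.
entropy_bits = 44
allowed_guesses = 10
p_success = allowed_guesses / 2**entropy_bits
print(f"{p_success:.2e}")  # about 5.7e-13, i.e. negligible
```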

Therefore our authentication method makes it possible to use low-entropy biometric keys without compromising security. This may enable the use of biometric modalities or techniques that otherwise would not provide sufficient security.

Nevertheless we do not advocate the routine use of biometrics for authentication. As pointed out in Section 10, while malware running on the user’s device after an adversary has captured it cannot obtain biometric data, malware running on the device while the user is using it could obtain a biometric sample by prompting the user for the sample. A biometric authentication factor should only be used when exceptional security requirements demand it and exceptional security precautions are in place to protect the confidentiality of the user’s biometric features.

Defense in Depth of Cryptographic Credentials on a Mobile Device

This is the third of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

Credentials based on public key cryptography provide much stronger security than ordinary passwords or one-time passwords. But a mobile device can be lost or stolen. How can a credential kept in a mobile device be protected if the user’s device is captured by an adversary? Two methods are traditionally used:

  1. Private key encryption. The private key is encoded as specified by PKCS #8, together with cryptographic parameters that typically include the public key or a public key certificate, and the resulting encoded string is encrypted under a symmetric data-encryption key derived from a passcode. This method is used, for example, to protect SSH credentials used to manage cloud-hosted virtual servers. But as explained in Section 4.3.1 of the paper, this method requires a high-entropy password, which is exceedingly difficult to type on the touchscreen keyboard of a smart phone.
  2. Tamper resistance. This is relied upon, for example, to protect credentials in smart cards such as PIV or CAC cards. But few mobile devices have tamper-resistant modules.
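For reference, here is what method 1 looks like in Python with the pyca/cryptography package; the protection is only as strong as the passphrase, which is the weakness discussed in Section 4.3.1.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Encrypt a private key under a passphrase-derived key, PKCS #8 style.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
encrypted_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(
        b"a high-entropy passphrase"),
)
```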

On an iPhone or an iPad one could think of relying on the data protection method introduced in iOS 4, which encrypts data in a locked device under a key derived from the passcode that the user enters to unlock the device and a key embedded in a hardware encryption chip. But, as explained in section 5.1 of the paper, that method has not proved to be effective.

Instead, Section 2 of the paper proposes a method for using an uncertified key pair for multifactor closed-loop authentication that makes it possible to protect the key pair without relying on any special hardware. The method is generally applicable, but is particularly useful for authentication on a mobile device. The idea is to store in the device cryptographic parameters obtained during initial credential generation, at least one of them being a secret, and later, at authentication time, to regenerate the credential from the stored cryptographic parameters and non-stored secrets supplied by the user, such as a PIN and/or a biometric sample. (The non-stored secrets could be supplied by a physical unclonable function, a PUF, in the case of an autonomous device; but the paper is not concerned with autonomous devices.) We refer to the stored parameters as a protocredential. Possession of the protocredential counts as one authentication factor, while the non-stored secrets play the role of additional authentication factors.

The paper distinguishes between parameters related to the key pair and parameters related to the non-stored secrets. In the case where a PIN is the only non-stored secret, illustrated in Figure 2, there is one non-stored-secret related parameter, a salt used to compute a randomized hash of the PIN. (Two-factor authentication with a biometric sample and three-factor authentication with a PIN and a biometric sample are discussed in Figures 3 and 4. I will discuss biometric authentication in the next blog post.) The key-pair related parameters depend on the public key cryptosystem being used. In the case of DSA and ECDSA, the key-pair related parameters are the domain parameters specified in the Digital Signature Standard. In the case of RSA, there is one key-pair related parameter, the least common multiple λ of p-1 and q-1, where p and q are the prime factors of the modulus. The key pair regeneration procedures for DSA, ECDSA and RSA are described in sections 2.6.2, 2.6.3 and 2.6.4 respectively.
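As an illustration of the DSA case, here is a minimal sketch in Python, with SHA-256 standing in for the randomized hash; the paper's actual procedure in section 2.6.2 may differ in detail.

```python
import hashlib

def regenerate_dsa_key_pair(pin: bytes, salt: bytes, p: int, q: int, g: int):
    """Derive the private key from the randomized hash of the PIN,
    reduced modulo q, then recompute the public key from the DSA
    domain parameters (p, q, g) kept in the protocredential."""
    h = hashlib.sha256(salt + pin).digest()  # randomized hash of the PIN
    x = int.from_bytes(h, "big") % q         # private key
    y = pow(g, x, p)                         # public key
    return x, y
```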

In a mobile device, once the key pair has been regenerated, it is used by the device to authenticate to a mobile application with which the device has been previously registered. The application may have a native front-end or use a web browser as its front-end. The application back-end has a database that contains a record for the device, identified by a device record handle (a database primary key). To authenticate, the device sends the device record handle and the public key to the application back-end and demonstrates knowledge of the private key by signing a challenge. The back-end verifies the signature, uses the device record handle to locate the device record, computes a cryptographic hash of the public key, and verifies that the hash coincides with a hash stored in the device record. (A mobile authentication architecture that allows application developers to implement the authentication process without using a cryptographic API is described in Section 7; I will discuss it in another post later in this series.)
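Here is a sketch of the back-end's side of that verification, using the pyca/cryptography package and assuming an ECDSA key pair; the database record layout is hypothetical.

```python
import hashlib
import hmac
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def authenticate_device(handle: str, public_key_pem: bytes,
                        challenge: bytes, signature: bytes, db: dict) -> bool:
    """Verify the signature on the challenge with the presented public
    key, locate the device record via the handle, and check that the
    hash of the presented key matches the hash stored in the record."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    record = db.get(handle)
    if record is None:
        return False
    presented_hash = hashlib.sha256(public_key_pem).digest()
    return hmac.compare_digest(presented_hash, record["public_key_hash"])
```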

An adversary who captures the device and is able to read the protocredential needs the non-stored secrets to be able to regenerate the credential and authenticate. The adversary can try to guess the non-stored secrets. If a PIN is the only non-stored secret and the user chooses a 4-digit PIN, the adversary only has to try 10,000 PINs. If the adversary can test each PIN offline, it is trivial to go through all 10,000 PINs. But all PINs (in the case of DSA and ECDSA) or most PINs (in the case of RSA) produce well-formed key pairs. If the adversary does not know the public key (nor a hash of the public key), the only way to test a PIN is to try to use the key pair that it produces to authenticate online against the application back-end; and the back-end can limit the number of guesses to a very small number, such as 3 or 5 or 10. A 4-digit PIN can then be deemed to provide sufficient security, just as a 4-digit PIN is usually considered secure enough for withdrawing cash from an ATM, which also limits the number of PIN guesses.

To ensure that the adversary does not know the public key, the public key should be treated as a shared secret between the device and the application back-end. Treating a public key as a secret is an unconventional and paradoxical use of public key cryptography. Section 4.1 explains that a shared symmetric secret could be used instead of a key pair but would result in a weaker security posture.

To prevent a man-in-the-middle attack, the device connects to the back-end using TLS (or some other kind of secure connection). Furthermore, the challenge signed by the device to demonstrate knowledge of the private key includes the TLS server certificate of the application back-end. Section 2.1 explains how this prevents a man-in-the-middle attack even if the adversary is able to spoof the TLS server certificate of the back-end.
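On the device side, the signed data might be assembled as follows; the exact encoding of the challenge and certificate is not specified here, so the concatenation below is an assumption made for illustration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_challenge(private_key, challenge: bytes, server_cert_der: bytes) -> bytes:
    """Include the back-end's TLS server certificate in the signed data,
    so that a signature obtained by a man in the middle presenting a
    different certificate cannot be replayed to the real back-end.
    Concatenation is an illustrative encoding, not the paper's."""
    return private_key.sign(challenge + server_cert_der,
                            ec.ECDSA(hashes.SHA256()))
```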

All this results in a very strong defense-in-depth security posture. As discussed in Section 4.2 and summarized in Table 1, authentication is secure even against an adversary who is able to:

  1. Capture the user’s device and read the protocredential from the device storage; or
  2. Breach the security of the database back-end and obtain the hash of the public key. The adversary cannot mount an offline attack against a PIN used as single non-stored secret because the adversary does not have the protocredential, which contains at least one secret parameter. Compare this to the effect of a breach of database security when the database contains hashes of passwords, all of which become vulnerable to offline dictionary attacks; or
  3. Breach network security and read the traffic from the device to the back-end (e.g. after the TLS connection has been terminated at a reverse proxy, in a misconfigured infrastructure-as-a-service cloud). Again, the adversary cannot mount an offline attack against the PIN; or
  4. Spoof the TLS server certificate of the application back-end, as discussed above.

Also, in use cases demanding exceptionally high security, a high-entropy set of non-stored secrets makes it possible to achieve security even against an adversary who breaches database or communication security and then captures the device and obtains the protocredential.

We have seen how to protect an uncertified key pair used for closed-loop authentication. How about other types of credentials? Section 5 shows how the multifactor closed-loop authentication method discussed above can be used to provide effective protection for data stored in a mobile device, and in particular to provide protection for any kind of credentials, including credentials used for open-loop authentication, such as public key certificates, U-Prove tokens or Idemix anonymous credentials.

In the next post I will discuss the use of a biometric sample as a non-stored secret, and explain how it can achieve strong security without putting at risk the confidentiality of the user’s biometric features.

Closed-Loop vs. Open-Loop Authentication

This is the second of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

In this post I want to take the time to explain and emphasize the distinction made in the paper between closed-loop authentication and open-loop authentication. This may seem an unimportant matter of vocabulary, but the distinction is essential for two reasons: first, because it helps understand the privacy posture of authentication technologies; second, because it leads to what we think is the best choice of cryptographic authentication technologies for mobile devices.

The concepts of closed-loop and open-loop authentication are defined in the introduction, and examples are given. In open-loop authentication, a party such as a certificate authority or, more generally, a credential authority, issues a cryptographic credential to the user’s device, and then is “out of the loop” when the device presents the credential to a relying party. Credentials used in open-loop authentication are typically public key certificates, but could also be U-Prove tokens or Idemix anonymous credentials. In closed-loop authentication, on the other hand, the credential authority is involved in the authentication process, taking care of verifying possession of the credential by the device. In third-party closed-loop authentication, the credential authority is an identity or attribute provider, which communicates user attributes to a relying party after verifying that the device possesses the credential. In two-party authentication, there is only one party besides the user’s device, so two-party authentication can only be closed-loop authentication.

The distinction between closed-loop and open-loop authentication makes it possible to make two observations.

The first observation is that closed-loop authentication can rely on an uncertified key pair, i.e. a key pair that is not bound to any attributes by a certificate. (As a matter of vocabulary, we say that an uncertified key pair is registered by the device with the credential authority, rather than issued by the credential authority to the device, because the credential authority plays no role in generating the key pair; the paper refers to the credential authority as “the party that issues or registers the credential”.) An uncertified key pair can be used because the credential authority can store user attributes in its internal storage and retrieve them at authentication time. Therefore the attributes need not be included in the credential.

The second observation is that, in third-party closed-loop authentication, the credential authority, i.e. the identity or attribute provider, is informed of the authentication transaction and, typically, is told what relying party the user is authenticating to. This impinges on the user’s privacy, especially if the user has no choice of identity or attribute provider and does not trust the provider. This is not just a theoretical consideration. The identity providers most commonly used today have track records of privacy violations, and users are wary of being spied upon.

Some time ago, before being concerned with mobile authentication, we wrote a white paper proposing to eliminate this privacy drawback by using the browser to hide the identity of the relying party. However, this would require substantial modifications of core browser functionality. More recently, in an ICAM blog post, Anil John has proposed hiding the identity of the relying party behind a proxy. But that complicates authentication and serves only to shift the trust issue from the identity provider to the proxy.

Open-loop authentication, on the other hand, does not suffer from this privacy drawback.

These observations led us to the following choice of technologies for cryptographic authentication on mobile devices:

  • For the sake of simplicity, an uncertified key pair should be used for two-party authentication.
  • For the sake of privacy, open-loop authentication should be used when attributes are asserted by a third party, except in special cases. Credentials used in open-loop authentication could be public key certificates, U-Prove tokens, or Idemix anonymous credentials, depending on the privacy requirements, as explained in section 6.1.

There are two special cases where it makes sense to use third-party closed-loop authentication. One is social login, where an application is granted limited access to the user’s account at a social network such as Facebook or Twitter and authenticates the user as a side effect, by obtaining user attributes from the user’s profile. In social login, the social network is necessarily involved in the authentication transaction. The other is third-party login using as identity provider a personal data repository service that emphasizes privacy and is freely chosen and trusted by the user. A company participating in the Personal Data Ecosystem Consortium (PDEC), for example, could play the role of identity provider.

However, this choice of technologies posed the problem of how to protect the credentials used in open-loop authentication against an adversary who captures the user’s mobile device, because the key pair regeneration method, which I mentioned in the previous post and will discuss in more detail in the next post, does not work for open-loop authentication.

We were happy to find a simple solution to that problem. As described in Section 5, key pair regeneration can be used to implement effective data protection against an adversary who captures the device, by encrypting the data under a data-encryption key, entrusting the key to a key storage service (or splitting it cryptographically across multiple services), and authenticating to the service(s) with a regenerated key pair to retrieve the key. A credential used in open-loop authentication can be protected as data in this way, thus benefiting indirectly from the security provided by the key pair regeneration technique.
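One way to realize the cryptographic splitting is an n-of-n XOR scheme, in which every share is needed to reconstruct the key and any proper subset reveals nothing about it. Here is a sketch of that standard construction in Python; the paper may use a different construction.

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a data-encryption key into n shares; all n are required to
    reconstruct it, and any n-1 shares are statistically independent of
    the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def join_key(shares: list[bytes]) -> bytes:
    """Reconstruct the key by XORing all shares together."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```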

In the next post I will finally get into the technical details of the paper.