Blog

Forthcoming Presentation at the GlobalPlatform TEE Conference

I’m happy to announce that I’ll be making a presentation at the forthcoming GlobalPlatform 2014 TEE Conference (September 29-30, Santa Clara, CA). Here are the title and abstract:

Virtual Tamper Resistance for a TEE

Derived credentials are cryptographic credentials carried in a mobile device that are derived from credentials carried in a smartcard. The term was coined by the US National Institute of Standards and Technology (NIST) in connection with US Federal employee credentials, but the concept is generally applicable to use cases encompassing high-security enterprise IDs, payment cards, national identity cards, driver licenses, etc.

The Trusted User Interface feature of a TEE can protect the passcode that activates derived credentials from being phished or intercepted by malware: the user is instructed to enter the passcode only when a Security Indicator shows that the touchscreen is controlled by the Secure OS of the TEE. Besides protecting the passcode, it is also necessary to protect the derived credentials themselves from an adversary who physically captures the device, which requires resistance against tampering. Physical tamper resistance can be provided by a Secure Element accessed from the TEE through the TEE Secure Element API, thus combining protection of the passcode against malware with protection of the credentials against physical capture.

Derived credentials can also be protected against physical capture using cloud-based virtual tamper resistance, which is achieved by encrypting them with a key stored in a secure back-end. The device uses a separate credential derived in part from the activation passcode to authenticate to the back-end and retrieve the encryption key. A novel technique makes it possible to do so without exposing the passcode to an offline guessing attack, so that a short numeric passcode is sufficient to provide strong security.

Physical tamper resistance and virtual tamper resistance have overlapping but distinct security postures, and can be combined, if desired, to maximize security.

Identity-Based Protocol Design Patterns for Machine-to-Machine Secure Channels

Cryptography is an essential tool for addressing the privacy and security issues faced by the Web and the Internet of Things. Sadly, however, there is a chronic technology transfer failure that causes important cryptographic techniques to be underutilized.

An example of an underutilized technique is Identity-Based Cryptography. It is used for secure email, although not broadly. But, to my knowledge, it has never been used to implement secure channel protocols, even though it has the potential to provide great practical advantages over traditional public key infrastructure if put to such use. We pointed this out in our white paper on TLS. Now we have also shown the benefits of identity-based cryptography for machine-to-machine communications, in a new paper that we will present at the Workshop on Security and Privacy in Machine-to-Machine Communications (M2MSec, San Francisco, October 29, 2014). Machine-to-machine communications fall into many different use cases with very different requirements. So, instead of proposing one particular technique, we propose in the paper four different protocol design patterns that could be used to specify a variety of protocols.

Update (August 4). I should point out that there is a proposal to use Identity-Based authenticated key exchange in conjunction with MIKEY (Multimedia Internet KEYing), a key management scheme for SRTP (Secure Real-Time Transport Protocol), which itself is used to provide security for audio and video conferencing on the Internet. The proposed authenticated key exchange protocol is called MIKEY-IBAKE and is described in RFC 6267. This is an informational RFC rather than a standards-track RFC, so it’s not clear if the proposed authenticated key exchange method will eventually be deployed. Interestingly, MIKEY-IBAKE uses identity-based encryption rather than identity-based key agreement. This is also what we do in the M2MSec paper, but with a difference. MIKEY-IBAKE uses identity-based encryption to carry ephemeral Elliptic Curve Diffie-Hellman parameters, and thus does not reduce the number of roundtrips. We use identity-based encryption to send a secret from the initiator to the responder, and we eliminate roundtrips by simultaneously sending application data protected with encryption and authentication keys derived from the secret. This gives up replay protection and forward secrecy for the first message; but replay protection, as well as forward secrecy in two of the four patterns, are provided from the second message onward.
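
To make the idea concrete, here is a minimal Python sketch of the initiator's first message in this kind of pattern. The `ibe_encrypt` function is a placeholder for an identity-based encryption primitive (no particular IBE implementation is assumed), and the key sizes, HKDF info string, and use of AES-GCM are illustrative choices, not details from the paper:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def ibe_encrypt(recipient_identity: bytes, plaintext: bytes) -> bytes:
    # Placeholder: identity-based encryption under the responder's
    # identity, which serves as its public key; no certificate or prior
    # roundtrip is needed to obtain it.
    raise NotImplementedError

def first_message(responder_id: bytes, app_data: bytes):
    secret = os.urandom(32)                       # secret chosen by the initiator
    wrapped_secret = ibe_encrypt(responder_id, secret)
    # Derive encryption and authentication keying material from the secret.
    okm = HKDF(algorithm=hashes.SHA256(), length=44, salt=None,
               info=b"m2m-first-message").derive(secret)
    key, nonce = okm[:32], okm[32:]
    # Application data travels in the very first message, eliminating
    # roundtrips at the cost of replay protection and forward secrecy
    # for this message only.
    protected_data = AESGCM(key).encrypt(nonce, app_data, responder_id)
    return wrapped_secret, protected_data
```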

Invited Talk at the University of Utah

I’ve been so busy that I haven’t had time to write for more than three months, which is a pity because things have been happening and there is much to report. I’m trying to catch up today.

The first thing to report is that Prof. Gopalakrishnan of the University of Utah invited Karen Lewison and me to give a joint talk at the University on May 29. We talked about the need to replace TLS, which I’ve discussed earlier on this blog. The slides can be found at the usual location for papers and presentations at the bottom of each page of this web site.

The University of Utah has a renowned School of Computing and it was quite stimulating to meet with faculty and discuss research after the talk. We were happy to discover common research interests, and we have been exploring the possibility of doing joint research work with Profs. Ganesh Gopalakrishnan, Sneha Kasera, and Tammy Denning; we are thrilled that the prospects look promising.

Other things to report include that we had papers accepted at the forthcoming M2MSec workshop and the forthcoming GlobalPlatform TEE conference. I will report on that in the next two posts.

Patent Illustrates Five Different Problems with Software Patents

I recently looked at a patent issued last January in the area of secure messaging, US 8,625,805. It uses the term “Digital Security Bubble” (the title of the patent) to refer to a concept which, in my opinion, is no different from the concept of digital envelope or enveloped data found in RFC 5652 (Cryptographic Message Syntax) or the earlier RFC 2315 (PKCS #7). I posed a question on Ask Patents, asking what could be done to challenge the patent short of obtaining a Post Grant Review, which would cost $30,000 or more just in USPTO fees (including a $12,000 request fee and an $18,000 post-institution fee for challenging up to 15 claims, plus $250 for each additional claim being challenged beyond 15). Phoenix88 suggested submitting prior art under 37 CFR 1.501 and 35 USC 301, which requires no fee. Following his advice, I studied the patent in detail and I have submitted as prior art PKCS #7 as well as RFC 1422, an RFC related to Privacy Enhanced Mail (PEM) that PKCS #7 relies upon. If the USPTO accepts the submission, it will be entered into the patent file. In the meantime, it can be found online on the Pomcor site.

But the reason I’m writing this post is that US patent 8,625,805 can serve to identify and illustrate five different problems with software patents, some well known, some that may not have been identified before. Here are those five problems.

1. The Vocabulary Problem

Whereas disciplines such as medicine, biology, chemistry and the various branches of engineering have developed mature and well-established vocabularies for their subject matters, software engineers like to invent their own fanciful vocabulary as they go. Think for example of the invention of the term cookie by Netscape to refer to a data item that stores server state in an HTTP client.

Inventing a new term is justified when the term is applied to a new concept, or when existing terminology is inadequate. But it is deplorable when there exists adequate terminology for the same concept.

Creation of unnecessary terminology may be due to ignorance of existing terminology. But in their comments on prior art resources, which can be found on a USPTO web page, Public Knowledge, the Electronic Frontier Foundation and Engine Advocacy have said that “applicants are able to use invented terminology in order to avoid prior art.” Without judicial discovery it is not possible to tell whether the term “digital security bubble” was used in US 8,625,805 instead of “digital envelope” for the purpose of obfuscation, or simply because the inventors were not familiar with standard secure messaging terminology.

The lack of stable and broadly accepted terminology drastically reduces the ability to find relevant documents by keyword search, i.e. it reduces what is known as recall in information retrieval. The patent file of US 8,625,805 includes a Search Strategy entry (SRNT) showing that 13 out of 26 prior art search queries contained the keyword “bubble”; the useless keyword “bubble” thus took up 50% of the time and effort spent by the examiner on keyword search. (The search strategy entry can be found using the Image File Wrapper tab in the Public Patent Application Information Retrieval tool — Public PAIR — of the USPTO.)

Besides poor recall, keyword searches of software literature also suffer from the opposite problem, poor precision. This is due to the fact that some popular words are used for many different purposes. The word token, for example, has a large number of different meanings. The combination of poor recall and poor precision means that keyword search is not well suited to finding prior art relevant to software patent applications.

The USPTO has launched a Glossary Pilot that provides incentives for applicants to include a glossary section in the specification. While a glossary section may be useful for other purposes, it does not prevent or discourage the use of non-standard terminology.

2. The Search Corpus Problem

The second problem with software patents has to do with the databases that examiners use to search for prior art.

Although the Search Notes entry (SRFW) in the patent file of US 8,625,805 indicates that the examiner did some Non-Patent Literature (NPL) searching, all 26 queries documented in the Search Strategy entry (SRNT) were run on patent literature databases. Since few software patents were granted or applied for before the late nineties, documentation of fundamental software inventions, such as secure messaging, cannot be found in the patent literature.

3. Lack of “Full, Clear, Concise and Exact” Descriptions

A patent is supposed to embody a quid pro quo: the inventor gets a monopoly on the use of the invention, and in exchange discloses the invention so that the public can use it once the patent has expired. The inventor’s side of the bargain is codified in 35 USC 112(a):

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

But too often, software patents make claims without providing a “full, clear, concise and exact description” of what is being claimed. Claim 15 of US 8,625,805 is a good example. Here is the claim:

15. The system of claim 1 wherein the processor is configured to perform the encapsulation at least in part by performing a spreading function.

And here is what the specification has to say on what it means to “perform the encapsulation at least in part by performing a spreading function”:

In some embodiments (e.g., as is shown in FIG. 5), a spreading function is used to spread the encrypted symmetric keys inside the DSB (as shown in region 512), by spreading the bits of the encrypted key in a spreading function generated pattern, with the default function being a sequential block or data. The spreading function also contains the cryptographic hashed representation of the recipient usernames that are used by the server to identify the recipients of the message and to set the message waiting flag for each of them.

I don’t understand this at all, and FIG. 5 does not help. If you understand it and would like to explain it in a comment, I would appreciate it.

Lack of compliance with 35 USC 112(a) seems to be a common problem. Software engineers often complain that software patents are incomprehensible. Sometimes, software engineers do not even understand their own patents, written up by patent attorneys:

Against my better judgement, I sat in a conference room with my co-founders and a couple of patent attorneys and told them what we’d created. They took notes and created nonsensical documents that I still can’t make sense of.

A “person skilled in the art” of software is called a software engineer or a software developer. Hence patents that are incomprehensible to software engineers, by definition, do not comply with 35 USC 112(a). Unfortunately, the USPTO does not seem to be keen on enforcing compliance with 35 USC 112(a). Sometimes I wonder if examiners read the patent specification at all, or only read the claims.

4. The Means-or-Step-Plus-Function Problem for Security Claims

US 8,625,805 also illustrates a tricky problem specific to security claims. Here is claim 16:

16. The system of claim 1 wherein only a designated recipient, having a device with applicable device characteristics, is able to decrypt the message.

I believe this claim is objectionable under 35 USC 112(b) because it does not point out any subject matter. It should have been written in means-plus-function form, e.g. “the system of claim 1, further comprising a means of preventing the decryption of the message other than by a designated recipient having a device with applicable characteristics,” with “means” referring to the following portions of the specification:

At 208, a device identifier (“deviceID”) is created from captured hardware information. Examples of captured hardware information include: hard drive identifiers, motherboard identifiers, CPU identifiers, and MAC addresses for wireless, LAN, Bluetooth, and optical cards. Combinations of information pertaining to device characteristics, such as RAM, CACHE, controller cards, etc., can also be used to uniquely identify the device. Some, or all, of the captured hardware information is run through a cryptographic hash algorithm such as SHA-256, to create a unique deviceID for the device. The captured hardware information can also be used for other purposes, such as to seed cryptographic functions.

FIG. 10 illustrates an example of a process for accessing a message included inside a digital security bubble. In some embodiments, process 1000 is performed on a client device, such as Bob’s client device 114. The process begins at 1002 when a DSB is received. As one example, a DSB is received at 1002 when app 138 contacts platform 102, determines a flag associated with Bob’s account has been set, and downloads the DSB from platform 102. In such circumstances, upon receipt of the DSB, client 114 is configured to decrypt the DSB using Bob’s private key (e.g., generated by Bob at 202 in process 200).

At 1004 (i.e., assuming the decryption was successful), hardware binding parameters are checked. As one example, a determination is made as to whether device information (i.e., collected from device 114) can be used to construct an identical hash to the one included in the received DSB. If the hardware binding parameters fail the check (i.e., an attempt is being made to access Alice’s message using Bob’s keys on a device that is not Bob’s), contents of the DSB will be inaccessible, preventing the decryption of Alice’s message. If the hardware binding parameter check is successful, the device is authorized to decrypt the symmetric key (i.e., using Bob’s private key generated at 202) which can in turn be used to decrypt Alice’s message.

This is very vague, and I don’t think it qualifies as a full, clear, concise, and exact description. But the gist of it seems to be that an encrypted message is accompanied by a hash of hardware parameters of a destination device. When the message is received, an app checks whether the hash matches a hash of the hardware parameters of the device where the app is running. If the check fails, the app refuses to decrypt the message.
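
To make the gist concrete, here is a hedged Python sketch of my reading of the scheme; the field names and the decryption stub are mine, not the patent's:

```python
import hashlib

def device_id(hardware_info: list) -> bytes:
    # Hash the captured hardware identifiers (hard drive IDs, MAC
    # addresses, etc.) into a "deviceID", as described in the specification.
    h = hashlib.sha256()
    for item in hardware_info:
        h.update(item)
    return h.digest()

def decrypt_with_private_key(ciphertext: bytes) -> bytes:
    # Placeholder for the actual private-key decryption step.
    raise NotImplementedError

def open_dsb(dsb: dict, local_hardware_info: list) -> bytes:
    # The "hardware binding" is nothing more than a check performed by
    # the app itself before it agrees to decrypt.
    if dsb["device_id"] != device_id(local_hardware_info):
        raise PermissionError("hardware binding check failed")
    return decrypt_with_private_key(dsb["ciphertext"])
```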

The point I really want to make, however, is that this method of “hardware binding” does not work. An adversary who has Bob’s private key is not prevented from decrypting the message on a device other than Bob’s device just because an app on Bob’s device is programmed to check the hash of hardware parameters. The adversary can do anything he or she wants on his or her own device. The adversary can, for example, use an app that behaves the same as the app used by Bob except that it omits the check.

This illustrates an important point, specific to security claims, that I have not seen discussed before. It is practically impossible to verify that a means-or-step-plus-function claim is supported by the specification, if the function being claimed is to achieve a security goal. It may be easy to see that a claim like the above is NOT supported. But establishing that a security claim IS supported would require writing and verifying a mathematical proof that the security goal is achieved, based on a mathematical model of the system described in the specification, something which is theoretically possible but not realistically achievable today. Furthermore, the statement of the goal would have to be probabilistic, since security is rarely absolute.

This is important because allowing a security claim supported by a description of a technique that does not work does a lot of damage. Somebody else may later invent a technique that does work. Then the person who has been granted a patent on the security claim based on the technique that does not work will be able not only to prevent the person who has found the technique that works from obtaining a patent, but also to prevent everybody from using the technique that works.

5. A Loophole for Avoiding Third-Party Preissuance Submissions

The America Invents Act (AIA) has introduced Preissuance Submissions of Prior Art, which allow third parties to submit prior art to the examiners, and the USPTO is keen on crowdsourcing access to prior art. This is a good thing. But US 8,625,805 avoided third-party scrutiny because the application underwent a Prioritized, a.k.a. Track I, Examination and, like many Track I applications, was not published until granted. (The fact that US 8,625,805 was a Track I patent was noted by George White in a comment on Ask Patents.)

Prioritized Examination, which requires an additional fee of $2,000 for a small entity, has the effect of shortening the waiting time before an examiner takes up the application from years to a few months. Before AIA this was already an extremely unfair and undemocratic procedure, shortening the process for corporations and rich inventors who could afford it while lengthening it for everybody else. Now, after AIA, it can also be used as a loophole to shield those who can afford the fee from third party submissions of prior art, making it easier for them to obtain low quality and overly broad patents which they can inflict on society.

A second loophole for preventing preissuance submission is simply for the applicant to request non-publication of the application. This loophole costs nothing, but it precludes filing in foreign countries that require publication.

More generally, preissuance submission must take place after pre-grant publication, so there can be no preissuance submission if there is no pre-grant publication. The above loopholes prevent preissuance submission by eliminating pre-grant publication. But pre-grant publication may also fail to take place in the normal course of business. By default, it takes place no earlier than 18 months after the earliest benefit date. If the application does not claim the benefit of any earlier application, in an ideal world the USPTO should be able to examine it in less than 18 months, in which case there would be no pre-grant publication, and hence no possibility of preissuance submission.

Since the 18-month delay in publication and the non-publication request provision are statutory, allowing preissuance submission for all applications requires a change in the law. Since preissuance submission is essential for improving the quality of software patents, such a change is badly needed. I would suggest introducing a minimum 3-month time period between publication and allowance. A request for non-publication would hold publication in abeyance until the examiner thinks the claims are allowable; but then the application would be published and open to preissuance submission of prior art and comments for three months before actual allowance.

While waiting for legislation to be enacted, the Post-Grant Review fees should be drastically reduced, and the onerous requirements on Post-Grant Review petitions should be simplified so that it is not necessary to hire a patent lawyer to file a petition.

Conclusion

The problem of poor quality software patents is difficult and multi-faceted, and many ideas have been proposed for addressing it. Here I would just like to make a few suggestions related to the above observations.

  • Prioritized Examination should be eliminated.
  • The Post Grant Review request fee should be no greater than the Appeal Forwarding fee for a small entity ($1,000). There should be no post-institution fee and no per-claim fee, and a refund of 50% of the request fee should be given if the request is found to have merit and the review is granted.
  • Examiners of software patents should be instructed to direct at least half of their search efforts towards non-patent literature, and to document the non-patent literature queries that they run. Some searchable sources of non-patent literature have been suggested by others in comments on prior art resources. I would add the collection of IETF RFCs and Internet Drafts, which can be searched by restricting a web search to the site datatracker.ietf.org.
  • In addition to crowdsourcing the search for prior art, the USPTO should accept and encourage comments by persons skilled in the art, such as software engineers in the case of software patents, on whether specifications are comprehensible, provide a full, clear, concise, and exact description of the invention, and support the claims.
  • The specification of every patent application containing one or more means-or-step-plus-function claims should be required to contain a separate section explaining in detail how each claimed function is provided. This will not guarantee that every security goal asserted by a means-or-step-plus-function claim is achieved by the invention, but it will at least help third-party reviewers and examiners to identify unsupported claims.

The USPTO has requested comments on “The Use of Crowdsourcing and Third-Party Preissuance Submissions To Identify Relevant Prior Art,” and more generally on “ways the Office can use crowdsourcing to improve the quality of examination.” They are due by April 25. We are sending ours. Please consider sending yours.

Derived Credentials in a Trusted Execution Environment (TEE)

In the previous post I discussed the storage of derived credentials (Federal credentials carried in a mobile device instead of a PIV/CAC card) in a software token, i.e. in a cryptographic module implemented entirely in software, whose contents are stored in ordinary flash memory. In this post I will discuss the storage of derived credentials in a Trusted Execution Environment (TEE).

Malware Attacks

As discussed in the previous post and in a technical report, it is possible to protect derived credentials stored in ordinary flash storage by encrypting them under a high entropy key-wrapping key kept in a secure back-end, which the mobile device retrieves by authenticating to the back-end with a key pair regenerated from a protocredential and an activation passcode.

This provides effective protection against an adversary who captures the device while the software token is not active, preventing the adversary from extracting or using the credentials. But it does not provide protection against malware running on the device while the legitimate user is using the device. Such malware can carry out the following attacks:

  1. It can use the derived credentials, by issuing instructions to the software token after it has been activated by the legitimate user.
  2. It can read the plaintext derived credentials from the flash storage after the software token has been activated, and transmit them to the adversary responsible for the malware, who can then use them at will on a different machine.
  3. It can capture the activation passcode by phishing or intercepting it. In a phishing attack, malware prompts the user for the passcode while masquerading as legitimate code that needs the passcode, such as token activation code. In an interception attack, malware gets the passcode after it has been obtained from the user by legitimate code.

The first of these attacks may be impossible to prevent once privileged malware is running on the mobile device without the user being aware of it. But the second and third attacks can be prevented using a TEE as we shall see below; and preventing them is important because they are more damaging than the first attack.

The second attack, extracting the credentials and sending them to the adversary, is more damaging than the first because it cannot be stopped by recovering or wiping the stolen device. Use of an authentication or signature private key cannot be stopped until the associated certificate is revoked and relying parties become aware of the revocation. Correspondents should avoid sending messages encrypted under a symmetric key wrapped by a “key management” public key after becoming aware that the key management certificate has been revoked. But there is no time limit for using a key management private key to decrypt earlier messages that the adversary may have previously captured or may capture in the future, e.g. by breaching the security of an MS Exchange server containing older encrypted messages.

The third attack is even more damaging for several reasons. First, it enables the first two attacks, because once it has the passcode, malware can activate the software token and use and extract the plaintext derived credentials. Second, if the adversary captures the device after using malware to obtain the passcode, the adversary can use the device, or install more comprehensive malware that is able to extract the credentials. Third, the passcode may be independently exploitable because it may be used for other purposes.

A TEE has security features that make it possible to prevent the second and third attacks.

Features of a TEE

A TEE is a computing environment provided by a secure OS running on the same processor as a normal OS. One or more trusted applications (TAs) run under the secure OS. A hardware bus architecture ensures that a portion of the flash storage can only be accessed by the secure OS. Both OSes can access the touchscreen, but a security indicator lets the user know when the screen is controlled by the secure OS and the user interface can be trusted. GlobalPlatform is developing TEE specifications, including a Trusted User Interface API specification, which can be downloaded from the GlobalPlatform site. TEEs are provided by ARM Cortex-A processors, where the underlying security technology is known as TrustZone. A TA running in a TEE can be used to implement a cryptographic module in which derived credentials can be stored and used.

Using a TEE to Protect Derived Credentials

Derived credentials stored and used in a cryptographic module implemented within a TEE can be protected against the second malware attack discussed above by making their private keys unextractable from the cryptographic module. The ability to mark private keys as being unextractable is a typical feature of cryptographic modules. The PKCS #11 cryptographic module API, for example, allows private keys to be made non-extractable by setting the value of their CKA_EXTRACTABLE attribute to CK_FALSE. The forthcoming TEE Functional API, mentioned in the TEE white paper, will no doubt allow private keys stored in a cryptographic module within a TEE to be made non-extractable as well.
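
For illustration, here is how a key pair might be generated through a PKCS #11 module with a non-extractable private key. This sketch assumes the third-party python-pkcs11 package, whose attribute names mirror CKA_EXTRACTABLE and CKA_SENSITIVE; the module path, token label, and PIN are placeholders:

```python
import pkcs11

# Placeholders: path to the vendor's PKCS #11 module, token label, PIN.
lib = pkcs11.lib("/path/to/vendor-pkcs11.so")
token = lib.get_token(token_label="demo-token")

with token.open(rw=True, user_pin="1234") as session:
    public, private = session.generate_keypair(
        pkcs11.KeyType.RSA, 2048,
        private_template={
            pkcs11.Attribute.SENSITIVE: True,     # CKA_SENSITIVE
            pkcs11.Attribute.EXTRACTABLE: False,  # CKA_EXTRACTABLE = CK_FALSE
        })
    # The private key can now be used for signing or decryption inside
    # the module, but its value can never be read out of the module.
```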

Furthermore, derived credentials stored in a cryptographic module within a TEE can be protected against the third malware attack using the Trusted User Interface feature of the TEE. The passcode can be prompted for by the TA that implements the cryptographic module, and the user can be instructed to only enter the passcode when a Security Indicator shows that the touchscreen is controlled by the Secure OS of the TEE. The passcode is then protected against phishing and interception by malware, assuming that all TAs can be trusted and that the secure OS is not infected by malware. The latter assumption is motivated by the fact that the secure OS is simpler than the normal OS and presents a much smaller attack surface.

Virtual Tamper Resistance

Using the same processor and a portion of the same storage for the secure OS as for the normal OS has important benefits. It provides greater performance for the secure OS than would typically be achieved by a secondary processor located in a secure element, and it saves the cost of the secure element. On the other hand, it means that a TEE is not expected to provide much, if any, tamper resistance. Indeed, the TEE Secure Element API, available at the GlobalPlatform site, is concerned with using together a TEE and a secure element, with the TEE providing a Trusted User Interface, and the secure element providing tamper resistance.

(BTW, some secure elements do provide serious tamper resistance, but tamper resistance is never absolute. A fascinating description of the elaborate anti-tampering countermeasures in a family of Infineon chips, and how they were defeated by an attacker with no insider knowledge, can be found in an 80-minute video demonstration—broken down into ten eight-minute segments—presented at Black Hat 2010.)

But the lack of tamper resistance in a TEE can be remedied using the same technique that I described in the previous post as a solution to the problem of protecting derived credentials stored in a software token. Encrypting the derived credentials under a high entropy key-wrapping key, kept in a secure back-end and retrieved by authenticating to the back-end with a key pair regenerated from a protocredential and an activation passcode, can be viewed as a form of cloud-based virtual tamper resistance.

Combining such virtual tamper resistance with the TEE Trusted User Interface feature would make it possible to implement a cryptographic module that would protect both the derived credentials and their activation passcode.

Protecting Derived Credentials without Secure Hardware in Mobile Devices

NIST has recently released drafts of two documents with thoughts and guidelines related to the deployment of derived credentials,

  • NISTIR 7981, Mobile, PIV, and Authentication, and
  • SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials,

and requested comments on the drafts by April 21. We have just sent our comments and we encourage you to send yours.

Derived credentials are credentials that are derived from those in a Personal Identity Verification (PIV) card or Common Access Card (CAC) and carried in a mobile device instead of the card. (A CAC card is a PIV card issued by the Department of Defense.) The Electronic Authentication Guideline, SP 800-63, defines a derived credential more broadly as:

A credential issued based on proof of possession and control of a token associated with a previously issued credential, so as not to duplicate the identity proofing process.

A PIV/CAC card may carry a PIV authentication credential, a digital signature credential, a current key management credential and up to 20 retired key management credentials, each credential consisting of a private key and an associated certificate that contains the corresponding public key. The digital signature private key is used for signing email messages, and the key management keys for decrypting symmetric keys used to encrypt email messages. The retired key management keys are needed to decrypt old messages that have been saved encrypted. The PIV authentication credential is mandatory for all users, while the digital signature credential and the current key management credential are mandatory for users who have government email accounts.

A mobile device may similarly carry an authentication credential, a digital signature credential, and current and retired key management credentials. Although this is not fully spelled out in the NIST documents, the current and retired key management private keys in the mobile device should be able to decrypt the same email messages as those in the card, and therefore should be the same as those in the card, except that we see no need to limit the number of retired key management private keys to 20 in the mobile device. The key management private keys should be downloaded to the mobile device from the escrow server that should already be in use today to recover from the loss of a PIV/CAC card containing those keys. On the other hand the authentication and digital signature key pairs should be generated in the mobile device, and therefore should be different from those in the card.

In a puzzling statement, SP 800-157 insists that only an authentication credential can be considered a “derived PIV credential”:

While the PIV Card may be used as the basis for issuing other types of derived credentials, the issuance of these other credentials is outside the scope of this document. Only derived credentials issued in accordance with this document are considered to be Derived PIV credentials.

Nevertheless, SP 800-157 discusses details related to the storage of digital signature and key management credentials in mobile devices in informative appendix A and normative appendix B.

Software Tokens

The NIST documents provide guidelines regarding the lifecycle of derived credentials, their linkage to the lifecycle of the PIV/CAC card, their certificate policies and cryptographic specifications, and the storage of derived credentials in several kinds of hardware cryptographic modules, which the documents refer to as hardware tokens, including microSD tokens, UICC tokens, USB tokens, and embedded hardware tokens. But the most interesting, and controversial, aspect of the documents concerns the storage of derived credentials in software tokens, i.e. in cryptographic modules implemented entirely in software.

Being able to store derived credentials in software tokens would mean being able to use any mobile device to carry derived credentials. This would have many benefits:

  1. Federal agencies would have the flexibility to use any mobile devices they want.
  2. Federal agencies would be able to use inexpensive devices that would not have to be equipped with special hardware for secure storage of derived credentials. This would save taxpayer money and allow agencies to do more with their IT budgets.
  3. Mobile authentication and secure email solutions used by the Federal Government would be affordable and could be broadly used in the private sector.

The third benefit would have huge implications. Today, the requirement to use PIV/CAC cards means that different IT solutions must be developed for the government and for the private sector. IT solutions specifically developed for the government are expensive, while private sector solutions too often rely on passwords instead of cryptographic credentials. Using the same solutions for the government and the private sector would lower costs and increase security.

Security

But there is a problem. The implementation of software tokens hinted at in the NIST documents is not secure.

NISTIR 7981 describes a software token as follows:

Rather than using specialized hardware to store and use PIV keys, this approach stores the keys in flash memory on the mobile device protected by a PIN or password. Authentication operations are done in software provided by the application accessing the IT system, or the mobile OS.

And SP 800-157 adds the following:

For software implementations (LOA-3) of Derived PIV Credentials, a password-based mechanism shall be used to perform cryptographic operations with the private key corresponding to the Derived PIV Credential. The password shall meet the requirements of an LOA-2 memorized secret token as specified in Table 6, Token Requirements per Assurance Level, in [SP800-63].

Taken together, these two paragraphs seem to suggest that the derived credentials should be stored in ordinary flash memory storage encrypted under a data encryption key derived from a PIN or password satisfying certain requirements. What requirements would ensure sufficient security?

Smart phones are frequently stolen, therefore we must assume that an adversary will be able to capture the mobile device. After capturing the device the adversary can immediately place it in a metallic box or other Faraday cage to prevent a remote wipe. The contents of the flash memory storage may be protected by the OS, but in many Android devices, the OS can be replaced, or rooted, with instructions for doing so provided by Google or the manufacturer. OS protection may be more effective in some iOS devices, but since a software token does not provide any tamper resistance by definition, we must assume that the adversary will be able to extract the encrypted credentials. Having done so, the adversary can mount an offline password guessing attack, testing each password guess by deriving a data encryption key from the password, decrypting the credentials, and checking if the resulting plaintext contains well-formed credentials. To carry out the password guessing attack, the adversary can use a botnet. Botnets with tens of thousands of computers can be easily rented by the day or by the hour. Botnets are usually programmed to launch DDOS attacks, but can be easily reprogrammed to carry out password cracking attacks instead. The adversary has at least a few hours to run the attack before the authentication and digital signature certificates are revoked and the revocation becomes visible to relying parties; and there is no time limit for decrypting the key management keys and using them to decrypt previously obtained encrypted email messages.
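
As a sketch of what the inner loop of such an attack might look like, suppose the credentials are encrypted under a key derived from the password with PBKDF2 and an authenticated cipher such as AES-GCM; the NIST documents do not specify the scheme, so all parameters here are illustrative:

```python
import hashlib
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def try_guess(guess: str, salt: bytes, nonce: bytes, blob: bytes) -> bool:
    # Derive a candidate data encryption key from the password guess.
    key = hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000)
    try:
        AESGCM(key).decrypt(nonce, blob, None)
    except InvalidTag:
        # Wrong guess. (With unauthenticated encryption the attacker
        # would instead test whether the plaintext contains
        # well-formed credentials.)
        return False
    return True

# A 6-digit PIN (about 20 bits of entropy) falls to at most 10**6 such
# trials, trivially divided among the machines of a rented botnet.
```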

To resist such an attack, the PIN or password would need to have at least 64 bits of entropy. According to Table A.1 of the Electronic Authentication Guideline (SP 800-63), a user-chosen password must have more than 40 characters chosen appropriately from a 94-character alphabet to achieve 64 bits of entropy. Entering such a password on the touchscreen keyboard of a smart phone is clearly unfeasible.

SP 800-157 calls instead for a password that meets the requirements of an LOA-2 memorized secret token as specified in Table 6 of SP 800-63, which are as follows:

The memorized secret may be a randomly generated PIN consisting of 6 or more digits, a user generated string consisting of 8 or more characters chosen from an alphabet of 90 or more characters, or a secret with equivalent entropy.

The equivalent entropy is only 20 bits. Why does Table 6 require so little entropy? Because it is not concerned with resisting an offline guessing attack against a password that is used to derive a data encryption key. It is instead concerned with resisting an online guessing attack against a password that is used for authentication, where password guesses can only be tested by attempting to authenticate to a verifier who throttles the rate of failed authentication attempts. In Table 6, the quoted requirement on the memorized secret token is coupled with the following requirement on the verifier:

The Verifier shall implement a throttling mechanism that effectively limits the number of failed authentication attempts an Attacker can make on the Subscriber’s account to 100 or fewer in any 30-day period.

and the necessity of the coupling is emphasized in Section 8.2.3 as follows:

When using a token that produces low entropy token Authenticators, it is necessary to implement controls at the Verifier to protect against online guessing attacks. An explicit requirement for such tokens is given in Table 6: the Verifier shall effectively limit online Attackers to 100 failed attempts on a single account in any 30 day period.

Twenty bits is not sufficient entropy for encrypting derived credentials, and requiring a password with sufficient entropy is not a feasible proposition.

Solutions

But the problem has solutions. It is possible to provide effective protection for derived credentials in a software token.

One solution is to encrypt the derived credentials under a high-entropy key that is stored in a secure back-end and retrieved when the user activates the software token. The problem then becomes how to retrieve the high-entropy key from the back-end. To do so securely, the mobile device must authenticate to the back-end using a device-authentication credential stored in the mobile device, which seems to bring us back to square one. However, there is a difference between the device-authentication credential and the derived credentials stored in the token: the device-authentication credential is only needed for the specific purpose of authenticating the device to the back-end and retrieving the high-entropy key. This makes it possible to use as device-authentication credential a credential regenerated on demand from a PIN or password supplied by the user to activate the token and a protocredential stored in the device, in a way that deprives an attacker who captures the device of any information that would make it possible to test guesses of the PIN or password offline.

The device-authentication credential can consist, for example, of a DSA key pair whose public key is registered with the back-end, coupled with a handle that refers to a device record where the back-end stores a hash of the registered public key. In that case the protocredential consists of the device record handle, the DSA domain parameters, which are (p,q,g) in the notation of the DSS, and a random high-entropy salt. To regenerate the DSA key pair, a key derivation function is used to compute an intermediate key-pair regeneration key (KPRK) from the activation PIN or password and the salt, then the DSA private and public keys are computed as specified in Appendix B.1.1 of the DSS, substituting the KPRK for the random string returned_bits produced by a random number generator.
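
Here is a sketch of the regeneration step in Python. The use of PBKDF2 as the key derivation function and its iteration count are illustrative assumptions, while the computation of the private key follows Appendix B.1.1 of the DSS with the KPRK substituted for returned_bits:

```python
import hashlib

def regenerate_key_pair(passcode: str, salt: bytes, p: int, q: int, g: int):
    n = q.bit_length()
    # Key-pair regeneration key: N + 64 bits derived from the activation
    # passcode and the high-entropy salt of the protocredential.
    kprk = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt,
                               100_000, dklen=(n + 64) // 8)
    # Appendix B.1.1 of the DSS, with the KPRK in place of returned_bits:
    c = int.from_bytes(kprk, "big") % (q - 1)
    x = c + 1                    # private key, uniform in [1, q-1]
    y = pow(g, x, p)             # public key
    return x, y
```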

To authenticate to the back-end and retrieve the high-entropy key, the mobile device establishes a TLS connection to the back-end, over which it sends the device record handle, the DSA public key, and a signature computed with the DSA private key on a challenge derived from the TLS master secret. (Update—April 24, 2014: The material used to derive the challenge must also include the TLS server certificate of the back-end, due to a recently reported UKS vulnerability of TLS. See footnote 2 of the technical report.) The DSA public and private keys are deleted after authentication, and the back-end keeps the public key confidential. An adversary who is able to capture the device and extract the protocredential has no means of testing guesses of the PIN or password other than regenerating the DSA key pair and attempting online authentication to the back-end, which locks the device record after a small number of consecutive failed authentication attempts that specify the handle of the record.

An example of a derived credentials architecture that uses this solution can be found in a technical report.

Other solutions are possible as well. The device-authentication credential itself could serve as a derived credential, as we proposed earlier; SSO can then be achieved by sharing login sessions, as described in Section 7.5 of another technical report. And I’m sure other solutions can be found.

Other Topics

There are several other topics related to derived credentials that deserve discussion, including the pros and cons of storing credentials in a Trusted Execution Environment (TEE), whether biometrics should be used for token activation, and whether derived credentials should be used for physical access. I will leave those topics for future posts.

Update (April 10, 2014). A post discussing the storage of derived credentials in a TEE is now available.

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove tokens, Idemix anonymous credentials, or credentials based on group signatures), is the fact that they are not supported by TLS. We noticed this when we looked at privacy-enhancing credentials in the context of NSTIC, and we proposed an architecture for the NSTIC ecosystem that included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over satellite links due to the additional roundtrips and the transmission of certificate chains during the handshake. Client and attribute certificates, when used, are sent in the clear. And there has been a long list of TLS vulnerabilities, some of which have not been addressed, while others are addressed in TLS versions and extensions that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites supported TLS 1.1, which dates back to April 2006, only 20.7% supported TLS 1.2, which dates back to August 2008, and only 30.6% had server-side protection against the BEAST attack, which requires either TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due to the age of the protocol and the large number of versions and extensions that it has accumulated during its long life. Changing the configuration of a TLS implementation to protect against vulnerabilities without shutting out a large portion of the user base is a complex task that IT personnel are no doubt loath to tackle.

So perhaps it is time to restart from scratch, designing a new transport layer security protocol — actually, two of them, one for connections and the other for datagrams — that will incorporate the lessons learned from TLS — and DTLS — while discarding the heavy baggage of old code and backward compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits of redesigning transport layer security for the military, which should be particularly interested in better transport layer security protocols. The military should be interested in better performance over satellite and radio links, for obvious reasons. It should be interested in increased security, because so much is at stake in the security of military networks. And I would argue that it should also be interested in increased privacy, because what is viewed as privacy on the Internet may be viewed as resistance to traffic analysis in military networks.

Surveillance and Internet Identity

Last week I attended IIW 17, the 17th meeting of the Internet Identity Workshop, which is held twice a year in Mountain View, California. As usual it was a great opportunity to exchange ideas and meet people, with its unconference format, its many sessions, its rotating demos, its wide space for discussions, and its two free dinners with free drinks.

For me, however, it was tinged with sadness, because of what has happened since the first IIW I attended, IIW 12, in May 2011. IIW 12 was the first IIW after the launch of NSTIC. IIW 17 was the first IIW after Snowden.

The NSTIC Strategy Document, released in April 2011 with a preface signed by President Obama, repeatedly emphasized the goal of enhancing privacy as a key element of the “vision” and “guiding principles” of NSTIC. The document explicitly stated that the Identity Ecosystem will use privacy-enhancing technology and policies to inhibit the ability of service providers to link an individual’s transactions, thus ensuring that no one service provider can gain a complete picture of an individual’s life in cyberspace. At the time, Facebook Connect was threatening to inject Facebook as a middleman in all or most Internet activities, and I was happy to see that the US Government seemingly wanted to prevent such a massive invasion of privacy; I even convened a session at IIW 12 proposing a technique for achieving the privacy goals of NSTIC in the short term. Little did I know that the government was busy building a massive surveillance apparatus that would give the government a complete picture of an individual’s life in cyberspace, by means including bulk collection of data from service providers.

The Internet, given to the world by the US Department of Defense, was a world-wide forum for free-flowing, spontaneous exchange of ideas. Now the NSA, part of the same Department of Defense, has taken that away. People know that they are being tracked and identified when they post an anonymous comment. People know that their conversations are being recorded. Therefore people must think twice about what they say.

I don’t know if Congress will be able to rein in the NSA. It should be clear that spying on US citizens is unconstitutional, but some politicians think that it is the NSA’s job to spy on everybody else on the planet. They don’t seem to consider or care that, if the US Government insists on a God-given right to spy on everybody else, other countries or regions may develop their own national or regional networks, separated from the US Internet by an air gap.

Fortunately, the technical community has reacted strongly against the NSA’s attacks on Internet privacy. And thanks to Snowden’s revelations, many of the attack techniques are known. It may therefore be possible to protect Internet privacy by technical means.

Coming back to the subject of the workshop, Internet Identity, I would argue that the first thing to do to protect Internet privacy is to get rid of the pernicious technology variously known as third-party login, social login or federated login. To be precise, I am referring to authentication techniques where the user authenticates to a third-party identity provider, which then provides identity and/or attribute information to a relying party, using a protocol such as OAuth or OpenID Connect. (These are the techniques in Group 2 of the taxonomy proposed in the paper Privacy Postures of Authentication Technologies.)

The only intrinsic advantage of federated login is that it allows the identity provider to collect vast amounts of information about the user, since the identity provider learns not only the user’s identity and/or attributes, but also what relying parties the user logs in to. The identity provider uses the information to sell ads that target the user accurately. We now know that the information is also shared with the government, which makes it available to thousands of analysts and IT personnel who use it for legal or illegal government or personal purposes.

There are no other intrinsic advantages to federated login.

The government and the identity providers argue that federated login is more secure than direct authentication to the relying party with username and password, but the opposite is true.

Security is supposedly increased because federated login reduces password reuse. But password reuse will not be substantially reduced unless a large majority of world-wide web sites force their users to use federated login with one of a small number of global identity providers such as Google or Facebook, something that will hopefully not come to pass.

Security is also supposedly increased because a large identity provider supposedly does a better job of protecting the user’s password. But I don’t know why a large identity provider would provide better protection against hackers, since large companies are not known to provide great security. And I do know that a password entrusted to a large identity provider may become available to thousands of employees of the government, of government contractors, and of the identity provider. And the capture of a password used at an identity provider, which provides access to multiple web sites, is more damaging to the user than the capture of a password used at a single web site.

There is an alternative to authenticating to a web site with username and password that provides both security and privacy: namely, authentication with a cryptographic key pair automatically generated on the user’s machine when the user registers with the site. The site stores the hash of the public key component of the key pair in its database, and uses it to locate the user’s account when the user visits the site again and demonstrates knowledge of the private key component.
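
As a sketch of how simple this is, here is the full round trip in Python with the cryptography package, using Ed25519 for concreteness; the choice of algorithm and the challenge format are illustrative:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Registration: the browser generates a key pair and presents the public
# key; the site stores only a hash of it, which locates the account.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw)
account_locator = hashlib.sha256(public_bytes).hexdigest()

# Login: the site sends a fresh random challenge and the browser proves
# knowledge of the private key by signing it.
challenge = b"fresh-random-challenge-from-the-site"  # illustrative value
signature = private_key.sign(challenge)

# Site side: locate the account by hashing the presented public key,
# then verify the signature; verify() raises InvalidSignature on failure.
Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, challenge)
```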

Another claimed advantage of federated login is that the user can register at a new site with a single click if logged in to the identity provider, any personal data required by the site being provided by the identity provider. This is a real advantage, but not an intrinsic one. The same benefit could be easily obtained by storing the personal data in the browser, and specifying a protocol by which the browser would supply selected personal data items to a web site upon demand by the site and approval by the user. Such a protocol would be much simpler than any of the federated login protocols and would provide more security and more privacy.

Yet another claimed advantage of federated login is that the identity provider could provide the relying party with a user’s identity and/or attributes verified by an identity proofing procedure; however, such verified identity and/or attributes could equally well be provided by a certificate authority using a public key certificate (or by multiple authorities providing a combination of a certificate binding a public key to an identity and one or more certificates binding the identity to various attributes), without the certificate authority having to be informed of what relying parties the certificate is submitted to.

It is sometimes argued (cf. the NSTIC 101 session at last week’s IIW) that using public key cryptography for authentication would be expensive and would require the user to carry a separate dongle or smartcard for every credential. This is not true. There is no need for special hardware to store a cryptographic credential, and if special hardware is desired for some reason, there is no need to use different pieces of hardware for different credentials.

Two sessions at IIW 17 gave me hope that Internet privacy is not a lost cause.

One of them was convened by Tim Bray of Google to report on the comments he received in response to a blog post arguing to developers that they should use federated login rather than login with username and password. The comments, which he referred to as a “bloodbath,” showed that neither developers nor end-users like federated login. I hope that such pushback will eventually force companies like Google to give up on federated login.

The other one was convened by Kazue Sako of NEC to discuss anonymous credentials and their possible uses. The room was overflowing and the level of engagement of the audience was high, showing that technical people are interested in privacy-enhancing authentication technologies even if large companies are not.

Pomcor Granted Patent on Method of Browsing Real-Time Search Results

Today I am writing to announce that Pomcor has been granted US patent 8,452,749 entitled Browsing Real-Time Search Results Effectively. If you have been following this blog you may be surprised that I am talking about search, since over the last two years I have been blogging about Internet identity, user authentication and privacy. The patented invention was made more than two years ago with NSF funding, and is now incorporated in our multisearch engine Noflail Search.

What do user authentication and browsing real-time search results have in common? They both fall within the scope of the Pomcor mission which, as stated on the company page, is to identify shortcomings of current technology and provide innovative solutions that advance the state of the art.

The problem with browsing real-time search results is that it is easy for the user to get confused and miss results when the result set of a query changes as the user is browsing it. Suppose for example that the user looks at the first page of results, which shows results 1 through 10, and decides to dig deeper into the result set by navigating to page 2. Suppose that result 11 is a tweet that is being frequently retweeted and goes up in rank from 11 to 10 by the time the user clicks on the Next button or the button for page 2 in the page menu. When page 2 is displayed it will not show the tweet, which now belongs in page 1, so the tweet will be missed.

To help the user keep track of changes in the result set, our invention maintains a set of identifiers of results that are deemed to have been seen by the user. When the user navigates away from a page, the identifiers of the results in that page are added to the set. When a page is displayed, results whose identifiers are not in the set are highlighted, as are the page menu buttons of pages that contain such results. In the example, this gives the user a chance to see the tweet: she may notice that the button for page 1 remains highlighted when she navigates to page 2. If she goes back to page 1 to see why it is highlighted, the tweet will be highlighted in the page.

Other features of the invention include a checkbox for temporarily freezing the result set, so that no new results are brought from the back-end, and the option to shift-click on a page menu button, which causes the identifiers of the results in the current page not to be added to the set of identifiers of results deemed to have been seen.
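
Here is a minimal Python sketch of the mechanism as described above (an illustration with assumed names, not the actual Noflail Search code):

```python
class ResultBrowser:
    """Illustrative model of browsing a changing result set."""

    def __init__(self, fetch_results, page_size=10):
        self.fetch_results = fetch_results  # returns the current result set
                                            # as a ranked list of identifiers
        self.page_size = page_size
        self.seen = set()       # identifiers of results deemed seen
        self.results = []       # result set used for the current display
        self.displayed = []     # identifiers in the page being displayed
        self.frozen = None      # snapshot of the result set when frozen

    def freeze(self, on):
        # Checkbox: temporarily stop bringing new results from the back-end.
        self.frozen = list(self.results) if on else None

    def go_to_page(self, number, shift_click=False):
        # Navigating away marks the current page as seen, unless the
        # user shift-clicked the page menu button.
        if not shift_click:
            self.seen.update(self.displayed)
        self.results = (self.frozen if self.frozen is not None
                        else self.fetch_results())
        start = (number - 1) * self.page_size
        self.displayed = self.results[start:start + self.page_size]
        # Highlight results not yet deemed seen, and the page menu
        # buttons of the pages that contain such results.
        new_in_page = [r for r in self.displayed if r not in self.seen]
        pages_to_highlight = {i // self.page_size + 1
                              for i, r in enumerate(self.results)
                              if r not in self.seen}
        return new_in_page, pages_to_highlight
```

In the tweet example, the tweet’s identifier is never added to the seen set when the user navigates from page 1 to page 2, so the button for page 1 stays highlighted, and the tweet is highlighted if the user returns to page 1.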

You may use Noflail Search to see the invention in action. Just go to noflail.com and run Noflail Search in Easy Mode (or, if you prefer, in Advanced Mode, which provides a search history; you can switch modes at any time). Enter a query on a trending topic in the query box, and run it on the search engine Topsy by clicking on the Topsy entry in the Search Engines panel. (Topsy is a great real-time search engine that has a site at topsy.com and a web API at otter.topsy.com, which Noflail Search uses.) If the trending topic is hot enough, it is fun to see how new results appear and old results change their positions in the result set as you use the page menu to navigate. Notice that you do have to navigate to see changes: nothing will happen until you click on a page menu button; but you can click on the button for the current page.

Feedback on the Paper on Privacy Postures of Authentication Technologies

Many thanks to everyone who provided feedback on the paper on privacy postures of authentication technologies announced in the previous blog post. The paper was discussed on the Identity Commons mailing list, and we also received feedback at the ID360 conference, where we presented the paper, and at IIW 16, where we showed a poster summarizing the paper. In this post I will recap the feedback that we have received and the revisions that we have made to the paper based on that feedback.

Steven Carmody pointed out that SWITCH, the Swiss counterpart of the InCommon federation, has developed an extension of Shibboleth called uApprove that allows the identity or attribute provider to ask the user for consent before disclosing attributes to the relying party. Ken Klingenstein told us that the Scalable Privacy NSTIC pilot is developing a privacy manager that will let the user choose which attributes will be disclosed to the relying party by the Shibboleth identity provider. We have added references to these Shibboleth extensions to Section 4.2 of the paper.

The original paper explained that, although a U-Prove token does not provide multishow unlinkability, the user may obtain multiple tokens from the issuer, and present different tokens to different relying parties. Christian Paquin said that a U-Prove credential is defined as a batch of such tokens, created simultaneously by an efficient parallel procedure. We have added this definition of a U-Prove credential to Section 4.3.

Christian Paquin also pointed out that a U-Prove token is a mathematical concept that can be embodied in a variety of technologies. He sent me a link to the WS-Trust embodiment, which was used in CardSpace. We have explained this and included the link in Section 4.3.

Tom Jones said that what we call anonymity is called pseudonymity by others. In fact, column 9, labeled “Anonymity”, covers both pseudonymity (as provided, e.g., by an Idemix pseudonym, by an uncertified key pair, or by a password combined with a user ID freely chosen by the user) and full anonymity (as provided when a relying party learns only attributes that do not uniquely identify the user). I think it is not unreasonable to view anonymity (the service provider does not learn the user’s “name”) as encompassing pseudonymity (the service provider learns a pseudonym instead of the “real name”).

Nat Sakimura provided a lot of feedback, for which we are grateful. He said that Google and Yahoo implemented OpenID Pairwise Pseudonymous Identifiers (PPID), i.e. different identifiers for the same user provided to different relying parties, before ICAM specified its OpenID profile. We have noted this in Section 4.2 of the revised paper and changed the label of row 8 to “OpenID (without PPID)”.
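
For concreteness, the following Python sketch shows one standard way an identity provider might derive pairwise pseudonymous identifiers, by keying a hash over the internal user ID and the relying party’s identifier; this is an illustration, not necessarily how Google or Yahoo computed theirs.

```python
import hashlib
import hmac

def ppid(provider_secret: bytes, internal_user_id: str, relying_party: str) -> str:
    # The same user gets a stable identifier at each relying party,
    # but identifiers given to different relying parties cannot be
    # correlated without the provider's secret.
    message = f"{internal_user_id}|{relying_party}".encode()
    return hmac.new(provider_secret, message, hashlib.sha256).hexdigest()

secret = b"identity-provider-master-secret"
print(ppid(secret, "user-42", "https://rp1.example"))  # differs from...
print(ppid(secret, "user-42", "https://rp2.example"))  # ...this one
```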

He also said that OpenID Connect supports an ephemeral identifier, which provides anonymity. I was able to find a discussion of an ephemeral identifier in the archives of the OpenID Connect mailing list, but no mention of it in any of the OpenID Connect specifications; so ephemeral identifiers may be added in the future, but they are not there yet.

Nat also argued that OpenID Connect provides multishow unlinkability by different parties and by the same party. I disagree, however. The Subject Identifier in the ID Token makes OpenID Connect authentication events linkable. Furthermore, OpenID Connect is built on top of OAuth, whose purpose is to provide the relying party with access to resources owned by the user by means of an access token. In a typical use case the relying party gets access to the user’s account at a social network such as Facebook, Twitter or Google+. It is unlikely that two relying parties who share information cannot determine that they are both accessing the same account, or that a relying party cannot determine that it has accessed the same account on two different occasions.

Nat said that OpenID Connect can be used for two-party authentication using a “Self-Issued OpenID Provider”. We have added a checkmark to row 11, column 1 of the table to indicate this, and an explanation to Section 4.2.

He also said that OpenID Connect provides group 4 functionality by allowing the relying party to obtain attributes from “distributed attribute providers”. We have mentioned this in Section 4.4 of the revised version of the paper.

Finally, Nat said:

Just by reading the paper, I was not very clear what is the requirement for Issue-show unlinkability. By issuance, I imagine it means the credential issuance. I suppose then it means that the credential verifier (in ISO 29115 | ITU-T X.1254 sense) cannot tell which credential was used though it can attest that the user has a valid credential. Is that correct? If so, much of the technology in group 2 should have n/a in the column because they are independent of the actual authentication itself. They could very well use anonymous authentication or partially anonymous authentication (ISO 29191).

The technologies in group 2 are recursive authentication technologies. The relying party directs the browser to the identity or attribute provider, which recursively authenticates the user and provides a bearer credential to the relying party based on the result of the inner authentication. In full generality there may be multiple inner authentications, as the identity or attribute provider may require multiple credentials. So the authentication process may consist of a tree of nested authentications, with internal nodes of the tree involving group 2 technologies, and leaf nodes other technologies. However, rows 5-11 (group 2) are only concerned with the usual case where the user authenticates to the identity or attribute provider as a returning user with a user ID and a password or some other form of two-party authentication; we have now made that clear in Section 4.2 of the revised paper. In that case there is no issue-show unlinkability.
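
The tree structure can be sketched as follows in Python (the class and the example technologies are purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AuthNode:
    technology: str
    group2: bool = False              # True for recursive technologies
    inner: list["AuthNode"] = field(default_factory=list)

# The usual case considered in rows 5-11: a group 2 technology whose
# single inner authentication is a returning user's password login.
usual = AuthNode("OpenID", group2=True, inner=[
    AuthNode("user ID + password"),
])

# In full generality the identity or attribute provider may require
# several credentials, some obtained by further recursion.
general = AuthNode("SAML Web SSO", group2=True, inner=[
    AuthNode("user ID + password"),
    AuthNode("OpenID", group2=True, inner=[AuthNode("uncertified key pair")]),
])
```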

We have also made a couple of other improvements to the paper, motivated in part by the feedback:

  • We have replaced the word possession with the word ownership in the definition of closed-loop authentication (Section 2), so that it now reads: authentication is closed-loop when the credential authority that issues or registers a credential is later responsible for verifying ownership of the credential at authentication time. The motivation for this change is that, in group 2, the credential is the information that the identity or attribute provider has about the user, and is thus kept by the identity or attribute provider rather than by the user.
  • We have added a distinction between two forms of multishow unlinkability, a strong form that holds even if the credential authority colludes and shares information with the relying parties, and a weak form that holds only if there is no such collusion. The technologies in group 2 that provide multishow unlinkability provide the weak form, whereas Idemix anonymous credentials provide the strong form.