Consistent Results from Inconsistent Data

This post is a continuation of my report on the NIST Cryptographic Key Management Workshop. In the previous post I said I had more to say about Rene Struik’s presentation [1] because there are multiple connections between his talk and our own presentation [2].

Dr. Struik proposed to use a physical uncloneable function (PUF) to generate a cryptographic key when the device is turned on. A PUF is a function implemented by a physical device that is difficult to predict and difficult to duplicate in a different device. One way of implementing a PUF is by applying stimuli to a device such as a chip, and reading responses that are not determined by the design of the chip, but depend instead on small random variations of the fabrication process within the tolerances of the process specification. For key generation, one stimulus is applied to the device, viz. power, and one response is produced, an uncloneable key.

One problem with this key generation method is that the response of the chip will be slightly different each time the device is turned on. Struik proposed an elegant method of removing these differences in order to obtain the same key consistently, and using these same differences as an entropy source for true random number generation.

I saw an interesting connection between Struik’s uncloneable key and the hardware key used by Apple for data protection in iOS, which I discussed during my talk.

Both keys are intended to be difficult to extract from the device where they are used. The hardware key is built into the silicon of a hardware encryption chip, whose interface does not allow the key to be exported from the chip; but no claims are made as to the chip being tamper resistant, so a well-equipped attacker may be able to read the key by probing the silicon. By contrast, the uncloneable key is not present in the device when the device is turned off and thus cannot be extracted by probing the silicon; but custom code running on the device could obtain the key when the device is turned on.

The two methods could be combined. For example, a device could contain an encryption chip that would generate an uncloneable key when power is applied, and the interface of the chip could prevent the key from being exported. However, custom code running on the device could still make use of the key, just as forensic tools are able to use the hardware key in the iPhone in brute-force attacks to crack the user’s passcode [3] [4].

Another connection between the two presentations is that they both proposed regenerating keys instead of storing them. In Struik’s presentation a symmetric key was regenerated from the physical properties of the device. In our presentation an RSA key pair was regenerated from user input (a PIN and/or a biometric sample). Our regeneration method is not applicable if there is no user, while Struik’s is; on the other hand, when used in a user-controlled mobile device, our method has the advantage that the key pair cannot be obtained by custom code running on a device that has been lost or stolen.

But the most interesting connection, for me, was between Struik’s method for consistently producing the same uncloneable key from a succession of slightly different physical responses to power-on, and the method of Hao, Anderson and Daugman [5] for producing a consistent biometric key from a succession of genuine but slightly different iris image samples. (In our presentation, we proposed to use the method of Hao et al. in two- and/or three-factor authentication.) Both methods are based on the same ingenious adaptation of the error correction paradigm for producing consistent results from inconsistent data (which was pioneered by Juels and Wattenberg [6] for implementing fuzzy commitments as explained below).

Error correction techniques were originally intended, and are most often used, for correcting errors caused by the transmission of data over a noisy channel. Redundancy added to the data at the source makes it possible to correct a small number of errors at the destination. (A unit of data to which redundancy has been added is called a codeword.)

The small variations that occur between successive biometric samples or between successive samples of an uncloneable key are similar to errors produced by channel noise, but there is no correct source data to which redundancy could be added to eliminate errors. In fact, no errors are involved because, although one particular sample may serve as a reference, no sample is more correct than any other sample.

To remove data variations, error correction is used as follows:

  1. A random codeword c is generated by adding redundancy to a random string. The random codeword is of the same length as a data sample, but is otherwise unrelated to any data sample.
  2. The random codeword c is bitwise x-ored with a reference sample r to produce helper data h:
    h = c ⊕ r.
  3. When a new sample s is obtained, it is bitwise x-ored with the helper data to produce
    (c ⊕ r) ⊕ s.
    Since x-or is associative, this is the same as
    c ⊕ (r ⊕ s)
    and since r and s are similar, i.e. differ only in a few bits, the bit-string
    d = r ⊕ s
    consists mostly of zeros. Bitwise x-oring c with d flips a few bits in c, thus having the same effect that could be produced by transmitting c over a noisy channel.
  4. Error correction is applied to recover the random codeword c from c ⊕ d. If desired, c can then be bitwise x-ored with h = c ⊕ r to obtain r.

What this accomplishes is that the same result is consistently produced from the variable sample s, as long as s is not too different from r. Either c or r can be used as the consistent result. Struik uses r, but I believe he could equivalently use c. Hao et al. use c, as their biometric key. In biometric applications, it is important to use c rather than r to avoid revealing the user’s biometric sample r, which could be a privacy violation. The codeword c contains no biometric information.
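Here is a minimal sketch of the construction in Python, using a toy 3x repetition code as the error-correcting code. The repetition code is an assumption for illustration only; real schemes use much stronger codes (Hao et al. concatenate Hadamard and Reed-Solomon codes).

```python
# Minimal sketch of the helper-data construction with a toy 3x
# repetition code (illustrative assumption, not a real ECC choice).
import secrets

def encode(bits):
    # Add redundancy: repeat each bit three times to form a codeword.
    return [b for b in bits for _ in range(3)]

def decode(codeword):
    # Correct up to one flipped bit per 3-bit group by majority vote.
    return [int(sum(codeword[i:i + 3]) >= 2)
            for i in range(0, len(codeword), 3)]

def xor(a, b):
    # Bitwise x-or of two equal-length bit lists.
    return [x ^ y for x, y in zip(a, b)]

# Enrollment: a random codeword c and helper data h = c xor r.
random_bits = [secrets.randbits(1) for _ in range(8)]
c = encode(random_bits)                            # random codeword
r = [secrets.randbits(1) for _ in range(len(c))]   # reference sample
h = xor(c, r)                                      # helper data

# Reproduction: a new sample s differs from r in a few positions.
s = list(r)
s[0] ^= 1                                # simulate one bit of noise
recovered = encode(decode(xor(h, s)))    # error-correct c xor (r xor s)
assert recovered == c                    # consistent result from noisy s
assert xor(h, recovered) == r            # r is also recoverable, step 4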

Juels and Wattenberg [6] used the same technique for implementing fuzzy commitments; as far as I can tell, they were the original inventors of the technique. In a cryptographic commitment scheme, a sender commits to a value by combining it with a witness to produce a blob, and sends the blob to a receiver. The sender can later reveal the value by providing the witness, which the receiver uses to compute the blob and verify that it coincides with the one received earlier. The receiver cannot derive the committed value from the blob without the witness, and only one value, the original committed value, can be successfully revealed by the sender. Juels and Wattenberg used the above technique to devise a fuzzy commitment scheme that tolerates small variations in the witness. With the above notations (different from their own), the committed value is the codeword c, the witness is r, and the blob is an ordered pair consisting of a conventional cryptographic hash of c and the helper data h. When the sender provides s to decommit the value, the receiver computes c as above and verifies that the hash of the computed c coincides with the hash in the blob received from the sender.
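Under the same toy-code assumption, the fuzzy commitment scheme itself is a short step further. This self-contained sketch builds the blob (hash of c, helper data h) and decommits with a witness s that is close to r:

```python
# Sketch of the fuzzy commitment blob (hash(c), h) and its
# verification; the 3x repetition code is again a toy assumption.
import hashlib
import secrets

def rep(bits):                      # encode: add redundancy
    return [b for b in bits for _ in range(3)]

def maj(cw):                        # decode: majority vote per group
    return [int(sum(cw[i:i + 3]) >= 2) for i in range(0, len(cw), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

c = rep([secrets.randbits(1) for _ in range(8)])   # committed codeword
r = [secrets.randbits(1) for _ in range(len(c))]   # witness
blob = (hashlib.sha256(bytes(c)).hexdigest(), xor(c, r))

# Decommitment with a witness s that differs from r in one bit.
s = list(r)
s[3] ^= 1
c2 = rep(maj(xor(blob[1], s)))      # receiver recomputes the codeword
assert hashlib.sha256(bytes(c2)).hexdigest() == blob[0]   # accepted
```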

In their paper [6], Juels and Wattenberg made the connection with biometric authentication, but not with physical uncloneable functions.

References

[1] Rene Struik. Secure Key Storage and True Random Number Generation – An Overview. NIST Cryptographic Key Management Workshop, September 2012.
 
[2] Francisco Corella and Karen Lewison. Key Management Challenges of Derived Credentials and Techniques for Addressing Them. NIST Cryptographic Key Management Workshop, September 2012.
 
[3] Andrey Belenko. Overcoming iOS Data Protection to Re-enable iPhone Forensics. Black Hat USA, July-August 2011.
 
[4] Jean-Baptiste Bedrune and Jean Sigwald. iPhone data protection in depth. HITB Security Conference, May 2011.
 
[5] Feng Hao, Ross Anderson and John Daugman. Combining Cryptography with Biometrics Effectively. IEEE Transactions on Computers, volume 55, number 9, pages 1081–1088, 2006. Available as a Cambridge Computer Laboratory technical report.
 
[6] Ari Juels and Martin Wattenberg. A Fuzzy Commitment Scheme. ACM Conference on Computer and Communications Security, 1999.
 

Report on the NIST Cryptographic Key Management Workshop

This is a belated report on the Cryptographic Key Management Workshop that was held by NIST on September 10-11. Karen Lewison and I went to Washington DC for the workshop, where we presented a talk on techniques for addressing the key management challenges of derived credentials.

Cryptographic key management may seem to be a dry topic, but the workshop was quite interesting, especially the second day, which looked at the future. It was attended by about 50 cryptographers, and was webcast. It began with a fascinating keynote address by Whitfield Diffie on the history of cryptographic key management. His presentation is online, but slides cannot do justice to the wealth of stories and anecdotes that he narrated.

A Framework for Designing Cryptographic Key Management Systems

The main purpose of the workshop was to discuss the current drafts of NIST Special Publication 800-130 and NIST Special Publication 800-152, and to solicit comments on them. (Instructions for sending comments on draft NIST publications can be found at http://csrc.nist.gov/publications/PubsDrafts.html.) SP 800-130 is a comprehensive framework of topics that should be considered by anybody who has to specify a Cryptographic Key Management System (CKMS); since key management is an essential aspect of cryptography, the framework should be invaluable to anybody designing a system that incorporates cryptographic functionality. SP 800-152 profiles the framework for cryptographic key management systems that will be used in US Federal agencies, but goes beyond the systems themselves to cover their procurement, installation, management, and operation.

The two publications were discussed during the first day of the workshop. I cannot possibly go over the very detailed discussions that took place, so I will limit myself to repeating one comment I made regarding Section 4.7 of SP 800-130, “Anonymity, Unlinkability and Unobservability”, and expanding upon it.

Anonymity, unlinkability and unobservability are privacy features that may not be directly relevant to the authentication of Federal employees in the course of their work, but they are very relevant to the authentication of both consumers on the Web at large and citizens who access Federal information systems. Traditional authentication by username and password provides these three privacy features; but passwords have well-known security and usability drawbacks, one of them being the difficulty of remembering many different passwords. One way of reducing the number of passwords to be remembered is to rely on a third-party identity provider (IdP), so that one password (presented to the IdP) can be used to authenticate to any number of relying parties. The Federal Government allows citizens to access government web sites through redirection to several Approved Identity Providers.

But third party login has privacy drawbacks. In usual implementations, anonymity is lost because the relying party learns the user’s identity at the IdP, unlinkability is lost by the use of that identity at multiple relying parties, and unobservability is lost because the IdP is informed of the user’s logins. Profiles of third-party login protocols approved for citizen login to government sites mitigate some of these drawbacks by asking the identity provider to provide different identities for the same user to different relying parties. This mitigates the loss of anonymity, and the loss of unlinkability to a certain extent. (Relying parties by themselves cannot track the user, but they can track the user in collusion with the IdP.) But the loss of unobservability is not mitigated, because the IdP is still informed of the user’s activities.

I believe that the Government should work to develop and promote authentication methods that eliminate passwords while preserving anonymity, unlinkability and unobservability. Cryptographic authentication with a key pair, using different key pairs for different relying parties, can be a basis for such methods.

A Look at the Future

The second day of the workshop featured presentations on capabilities of future cryptographic key management systems, ranging from innovative to futuristic. (Both days’ presentations can be found on the workshop web page.)

Tim Polk, manager of the Cryptographic Technology Group at NIST, motivated the talks that followed by going over challenges identified during the development of the CKMS framework, related to interoperability across security domains, algorithmic agility, constrained devices, privacy, and scalability. He also stressed the need to develop CKMSs that are resilient to quantum computing attacks before it is too late.

Dennis Branstad of NIST discussed security policies, stating as a goal their automated specification, negotiation and enforcement.

Anna Lysyanskaya of Brown University discussed her work on anonymous credentials. She mentioned a new technique for revocation of anonymous credentials that was presented at Crypto 2012 by Libert, Peters and Yung, and said she thought it deserved the best paper award. I believe a full version of the conference paper can be found at http://eprint.iacr.org/2012/442. I haven’t read the paper yet. Revocation of privacy-enhancing credentials is practically difficult; I have discussed the topic in several earlier posts.

Paul Lambert of Marvell Semiconductors discussed authentication and privilege management for devices connected by wireless area networks. I was glad to hear him propose the use of a raw key pair as a credential. I later proposed the same thing in the talk on derived credentials.

Lily Chen of NIST discussed the difficult key management problem of handing over a secure link as a smart phone travels from one network to another, when the networks use technologies that may be as different as UMTS and WiFi.

Sarbari Gupta of Electrosoft discussed key management in a cloud environment. She argued that the Federal Risk and Authorization Management Program (FedRAMP) does not have sufficient requirements for secure key management, and advocated the establishment of a Federal Profile for Cloud Key Management.

Elaine Barker of NIST went over the intricacies and subtleties of random bit generation, and solicited comments on Draft Special Publication 800-90B (entropy sources) and Draft Special Publication 800-90C (RBG Constructions, DRBGs and NRBGs). Comments are due December 3rd.

Rene Struik discussed a method of secure key storage and true random number generation using physical unclonable functions (PUFs). The idea is to use accidental properties of a device to generate a unique key when the device is turned on. (So I would say that his technique is closer to key generation than key storage.) Error correction is used to remove minor differences in subsequent key generations. As an additional benefit, those differences are used for random number generation. This very interesting work is related in multiple ways to our own work on mobile authentication and derived credentials; I plan to discuss it in more detail in the next blog post.

Mary Theofanos of NIST went over two case studies of usability of key management procedures: a PKI deployment, and a PIV pilot. My personal takeaways: the designer of a key management system must know the users and their mental models of security; must provide multiple authentication methods, e.g. by retaining username-password as a backup for a cryptographic credential; and must not require frequent PIN changes.

The usability talk was followed by a panel that presented three use cases of cross-domain interactions. Bob Griffin of RSA discussed key management in the cloud. Saikat Saha of SafeNet discussed virtualized hardware security modules. John Leiseboer of QuintessenceLabs discussed quantum key distribution; this was the first presentation I’ve attended related to quantum cryptography, and it motivated me to find out more about this futuristic topic.

Derived Credentials

Finally, I gave a presentation on mobile authentication and derived credentials, co-authored with Karen Lewison. Even though this was the last presentation at the end of a long day of talks, I was gratified that, as far as I know, nobody snuck out early to the Dogfish Head brewery across the street from the NIST campus 🙂 . Derived credentials is a NIST concept referring to credentials that, in the future, will be installed in a mobile device after the user of the device authenticates with a PIV card. Our presentation went over three techniques for implementing derived credentials that we proposed earlier in a blog post and a white paper, viz. public key cryptography without certificates, key pair regeneration as an alternative to tamper resistance, and encapsulation of cryptographic and biometric processing in a “prover black box” and a “verifier black box” to insulate app developers from the complexities of cryptography and biometrics.

But we also went beyond derived credentials, in response to a request made by Elaine Barker on behalf of Dennis Branstad before the workshop. We discussed extensions of our techniques, for authentication across security domains, for social login without passwords, and for data protection at rest without tamper resistance. Since then we have put online a white paper on the data protection work. We have not yet written white papers on authentication across security domains or social login without passwords.

Wrap-up

Tim Polk wrapped up the workshop by encouraging everybody to send comments. Although there is an official comment period for each draft publication, NIST welcomes comments at any time.

Like the workshop on privacy-enhancing technology I attended last year, this workshop was both enjoyable and very useful. I’m glad to be on the email distribution list, and I’m looking forward to the next cryptography workshop at NIST.

Effective Data Protection for Mobile Devices

With iOS 4 Apple introduced a new method for protecting the data stored in an iPhone, an iPad or an iPod when the device is lost or stolen and the user has locked the device with a passcode (usually a 4-digit PIN) [1]. Different categories of data are encrypted under different keys, which are arranged in a key hierarchy. Some of the keys in the hierarchy are derived from the user’s passcode. However, every key that is derived from the passcode is also derived from a hardware key, which is hardwired into the silicon of a “hardware encryption chip” and cannot be extracted from the device by a casual attacker. This means that an attacker who steals a mobile device, opens it, and extracts data encrypted under the passcode cannot mount an offline attack against the passcode. That is, the attacker cannot try passcodes at high speed to find the one that decrypts the data, because trying passcodes requires the hardware key.

But the protection provided by the hardware encryption key, at least in iOS 4 and 5, has been defeated [2] [3] [4]. An attacker who steals an iPhone does not need to open it. The attacker can exploit an OS vulnerability or a firmware vulnerability to run custom code on the device. The custom code can read the encrypted data and, although it cannot extract the hardware key, it can use it to mount an “offline” attack on the device itself. The practical effect of the hardware key is only to force the attacker to run the attack on the CPU of the device, which is relatively slow. But it only takes 40 minutes to try all 4-digit PINs [2].
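A quick sanity check on that figure (the per-guess rate below is inferred from it, not separately reported):

```python
# The 40-minute figure implies roughly 4 on-device guesses per second.
pin_space = 10 ** 4           # all 4-digit PINs
seconds = 40 * 60             # 40 minutes
print(pin_space / seconds)    # ~4.17 guesses per second
print(seconds / pin_space)    # ~0.24 s per hardware-bound key derivation
```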

Would the hardware encryption key provide effective protection if Apple succeeded in eliminating all vulnerabilities that make it possible to run custom code? (It does not seem to have eliminated them in iOS 6, used in the recently released iPhone 5 [5].) An attack would be more difficult, but probably still possible. The attacker would have to open the phone to read the contents of the flash memory. The attacker might be able to not just read but also modify the contents of the flash memory, and thus run custom code, which could then use the hardware key to mount an offline attack against the passcode. The attacker might even be able to probe the silicon of the hardware encryption chip and extract the hardware key. Nobody has claimed to have done any of these fancy attacks, but that’s because exploiting vulnerabilities is much easier.

In a recent white paper we have proposed a data protection method for mobile devices that does not require a hardware key, or any tamper resistance, and is effective even if the attacker is able to run custom code on the device. The data is encrypted under a (symmetric) data encryption key that is entrusted to an online server provided by the mobile network operator, or by the mobile device manufacturer, or by the provider of the mobile operating system, or by an independent data protection service provider trusted by the user. (The key can also be split using Shamir’s secret sharing technique into n “portions” entrusted to n different servers, so that k portions are needed to reconstruct the key. For example, with n = 5 and k = 3, portions of the key can be distributed to 5 different servers, and any 3 of those portions can be used to reconstruct the key.)
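A minimal sketch of the splitting step, with the n = 5, k = 3 parameters of the example (the prime field and toy secret size are assumptions; a deployment would split the actual data encryption key with a vetted implementation):

```python
# Shamir (k, n) secret sharing over a prime field: split a secret
# into n shares so that any k of them reconstruct it.
import secrets

P = 2 ** 127 - 1         # a prime larger than any secret we share

def split(secret, n=5, k=3):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]   # share i is (i, f(i))

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)                 # the data encryption key
shares = split(key)                        # one portion per server
assert reconstruct(shares[:3]) == key      # any 3 portions suffice
assert reconstruct(shares[2:]) == key      # ...any 3 of the 5
```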

The key, or the portions of a split key, are retrieved when the user authenticates to unlock the phone, with a PIN, an iris image taken by a camera carried by the device, or both. User authentication regenerates an RSA key pair which the device uses to authenticate to the server(s). When a PIN is used, it is not vulnerable to an offline attack by an attacker who tampers with the phone; and when an iris image is used, no biometric sample or template is stored anywhere.

References

[1] Apple. iOS: Understanding data protection. October 28, 2011.
 
[2] Vladimir Katalov. ElcomSoft Breaks iPhone Encryption, Offers Forensic Access to FileSystem Dumps. March 23, 2011.
 
[3] Andrey Belenko. Overcoming iOS Data Protection to Re-enable iPhone Forensics. Black Hat USA, July-August 2011.
 
[4] Jean-Baptiste Bedrune and Jean Sigwald. iPhone data protection in depth. HITB Security Conference, May 2011.
 
[5] Emil Protalinski, The Next Web. The Apple iPhone 5 has reportedly been jailbroken. September 22, 2012.
 

Convenient One-, Two- and Three-factor Authentication for Mobile Devices

Authentication methods used today on mobile devices are both inconvenient and insecure.

Ordinary passwords are difficult to type on small touch-screen displays that require switching keyboards for entering digits or punctuation. They provide even less security on mobile devices than on desktops or laptops. Due to the difficulty of typing on mobile keyboards, each character is prominently displayed after it is typed, circumventing the security provided by password input boxes that display dots in lieu of characters. And users are motivated to choose shorter and simpler passwords, which have less entropy.

One-time passwords are often used on mobile devices due to the lack of security of ordinary passwords. Authenticating with a one-time password requires entering a PIN, obtaining the one-time password from a hard token, a soft token, a text message, or an email message, and entering the one-time password. This is a very cumbersome procedure. One-time passwords are a two-factor authentication method, and are thus more secure than ordinary passwords. But they have limited entropy, and they can be replayed within a time-window of several minutes. An attacker who observes or intercepts a one-time password has several minutes during which he or she can use it to log in as the legitimate user.

Social login avoids some of the inconvenience of ordinary and one-time passwords by outsourcing authentication to a social network. If the user is already logged in to the social network, he or she does not have to enter a password again. Current standards for social login are a mess, as I said in the previous post, and as confirmed by the recent resignation of the editor of the OAuth protocol. In the previous post I linked to a white paper where we propose a better social login protocol, SAAAM, well suited for mobile devices.

But while social login is useful in some cases, it is not always appropriate. There is no reason why applications should always rely on social networks to authenticate their users, or why a user should have to surrender his or her privacy to a social network in order to authenticate to an unrelated application. Also, social login does not completely solve the authentication problem, since the user still has to authenticate to the social network.

So there is a need for good authentication methods on mobile devices that do not rely on a third party. We have just written a white paper proposing one-, two- and three-factor authentication methods for mobile devices that provide strong security and are more convenient to use than ordinary or one-time passwords. They are particularly well suited for enterprise use, but are suitable for consumer use as well.

The proposed authentication methods are based on public key cryptography, but they are easy to implement and deploy. They are easy to implement because all cryptography is encapsulated in black boxes, so that developers do not have to program any cryptographic operations. They are easy to deploy because they avoid the use of certificates and do not require a public-key infrastructure.

In our one-factor authentication method the user does not have to provide any input. The device authenticates by demonstrating knowledge of a private key. A hash of the associated public key is stored in a device record, which is linked to a user record in an enterprise directory or user database.

In our two-factor authentication method, the user provides a PIN, which is used to regenerate the key pair. Because any PIN results in a well-formed key pair, the user’s PIN is not exposed to an exhaustive offline guessing attack by an attacker who steals the mobile device, opens it, and reads its persistent memory.
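The white paper has the actual construction; the following is only a schematic sketch, with toy parameters, of the key property claimed here: the PIN seeds a deterministic search for RSA primes, so every PIN regenerates some well-formed key pair, and a captured device gives the attacker no way to test PIN guesses offline.

```python
# Hedged sketch of PIN-based key pair regeneration (toy 512-bit
# modulus; not the authors' actual construction).
import hashlib
import math

def det_int(seed, counter, nbits):
    # Deterministically expand seed into an odd nbits-bit integer.
    out = b""
    block = 0
    while len(out) * 8 < nbits:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    n = int.from_bytes(out, "big") >> (len(out) * 8 - nbits)
    return n | (1 << (nbits - 1)) | 1    # force top bit and oddness

def is_probable_prime(n):
    # Miller-Rabin with fixed small bases; adequate for a sketch.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def regenerate_key_pair(pin, salt, nbits=512, e=65537):
    # Same PIN and salt always yield the same primes, hence the same
    # key pair; a different PIN yields a different well-formed pair.
    seed = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    primes, counter = [], 0
    while len(primes) < 2:
        cand = det_int(seed, counter, nbits // 2)
        counter += 1
        if (is_probable_prime(cand) and math.gcd(e, cand - 1) == 1
                and cand not in primes):
            primes.append(cand)
    p, q = primes
    d = pow(e, -1, (p - 1) * (q - 1))
    return p * q, e, d

n, e, d = regenerate_key_pair("1234", b"per-device salt")
m = 1234567890
assert pow(pow(m, e, n), d, n) == m                  # valid RSA pair
n2, _, _ = regenerate_key_pair("1234", b"per-device salt")
assert n2 == n                                       # deterministic
```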

In our three-factor authentication method, the user provides a PIN and a biometric such as an iris scan. No biometric template is stored in the mobile device. Instead, the device contains an auxiliary string that is used in conjunction with the biometric to provide a biometric key. The biometric key is used to regenerate the key pair. The auxiliary string is encrypted by the PIN for additional security.

NSTIC Is Not Low-Hanging Fruit

In a recent tweet, Ian Glazer quoted Patrick Gallagher, director of NIST, saying at a recent White House meeting on NSTIC that the “current suite of technologies we rely on are insufficient”.

The identity technologies used today both in federal agencies and on the Web at large are indeed insufficient:

  • SSL client certificates have failed to displace passwords for Web authentication since they were introduced 17 years ago.
  • Credentials in PIV cards have failed to displace passwords in federal agencies eight years after HSPD-12; a GAO report does a good job of documenting the many obstacles faced by agencies in implementing the directive, ranging from the fact that some categories of agency employees do not have PIV cards, to the desire by employees to use Apple Mac computers and mobile devices that lack card readers. I’m glad that we don’t live in the Soviet Union and heads of agencies are not sent to the Gulag when they ignore unreasonable orders.
  • Third-party login solutions such as OpenID, as currently used on the Web, not only do not eliminate passwords, they make the password security problem worse, by facilitating phishing attacks. They also impinge on the user’s privacy, because the identity provider is told what relying parties the user logs in to.
  • Social login solutions based on OAuth, e.g. “login with Facebook”, worsen the privacy drawback of third party login by limiting the user’s choice of identity providers to those that the relying party has registered with, and by broadcasting the user’s activities to the user’s social graph. Eric Sachs of Google said at the last Internet Identity Workshop that users participating in usability testing were afraid of logging in via Facebook or Google+ because “their friends would be spammed”.

But some proponents of NSTIC do not seem to realize that. In a recent interview, Howard Schmidt went so far as to say that NSTIC is “low-hanging fruit”, because “the technology is there”. What technology would that be? In a blog post that he wrote last year shortly after the launch of NSTIC, it was clear that the technology he was considering for NSTIC was privacy-enhancing cryptography, used by Microsoft in U-Prove and by IBM in Idemix. He used the words “privacy-enhancing” in the interview, so he may have been referring to that technology there as well.

(Credentials based on privacy-enhancing cryptography provide selective disclosure and unlinkability. Selective disclosure refers to the ability to combine multiple attributes in a credential but disclose only some of them when presenting the credential. Unlinkability, in the case of U-Prove, refers to the impossibility of linking the use of a credential to its issuance; Idemix also makes it impossible to link multiple uses of the same credential.)

But Idemix has never been deployed commercially, and an attempt at deploying U-Prove within the Information Cards framework failed when Microsoft discontinued CardSpace, two months before the launch of NSTIC.

Credentials based on privacy-enhancing cryptography, sometimes called anonymous credentials, have inherent drawbacks. One of them is that unlinkability makes revocation of such credentials harder than revocation of public key certificates, as I pointed out in a blog post on U-Prove and another blog post on Idemix. The difficulty of revoking credentials based on privacy-enhancing cryptography has led ABC4Trust, which can be viewed as the European counterpart of NSTIC, to propose arresting users for the purpose of revoking their credentials! See page 23, end of last paragraph, of the ABC4Trust document Architecture for Attribute-based Credential Technologies.

Another inherent drawback is that it is difficult to keep the owner of an anonymous credential from making it available for use online by others who are not entitled to it. For example, it would be difficult to prevent the owner of a proof-of-drinking-age anonymous credential (a use case often cited by proponents of anonymous credentials) from letting minors use it for a fee.

The mistaken belief that “the technology is there” explains why the NSTIC NPO has made little effort to improve on existing technology. Instead of requesting funding for research, it requested funding for pilots; a pilot is usually intended to demonstrate the usability of a newly developed technology; it assumes that the technology already exists. After the launch of NSTIC, the NPO announced three workshops, on governance, privacy and technology. The first two were held, but the workshop on technology, which was supposed to take place in September of last year, was postponed by six months and merged with the yearly NIST IDtrust workshop, which took place in March of this year. The IDtrust workshop usually includes a call for papers. But this year there was none: new ideas were not solicited.

The NSTIC NPO has been trying to “bring relying parties to the table”. Ian Glazer dubbed the recent White House meeting the NSTIC Relying Party Event. The meeting was about getting a bigger table according to the NPO blog post on the event, and about “getting people to volunteer” according to Senator Mikulski as quoted by the blog post. Earlier, Jim Sheire of the NPO convened a session entitled “NSTIC: How do we bring relying parties to the table?” at the last Internet Identity Workshop.

One idea mentioned in the report on the IIW session for bringing relying parties to the table is to target 100 “top relying parties” in the hope of creating a snowball effect. But it’s not clear what it would mean for those 100 relying parties, and any additional ones caught in the snowball, to “come to the table”. What would they do at the table? What technology would they use? OpenID? OAuth? Smart cards? Information cards? Anonymous credentials? NSTIC has not proposed any specific technology. Or would they come to the table just to talk?

There are many millions of Web sites that use passwords for user authentication. The goal should be to get all those sites to adopt an identity solution that eliminates the security risk of passwords. Web site developers will do that of their own initiative once a solution is available that is more secure and as easy to deploy as password authentication.

While the technology is not there, various technology ingredients are there, and missing ingredients could be developed. It is not difficult to conceive a roadmap that could lead to one or more good identity solutions. But success would require a concerted effort by many different parties: not only relying parties and identity and attribute providers, but also standards bodies, browser vendors, vendors of desktop and mobile operating systems, vendors of smart cards and other hardware tokens, perhaps biometric vendors, and the providers of the middleware, software libraries, and software development tools used on the Web. When I first heard of NSTIC I hoped that it would provide the impetus and the forum needed for such a concerted effort. But that has yet to happen.

Credential Sharing: A Pitfall of Anonymous Credentials

There is an inherent problem with anonymous credentials such as those provided by Idemix or U-Prove: if it is not possible to tell who is presenting a credential, the legitimate owner of a credential may be willing to lend it to somebody else who is not entitled to it. For example, somebody could sell a proof-of-drinking-age credential to a minor, as noted by Jaap-Henk Hoepman in a recent blog post [1].

This problem is known in cryptography as the credential sharing or credential transferability problem, and various countermeasures have been proposed. In this post I will briefly discuss some of these countermeasures, and then describe a new method of sharing credentials that is resistant to most of them.

A traditional countermeasure proposed by cryptographers, mentioned for example in [2], is to deter the sharing of an anonymous credential by linking it to one or more additional credentials that the user would not want to share, such as a credential that gives access to a bank account, in such a way that the sharing of the anonymous credential would imply the sharing of the additional credential(s). I shall refer to this countermeasure as the “credential linking countermeasure”. I find this countermeasure unrealistic, because few people would escrow their bank account for the privilege of using an anonymous credential.

In her presentation [3] at the recent NIST Meeting on Privacy-Enhancing Cryptography [4], Anna Lysyanskaya said that it is a misconception to think that “if all transactions are private, you can’t detect and prevent identity fraud”. But the countermeasure that she proposes for preventing identity fraud is to limit how many times a credential is used and to disclose the user’s identity if the limit is exceeded. However, this can only be done in cases where a credential allows the legitimate user to access a resource a limited number of times, and I can think of few such cases in the realm of Web authentication. Lysyanskaya gives as an example a subscription to an online newspaper, but such subscriptions typically provide unlimited access for a monthly fee. I shall refer to this countermeasure as the “limited use countermeasure”.

Lysyanskaya’s presentation also mentions identity escrow as useful for conducting an investigation if “something goes very, very wrong”.

At the panel on Privacy in the Identification Domain at the same meeting, Lysyanskaya also proposed binding an anonymous credential to a biometric. The relying party would check the biometric and then forget it to keep the presentation anonymous. But if the relying party can be trusted to forget the biometric, it may as well be trusted to forget the entire credential presentation, in which case an anonymous credential is not necessary.

An interesting approach to binding a biometric to a credential while keeping the user anonymous can be found in [5]. The biometric is checked by a tamper-proof smartcard trusted by the relying party, but a so-called warden trusted by the user is placed between the smartcard and the relying party, and mediates the presentation protocol to ensure that no information that could be used to identify or track the user is communicated by the smart card to the relying party.

However, if what we are looking for is an authentication solution that will replace passwords on the Web at large, biometric-based countermeasures are not good candidates because of their cost.

Update. In a response to this post on the Identity Commons mailing list Terry Boult has pointed out that cameras and microphones are pretty ubiquitous and said that, in volume, fingerprint sensors are cheaper than smartcard readers.

In his blog post [1], Hoepman suggested that, to prevent the sharing of an anonymous credential, the credential could be stored in the owner’s identity card, presumably referring to the national identity card that citizens carry in the Netherlands and other European countries. This is a good idea because lending the card would put the owner at risk of impersonation by the borrower. I shall refer to this as the “identity card countermeasure”.

Rather than storing a proof of age credential as an additional credential in a national identity card, anonymous proof of age could be accomplished by proving in zero knowledge that a birthdate attribute of a national identity credential (or, in the United States, of a driver’s license credential) lies in an open interval ending 21 years before the present time; Idemix implements such proofs. The identity credential could be stored in a smartcard or perhaps in a tamper-proof module within a smart phone or a personal computer. I’ll refer to this countermeasure as the “selective disclosure countermeasure”. As in the simpler identity card countermeasure, the legitimate user of the credential would be deterred from sharing the credential with another person because of the risk of impersonation.

But this countermeasure, like most of the above ones, does not help with the following method of sharing credentials.

A Countermeasure-Resistant Method of Sharing Credentials

An owner of a credential can make the credential available for use by another person without giving a copy of the credential to that other person. Instead, the owner can allow that other person to act as a proxy, or man-in-the-middle, between the owner and a relying party in a credential presentation. (Note that this is not a man-in-the-middle attack because the man in the middle cooperates with the owner.)

For example, somebody of drinking age could install his or her national identity credential or driver’s license credential on a Web server, either by copying the credential to the server or, if the credential is contained in a tamper-proof device, by connecting the device to the server. The credential owner could then allow minors to buy liquor by proxying a proof of drinking age based on the birthdate attribute in the credential. (Minors would need a special user agent to do the proxying, but the owner could make such a user agent available for download from the same server where the credential is installed.) The owner could find a surreptitious way of charging a fee for the service.

This method of sharing a credential, which could be called proxy-based sharing, defeats most of the countermeasures mentioned above. Biometric-based countermeasures don’t work because the owner of the credential can input the biometric. Credential linking countermeasures don’t work because the secret of the credential is not shared. The identity card countermeasure and the selective disclosure countermeasure don’t work because the owner is in control of what proofs are proxied and can refuse to proxy proofs that could allow impersonation. The limited use countermeasure could work but, as I said above, I can think of few Web authentication cases where it would be applicable.

Are there any other countermeasures that would prevent or inhibit this kind of sharing? If a minor were trying to buy liquor using an identity credential and a payment credential, the merchant could require the minor to prove in zero knowledge that the secret keys underlying both credentials are the same. That would defeat the sharing scheme by making the owner of the identity credential pay for the purchase. However there are proof-of-age cases that do not require a purchase. For example, an adult site may be required to ask for proof of age without or before asking for payment.

The only generally applicable countermeasure that I can think of to defeat proxy-based sharing is the identity escrow scheme that Lysyanskaya referred to in her talk [3]. Using provable encryption, as available in Idemix, a liquor merchant could ask the user agent to provide the identity of the owner of the credential as an encrypted attribute that could be decrypted, say, by a judge. (The encrypted attribute would be randomized for unlinkability.) The user agent would include the encrypted attribute in the presentation proof after asking the user for permission to do so.

Unfortunately this requires the user to trust the government. This may not be a problem for most people in many countries. But it undermines one of the motivations for using privacy-enhancing technologies that I discussed in a previous blog post [6].

References

[1] Jaap-Henk Hoepman. On using identity cards to store anonymous credentials. November 16, 2011. Blog post, at http://blog.xot.nl/2011/11/16/on-using-identity-cards-to-store-anonymous-credentials/.
 
[2] Jan Camenisch and Anna Lysyanskaya. An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. In Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques: Advances in Cryptology (EUROCRYPT 01). 2001. Research report available from http://www.zurich.ibm.com/security/privacy/.
 
[3] Anna Lysyanskaya. Conditional And Revocable Anonymity. Presentation at the NIST Meeting on Privacy-Enhancing Cryptography. December 8-9, 2011. Slides available at http://csrc.nist.gov/groups/ST/PEC2011/presentations2011/lysyanskaya.pdf.
 
[4] NIST Meeting on Privacy-Enhancing Cryptography. December 8-9, 2011.
 
[5] Russell Impagliazzo and Sara Miner More. Anonymous Credentials with Biometrically-Enforced Non-Transferability. In Proceedings of the 2003 ACM workshop on Privacy in the electronic society (WPES 03).
 
[6] Francisco Corella. Are Privacy-Enhancing Technologies Really Needed for NSTIC? October 13, 2011. Blog post, at http://pomcor.com/2011/10/13/are-privacy-enhancing-technologies-really-needed-for-nstic/.
 

Trip Report: Meeting on Privacy-Enhancing Cryptography at NIST

Last week I participated in the Meeting on Privacy-Enhancing Cryptography at NIST. The meeting was organized by Rene Peralta, who brought together a diverse international group of cryptographers and privacy stakeholders. The agenda is online with links to the workshop presentations.

The presentations covered many applications of privacy-enhancing cryptography, including auctions with encrypted bids, database search and data stream filtering with hidden queries, smart metering, encryption-based access control to medical records, format-preserving encryption of credit card data, and of course authentication. There was a talk on U-Prove by Christian Paquin, and a talk on Idemix by Gregory Neven. There were also talks on several techniques besides anonymous credentials that could be used to implement privacy-friendly authentication: group signatures, direct anonymous attestation, and EPID (Enhanced Privacy ID). Kazue Sako’s talk described several possible applications of group signatures, including a method of paying anonymously with a credit card.

A striking demonstration of the practical benefits of privacy-enhancing cryptography was the presentation on the Danish auctions of sugar beets contracts by Thomas Toft. A contract gives a farmer the right to grow a certain quantity of beets for delivery to Danisco, the only Danish sugar producer. A yearly auction allows farmers to sell and buy contracts. Each farmer submits a binding bid, consisting of a supply curve or a demand curve. The curves are aggregated into a market supply curve and a market demand curve, whose intersection determines the market clearing price at which transactions take place. What’s remarkable is that farmers submit encrypted bids, and bids are never decrypted. The market clearing price is obtained by computations on encrypted data, using secure multiparty computation techniques. Auctions have been successfully held every year since 2008.
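As an illustration of the auction logic (with made-up numbers, and computed in the clear; in the actual auction these aggregations and comparisons are performed on encrypted bids via secure multiparty computation):

```python
# Each farmer bids a schedule over a fixed price grid; aggregation is
# pointwise summation, and the market clears at the first price where
# aggregate supply meets aggregate demand. Numbers are illustrative.
prices = [10, 20, 30, 40, 50]                  # price grid

supply_bids = [[0, 5, 10, 15, 20],             # farmer A would sell...
               [0, 0, 8, 12, 16]]              # farmer B would sell...
demand_bids = [[18, 14, 9, 4, 0],              # farmer C would buy...
               [12, 10, 6, 2, 0]]              # farmer D would buy...

agg_supply = [sum(q) for q in zip(*supply_bids)]   # [0, 5, 18, 27, 36]
agg_demand = [sum(q) for q in zip(*demand_bids)]   # [30, 24, 15, 6, 0]

clearing = next(p for p, s, d in zip(prices, agg_supply, agg_demand)
                if s >= d)
print(clearing)    # 30: supply (18) first meets demand (15) here
```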

I was asked to participate in the panel on Privacy in the Identification Domain and to start the discussion by presenting a few slides summarizing my series of blog posts on privacy-enhancing technologies and NSTIC. In response to my slides, Gregory Neven of IBM reported that a credential presentation takes less than one second on his laptop, and Brian LaMacchia of Microsoft pointed out that deployment is difficult for public key certificates as well as for privacy-friendly credentials. There were discussions with Gregory Neven on revocation and with Anna Lysyanskaya on how to avoid the sharing of anonymous credentials; these are big topics that deserve their own blog posts, which I plan to write soon, so I won’t say any more here. Jeremy Grant brought the audience up to date about NSTIC, which has received funding and is getting ready to launch pilots. Then there was a wide-ranging discussion.

Deployment and Usability of Cryptographic Credentials

This is the fourth and last of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

Experience has shown that it is difficult to deploy cryptographic credentials on the Web and have them adopted by users, relying parties and credential issuers. This is true for privacy-friendly credentials as well as for ordinary public-key certificates, both of which have a place in the NSTIC Identity Ecosystem, as I argued in the previous post.

I believe that this difficulty can be overcome by putting the browser in charge of managing and presenting credentials, by supporting cryptographic credentials in the core Web protocols, viz. HTTP and TLS, and by providing a simple and automated process for issuing credentials.

Browsers should manage and present credentials

For credentials to be widely adopted, users must not be required to install additional software, let alone proprietary software that only runs on one operating system, such as Windows CardSpace. Therefore credentials must be managed and presented by the browser.

The browser should allow users to set up multiple personas or profiles and associate particular credentials with particular personas. Many users, for example, have a personal email address and a business email address, which could be associated with a personal profile and a business profile respectively. The user could declare one profile to be the “currently active persona” in a particular browser window or tab, and thus facilitate the selection of appropriate credentials when visiting sites in that window or tab.

People who use multiple browsers on multiple computing devices (including desktop or laptop computers, smart phones and tablets) must have access to the same credentials on all those devices. By equipping each browser with a key pair for encryption and a key pair for signature, credentials can be synced between browsers through a Web service without having to trust the service (in the same way that email can be sent with end-to-end confidentiality and origin authentication using S/MIME or PGP). Credentials can similarly be backed up to an untrusted Web service.
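A minimal sketch of that sync flow, using the pyca/cryptography package (an assumed implementation choice; RSA-OAEP also caps the payload size, so a real design would encrypt the credential under a symmetric content key and wrap that key):

```python
# Sign-then-encrypt a credential blob so that an untrusted sync
# service stores only ciphertext: signed by the source browser's
# signature key, encrypted to the destination browser's public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

dst_enc_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
src_sig_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

credential = b"opaque credential blob"

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Source browser: sign, then encrypt to the destination browser.
signature = src_sig_priv.sign(credential, pss, hashes.SHA256())
ciphertext = dst_enc_priv.public_key().encrypt(credential, oaep)
# The sync service stores (ciphertext, signature) without seeing
# the credential itself.

# Destination browser: decrypt, then verify origin.
plaintext = dst_enc_priv.decrypt(ciphertext, oaep)
src_sig_priv.public_key().verify(signature, plaintext, pss, hashes.SHA256())
assert plaintext == credential
```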

Cryptographic credentials should be supported by HTTP and TLS

HTTP should provide a way for the relying party to ask for particular credentials or attributes, and TLS should provide a way for the browser to present one or multiple credentials. Within TLS, the mechanism for presenting credentials should be separate from, and subsequent to, the handshake, to benefit from the confidentiality and integrity offered by the TLS connection after it has been secured.

Credentials should be issued automatically to the browser, through TLS

Privacy-friendly credentials have cryptographically complex interactive issuance protocols. Paradoxically, this suggests a way of simplifying the issuance process, for both PKI certificates and privacy-friendly credentials.

Since the process is interactive, it should be run directly on a transport layer connection, to avoid HTTP and application overhead. That connection should be secure to protect the confidentiality of the attributes being certified. To reduce the latency due to the cryptographic computations, the protocol interactions should be interleaved with the transmission of other data. And the cryptographic similarity of issuance and presentation protocols suggests that they should be run over the same kind of connection.

All this leads to the idea of running issuance protocols, like presentation protocols, directly over a TLS connection. TLS has a record layer specification that could be extended to define two new kinds of records, one for issuance protocol messages, the other for presentation protocol messages. TLS would then automatically interleave protocol interactions with transmission of other data. (Another benefit of TLS is that its PRF facility could be readily used to generate the common reference string used by some cryptographic protocols.)

Since TLS is universally supported by server middleware, implementing issuance protocols directly over TLS would allow servers to issue credentials automatically without installing additional software. In particular, it would make it easy for any Web site to issue a PKI certificate as a result of the user registration process, for use in subsequent logins.

User Experiences

Once credentials are handled by browsers and directly supported by the core protocols of the Web, smooth and painless user experiences become possible.

For example, a user can open a bank account online as follows. The user accepts terms and conditions and clicks on an account creation button. The bank asks the browser for a social security credential and a driver’s license credential. The browser presents the credentials to the bank after asking the user for permission. The bank checks the user’s credit ratings and automatically creates an account and issues a PKI certificate binding the account number to the public key component of a new key pair generated by the browser on the fly. On a return visit, the user clicks on a login button and the bank asks the browser for the certificate. The user may allow the browser to present the certificate without asking for permission each time. Two-factor authentication can be achieved, for example, by keeping the private key and certificate in a password-protected smart card.

As a second example, suppose a user visits a site to sign up for receiving coupons by email. The user accepts terms and conditions and clicks on a sign-up button. The site asks the browser for a verified email-address certificate (issued by an email service provider) and a number of self-asserted attributes, such as zip code, gender, age group, and set of shopping preferences. The browser finds in its certificate store (or in a connected smart card) an email address certificate and a personal-data credential associated with the currently active persona. The personal-data credential is a privacy-friendly credential featuring unlinkability and selective disclosure. The browser presents simultaneously the email certificate and the personal-data credential, disclosing only the personal-data attributes requested by the site. The browser may or may not ask the user for permission to present the credentials, depending on user preferences that may be persona-dependent.

Conclusions

In this series of posts I have argued that new privacy-enhancing technologies should be developed to fill the gaps in currently implemented systems and to take advantage of new techniques developed by cryptographers over the last 10 or 15 years. I have also argued that the NSTIC Identity Ecosystem should accommodate both privacy-friendly credentials and ordinary PKI certificates, because different use cases call for different kinds of credentials. Finally I have sketched above two examples of user experiences that can be provided if credentials are handled by browsers and directly supported by the core protocols of the Web.

Of course this requires major changes to the Web infrastructure, including: extensions to HTTP; a revamp of TLS to allow for the presentation of privacy-friendly credentials, the simultaneous presentation of multiple credentials, and the issuance of credentials; support of the protocol changes in browsers and server middleware; and implementation of browser facilities for managing credentials.

These changes may seem daunting. The private sector by itself could not carry them out, especially given the current reliance of technology companies on business models based on advertising, which benefit from reduced user privacy. But I hope NSTIC will make them possible.

Are Privacy-Enhancing Technologies Really Needed for NSTIC?

This is the third of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the first two posts we’ve looked at two, or rather three, privacy-enhancing authentication technologies: U-Prove, Idemix, and the Idemix Java card. The credentials provided by these technologies have some or all of the privacy features called for by NSTIC, but they have various practical drawbacks, the most serious of which is that they are not revocable by the credential issuer.

Given these drawbacks, it is natural to ask the question: are privacy-friendly credentials really necessary for NSTIC? My answer is: they are not needed in many important use cases, and they are useful but not indispensable in other important cases; but they are essential in cases that are key to the success of NSTIC.

Use Cases Where Privacy-Enhancing Technologies Are Not Needed

The most common use case of Web authentication is the case where a user registers anonymously with a Web site and later logs in as a returning user. Traditionally, the user registers a username and a password with the site and later uses them as credentials to log in. Today, third-party login is becoming popular as a way of mitigating the proliferation or reuse of passwords: the user logs in with username and password to a third-party identity provider, and is then redirected to the Web site, which plays the role of relying party. But there is a way of avoiding passwords altogether: the Web site can issue a cryptographic credential to the user upon registration, which the user can submit back to the Web site upon login. In that case there is no third party involvement and no privacy issues. The cryptographic credential can therefore be an ordinary PKI certificate. No privacy-enhancing technologies are needed.

Update. The PKI certificate binds the newly created user account to the public key component of a key pair that the browser generates on the fly.
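To make the flow concrete, here is a minimal sketch under simplifying assumptions (the site stores the raw public key rather than issuing a full certificate, and the pyca/cryptography package stands in for the browser and server implementations): at registration the site records the account’s public key; at login the browser proves possession of the private key by signing a fresh challenge.

```python
# Password-less returning-user login via proof of key possession.
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Registration: the browser generates a key pair on the fly and the
# site binds the public key to the newly created account.
browser_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
site_record = {"account": "alice", "pubkey": browser_priv.public_key()}

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Login: the site sends a nonce, the browser signs it, and the site
# verifies the signature against the registered public key.
nonce = secrets.token_bytes(32)
proof = browser_priv.sign(nonce, pss, hashes.SHA256())
site_record["pubkey"].verify(proof, nonce, pss, hashes.SHA256())  # accepted
```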

Other cases where privacy-enhancing technologies are not needed are those where a credential demonstrates that the user possesses an attribute whose value uniquely identifies the user, and the relying party needs to know the value of that attribute. (One example of such an attribute is an email address.) Privacy-enhancing technologies are not useful in such cases because a uniquely-identifying attribute communicated to relying parties can be used to track the user no matter what type of credential is used to communicate the attribute.

Use Cases Where Privacy-Enhancing Technologies Are Useful but not Essential

Privacy-enhancing technologies are useful but not essential when the attributes certified by a credential do not uniquely identify the user, and the user has a choice of credential issuers. They are useful in such cases because they prevent the issuer from tracking the user’s activities by sharing data with the relying parties. They are not essential, however, because the user may be able to choose a credential issuer that she trusts. (Most privacy-enhancing technologies also prevent relying parties from collectively tracking the user by sharing their login information, without involvement of the credential issuer, but the risk of this happening may be more remote.)

Examples of non-identifying attributes are demographic attributes (city of residence, gender, age group), shopping interests, hobby interests, etc. Such attributes are usually self-asserted, but they can be supplied by an identity provider chosen by the user, as a matter of convenience, so that the user does not have to reenter them at each relying party and keep them up to date. Examples of sites that may ask for such attributes are dating sites, shopping deal sites, hobbyist sites, etc.

Of course, a credential that contains non-identifying attributes will not by itself allow a user to log in to a site. But it can be used in addition to a PKI certificate issued by the site itself to recognize repeat visitors.

Use Cases Where Privacy-Enhancing Technologies Are Necessary

Privacy-enhancing technologies are necessary when the relying party does not require uniquely identifying information, and there is only one credential issuer. That one credential issuer could be the government. Non-uniquely-identifying information provided by government-issued credentials could include assertions that the user is old enough to buy wine, or is a resident of a particular state, or is licensed to practice some profession in some state, or is a US citizen, or has the right to work in the US.

I find it difficult to come up with examples where people would have a reasonable fear of being tracked through their use of government-issued credentials. But the right to privacy is a human right that is held dear in the United States, and it has been found to be implicitly protected by the US Constitution. Government-issued credentials will only be acceptable if they incorporate all available privacy protections. That makes the use of privacy-enhancing technologies essential to the success of NSTIC.

Wanted: Efficiently-Revocable Privacy-Friendly Credentials

So: privacy-friendly credentials are necessary; but, in my opinion, the drawbacks of existing privacy-enhancing technologies make them impractical. Therefore we need new privacy-enhancing technologies. Those new technologies should have issue-show and multi-show unlinkability; they should provide partial information disclosure, including proofs of inequality relations involving numeric attributes; and they should be efficiently revocable.

Fortunately, that’s not too much to ask. U-Prove and Idemix have been pioneering technologies, but they are now dated. U-Prove is based on research carried out in the mid-nineties, and the core cryptographic scheme later used in Idemix was described in a paper written in 2001. A lot of research has been done in cryptography since then, and several new cryptographic schemes have been proposed that could be used to provide privacy-friendly credentials.

I don’t think a scheme meeting all the requirements, including efficient revocation, has been designed yet. (I would love to be corrected if I’m wrong!) But possible ingredients for such a system have been proposed, including methods for proving non-revocation in time proportional to the square root of the number of revoked credentials [1] or even in practically constant time [2].

Update. Stefan Brands has told me that the cryptosystem described in [1] is considered part of the U-Prove technology, and that the revocation technique of [1] could be integrated into the existing U-Prove implementation to provide issuer-driven revocation. If that were done and the resulting system proved to be suitably efficient, the only ingredient missing from that system would be multi-show unlinkability.

Once a scheme with all the ingredients has been designed and mathematically verified, it still needs to be implemented. Implementations of new cryptographic schemes are few and far between, but that does not mean they are difficult to produce. Recently, for example, three different privacy-friendly credential systems were implemented just for the purpose of comparing their performance [3].
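
To give a flavor of how the accumulator-based mechanisms compared in [3] work, here is a toy whitelist accumulator in Python. The parameters are far too small to be secure, a real system would use a modulus of unknown factorization and map credentials to primes by hashing, and the code only illustrates the verification equation, not a full scheme.

```python
# Toy RSA-accumulator sketch of whitelist-style revocation, in the
# spirit of the accumulator-based mechanisms compared in [3].
from math import prod

N = 3233 * 4817   # toy composite modulus (factorization must be unknown in reality)
g = 3             # accumulator base

valid = [5, 7, 11, 13]   # primes identifying the non-revoked credentials

def accumulate(primes):
    return pow(g, prod(primes), N)

def witness(x, primes):
    # Witness for member x: the accumulator computed over all other members.
    return pow(g, prod(p for p in primes if p != x), N)

def verify(x, wit, acc):
    # wit^x = g^(product of all members) = acc  iff  x is accumulated.
    return pow(wit, x, N) == acc

acc = accumulate(valid)
w = witness(7, valid)
assert verify(7, w, acc)        # credential 7 is currently valid

valid.remove(7)                 # the issuer revokes credential 7
acc = accumulate(valid)         # and publishes the updated accumulator
assert not verify(7, w, acc)    # the stale witness no longer verifies
```

The interesting parts, which this sketch omits, are that in full constructions the remaining members can update their witnesses as the accumulator changes, and that possession of a valid witness can be proved in zero knowledge, so the non-revocation check does not make credential shows linkable.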

Next and Last: Usability and Deployment

To conclude the series, in the next post I’ll try to respond to a comment made by Anthony Nadalin on the Identity Commons mailing list: “if it’s not useable or deployable who cares?”.

References

[1] S. Brands, L. Demuynck and B. De Decker. A Practical System for Globally Revoking the Unlinkable Pseudonyms of Unknown Users. In Proceedings of the 12th Australasian Conference on Information Security and Privacy, ACISP'07. Springer-Verlag, 2007. ISBN 978-3-540-73457-4. Preconference technical report available at http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW472.pdf.
 
[2] T. Nakanishi, H. Fujii, Y. Hira and N. Funabiki. Revocable Group Signature Schemes with Constant Costs for Signing and Verifying. In IEICE Transactions, volume 93-A, number 1, pages 50-62, 2010.
 
[3] J. Lapon, M. Kohlweiss, B. De Decker and V. Naessens. Performance Analysis of Accumulator-Based Revocation Mechanisms. In Proceedings of the 25th International Conference on Information Security (SEC 2010). Springer, 2010.
 

Pros and Cons of Idemix for NSTIC

This is the second of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the previous post I discussed the pros and cons of U-Prove, so naturally I should now discuss the pros and cons of Idemix, the other privacy-enhancing technology thought to have inspired NSTIC. This post, like the previous one, is based on a review of the public literature. If I’ve missed or misinterpreted something, please let me know in a comment.

By the way, a link to the previous post that I posted to the Identity Commons mailing list triggered a wide-ranging discussion on NSTIC and privacy, which can be found in the mailing list archives.

Idemix is an open-source library implemented in Java. It is described in the Idemix Cryptographic Specification [1] and in an academic paper [2], and is mostly based on the cryptographic techniques of [3]. Curiously, although Idemix is provided by IBM, the main Idemix site is located at idemix.wordpress.com and disclaims being an official IBM site.

There is also a smart card that implements a “light-weight variant of Idemix”. I discuss it at the end of this post.

Feature Coverage

Idemix provides all three privacy features alluded to in NSTIC documents [4] [5] and discussed in the previous post:

  1. Issue-show unlinkability,
  2. Multi-show unlinkability, and
  3. Partial information disclosure.

The third feature includes both selective disclosure of attributes and the ability to prove inequalities, such as proving that the value of a birthdate attribute is earlier than today’s date minus 21 years, without disclosing the attribute value itself.
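
To make the distinction concrete, here is a toy illustration of selective disclosure using salted hash commitments. This is not how Idemix works (Idemix relies on the zero-knowledge proof techniques of [3], which also provide unlinkability); it only shows what it means to reveal some certified attributes while keeping others hidden.

```python
# Illustrative sketch of selective disclosure via salted hash
# commitments. NOT the Idemix construction; just the basic idea of
# "disclose some attributes, hide the rest".
import hashlib
import os

def commit(value: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + value.encode()).digest()

attributes = {"name": "Alice", "state": "CA", "birthdate": "1980-01-01"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}
# The issuer would sign `commitments`; the signature is omitted here.

# To show only her state, the user reveals that attribute and its salt;
# the committed name and birthdate stay hidden.
disclosed = {"state": (attributes["state"], salts["state"])}

for name, (value, salt) in disclosed.items():
    assert commit(value, salt) == commitments[name]  # verifier's check
```

Note that an inequality proof over a hidden attribute, as in the birthdate example, requires zero-knowledge range-proof machinery and cannot be done with plain commitments like these.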

Idemix also includes other features, such as the ability to prove that two attributes have the same value without disclosing that value, and the ability to prove that a certain attribute certified by the issuer has been encrypted under the public key of a third party, which may decrypt it under some circumstances. It could be argued that Idemix is over-engineered for the purpose of Web authentication, including features that add complexity but are not useful for that purpose.

Performance

The richer feature set of Idemix may come at a cost in terms of performance. From the data in Table 1 of [2], it follows that it would take about 12 seconds for the user to submit a credential with one attribute to a relying party that checks for expiration, and about 28 seconds to submit a credential with 20 attributes. The paper dates back to 2002, and the processor used was a relatively slow 1.1GHz Pentium III. (The authors say 1.1MHz, but I assume they mean 1.1GHz.) On the other hand, the modulus size was 1024 bits, and Idemix currently uses a 2048-bit modulus [2]. The paper also promises optimizations that have no doubt been implemented by now. Unfortunately, I haven’t been able to find performance data on the Idemix site; a search for the word “performance” restricted to the site produces no results. If you know of any recent performance data, please let me know in a comment.

Revocation

We saw in the previous post that unlinkability makes revocation difficult. U-Prove credentials can be revoked by users because they do not have multi-show unlinkability, but cannot be revoked by issuers because they have issue-show unlinkability. Idemix credentials, which have both multi-show unlinkability and issue-show unlinkability, are revocable neither by users nor by issuers. I am not saying that unlinkability makes revocation impossible. Cryptographic techniques have been devised to allow revocation of unlinkable credentials, which I will discuss later in this series of posts. But those techniques are not used by U-Prove or Idemix.

Idemix has a credential update feature that can be used to extend the validity period of a credential that has expired. This facilitates the use of short-term credentials that may not need to be revoked. But the Idemix Cryptographic Specification [1] should not claim, as it does, that the credential-update feature can be used to implement credential revocation. Waiting for a credential to expire is not the same as revoking it. Short-term credentials are an alternative to revocation, and as an alternative they have serious drawbacks: they are costly for the issuer to implement; they impose a logistical burden on the user agent; they may become unavailable if the issuer is down when the validity period needs to be extended; and the user agent may be overwhelmed by the need to renew many credentials at once if it has not been operational for an extended period of time. Furthermore, if a short-term credential is renewed on demand, just before it is used, renewal and use of the credential may be linkable by timing correlation.

The Idemix Java Card

The Idemix Java Card was intended as a smart identity card. Its implementation on the Java Card Open Platform (JCOP) is described in [6].

The cryptographic system in an Idemix card is described in the Idemix site as a “light-weight variant of Identity Mixer” (i.e. Idemix). But it is very different from the original Idemix system. According to [6], an implementation of the original system in a Java card would be impractical because credential submission could take 70 to 100 seconds. To make it less impractical, the issuer of a credential to a Java card certifies only that it trusts the Java card. The card is then free to present any attributes it wants to the relying party. (A different way of handling attributes is possible but not recommended, presumably because of the time it takes; see Footnote 5 of [6].) Security for the relying party depends on the issuer downloading the correct attributes and software to the card, and the user not being able to modify those attributes and software. The card must therefore be tamper resistant against the user. (Or at least tamper responsive, i.e. able to detect tampering and respond by zeroing out storage.)

Whereas a U-Prove smart card performs only a small portion of the cryptographic computations, the Idemix Java card is an autonomous system that performs all the cryptographic computations by itself, without help from the user’s computer. This takes time: 10.453 seconds for a transaction, i.e. for submitting a credential to a relying party, according to Table 2 of [6]; or 11.665 seconds according to Table 3. (In both cases with a 1536-bit modulus, and not counting a 1.474-second revocation check; revocation is discussed below; the discrepancy between the two figures is not explained.) Some of the computations in Table 2 are labeled as precomputations, but no precomputation can take place while the card is not plugged in. The authors of [6] consider a 10-second transaction time adequate. But I don’t think many Web users will be happy waiting 10 seconds every time they want to log in to a site.

Update. It is possible to implement full-blown privacy-friendly credential systems very efficiently on a smart card. A non-Microsoft implementation of U-Prove on a MULTOS smart card [8], where all the cryptographic computations are carried out by the card, achieves credential-show times close to 0.3 seconds.

The Idemix card features a revocation mechanism: a card can be revoked by including its secret key in a revocation list. But the secret key is generated inside the card when the card is “set up”, and it is not known to the party that sets up the card, nor to the issuers of credentials to the card, nor to the user who owns the card. The secret key can only be obtained by breaching the tamper protection of the card, and hence can only become known to an adversary. So the revocation feature seems useless.

Where does the peculiar idea of listing secret keys in a revocation list come from? It turns out that the cryptographic system of the Idemix card is derived from that of [7], which was designed for media copyright protection, e.g. to authenticate the Trusted Platform Module (TPM) in a DVD player before downloading a protected movie to the player. Hackers apparently extract the secret keys of TPMs and publish them on the Web; copyright owners find those keys and blacklist them. Blacklisting secret keys makes sense for copyright protection, but not as a revocation technique for smart cards.
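
For concreteness, here is a toy sketch of the blacklisting idea, known as rogue tagging in [7]: each signature exposes a pseudonym K = B^sk, where the base B is derived from the verifier’s basename, so the verifier can test K against every published secret key. The group, the hash-to-group map, and all parameters below are made up for illustration and are nothing like those of [7].

```python
# Toy sketch of secret-key blacklisting ("rogue tagging") in the style
# of [7]. Each signature exposes a pseudonym K = B^sk, with B derived
# from the verifier's basename, so leaked keys can be tested against K.
import hashlib

p = 2**127 - 1  # toy prime modulus (an assumption, not from [7])

def base_for(basename: str) -> int:
    return int.from_bytes(hashlib.sha256(basename.encode()).digest(), "big") % p

def pseudonym(sk: int, basename: str) -> int:
    return pow(base_for(basename), sk, p)

def is_blacklisted(K: int, basename: str, revoked: list[int]) -> bool:
    B = base_for(basename)
    return any(pow(B, f, p) == K for f in revoked)

leaked = 123456789
K = pseudonym(leaked, "movie-service")
assert is_blacklisted(K, "movie-service", [leaked])       # leaked key is tagged
assert not is_blacklisted(K, "movie-service", [42, 99])   # other keys pass
```

As the post observes, this only helps when a secret key has actually leaked; a key sealed inside a tamper-resistant card can never appear on such a list, which is why the mechanism is pointless as a smart card revocation technique.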

Coming Next…

After reading this post and the previous one, you may be wondering whether privacy-enhancing technologies are really a good idea. I will try to answer that question in the next post.

References

[1] IBM Research, Zurich. Specification of the Identity Mixer Cryptographic Library Version 2.3.1. December 7, 2010. Available at http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.
 
[2] Jan Camenisch and Els Van Herreweghen. Design and Implementation of the Idemix Anonymous Credential System. In Proceedings of the 9th ACM conference on Computer and Communications Security. 2002.
 
[3] J. Camenisch and A. Lysyanskaya. Efficient Non-Transferable Anonymous Multi-Show Credential System with Optional Anonymity Revocation. In Theory and Application of Cryptographic Techniques, EUROCRYPT, 2001.
 
[4] The White House. National Strategy for Trusted Identities in Cyberspace. April 2011. Available at http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.
 
[5] Howard A. Schmidt. The National Strategy for Trusted Identities in Cyberspace and Your Privacy. April 26, 2011. White House blog post, available at http://www.whitehouse.gov/blog/2011/04/26/national-strategy-trusted-identities-cyberspace-and-your-privacy.
 
[6] P. Bichsel, J. Camenisch, T. Groß and V. Shoup. Anonymous Credentials on a Standard Java Card. In ACM Conference on Computer and Communications Security, 2009.
 
[7] E. Brickell, J. Camenisch and L. Chen. Direct anonymous attestation. In Proceedings of the 11th ACM conference on Computer and Communications Security, 2004.
 
[8] Update. W. Mostowski and P. Vullers. Efficient U-Prove Implementation for Anonymous Credentials on Smart Cards. Available at http://www.cs.ru.nl/~pim/publications/2011_securecomm.pdf.