Faster Modular Exponentiation in JavaScript

Modular exponentiation is the operation whose performance determines the performance and practicality of many public key cryptosystems, including RSA, DH and DSA. We have recently achieved a manyfold performance improvement over the implementation of modular exponentiation in the Stanford JavaScript Crypto Library (SJCL). JavaScript was originally intended for performing simple tasks in web pages, but it has grown into a sophisticated general purpose programming language used for both client and server computing, and is arguably the most important programming language today. Good performance of public key cryptography is difficult to achieve in JavaScript, because JavaScript is an interpreted language, inherently slower than a compiled language such as C, and provides floating point arithmetic but no integer arithmetic. But fast JavaScript public key cryptography is worth the effort, because it may radically change the way cryptography is used in web applications.
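
The core of the computation is the classic square-and-multiply loop. Here is a minimal sketch using BigInt, a feature of later JavaScript engines; an optimized library instead represents big integers as arrays of small limbs and uses faster reduction techniques such as Montgomery multiplication.

```javascript
// A minimal sketch of modular exponentiation by square-and-multiply, using BigInt.
function modExp(base, exponent, modulus) {
  if (modulus === 1n) return 0n;
  let result = 1n;
  base %= modulus;
  while (exponent > 0n) {
    if (exponent & 1n) result = (result * base) % modulus; // multiply when the current exponent bit is set
    base = (base * base) % modulus;                        // square at every step
    exponent >>= 1n;
  }
  return result;
}

console.log(modExp(5n, 117n, 19n)); // 1n, since 5^117 ≡ 5^9 ≡ 1 (mod 19)
```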

Surveillance and Internet Identity

Last week I attended IIW 17, the 17th meeting of the Internet Identity Workshop, which is held twice a year in Mountain View, California. As usual it was a great opportunity to exchange ideas and meet people, with its unconference format, its many sessions, its rotating demos, its wide space for discussions, and its two free dinners with free drinks.

For me, however, it was tinged with sadness, because of what has happened since the first IIW I attended, IIW 12, in May 2011. IIW 12 was the first IIW after the launch of NSTIC. IIW 17 was the first IIW after Snowden.

The NSTIC Strategy Document, released in April 2011 with a preface signed by President Obama, repeatedly emphasized the goal of enhancing privacy as a key element of the “vision” and “guiding principles” of NSTIC. The document explicitly stated that the Identity Ecosystem will use privacy-enhancing technology and policies to inhibit the ability of service providers to link an individual’s transactions, thus ensuring that no one service provider can gain a complete picture of an individual’s life in cyberspace. At the time, Facebook Connect was threatening to inject Facebook as a middleman in all or most Internet activities, and I was happy to see that the US Government seemingly wanted to prevent such a massive invasion of privacy; I even convened a session at IIW 12 proposing a technique for achieving the privacy goals of NSTIC in the short term. Little did I know that the government was busy building a massive surveillance apparatus that would give the government a complete picture of an individual’s life in cyberspace, by means including bulk collection of data from service providers.

The Internet, given to the world by the US Department of Defense, was a world-wide forum for free-flowing, spontaneous exchange of ideas. Now the NSA, part of the same Department of Defense, has taken that away. People know that they are being tracked and identified when they post an anonymous comment. People know that their conversations are being recorded. Therefore people must think twice about what they say.

I don’t know if Congress will be able to rein in the NSA. It should be clear that spying on US citizens is unconstitutional, but some politicians think that it is the NSA’s job to spy on everybody else on the planet. They don’t seem to consider or care that, if the US Government insists on a God-given right to spy on everybody else, other countries or regions may develop their own national or regional networks, separated from the US Internet by an air gap.

Fortunately, the technical community has reacted strongly against the NSA’s attacks on Internet privacy. And thanks to Snowden’s revelations, many of the attack techniques are known. It may therefore be possible to protect Internet privacy by technical means.

Coming back to the subject of the workshop, Internet Identity, I would argue that the first thing to do to protect Internet privacy is to get rid of the pernicious technology variously known as third-party login, social login or federated login. To be precise, I am referring to authentication techniques where the user authenticates to a third-party identity provider, which then provides identity and/or attribute information to a relying party, using a protocol such as OAuth or OpenID Connect. (These are the techniques in Group 2 of the taxonomy proposed in the paper Privacy Postures of Authentication Technologies.)

The only intrinsic advantage of federated login is that it allows the identity provider to collect vast amounts of information about the user, since the identity provider learns not only the user’s identity and/or attributes, but also what relying parties the user logs in to. The identity provider uses the information to sell ads that target the user accurately. We now know that the information is also shared with the government, which makes it available to thousands of analysts and IT personnel who use it for legal or illegal government or personal purposes.

There are no other intrinsic advantages to federated login.

The government and the identity providers argue that federated login is more secure than direct authentication to the relying party with username and password, but the opposite is true.

Security is supposedly increased because federated login reduces password reuse. But password reuse will not be substantially reduced unless a large majority of world-wide web sites force their users to use federated login with one of a small number of global identity providers such as Google or Facebook, something that will hopefully not come to pass.

Security is also supposedly increased because a large identity provider supposedly does a better job of protecting the user’s password. But I don’t know why a large identity provider would provide better protection against hackers, since large companies are not known to provide great security. And I do know that a password entrusted to a large identity provider may become available to thousands of employees of the government, of government contractors, and of the identity provider. And the capture of a password used at an identity provider, which provides access to multiple web sites, is more damaging to the user than the capture of a password used at a single web site.

There is an alternative to authenticating to a web site with username and password that provides both security and privacy: namely, authentication with a cryptographic key pair automatically generated on the user’s machine when the user registers with the site. The site stores the hash of the public key component of the key pair in its database, and uses it to locate the user’s account when the user visits the site again and demonstrates knowledge of the private key component.
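
Here is a minimal sketch of that approach using the Web Crypto API available in browsers; the key type (ECDSA over P-256), the SPKI encoding of the public key, and the challenge handling are illustrative assumptions, not a defined protocol.

```javascript
// Registration: the browser generates a key pair and sends the public key;
// the site stores the hash of the public key in the new account record.
async function register() {
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,                                      // private key stays non-extractable in the browser
    ["sign", "verify"]
  );
  const publicKey = await crypto.subtle.exportKey("spki", keyPair.publicKey);
  const publicKeyHash = await crypto.subtle.digest("SHA-256", publicKey);
  return { keyPair, publicKey, publicKeyHash }; // publicKeyHash is what the site keeps for lookup
}

// Login: the browser proves knowledge of the private key by signing a fresh
// challenge supplied by the site; the site locates the account by the hash of
// the presented public key and verifies the signature.
async function login(keyPair, challengeFromSite) {
  return crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.privateKey,
    challengeFromSite
  );
}
```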

Another claimed advantage of federated login is that the user can register at a new site with a single click if logged in to the identity provider, any personal data required by the site being provided by the identity provider. This is a real advantage, but not an intrinsic one. The same benefit could be easily obtained by storing the personal data in the browser, and specifying a protocol by which the browser would supply selected personal data items to a web site upon demand by the site and approval by the user. Such a protocol would be much simpler than any of the federated login protocols and would provide more security and more privacy.
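
Purely as an illustration (no such standard protocol exists today), the exchange could be as simple as the following hypothetical messages, with the browser answering only after the user approves:

```javascript
// Hypothetical message shapes for a browser-supplied personal data protocol
// (attribute names and structure are invented for illustration).
const siteRequest = {
  requestedAttributes: ["email", "zipCode", "ageGroup", "shoppingPreferences"]
};

// Sent by the browser only after the user approves the request.
const browserResponse = {
  email: "user@example.com",
  zipCode: "94303",
  ageGroup: "25-34",
  shoppingPreferences: ["photography", "hiking"]
};
```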

Yet another claimed advantage of federated login is that the identity provider could provide the relying party with a user’s identity and/or attributes verified by an identity proofing procedure; however, such verified identity and/or attributes could equally well be provided by a certificate authority using a public key certificate (or by multiple authorities providing a combination of a certificate binding a public key to an identity and one or more certificates binding the identity to various attributes), without the certificate authority having to be informed of what relying parties the certificate is submitted to.

It is sometimes argued (cf. the NSTIC 101 session at last week’s IIW) that using public key cryptography for authentication would be expensive and would require the user to carry a separate dongle or smartcard for every credential. This is not true. There is no need for special hardware to store a cryptographic credential, and if special hardware is desired for some reason, there is no need to use different pieces of hardware for different credentials.

Two sessions at IIW 17 gave me hope that Internet privacy is not a lost cause.

One of them was convened by Tim Bray of Google to report on the comments he received in response to a blog post arguing to developers that they should use federated login rather than login with username and password. The comments, which he referred to as a “bloodbath,” showed that neither developers nor end-users like federated login. I hope that such pushback will eventually force companies like Google to give up on federated login.

The other one was convened by Kazue Sako of NEC to discuss anonymous credentials and their possible uses. The room was overflowing and the level of engagement of the audience was high, showing that technical people are interested in privacy-enhancing authentication technologies even if large companies are not.

NSTIC Is Not Low-Hanging Fruit

In a recent tweet, Ian Glazer quoted Patrick Gallagher, director of NIST, saying at a recent White House meeting on NSTIC that the “current suite of technologies we rely on are insufficient”.

The identity technologies used today both in federal agencies and on the Web at large are indeed insufficient:

  • SSL client certificates have failed to displace passwords for Web authentication since they were introduced 17 years ago.
  • Credentials in PIV cards have failed to displace passwords in federal agencies eight years after HSPD 12; a GAO report does a good job of documenting the many obstacles faced by agencies in implementing the directive, ranging from the fact that some categories of agency employees do not have PIV cards, to the desire by employees to use Apple Mac computers and mobile devices that lack card readers. I’m glad that we don’t live in the Soviet Union and heads of agencies are not sent to the Gulag when they ignore unreasonable orders.
  • Third-party login solutions such as OpenID, as currently used on the Web, not only do not eliminate passwords, they make the password security problem worse, by facilitating phishing attacks. They also impinge on the user’s privacy, because the identity provider is told what relying parties the user logs in to.
  • Social login solutions based on OAuth, e.g. “login with Facebook”, worsen the privacy drawback of third party login by limiting the user’s choice of identity providers to those that the relying party has registered with, and by broadcasting the user’s activities to the user’s social graph. Eric Sachs of Google said at the last Internet Identity Workshop that users participating in usability testing were afraid of logging in via Facebook or Google+ because “their friends would be spammed”.

But some proponents of NSTIC do not seem to realize that. In a recent interview, Howard Schmidt went so far as to say that NSTIC is “low-hanging fruit”, because “the technology is there”. What technology would that be? A blog post that he wrote last year, shortly after the launch of NSTIC, made it clear that the technology he had in mind was privacy-enhancing cryptography, used by Microsoft in U-Prove and by IBM in Idemix. He used the words “privacy-enhancing” in the interview, so he may have been referring to that technology there as well.

(Credentials based on privacy-enhancing cryptography provide selective disclosure and unlinkability. Selective disclosure refers to the ability to combine multiple attributes in a credential but disclose only some of them when presenting the credential. Unlinkability, in the case of U-Prove, refers to the impossibility of linking the use of a credential to its issuance; Idemix also makes it impossible to link multiple uses of the same credential.)

But Idemix has never been deployed commercially, and an attempt at deploying U-Prove within the Information Cards framework failed when Microsoft discontinued CardSpace, two months before the launch of NSTIC.

Credentials based on privacy-enhancing cryptography, sometimes called anonymous credentials, have inherent drawbacks. One of them is that unlinkability makes revocation of such credentials harder than revocation of public key certificates, as I pointed out in a blog post on U-Prove and another blog post on Idemix. The difficulty of revoking credentials based on privacy-enhancing cryptography has led ABC4Trust, which can be viewed as the European counterpart of NSTIC, to propose arresting users for the purpose of revoking their credentials! See page 23, end of last paragraph, of the ABC4Trust document Architecture for Attribute-based Credential Technologies.

Another inherent drawback is that it is difficult to keep the owner of an anonymous credential from making it available for use online by others who are not entitled to it. For example, it would be difficult to prevent the owner of a proof-of-drinking-age anonymous credential (a use case often cited by proponents of anonymous credentials) from letting minors use it for a fee.

The mistaken belief that “the technology is there” explains why the NSTIC NPO has made little effort to improve on existing technology. Instead of requesting funding for research, it requested funding for pilots; a pilot is usually intended to demonstrate the usability of a newly developed technology; it assumes that the technology already exists. After the launch of NSTIC, the NPO announced three workshops, on governance, privacy and technology. The first two were held, but the workshop on technology, which was supposed to take place in September of last year, was postponed by six months and merged with the yearly NIST IDtrust workshop, which took place in March of this year. The IDtrust workshop usually includes a call for papers. But this year there was none: new ideas were not solicited.

The NSTIC NPO has been trying to “bring relying parties to the table”. Ian Glazer dubbed the recent White House meeting the NSTIC Relying Party Event. The meeting was about getting a bigger table according to the NPO blog post on the event, and about “getting people to volunteer” according to Senator Mikulski as quoted by the blog post. Earlier, Jim Sheire of the NPO convened a session entitled “NSTIC: How do we bring relying parties to the table?” at the last Internet Identity Workshop.

One idea mentioned in the report on the IIW session for bringing relying parties to the table is to target 100 “top relying parties” in the hope of creating a snowball effect. But it’s not clear what it would mean for those 100 relying parties, and any additional ones caught in the snowball, to “come to the table”. What would they do at the table? What technology would they use? OpenID? OAuth? Smart cards? Information cards? Anonymous credentials? NSTIC has not proposed any specific technology. Or would they come to the table just to talk?

There are many millions of Web sites that use passwords for user authentication. The goal should be to get all those sites to adopt an identity solution that eliminates the security risk of passwords. Web site developers will do that of their own initiative once a solution is available that is more secure than, and as easy to deploy as, password authentication.

While the technology is not there, various technology ingredients are there, and missing ingredients could be developed. It is not difficult to conceive a roadmap that could lead to one or more good identity solutions. But success would require a concerted effort by many different parties: not only relying parties and identity and attribute providers, but also standards bodies, browser vendors, vendors of desktop and mobile operating systems, vendors of smart cards and other hardware tokens, perhaps biometric vendors, and the providers of the middleware, software libraries, and software development tools used on the Web. When I first heard of NSTIC I hoped that it would provide the impetus and the forum needed for such a concerted effort. But that has yet to happen.

After CardSpace, Microsoft Calls for Research on Passwords

In February 2011 Microsoft discontinued CardSpace, a Windows application for federated login that was the deployment vehicle for the U-Prove privacy-enhancing Web authentication technology, which itself is said to have inspired the NSTIC initiative. Cormac Herley, a Microsoft researcher, and Paul van Oorschot, a professor at Carleton University, have written a paper entitled A Research Agenda Acknowledging the Persistence of Passwords that mentions the CardSpace failure and calls for research on traditional password authentication.

The paper makes two points:

  1. It blames the failure of attempts at replacing passwords on a lack of research on identifying and prioritizing the requirements to be met by alternative authentication methods.
  2. It argues that passwords have many virtues, will persist for some time, and may be the best fit in many scenarios; and it calls for research on how to better support them.

I disagree with the first point but agree with the second.

The problem with the first point is that it does not take into account the non-technical obstacles faced by alternative authentication methods. Microsoft Passport was the first attempt at Web single sign-on. It was launched when Microsoft was in the process of annihilating the Netscape browser and acquiring a monopoly in Web browsing; it originally had an outrageous privacy policy, which was later modified; and if successful it would have made Microsoft a middleman for all Web commerce. No wonder it failed.

Other single sign-on initiatives had obvious non-technical obstacles. OpenID required people to use a URL as their identity, something that could only appeal to the tiny fraction of users who understand or care about the technical underpinnings of the Web. CardSpace was a Microsoft product; that by itself must have provided motivation for all Microsoft competitors to oppose it; furthermore it only ran on Windows; and in order to support CardSpace relying party developers had to install and learn to use a complex toolkit. Again, no wonder CardSpace failed.

The non-technical obstacles faced by Passport, OpenID and CardSpace were due to lack of maturity of the Web industry. Such obstacles will slowly go away as the industry matures. Signs of maturity are appearing: there are now five major browser vendors that seem to understand the need for common standards; the World Wide Web Consortium (W3C) has shown that it can bring them together to develop standards such as HTML5 and has already engaged them in identity work through the Identity in the Browser workshop and the identity mailing list that was set up after the workshop; and OpenID 2.0 no longer requires users to use URLs as their identities. Industries can take decades to mature, so it’s not surprising that progress is slow.

As for passwords, I agree that they have virtues, will persist, and deserve research. There is actually research on passwords going on.

Password managers are an active area of research and development by browser providers and others.

There was a session on passwords at the last Internet Identity Workshop (IIW), convened by Jay Unger, where Alan Karp described his site password tool. The tool can be viewed as an alternative to a password manager in which passwords for different sites are computed rather than retrieved from storage: it computes a high-entropy password for a Web site from a master password and an easy-to-remember name for the site.
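
The general idea can be sketched with a standard key derivation function (this is my own illustration, not Karp’s actual tool, which uses its own algorithm and handles site-specific password rules):

```javascript
// Derive a high-entropy per-site password from a master password and an
// easy-to-remember site name, here using PBKDF2 from the Web Crypto API.
async function sitePassword(masterPassword, siteName) {
  const enc = new TextEncoder();
  const master = await crypto.subtle.importKey(
    "raw", enc.encode(masterPassword), "PBKDF2", false, ["deriveBits"]
  );
  const bits = await crypto.subtle.deriveBits(
    { name: "PBKDF2", salt: enc.encode(siteName), iterations: 100000, hash: "SHA-256" },
    master,
    128                                   // 128 bits of derived material
  );
  // Encode as a printable password; a real tool would also accommodate each site's password rules.
  return btoa(String.fromCharCode(...new Uint8Array(bits)));
}
```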

I have myself been recently granted two patents on password security, which were also discussed at the IIW session on passwords:

  • One of them describes a countermeasure against online password guessing that places a hard limit on the total number of guesses that an attacker can make against a password. Besides the traditional counter of consecutive bad guesses, the countermeasure uses an additional counter of total bad guesses, not necessarily consecutive. The user is asked to change her password if and when this second counter reaches a threshold, rather than at arbitrary intervals. (A sketch of the idea follows this list.)
  • The other describes a technique for password distribution that allows an administrator to send a temporary password to a user, e.g. after a password reset, over an unprotected channel such as ordinary email. The administrator puts a hold on the user’s account that allows no further access beyond changing the temporary password into a password chosen by the user. The administrator removes the hold only after being notified by the legitimate user, e.g. over the phone, that she has successfully changed the password. In abstract terms, instead of relying on a confidential channel to send the password, the administrator relies on a channel with data-origin authentication to receive the user’s notification.
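
Here is a minimal sketch of the first of these, the guess-limiting countermeasure; the account fields, thresholds and verify function are hypothetical names used for illustration, not part of the patent.

```javascript
// Two-counter countermeasure against online password guessing.
const MAX_CONSECUTIVE = 5;     // traditional lockout threshold
const MAX_TOTAL = 100;         // hard limit on total bad guesses over the password's lifetime

function checkPassword(account, submittedPassword, verify) {
  if (account.mustChangePassword) return { ok: false, reason: "password change required" };
  if (account.consecutiveBadGuesses >= MAX_CONSECUTIVE) return { ok: false, reason: "account locked" };
  if (verify(submittedPassword, account.passwordHash)) {
    account.consecutiveBadGuesses = 0;   // only the consecutive counter is reset on success
    return { ok: true };
  }
  account.consecutiveBadGuesses += 1;
  account.totalBadGuesses += 1;          // never reset: counts all bad guesses, consecutive or not
  if (account.totalBadGuesses >= MAX_TOTAL) {
    account.mustChangePassword = true;   // user must change the password when the hard limit is reached
  }
  return { ok: false, reason: "bad password" };
}
```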

Microsoft or anybody else who wants to increase password security can license either of these patents. You may use the contact form of this site to inquire about licensing.

Benefits of TLS for Issuing and Presenting Cryptographic Credentials

In comments on the previous post at the Identity Commons mailing list and comments at the session on deployment and usability of cryptographic credentials at the Internet Identity Workshop, people have questioned the advantages of running cryptographic protocols for issuing and presenting credentials inside TLS, and argued in favor of running them instead over HTTP. I believe running such protocols inside TLS removes several obstacles that have hindered the deployment of cryptographic credentials. So in this post I will try to answer those comments.

Here are three advantages of running issuance and presentation protocols inside TLS over running them outside TLS:

  1. TLS is ubiquitous. It is implemented by all browsers and all server middleware. If issuance and presentation protocols were implemented inside TLS, then users could use cryptographic credentials without having to install any applications or browser plugins, and developers of RPs and IdPs would not have to install and learn additional SDKs.
  2. The PRF facility of TLS is very useful for implementing cryptographic protocols. For example, in the U-Prove presentation protocol [1], when U-Prove is used for user authentication, the verifier must send a nonce to the prover; if the protocol were run inside TLS, that step could be avoided because the nonce could be independently generated by the prover and the verifier using the PRF. The PRF can also be used to provide common pseudo-random material for protocols based on the common reference string (CRS) model [2]. (Older cryptosystems such as U-Prove [1] and Idemix [3] rely on the Fiat-Shamir heuristic [4] to eliminate interactions, but more recent cryptosystems based on Groth-Sahai proofs [5] rely instead on the CRS model, which is more secure in some sense [6].)
  3. Inside TLS, an interactive cryptographic protocol can be run in a separate TLS layer, allowing the underlying TLS record layer to interleave protocol messages with application data (and possibly with messages of other protocol runs), thus mitigating the latency impact of protocol interactions.

And here are two advantages of running protocols either inside or directly on top of TLS, over running them on top of HTTP:

  1. Simplicity. Running a protocol over HTTP would require specifying how protocol messages are encapsulated inside HTTP requests and responses, i.e. it would require defining an HTTP-level protocol.
  2. Performance. Running a protocol over HTTP would add the overhead of sending HTTP headers, and, possibly, of establishing different TLS connections for different HTTP messages if TLS connections cannot be kept alive for some reason.

As always, comments are welcome.

References

[1] Christian Paquin. U-Prove Cryptographic Specification V1.1 Draft Revision 1, February 2011. Downloadable from http://www.microsoft.com/u-prove.
 
[2] M. Blum, P. Feldman and S. Micali. Non-Interactive Zero-Knowledge and Its Applications (Extended Abstract). In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing (STOC 1988).
 
[3] Jan Camenisch et al. Specification of the Identity Mixer Cryptographic Library, Version 2.3.1. December 2010. Available at http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.
 
[4] A. Fiat and A. Shamir. How to Prove Yourself: Practical Solutions to Identification and Signature Problems. In Proceedings on Advances in Cryptology (CRYPTO 86), Springer-Verlag.
 
[5] J. Groth and A. Sahai. Efficient Non-Interactive Proof Systems for Bilinear Groups. In Theory and Applications of Cryptographic Techniques (EUROCRYPT 08), Springer-Verlag.
 
[6] R. Canetti, O. Goldreich and S. Halevi. The Random Oracle Methodology, Revisited. Journal of the ACM, vol. 51, no. 4, 2004.
 

Deployment and Usability of Cryptographic Credentials

This is the fourth and last of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

Experience has shown that it is difficult to deploy cryptographic credentials on the Web and have them adopted by users, relying parties and credential issuers. This is true for privacy-friendly credentials as well as for ordinary public-key certificates, both of which have a place in the NSTIC Identity Ecosystem, as I argued in the previous post.

I believe that this difficulty can be overcome by putting the browser in charge of managing and presenting credentials, by supporting cryptographic credentials in the core Web protocols, viz. HTTP and TLS, and by providing a simple and automated process for issuing credentials.

Browsers should manage and present credentials

For credentials to be widely adopted, users must not be required to install additional software, let alone proprietary software that only runs on one operating system, such as Windows CardSpace. Therefore credentials must be managed and presented by the browser.

The browser should allow users to set up multiple personas or profiles and associate particular credentials with particular personas. Many users, for example, have a personal email address and a business email address, which could be associated with a personal profile and a business profile respectively. The user could declare one profile to be the “currently active persona” in a particular browser window or tab, and thus facilitate the selection of appropriate credentials when visiting sites in that window or tab.

People who use multiple browsers on multiple computing devices (including desktop or laptop computers, smart phones and tablets) must have access to the same credentials on all those devices. Credentials can be synced between browsers through a Web service, without the service having to be trusted, by equipping each browser with a key pair for encryption and a key pair for signature (in the same way that email can be sent with end-to-end confidentiality and origin authentication using S/MIME or PGP). Credentials can be backed up to an untrusted Web service similarly.
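
As a rough illustration, assuming the encryption key pair is an ECDH pair and the signature key pair is an ECDSA pair (one possible realization; the function and parameter names are hypothetical), a credential could be packaged for the sync service like this:

```javascript
// End-to-end protection of a synced credential: the sync service sees only
// ciphertext and a signature, never the plaintext or any key.
async function packageForSync(credentialBytes, senderSigningKey, senderDhKey, recipientDhPublicKey) {
  // Derive an AES-GCM key from the sender's ECDH private key and the recipient's ECDH public key.
  const aesKey = await crypto.subtle.deriveKey(
    { name: "ECDH", public: recipientDhPublicKey },
    senderDhKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, credentialBytes);
  // Sign the ciphertext so the receiving browser can verify its origin.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    senderSigningKey,
    ciphertext
  );
  return { iv, ciphertext, signature };
}
```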

Cryptographic credentials should be supported by HTTP and TLS

HTTP should provide a way for the relying party to ask for particular credentials or attributes, and TLS should provide a way for the browser to present one or multiple credentials. Within TLS, the mechanism for presenting credentials should be separate from, and subsequent to, the handshake, to benefit from the confidentiality and integrity offered by the TLS connection after it has been secured.

Credentials should be issued automatically to the browser, through TLS

Privacy-friendly credentials have cryptographically complex interactive issuance protocols. Paradoxically, this suggests a way of simplifying the issuance process, for both PKI certificates and privacy-friendly credentials.

Since the process is interactive, it should be run directly on a transport layer connection, to avoid HTTP and application overhead. That connection should be secure to protect the confidentiality of the attributes being certified. To reduce the latency due to the cryptographic computations, the protocol interactions should be interleaved with the transmission of other data. And the cryptographic similarity of issuance and presentation protocols suggests that they should be run over the same kind of connection.

All this leads to the idea of running issuance protocols, like presentation protocols, directly over a TLS connection. TLS has a record layer specification that could be extended to define two new kinds of records, one for issuance protocol messages, the other for presentation protocol messages. TLS would then automatically interleave protocol interactions with transmission of other data. (Another benefit of TLS is that its PRF facility could be readily used to generate the common reference string used by some cryptographic protocols.)

Since TLS is universally supported by server middleware, implementing issuance protocols directly over TLS would allow servers to issue credentials automatically without installing additional software. In particular, it would make it easy for any Web site to issue a PKI certificate as a result of the user registration process, for use in subsequent logins.

User Experiences

Once credentials are handled by browsers and directly supported by the core protocols of the Web, smooth and painless user experiences become possible.

For example, a user can open a bank account online as follows. The user accepts terms and conditions and clicks on an account creation button. The bank asks the browser for a social security credential and a driver’s license credential. The browser presents the credentials to the bank after asking the user for permission. The bank checks the user’s credit ratings and automatically creates an account and issues a PKI certificate binding the account number to the public key component of a new key pair generated by the browser on the fly. On a return visit, the user clicks on a login button and the bank asks the browser for the certificate. The user may allow the browser to present the certificate without asking for permission each time. Two-factor authentication can be achieved, for example, by keeping the private key and certificate in a password-protected smart card.

As a second example, suppose a user visits a site to sign up to receive coupons by email. The user accepts terms and conditions and clicks on a sign-up button. The site asks the browser for a verified email-address certificate (issued by an email service provider) and a number of self-asserted attributes, such as zip code, gender, age group, and set of shopping preferences. The browser finds in its certificate store (or in a connected smart card) an email address certificate and a personal-data credential associated with the currently active persona. The personal-data credential is a privacy-friendly credential featuring unlinkability and selective disclosure. The browser simultaneously presents the email certificate and the personal-data credential, disclosing only the personal-data attributes requested by the site. The browser may or may not ask the user for permission to present the credentials, depending on user preferences that may be persona-dependent.

Conclusions

In this series of posts I have argued that new privacy-enhancing technologies should be developed to fill the gaps in currently implemented systems and to take advantage of new techniques developed by cryptographers over the last 10 or 15 years. I have also argued that the NSTIC Identity Ecosystem should accommodate both privacy-friendly credentials and ordinary PKI certificates, because different use cases call for different kinds of credentials. Finally I have sketched above two examples of user experiences that can be provided if credentials are handled by browsers and directly supported by the core protocols of the Web.

Of course this requires major changes to the Web infrastructure, including: extensions to HTTP; a revamp of TLS to allow for the presentation of privacy-friendly credentials, the simultaneous presentation of multiple credentials, and the issuance of credentials; support of the protocol changes in browsers and server middleware; and implementation of browser facilities for managing credentials.

These changes may seem daunting. The private sector by itself could not carry them out, especially given the current reliance of technology companies on business models based on advertising, which benefit from reduced user privacy. But I hope NSTIC will make them possible.

Are Privacy-Enhancing Technologies Really Needed for NSTIC?

This is the third of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the first two posts we’ve looked at two, or rather three, privacy-enhancing authentication technologies: U-Prove, Idemix, and the Idemix Java card. The credentials provided by these technologies have some or all of the privacy features called for by NSTIC, but they have various practical drawbacks, the most serious of which is that they are not revocable by the credential issuer.

Given these drawbacks, it is natural to ask the question: are privacy-friendly credentials really necessary for NSTIC? My answer is: they are not needed in many important use cases, and they are useful but not indispensable in other important cases; but they are essential in cases that are key to the success of NSTIC.

Use Cases Where Privacy-Enhancing Technologies Are Not Needed

The most common use case of Web authentication is the case where a user registers anonymously with a Web site and later logs in as a returning user. Traditionally, the user registers a username and a password with the site and later uses them as credentials to log in. Today, third-party login is becoming popular as a way of mitigating the proliferation or reuse of passwords: the user logs in with username and password to a third-party identity provider, and is then redirected to the Web site, which plays the role of relying party. But there is a way of avoiding passwords altogether: the Web site can issue a cryptographic credential to the user upon registration, which the user can submit back to the Web site upon login. In that case there is no third-party involvement and there are no privacy issues. The cryptographic credential can therefore be an ordinary PKI certificate. No privacy-enhancing technologies are needed.

Update. The PKI certificate binds the newly created user account to the public key component of a key pair that the browser generates on the fly.

Other cases where privacy-enhancing technologies are not needed are those where a credential demonstrates that the user possesses an attribute whose value uniquely identifies the user, and the relying party needs to know the value of that attribute. (One example of such an attribute is an email address.) Privacy-enhancing technologies are not useful in such cases because a uniquely-identifying attribute communicated to relying parties can be used to track the user no matter what type of credential is used to communicate the attribute.

Use Cases Where Privacy-Enhancing Technologies Are Useful but Not Essential

Privacy-enhancing technologies are useful but not essential when the attributes certified by a credential do not uniquely identify the user, and the user has a choice of credential issuers. They are useful in such cases because they prevent the issuer from tracking the user’s activities by sharing data with the relying parties. They are not essential, however, because the user may be able to choose a credential issuer that she trusts. (Most privacy-enhancing technologies also prevent relying parties from collectively tracking the user by sharing their login information, without involvement of the credential issuer, but the risk of this happening may be more remote.)

Examples of non-identifying attributes are demographic attributes (city of residence, gender, age group), shopping interests, hobby interests, etc.; such attributes are usually self-asserted, but they can be supplied by an identity provider, chosen by the user, as a matter of convenience, so that the user does not have to re-enter them and keep them up to date at each relying party. Examples of sites that may ask for such attributes are dating sites, shopping deal sites, hobbyist sites, etc.

Of course, a credential that contains non-identifying attributes will not by itself allow a user to log in to a site. But it can be used in addition to a PKI certificate issued by the site itself to recognize repeat visitors.

Use Cases Where Privacy-Enhancing Technologies Are Necessary

Privacy-enhancing technologies are necessary when the relying party does not require uniquely identifying information, and there is only one credential issuer. That one credential issuer could be the government. Non-uniquely-identifying information provided by government-issued credentials could include assertions that the user is old enough to buy wine, or is a resident of a particular state, or is licensed to practice some profession in some state, or is a US citizen, or has the right to work in the US.

I find it difficult to find examples where people would have a reasonable fear of being tracked through their use of government-issued credentials. But the right to privacy is a human right that is held dear in the United States, and has been found to be implicitly protected by the US constitution. Government-issued credentials will only be acceptable if they incorporate all available privacy protections. That makes the use of privacy-enhancing technologies essential to the success of NSTIC.

Wanted: Efficiently-Revocable Privacy-Friendly Credentials

So: privacy-friendly credentials are necessary; but, in my opinion, the drawbacks of existing privacy-enhancing technologies make them impractical. Therefore we need new privacy-enhancing technologies. Those new technologies should have issue-show and multi-show unlinkability; they should provide partial information disclosure, including proofs of inequality relations involving numeric attributes; and they should be efficiently revocable.

Fortunately, that’s not too much to ask. U-Prove and Idemix have been pioneering technologies, but they are now dated. U-Prove is based on research carried out in the mid-nineties, and the core cryptographic scheme later used in Idemix was described in a paper written in 2001. A lot of research has been done in cryptography since then, and several new cryptographic schemes have been proposed that could be used to provide privacy-friendly credentials.

I don’t think a scheme meeting all the requirements, including efficient revocation, has been designed yet. (I would love to be corrected if I’m wrong!) But possible ingredients for such a system have been proposed, including methods for proving non-revocation in time proportional to the square root of the number of revoked credentials [1] or even in practically constant time [2].

Update. Stefan Brands has told me that the cryptosystem described in [1] is considered part of the U-Prove technology, and that the revocation technique of [1] could be integrated into the existing U-Prove implementation to provide issuer-driven revocation. If that were done and the resulting system proved to be suitably efficient, the only ingredient missing from that system would be multi-show unlinkability.

Once a scheme with all the ingredients has been designed and mathematically verified, it still needs to be implemented. Cryptographic implementations are few and far between, but that does not mean that they are difficult. Recently, for example, three different systems of privacy-friendly credentials were implemented just for the purpose of comparing their performance [3].

Next and Last: Usability and Deployment

To conclude the series, in the next post I’ll try to respond to a comment made by Anthony Nadalin on the Identity Commons mailing list: “if it’s not useable or deployable who cares?”.

References

[1] Stefan Brands, Liesje Demuynck and Bart De Decker. A practical system for globally revoking the unlinkable pseudonyms of unknown users. In Proceedings of the 12th Australasian Conference on Information Security and Privacy, ACISP’07. Springer-Verlag, 2007. ISBN 978-3-540-73457-4. Preconference technical report available at http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW472.pdf.
 
[2] T. Nakanishi, H. Fujii, Y. Hira and N. Funabiki. Revocable Group Signature Schemes with Constant Costs for Signing and Verifying. In IEICE Transactions, volume 93-A, number 1, pages 50-62, 2010.
 
[3] J. Lapon, M. Kohlweiss, B. De Decker and V. Naessens. Performance Analysis of Accumulator-Based Revocation Mechanisms. In Proceedings of the 25th International Conference on Information Security (SEC 2010). Springer, 2010.
 

Pros and Cons of Idemix for NSTIC

This is the second of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the previous post I discussed the pros and cons of U-Prove, so naturally I should now discuss the pros and cons of Idemix, the other privacy-enhancing technology thought to have inspired NSTIC. This post, like the previous one, is based on a review of the public literature. If I’ve missed or misinterpreted something, please let me know in a comment.

By the way, a link to the previous post that I posted to the Identity Commons mailing list triggered a wide-ranging discussion on NSTIC and privacy, which can be found in the mailing list archives.

Idemix is an open-source library implemented in Java. It is described in the Idemix Cryptographic Specification [1] and in the academic paper [2]. It is mostly based on the cryptographic techniques of [3]. Curiously, although Idemix is provided by IBM, the main Idemix site is located at idemix.wordpress.com and disclaims being an official IBM site.

There is also a smart card that implements a “light-weight variant of Idemix”. I discuss it at the end of this post.

Feature Coverage

Idemix provides all three privacy features alluded to in NSTIC documents [4] [5] and discussed in the previous post:

  1. Issuance-show unlinkability,
  2. Multi-show unlinkability, and
  3. Partial information disclosure.

The third feature includes both selective disclosure of attributes and the ability to prove inequalities such as the value of a birthdate attribute being less than today’s date minus 21 years without disclosing that birthdate attribute value.

Idemix also includes other features, such as the ability to prove that two attributes have the same value without disclosing that value, and the ability to prove that a certain attribute certified by the issuer has been encrypted under the public key of a third party, which may decrypt it under some circumstances. It could be argued that Idemix is over-engineered for the purpose of Web authentication, including features that add complexity but are not useful for that purpose.

Performance

The richer feature set of Idemix may come at a cost in terms of performance. From the data in Table 1 of [2], it follows that it would take about 12 seconds for the user to submit a credential with one attribute to a relying party that checks for expiration, and about 28 seconds to submit a credential with 20 attributes. The paper dates back to 2002, and the processor used was a relatively slow 1.1GHz Pentium III. (The authors say 1.1MHz but I assume they mean 1.1GHz.) But on the other hand the modulus size was 1024 bits, and Idemix currently uses a 2048-bit modulus [2]. The paper also promises optimizations that have no doubt been implemented by now. Unfortunately, I haven’t been able to find performance data in the Idemix site. A search for the word performance restricted to the site produces no results. If you know of any recent performance data, please let me know in a comment.

Revocation

We saw in the previous post that unlinkability makes revocation difficult. U-Prove credentials can be revoked by users because they do not have multi-show unlinkability, but cannot be revoked by issuers because they have issue-show unlinkability. Idemix credentials, which have both multi-show unlinkability and issue-show unlinkability, are revocable neither by users nor by issuers. I am not saying that unlinkability makes revocation impossible. Cryptographic techniques have been devised to allow revocation of unlinkable credentials, which I will discuss later in this series of posts. But those techniques are not used by U-Prove or Idemix.

Idemix has a credential update feature that can be used to extend the validity period of a credential that has expired. This facilitates the use of short-term credentials that may not need to be revoked. But the Idemix Cryptographic Specification [1] should not claim, as it does, that the credential-update feature can be used to implement credential revocation. Waiting for a credential to expire is not the same as revoking it. Short-term credentials are an alternative to revocation. And, as an alternative, they have serious drawbacks: they are costly to implement for the issuer; they impose a logistic burden on the user agent; they may become unavailable if the issuer is down when the validity period needs to be extended; and the user agent may be overwhelmed by the need to renew many credentials at once if it has not been operational for an extended period of time. If a short-term credential is renewed on demand, just before it is used, renewal and use of the credential may be linkable by timing correlation.

The Idemix Java Card

The Idemix Java Card was intended as a smart identity card. Its implementation on the Java Card Open Platform (JCOP) is described in [6].

The cryptographic system in an Idemix card is described in the Idemix site as a “light-weight variant of Identity Mixer” (i.e. Idemix). But it is very different from the original Idemix system. According to [6], an implementation of the original system in a Java card would be impractical because credential submission could take 70 to 100 seconds. To make it less impractical, the issuer of a credential to a Java card certifies only that it trusts the Java card. The card is then free to present any attributes it wants to the relying party. (A different way of handling attributes is possible but not recommended, presumably because of the time it takes; see Footnote 5 of [6].) Security for the relying party depends on the issuer downloading the correct attributes and software to the card, and the user not being able to modify those attributes and software. The card must therefore be tamper resistant against the user. (Or at least tamper responsive, i.e. able to detect tampering and respond by zeroing out storage.)

Whereas a U-Prove smart card performs only a small portion of the cryptographic computations, the Idemix Java card is an autonomous system that performs all the cryptographic computations by itself, without help from the user’s computer. This takes time: 10.453 seconds for a transaction, i.e. for submitting a credential to a relying party, according to Table 2 of [6]; or 11.665 seconds according to Table 3. (In both cases, with a 1536-bit modulus, and not counting a 1.474 second revocation check; revocation is discussed below; the discrepancy between the two figures is not explained.) Some of the computations in Table 2 are labeled as precomputations, but no precomputations can take place if the card is not plugged in. The authors of [6] consider that a 10 second transaction time would be adequate. But I don’t think many Web users will be happy waiting 10 seconds each time they want to log in to a site.

Update. It is possible to implement full-blown privacy-friendly credentials systems very efficiently on a smart card. A non-Microsoft implementation of U-Prove on a MULTOS smart card [8], where all the cryptographic computations are carried out by the card, achieves credential-show times close to 0.3 seconds.

The Idemix card features a revocation mechanism. A card can be revoked by including its secret key in a revocation list. But the secret key is generated in the card when the card is “set up”, and it is not known to the party that sets up the card, nor to the issuers of credentials to the card, nor to the user who owns the card. The secret key can only be obtained by breaching the tamper protection of the card, hence can only become known to an adversary. So the revocation feature seems useless.

Where does the peculiar idea of listing secret keys in a revocation list come from? It turns out that the cryptographic system of the Idemix card is derived from the cryptographic system of [7] which was designed for media copyright protection, e.g. to authenticate the Trusted Platform Module (TPM) in a DVD player before downloading a protected movie to the player. Apparently hackers extract secret keys of TPMs and publish them on the Web. Copyright owners find those secret keys and blacklist them. Blacklisting secret keys makes sense for copyright protection, but not as a revocation technique for smart cards.

Coming Next…

After reading this post and the previous post, you may be wondering whether privacy-enhancing technologies are really a good idea. I will try to answer that question in the next post.

References

[1] IBM Research, Zurich. Specification of the Identity Mixer Cryptographic Library Version 2.3.1. December 7, 2010. Available at http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.
 
[2] Jan Camenisch and Els Van Herreweghen. Design and Implementation of the Idemix Anonymous Credential System. In Proceedings of the 9th ACM conference on Computer and Communications Security. 2002.
 
[3] J. Camenisch and A. Lysyanskaya. Efficient Non-Transferable Anonymous Multi-Show Credential System with Optional Anonymity Revocation. In Theory and Application of Cryptographic Techniques, EUROCRYPT, 2001.
 
[4] The White House. National Strategy for Trusted Identities in Cyberspace. April 2011. Available at http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.
 
[5] Howard A. Schmidt. The National Strategy for Trusted Identities in Cyberspace and Your Privacy. April 26, 2011. White House blog post, available at http://www.whitehouse.gov/blog/2011/04/26/national-strategy-trusted-identities-cyberspace-and-your-privacy.
 
[6] P. Bichsel, J. Camenisch, T. Groß and V. Shoup. Anonymous Credentials on a Standard Java Card. In ACM Conference on Computer and Communications Security, 2009.
 
[7] E. Brickell, J. Camenisch and L. Chen. Direct anonymous attestation. In Proceedings of the 11th ACM conference on Computer and Communications Security, 2004.
 
[8] Update. W. Mostowski and P. Vullers. Efficient U-Prove Implementation for Anonymous Credentials on Smart Cards. Available at http://www.cs.ru.nl/~pim/publications/2011_securecomm.pdf.
 

Pros and Cons of U-Prove for NSTIC

This is the first of a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

NSTIC calls for the use of privacy-friendly credentials, and NSTIC documents [1] [2] refer to the existence of privacy-enhancing technologies that can be used to implement such credentials. Although those technologies are not named, they are widely understood to be U-Prove and Idemix.

There is confusion regarding the capabilities of privacy-enhancing technologies and the contributions that they can make to NSTIC. For example, I sometimes hear the opinion that “U-Prove has been oversold”, but without technical arguments to back it up. To help clear some of the confusion, I’m starting a series of posts on the prospects for using privacy-enhancing technologies in the NSTIC Identity Ecosystem. This first post is on the pros and cons of U-Prove, the second one will be on the pros and cons of Idemix, and there will probably be two more after that.

U-Prove is described in the U-Prove Cryptographic Specification V1.1 [3] and the U-Prove Technology Overview [4]. It is based on cryptographic techniques described in Stefan Brands’s book [5].

Privacy Feature Coverage

Three features of privacy-friendly credentials are informally described in NSTIC documents:

  1. Issuance of a credential cannot be linked to a use, or “show,” of the credential even if the issuer and the relying party share information, except as permitted by the attributes certified by the issuer and shown to the relying party.
  2. Two shows of the same credential to the same or different relying parties cannot be linked together, even if the relying parties share information.
  3. The user agent can disclose partial information about the attributes asserted by a credential. For example, it can prove that the user is over 21 years of age based on a birthdate attribute, without disclosing the birthdate itself.

Here I will not discuss how desirable these features are; I leave that for a later post in the series. In this post I will only discuss the extent to which U-Prove provides these features.

U-Prove provides the first feature, which is called untraceability in the U-Prove Technology Overview [4]. A U-Prove credential consists of a private key, a public key, a set of attributes, and a signature by the credential issuer. The signature is jointly computed by the issuer and the user agent in the course of a three-turn interactive protocol, the issuance protocol, where the issuer sees the attributes but not the public key nor the signature itself. Therefore the issuance of a credential can be linked to a show of the credential only on the basis of the attribute information disclosed during the show.

U-Prove, on the other hand, does not provide the second feature, because all relying parties see the same public key and the same signature. Stefan Brands acknowledges this in Section 2.2 of [6], where he compares the system of his book [5] to the system of Camenisch and Lysyanskaya, i.e. U-Prove to Idemix, and notes that the latter provides multi-show unlinkability but the former does not.

Unfortunately, the U-Prove Technology Overview [4] is less candid. It does discuss the fact that multiple shows of the same U-Prove credential (U-Prove token) are linkable, in Section 4.2, but the section is misleadingly entitled Unlinkability. It starts as follows:

Similarly, the use of a U-Prove token cannot inherently be correlated to uses by the same Prover of other U-Prove tokens, even if the Issuer identified the Prover and issued all of the Prover’s U-Prove tokens at the same time.

This is saying that different U-Prove tokens are not linkable! That is a vacuous feature: why would different tokens be linkable? The section goes on to argue that the issuer should issue many tokens to the user agent (the Prover) with the same attributes, one for each relying party (each Verifier). On the Web, this is utterly impractical. There are millions of possible relying parties: how many tokens should be issued? How can all those tokens be stored on a smart card? What if the user agent runs out of tokens? And how does the user agent know if two parties are different or the same? (Is example.com the same relying party as xyz.example.com?)

Update. Stefan Brands has pointed out that a U-Prove token does not take up any storage in a smart card if the private key splitting technique featured by U-Prove (which I refer to below) is used.

As for the third feature of privacy-friendly credentials, partial information disclosure, U-Prove provides it to a certain extent. When showing a credential, the user agent can disclose only some of the attributes in the credential, proving to the relying party that those attributes were certified by the credential issuer without disclosing the other attributes. However, U-Prove does not support the “age over 21” example found in several NSTIC documents. That would require the ability to prove that a value is contained in an interval without disclosing the value. Appendix 1 of the U-Prove Technology Overview [4] lists the ability to perform such a proof as one of the “U-Prove features” that have not been included in Version 1.1, suggesting that it could be included in a future version. In Section 3.7 of his book [5], Stefan Brands does suggest a method for proving that a secret is contained in an interval. However, he dismisses it as involving “a serious amount of overhead”, because it requires executing many auxiliary proofs of knowledge. (I believe that proving “age over 21” would require at least 30 auxiliary proofs, which is clearly impractical.)

An interesting feature of U-Prove is the ability to split the private key of a credential between the user agent and a device such as a smart card. The device must then be present for the credential to be usable, thus providing two-factor authentication; but the device only has to perform a limited amount of cryptographic computation, while most of the cryptographic computations are carried out by the user agent. This makes it possible to use slower, and hence cheaper, devices than if all the cryptographic computations were carried out by the device (as is the case, for example, in an Idemix smart card).
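
The U-Prove device protocol is specified in [3]; the following is not that protocol, but a minimal Python sketch of a generic split-key Schnorr proof of knowledge, intended only to show how a private key can be split so that the device contributes a single exponentiation while the rest of the work stays with the user agent. The names and toy group parameters are mine and are not meant to be secure.

    # A minimal sketch (not the U-Prove device protocol of [3]): a Schnorr proof of
    # knowledge of a private key x = x_device + x_agent split between two parties.
    # Toy parameters; not secure.
    import hashlib, random

    p, q, g = 23, 11, 4                      # g has prime order q modulo p

    x_device = random.randrange(1, q)        # share kept on the smart card
    x_agent = random.randrange(1, q)         # share kept by the user agent
    h = pow(g, (x_device + x_agent) % q, p)  # public key of the credential

    # Each party commits to a random nonce; the device does a single exponentiation.
    k_device, k_agent = random.randrange(1, q), random.randrange(1, q)
    R = (pow(g, k_device, p) * pow(g, k_agent, p)) % p

    # Challenge (in a real presentation it would also cover the disclosed attributes).
    c = int(hashlib.sha256(str(R).encode()).hexdigest(), 16) % q

    # Each party responds using only its own share; the user agent combines them.
    s_device = (k_device + c * x_device) % q
    s_agent = (k_agent + c * x_agent) % q
    s = (s_device + s_agent) % q

    # The verifier's check; neither share alone would pass it.
    assert pow(g, s, p) == (R * pow(h, c, p)) % p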

Update. A non-Microsoft implementation of U-Prove on a MULTOS smart card, where all the cryptographic computations are carried out by the card with impressive performance (close to 0.3 seconds in some cases), can be found in [8].

Revocation

The ability to revoke credentials is usually taken for granted. In the case of privacy-friendly credentials, however, it is difficult to achieve. An ordinary CRL (Certificate Revocation List) cannot be used, since it would require some kind of credential identifier known to both the issuer and the relying parties, which would defeat unlinkability.

U-Prove credentials have a Token Identifier, which is a hash of the public key and the signature. Because U-Prove does not provide multi-show unlinkability, the Token Identifier, like the public key and the signature, is known to all the relying parties. The user agent, which knows the Token Identifier, could therefore revoke the credential by having the Token Identifier included in a CRL. However, because U-Prove provides issuance-show unlinkability, the credential issuer does not know the Token Identifier, the public key, or the signature, and therefore cannot use the Token Identifier to revoke the credential.
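
For concreteness, here is a minimal Python sketch of what the relying-party side of such a check could look like; the actual hash algorithm and byte encoding of the Token Identifier are defined in [3], so the ones below are placeholders.

    # Hypothetical sketch of a relying-party check of a Token Identifier against a
    # revocation list; the actual hash and encoding are defined in [3].
    import hashlib

    def token_identifier(public_key, signature):
        # The identifier is a hash over the token public key and the signature.
        return hashlib.sha256(public_key + signature).digest()

    revoked = set()                                   # revocation list obtained out of band
    revoked.add(token_identifier(b"pk-1", b"sig-1"))  # a previously revoked token

    def accept_token(public_key, signature):
        return token_identifier(public_key, signature) not in revoked

    assert not accept_token(b"pk-1", b"sig-1")        # the revoked token is rejected
    assert accept_token(b"pk-2", b"sig-2")            # other tokens are still accepted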

Section 5.2 of the U-Prove Technology Overview [4] says that an identifier could be included in a special attribute called the Token Information Field for the purpose of revocation, and “blacklisted using the same methods that are available for X.509 certificates”; this, however, would destroy the only unlinkability feature of U-Prove credentials, viz. issuance-show unlinkability (which [4] calls untraceability).

Section 5.2 of [4] also suggests using on-demand credentials. However, that does not seem practical: the user agent would have to authenticate somehow to the issuer, then conduct a three-turn interactive issuance protocol with the issuer to obtain the token, then conduct a presentation protocol with the relying party. The latency of all these computations and interactions would be too high; and since the issuance computations would have to be carried out each time the credential is used, the cost for the issuer would be staggering. Furthermore, on-demand credentials may allow linking of issuance to show by timing correlation.

A workaround to the revocation problem is suggested in Section 6.2 of [4] for cases where the credential is protected by a device such as a smart card, with the private key split between the user agent and the device. In those cases the Issuer could revoke the device, rather than a particular credential protected by the device, by adding an identifier of the device to a revocation list. However, this would require downloading the revocation list to the device in a Device Message when the credential is used, so that the device can check whether its own identifier is in the list. Since a revocation list can have hundreds of thousands of entries (e.g. the state of West Virginia revokes about 90,000 driver licenses per year [7]), downloading it to a smart card each time the smart card is used is not a viable option.

Update. Stefan Brands has pointed out that only a revocation list increment needs to be downloaded to the card.
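
Here is a minimal Python sketch of what the device-side logic could look like under that approach, assuming the card accumulates downloaded increments into a blacklist and checks its own identifier each time it is asked to participate in a presentation; the class and method names are hypothetical.

    # Hypothetical sketch of the device-side check, assuming the card accumulates
    # revocation-list increments and checks its own identifier on each use.
    class Device:
        def __init__(self, device_id):
            self.device_id = device_id
            self.blacklist = set()        # cumulative list built from increments

        def apply_increment(self, increment):
            self.blacklist.update(increment)

        def authorize_use(self):
            # Refuse to participate in the presentation if the device is revoked.
            return self.device_id not in self.blacklist

    card = Device(b"device-42")
    card.apply_increment([b"device-7", b"device-13"])
    assert card.authorize_use()
    card.apply_increment([b"device-42"])
    assert not card.authorize_use()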

Appendix 1 of [4] includes an “issuer-driven revocation” feature in a list of U-Prove features not yet implemented:

An Issuer can revoke one or more tokens of a known Prover by blacklisting a unique attribute encoded in these tokens (even if it is never disclosed at token use time).

How this can be achieved is not explained in the appendix, nor in Brands’s book [5]. However, it is explained in [6], where Brands proposes a new system and compares it to his previous system of [5], i.e. to U-Prove. He says that [5] “allows an issuer to invisibly encode into all of a user’s digital credentials a unique number that the issuer can blacklist in order to revoke that user’s credentials”, adding that the blacklist technique “consists of repeating a NOT-proof for each blacklist element”. In other words, the idea is to prove a formula stating that the unique number is NOT equal to the first element of the blacklist, and NOT equal to the second element, and NOT equal to the third element, etc., without revealing the unique number. That can be done but, as Brands further says in [6], it is “not practical for large blacklists”. Indeed, based on Section 3.6 of [5], proving a formula with multiple negated subformulas requires proving each negated subformula separately. So if the blacklist has 100,000 elements, 100,000 proofs would have to be performed each time a credential is used.

By the way, the system of [6] is substantially different from U-Prove and not well suited for use on the Web, since it requires the set of relying parties to be known at system-setup time.

Update. Stefan Brands has told me that the cryptosystem described in [6] is considered part of the U-Prove technology, and that the revocation technique of [6] could be integrated into the existing U-Prove implementation to provide issuer-driven revocation.

Conclusions

I will save my conclusions for the last post in the series, but of course any comments are welcome now.

References

[1] The White House. National Strategy for Trusted Identities in Cyberspace. April 2011. Available at http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.
 
[2] Howard A. Schmidt. The National Strategy for Trusted Identities in Cyberspace and Your Privacy. April 26, 2011. White House blog post, available at http://www.whitehouse.gov/blog/2011/04/26/national-strategy-trusted-identities-cyberspace-and-your-privacy.
 
[3] Christian Paquin. U-Prove Cryptographic Specification V1.1 Draft Revision 1. February 2011. No http URL seems to be available for this document, but it can be downloaded from the Specifications and Documentation page, which itself is available at http://www.microsoft.com/u-prove.
 
[4] Christian Paquin. U-Prove Technology Overview V1.1 Draft Revision 1. February 2011. No http URL seems to be available for this document, but it can be downloaded from the Specifications and Documentation page, which itself is available at http://www.microsoft.com/u-prove.
 
[5] Stefan Brands. Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy. MIT Press, Cambridge, MA, USA, 2000. ISBN 0262024918. Available for free download at http://www.credentica.com/the_mit_pressbook.php.
 
[6] Stefan Brands, Liesje Demuynck and Bart De Decker. A practical system for globally revoking the unlinkable pseudonyms of unknown users. In Proceedings of the 12th Australasian Conference on Information Security and Privacy, ACISP’07. Springer-Verlag, 2007. ISBN 978-3-540-73457-4. Preconference technical report available at http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW472.pdf.
 
[7] West Virginia Department of Transportation, Division of Motor Vehicles. Annual Report 2010. Available at http://www.transportation.wv.gov/business-manager/Finance/Financial%20Reports/DMV_AR_2010.pdf.
 
[8] Update. W. Mostowski and P. Vullers. Efficient U-Prove Implementation for Anonymous Credentials on Smart Cards. Available at http://www.cs.ru.nl/~pim/publications/2011_securecomm.pdf.