After CardSpace, Microsoft Calls for Research on Passwords

In February 2011 Microsoft
discontinued CardSpace,
a Windows application for federated login that was the deployment
vehicle for the U-Prove privacy-enhancing Web authentication
technology, which itself is said to have inspired the
NSTIC
initiative.
Cormac Herley,
a Microsoft researcher, and
Paul van Oorschot,
a professor at Carleton University, have written a paper entitled
A Research Agenda Acknowledging the Persistence of Passwords
that mentions the CardSpace failure and calls for research on
traditional password authentication.

The paper makes two points:

  1. It blames the failure of attempts at replacing passwords on a lack of
    research on identifying and prioritizing the requirements to be met by
    alternative authentication methods.
  2. It argues that passwords have many virtues, will persist for some
    time, and may be the best fit in many scenarios; and it calls for
    research on how to better support them.

I disagree with the first point but agree with the second.

The problem with the first point is that it does not take into account
the non-technical obstacles faced by alternative authentication
methods. Microsoft Passport was the first attempt at Web single
sign-on. It was launched when Microsoft was in the process of
annihilating the Netscape browser and acquiring a monopoly in Web
browsing; it originally had an outrageous privacy policy, which was
later modified; and if successful it would have made Microsoft a
middleman for all Web commerce. No wonder it failed.

Other single sign-on initiatives had obvious non-technical obstacles.
OpenID required people to use a URL as their identity, something that
could only appeal to the tiny fraction of users who understand or care
about the technical underpinnings of the Web. CardSpace was a
Microsoft product; that by itself must have provided motivation for
all Microsoft competitors to oppose it; furthermore it only ran on
Windows; and in order to support CardSpace relying party developers
had to install and learn to use a complex toolkit. Again, no wonder
CardSpace failed.

The non-technical obstacles faced by Passport, OpenID and CardSpace
were due to lack of maturity of the Web industry. Such obstacles will
slowly go away as the industry matures. Signs of maturity are
appearing: there are now five major browsers that seem to understand
the need for common standards; the World Wide Web Consortium (W3C) has
shown that it can bring them together to develop standards such as
HTML5 and has already engaged them in identity work through
the
Identity in the Browser workshop
and the
identity mailing list
that was set up after the workshop; and
OpenID 2.0
no longer insists on users using URLs as their identities. Industries can take decades
to mature, so it’s not surprising that progress is slow.

As for passwords, I agree that they have virtues, will persist, and
deserve research. There is actually research on passwords going on.

Password managers are an active area of research and development by
browser providers and others.

There was a session on passwords at the last
Internet Identity Workshop (IIW),
called by
Jay Unger,
where
Alan Karp
described his
site password tool,
which can be viewed as an alternative to a password manager, where
passwords for different sites are computed rather than retrieved from
storage. The tool computes a high entropy password for a Web site
from a master password and an easy-to-remember name for the site.
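The general idea can be sketched in a few lines. The following is an illustrative construction only, not Alan Karp's actual algorithm; the function name, the use of HMAC-SHA256, and the output length are my own assumptions:

```python
import base64
import hashlib
import hmac

def site_password(master: str, site_name: str, length: int = 16) -> str:
    """Deterministically derive a site password from a master password
    and an easy-to-remember site name (illustrative sketch only, not
    the actual site password tool)."""
    # A keyed hash of the site name under the master password yields a
    # high-entropy value that differs for every site.
    digest = hmac.new(master.encode(), site_name.encode(),
                      hashlib.sha256).digest()
    # Encode to printable characters and truncate to the desired length.
    return base64.b64encode(digest).decode()[:length]

# The same inputs always yield the same password, so nothing is stored.
print(site_password("correct horse battery", "mybank"))
```

Because the site password is recomputed on demand, losing the device does not lose the passwords, and no password database exists to be stolen.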

I myself have recently been granted two patents on password security,
which were also discussed at the IIW session on passwords:

  • One of them describes a
    countermeasure against online password guessing
    that places a hard limit on the total number of guesses that an
    attacker can make against a password. Besides the traditional counter
    of consecutive bad guesses the countermeasure uses an additional
    counter of total bad guesses, not necessarily consecutive. The user
    is asked to change her password if and when this second counter
    reaches a threshold, rather than at arbitrary intervals.
  • The other describes a
    technique for password distribution
    that allows an administrator to send a temporary password to a user,
    e.g. after a password reset, over an unprotected channel such as
    ordinary email. The administrator puts a hold on the user’s account
    that allows no further access beyond changing the temporary password
    into a password chosen by the user. The administrator removes the
    hold only after being notified by the legitimate user that she has
    successfully changed the password, e.g. over the phone. In abstract
    terms, instead of relying on a confidential channel to send the
    password, the administrator relies on a channel with data-origin
    authentication to receive the user’s notification.
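The first countermeasure can be sketched as follows. This is an illustrative model only; the class, the method names, and the threshold values are hypothetical and not taken from the patent:

```python
class GuessLimitedAccount:
    """Sketch of a login check with two counters: the traditional
    counter of consecutive bad guesses, plus a counter of total bad
    guesses over the password's lifetime (illustrative thresholds)."""

    CONSECUTIVE_LIMIT = 5   # traditional temporary-lockout threshold
    TOTAL_LIMIT = 100       # hard limit on guesses against one password

    def __init__(self, password):
        self.password = password
        self.consecutive_bad = 0
        self.total_bad = 0
        self.must_change_password = False

    def locked_out(self):
        return self.consecutive_bad >= self.CONSECUTIVE_LIMIT

    def try_login(self, guess):
        if self.must_change_password or self.locked_out():
            return False
        if guess == self.password:
            self.consecutive_bad = 0   # the total counter is NOT reset
            return True
        self.consecutive_bad += 1
        self.total_bad += 1
        if self.total_bad >= self.TOTAL_LIMIT:
            # The attacker's total guess budget is exhausted: the user
            # must now choose a new password before any further access.
            self.must_change_password = True
        return False

    def change_password(self, new_password):
        self.password = new_password
        self.consecutive_bad = self.total_bad = 0
        self.must_change_password = False
```

Note that a successful login resets only the consecutive counter; the total counter keeps accumulating until the password is changed, which is what places the hard limit on the attacker.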

Microsoft or anybody else who wants to increase password security can
license either of these patents. You may use the
contact form
of this site to inquire about licensing.

Credential Sharing: A Pitfall of Anonymous Credentials

There is an inherent problem with anonymous credentials such as those
provided by Idemix or U-Prove: if it is not possible to tell who is
presenting a credential, the legitimate owner of a credential may be
willing to lend it to somebody else who is not entitled to it. For
example, somebody could sell a proof-of-drinking-age credential to a
minor, as noted by Jaap-Henk Hoepman in a recent blog
post [1].

This problem is known in cryptography as the credential sharing or
credential transferability problem, and various countermeasures have
been proposed. In this post I will briefly discuss some of these
countermeasures, then I will describe a new method of sharing
credentials that is resistant to most of them.

A traditional countermeasure proposed by cryptographers, mentioned for
example in [2], is to deter the sharing of an
anonymous credential by linking it to one or more additional
credentials that the user would not want to share, such as a
credential that gives access to a bank account, in such a way that the
sharing of the anonymous credential would imply the sharing of the
additional credential(s). I shall refer to this countermeasure as
the “credential linking countermeasure”. I find this countermeasure
unrealistic, because few people would escrow their bank account for
the privilege of using an anonymous credential.

In her presentation [3] at the recent NIST
Meeting on Privacy-Enhancing Cryptography
[4],
Anna Lysyanskaya said that it is a misconception to think that “if all
transactions are private, you can’t detect and prevent identity
fraud”. But the countermeasure that she proposes for preventing
identity fraud is to limit how many times a credential is used and to
disclose the user’s identity if the limit is exceeded. However, this
can only be done in cases where a credential only allows the
legitimate user to access a resource a limited number of times, and I
can think of few such cases in the realm of Web authentication.
Lysyanskaya gives as an example a subscription to an online newspaper,
but such subscriptions typically provide unlimited access for a
monthly fee. I shall refer to this countermeasure as the “limited use
countermeasure”.

Lysyanskaya’s presentation also mentions identity escrow as useful for
conducting an investigation if “something goes very, very wrong”.

At the panel on Privacy in the Identification Domain at the
same meeting Lysyanskaya also proposed binding an anonymous credential
to a biometric. The relying party would check the biometric and then
forget it to keep the presentation anonymous. But if the relying
party can be trusted to forget the biometric, it may as well be
trusted to forget the entire credential presentation, in which case an
anonymous credential is not necessary.

An interesting approach to binding a biometric to a credential while
keeping the user anonymous can be found in [5]. The
biometric is checked by a tamper-proof smartcard trusted by the
relying party, but a so-called warden trusted by the user is
placed between the smartcard and the relying party, and mediates the
presentation protocol to ensure that no information that could be used
to identify or track the user is communicated by the smart card to the
relying party.

However, if what we are looking for is an authentication solution that
will replace passwords on the Web at large, biometric-based
countermeasures are not good candidates because of their cost.

Update.
In a response to this post on the
Identity Commons mailing list Terry Boult has pointed out that cameras and microphones are pretty ubiquitous and said that, in volume, fingerprint sensors are cheaper than smartcard readers.

In his blog post [1], Hoepman suggested that, to
prevent the sharing of an anonymous credential, the credential could
be stored in the owner’s identity card, presumably referring to the
national identity card that citizens carry in the Netherlands and
other European countries. This is a good idea because lending the
card would put the owner at risk of impersonation by the borrower. I
shall refer to this as the “identity card countermeasure”.

Rather than storing a proof of age credential as an additional
credential in a national identity card, anonymous proof of age could
be accomplished by proving in zero knowledge that a birthdate
attribute of a national identity credential (or, in the United
States, of a driver’s license credential) lies in an open interval
ending 21 years before the present time; Idemix implements such
proofs. The identity credential could be stored in a smartcard or
perhaps in a tamper-proof module within a smart phone or a personal
computer. I’ll refer to this countermeasure as the “selective
disclosure countermeasure”. As in the simpler identity card
countermeasure, the legitimate user of the credential would be
deterred from sharing the credential with another person because of
the risk of impersonation.
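For illustration, here is the predicate that such a selective-disclosure proof would establish, computed in the clear. The point of the zero-knowledge proof is that the verifier learns only the boolean outcome, never the birthdate itself; the function name is hypothetical:

```python
from datetime import date

def over_21(birthdate: date, today: date) -> bool:
    """The statement proven in zero knowledge: the birthdate lies in
    an interval ending 21 years before the present date."""
    try:
        cutoff = today.replace(year=today.year - 21)
    except ValueError:
        # Today is Feb 29 and the cutoff year is not a leap year.
        cutoff = today.replace(year=today.year - 21, day=28)
    return birthdate <= cutoff

print(over_21(date(1990, 6, 1), date(2011, 12, 15)))  # True
```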

But this countermeasure, like most of the above ones, does not help
with the following method of sharing credentials.

A Countermeasure-Resistant Method of Sharing Credentials

An owner of a credential can make the credential available for use by
another person without giving a copy of the credential to that
other person. Instead, the owner can allow that other person to act
as a proxy, or man-in-the-middle, between the owner and a relying
party in a credential presentation. (Note that this is not a
man-in-the-middle attack because the man in the middle cooperates with
the owner.)

For example, somebody of drinking age could install his or her
national identity credential or driver’s license credential on a Web
server, either by copying the credential to the server or, if the
credential is contained in a tamper-proof device, by connecting the
device to the server. The credential owner could then allow minors to
buy liquor by proxying a proof of drinking age based on the birthdate
attribute in the credential. (Minors would need a special user agent
to do the proxying, but the owner could make such a user agent available
for download from the same server where the credential is installed.)
The owner could find a surreptitious way of charging a fee for the
service.

This method of sharing a credential, which could be called
proxy-based sharing, defeats most of the countermeasures
mentioned above. Biometric-based countermeasures don’t work because
the owner of the credential can input the biometric. Credential
linking countermeasures don’t work because the secret of the
credential is not shared. The identity card countermeasure and the
selective disclosure countermeasure don’t work because the owner is in
control of what proofs are proxied and can refuse to proxy proofs that
could allow impersonation. The limited use countermeasure could work
but, as I said above, I can think of few Web authentication cases
where it would be applicable.

Are there any other countermeasures that would prevent or inhibit this
kind of sharing? If a minor were trying to buy liquor using an
identity credential and a payment credential, the merchant could
require the minor to prove in zero-knowledge that the secret keys
underlying both credentials are the same. That would defeat the
sharing scheme by making the owner of the identity credential pay
for the purchase. However, there are proof-of-age cases that do not
require a purchase. For example, an adult site may be required to ask
for proof of age without or before asking for payment.

The only generally applicable countermeasure that I can think of to
defeat proxy-based sharing is the identity escrow scheme that
Lysyanskaya referred to in her talk [3]. Using
provable encryption, as available in Idemix, a liquor merchant could
ask the user agent to provide the identity of the owner of the
credential as an encrypted attribute that could be decrypted, say, by
a judge. (The encrypted attribute would be randomized for
unlinkability.) The user agent would include the encrypted attribute
in the presentation proof after asking the user for permission to do
so.

Unfortunately this requires the user to trust the government. This
may not be a problem for most people in many countries. But it
undermines one of the motivations for using privacy-enhancing
technologies that I discussed in a previous blog [6].

References

[1] Jaap-Henk Hoepman.
On using identity cards to store anonymous credentials.
November 16, 2011. Blog post, at
http://blog.xot.nl/2011/11/16/on-using-identity-cards-to-store-anonymous-credentials/.

 
[2] Jan Camenisch and Anna Lysyanskaya.
An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation.
In Proceedings of the International Conference on the Theory and
Application of Cryptographic Techniques: Advances in Cryptology
(EUROCRYPT 01), 2001.
Research report available from
http://www.zurich.ibm.com/security/privacy/.

 
[3] Anna Lysyanskaya.
Conditional And Revocable Anonymity.
Presentation at the
NIST Meeting on Privacy-Enhancing Cryptography.
December 8-9, 2011.
Slides available at
http://csrc.nist.gov/groups/ST/PEC2011/presentations2011/lysyanskaya.pdf.

 
[4] NIST Meeting on Privacy-Enhancing Cryptography.
December 8-9, 2011.

 
[5] Russell Impagliazzo and Sara Miner More.
Anonymous Credentials with Biometrically-Enforced Non-Transferability.
In Proceedings of the 2003 ACM workshop on Privacy in the electronic society (WPES 03).

 
[6] Francisco Corella.
Are Privacy-Enhancing Technologies Really Needed for NSTIC?
October 13, 2011.
Blog post, at https://pomcor.com/2011/10/13/are-privacy-enhancing-technologies-really-needed-for-nstic/.

 

Trip Report: Meeting on Privacy-Enhancing Cryptography at NIST

Last week I participated in the
Meeting on Privacy-Enhancing Cryptography
at NIST. The meeting was organized by Rene Peralta, who
brought together a diverse international group of
cryptographers and privacy stakeholders. The agenda is
online with links to the
workshop presentations.

The presentations covered many applications of
privacy-enhancing cryptography, including auctions with
encrypted bids, database search and data stream filtering
with hidden queries, smart metering, encryption-based access
control to medical records, format-preserving encryption of
credit card data, and of course authentication. There was a
talk on U-Prove
by Christian Paquin, and a
talk on Idemix
by Gregory Neven. There were also talks on several
techniques besides anonymous credentials that could be used
to implement privacy-friendly authentication: group
signatures, direct anonymous attestation, and EPID (Enhanced
Privacy ID).
Kazue Sako’s talk
described several possible applications of group signatures,
including a method of paying anonymously with a credit card.

A striking demonstration of the practical benefits of
privacy-enhancing cryptography was the
presentation on the Danish auctions of sugar beets contracts
by Thomas Toft. A contract gives a farmer the right to grow
a certain quantity of beets for delivery to Danisco, the
only Danish sugar producer. A yearly auction allows farmers
to sell and buy contracts. Each farmer submits a binding
bid, consisting of a supply curve or a demand curve. The
curves are aggregated into a market supply curve and a
market demand curve, whose intersection determines the
market clearing price at which transactions take place.
What’s remarkable is that farmers submit encrypted bids, and
bids are never decrypted. The market clearing price
is obtained by computations on encrypted data, using secure
multiparty computation techniques. Auctions have been
successfully held every year since 2008.

I was asked to participate in the panel on Privacy in the
Identification Domain and to start the discussion by
presenting a
few slides
summarizing my series of blog posts on
privacy-enhancing technologies and NSTIC. In response
to my slides, Gregory Neven of IBM reported that a
credential presentation takes less than one second on his
laptop, and Brian LaMacchia of Microsoft pointed out that
deployment is difficult for public key certificates as well
as for privacy-friendly credentials. There were discussions
with Gregory Neven on revocation and with Anna Lysyanskaya
on how to avoid the sharing of anonymous credentials; these
are big topics that deserve their own blog posts, which I
plan to write soon, so I won’t say any more here. Jeremy
Grant brought the audience up to date about NSTIC, which has
received funding and is getting ready to launch pilots.
Then there was a wide ranging discussion.

Do-Not-Track and Third-Party Login

Recently the
World Wide Web Consortium
(W3C)
launched a Tracking Protection Working Group, following
several recent proposals for Do-Not-Track mechanisms, and more
specifically in response to a W3C-member
submission by Microsoft. A useful list of links to proposals and
discussions related to Do-Not-Track can be found on
the working group’s home page.

The Microsoft submission was concerned with tracking by third-party
content embedded in a Web page via cookies and other means of
providing information to the third party. It proposed a Do-Not-Track
setting in the browser, to be sent to Web sites in an HTTP header and
made available to Javascript code as a DOM property. It also proposed
a mechanism allowing the user to specify a white list of third party
content that the browser would allow in a Web page and/or a black list
of third party content that the browser would block. The browser
would filter the requests made by a Web page for downloading
third-party content, allowing some and rejecting others.
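Server-side handling of the proposed setting is straightforward. The following is a minimal sketch under the assumption that the preference is expressed as a `DNT: 1` request header, as in the proposals of the time; the function name is hypothetical:

```python
def tracking_allowed(headers: dict) -> bool:
    """Illustrative server-side check of a Do-Not-Track request header.
    HTTP header names are case-insensitive, so normalize before lookup."""
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    # "DNT: 1" expresses the user's preference not to be tracked.
    return normalized.get("dnt") != "1"

print(tracking_allowed({"DNT": "1"}))         # False
print(tracking_allowed({"User-Agent": "x"}))  # True
```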

(The specific filtering mechanism proposed by Microsoft would allow
third-party content that is neither in the white list nor in the black
list. This would be ineffective, since the third party could
periodically change the domain name it uses to avoid being
blacklisted. I trust that the W3C working group will come up with a
more effective filtering mechanism.)

A Do-Not-Track setting and a filtering mechanism are good ideas, but
they only deal with the traditional way of tracking a user. Today
there is another way of tracking a user, which can be used whenever
the user logs in to a Web site with authentication provided by a third
party, such as Facebook, Google or Yahoo.

Third-party login uses a double-redirection protocol. When the user
wants to log in to a Web site, the user’s browser is redirected to a
third party, which plays the role of “identity provider.” The
identity provider authenticates the user and redirects the browser back
to the Web site, which plays the role of “relying party.” The
identity provider is told who each relying party is, and can
therefore track the user without any need for cookies. The identity
provider can link the user’s logins to relying parties to the
information in the user’s account at the identity provider, which in
the case of Facebook includes the user’s real name and much other real
identity information.
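The first redirect of the protocol shows why no cookies are needed. The sketch below loosely follows OpenID 2.0 parameter names, but the message is simplified and the URLs are hypothetical:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Sketch of the first redirect in a double-redirection login protocol.
relying_party = "https://news.example.com/login-return"
auth_request = "https://idp.example.org/auth?" + urlencode({
    "openid.mode": "checkid_setup",
    "openid.return_to": relying_party,  # tells the IdP where to send the user back
})

# The identity provider can read the relying party straight off the
# request, so it can log every site the user logs in to.
params = parse_qs(urlparse(auth_request).query)
print(params["openid.return_to"][0])
```

The `return_to` address is an essential part of the protocol, so this form of tracking cannot be blocked technically; it can only be restrained by policy.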

Privacy-enhancing technologies, which I discussed in a recent series
of blog posts (starting with the one on U-Prove), may
eventually make it possible to log in with a
third party credential without the identity provider being able to
track the user; but in the meantime, means must be found of providing
protection against tracking via third-party login. The W3C Tracking
Protection working group could provide such protection by broadening
the scope of the Do-Not-Track setting so that it would apply
to both the traditional method
of tracking via embedded content and the new method of tracking via
third-party login. An identity provider who receives a Do-Not-Track
header while participating in a double-redirection protocol would be
required to forget the transaction after authenticating the user.

The scope of the filtering mechanism could also be broadened
so that it would apply to redirection requests in
addition to third-party content embedding. This could
mitigate a security weakness that affects third-party login protocols
such as OpenID and OAuth. Such protocols are highly vulnerable to a
phishing attack that captures the user’s password for an identity
provider: the attacker sets up a malicious relying party that
redirects the browser to a site masquerading as the identity provider.
A filtering mechanism that would block redirection
by default could prevent
the attack based on the fact that the site masquerading as the
identity provider would not be whitelisted (while the legitimate identity
provider would be).
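The browser-side check is simple. Here is a minimal sketch, assuming a per-user whitelist of identity provider hosts; the names and the whitelisted host are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical user-approved identity provider hosts.
REDIRECT_WHITELIST = {"idp.example.org"}

def allow_redirect(target_url: str) -> bool:
    """Block redirects to hosts the user has not whitelisted, so a site
    masquerading as an identity provider never receives the browser."""
    return urlparse(target_url).hostname in REDIRECT_WHITELIST

print(allow_redirect("https://idp.example.org/auth"))   # True
print(allow_redirect("https://evil.example.net/auth"))  # False
```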

Benefits of TLS for Issuing and Presenting Cryptographic Credentials

In comments on the previous post at the
Identity Commons mailing list
and comments at the session on deployment and usability of cryptographic
credentials at the
Internet Identity Workshop,
people have questioned the advantages of
running cryptographic protocols for issuing and presenting credentials
inside TLS, and argued in favor of running them instead over HTTP.
I believe running such protocols inside TLS removes several obstacles
that have hindered the deployment of cryptographic credentials. So in
this post I will try to answer those comments.

Here are three advantages of running issuance and presentation
protocols inside TLS over running them outside TLS:

  1. TLS is ubiquitous. It is implemented by all browsers and all
    server middleware. If issuance and presentation protocols were
    implemented inside TLS, then users could use cryptographic credentials
    without having to install any applications or browser plugins, and
    developers of RPs and IdPs would not have to install and learn
    additional SDKs.
  2. The PRF facility of TLS is very useful for implementing cryptographic
    protocols. For example, in the U-Prove presentation protocol
    [1],
    when U-Prove is used for user authentication, the verifier must send a
    nonce to the prover; if the protocol were run inside TLS, that step
    could be avoided because the nonce could be independently generated by
    the prover and the verifier using the PRF. The PRF can also be used
    to provide common pseudo-random material for protocols based on the
    common reference string (CRS) model
    [2].
    (Older cryptosystems such as
    U-Prove
    [1]
    and Idemix
    [3]
    rely on the Fiat-Shamir heuristic
    [4]
    to
    eliminate interactions, but more recent cryptosystems based on
    Groth-Sahai proofs
    [5]
    rely instead on the CRS model, which is more
    secure in some sense
    [6].)
  3. Inside TLS, an interactive cryptographic protocol can be run in a
    separate TLS layer, allowing the underlying TLS record layer to
    interleave protocol messages with application data (and possibly with
    messages of other protocol runs), thus mitigating the latency impact
    of protocol interactions.
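To make the second point concrete, here is the TLS 1.2 PRF expansion function (P_SHA256 from RFC 5246), which both endpoints of a TLS connection can compute from shared connection secrets; the label string and the use of the master secret as input are illustrative assumptions:

```python
import hmac
import hashlib

def tls12_prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 expansion from RFC 5246.  Both TLS endpoints can compute
    this from shared secrets, so a verifier's nonce need not be sent
    over the wire."""
    seed = label + seed
    out = b""
    a = seed  # A(0)
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

# Prover and verifier derive the same 32-byte nonce independently.
shared_secret = b"(master secret of the TLS session)"
nonce_prover = tls12_prf(shared_secret, b"u-prove nonce", b"", 32)
nonce_verifier = tls12_prf(shared_secret, b"u-prove nonce", b"", 32)
assert nonce_prover == nonce_verifier
```

Because the nonce is bound to the connection secrets, it is fresh for every session, which is exactly what the verifier needs from a nonce.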

And here are two advantages of running protocols either inside or directly
on top of TLS, over running them on top of HTTP:

  1. Simplicity.
    Running a protocol over HTTP would require specifying how protocol messages
    are encapsulated inside HTTP requests and responses, i.e. it would require
    defining an HTTP-level protocol.
  2. Performance.
    Running a protocol over HTTP would add the overhead of sending HTTP
    headers, and, possibly, of establishing different TLS connections for
    different HTTP messages if TLS connections cannot be kept alive for
    some reason.

As always, comments are welcome.

References

[1] Christian Paquin. U-Prove Cryptographic Specification V1.1, Draft
Revision 1, February 2011.
Downloadable from
http://www.microsoft.com/u-prove.

 
[2] M. Blum, P. Feldman and S. Micali. Non-Interactive Zero-Knowledge
and Its Applications (Extended Abstract). In Proceedings of the
Twentieth Annual ACM Symposium on Theory of Computing (STOC 1988).

 
[3] Jan Camenisch et al. Specification of the Identity Mixer
Cryptographic Library, Version 2.3.1. December 2010. Available at
http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.

 
[4] A. Fiat and A. Shamir. How to Prove Yourself: Practical Solutions
to Identification and Signature Problems. In Proceedings on Advances
in Cryptology (CRYPTO 86), Springer-Verlag.

 
[5] J. Groth and A. Sahai. Efficient Non-Interactive Proof Systems
for Bilinear Groups. In Theory and Applications of Cryptographic
Techniques (EUROCRYPT 08), Springer-Verlag.

 
[6] R. Canetti, O. Goldreich and S. Halevi. The Random Oracle
Methodology, Revisited. Journal of the ACM, vol. 51, no. 4, 2004.

 

Deployment and Usability of Cryptographic Credentials


This is the fourth and last of a series of posts on the prospects for
using privacy-enhancing technologies in the NSTIC Identity Ecosystem.

Experience has shown that it is difficult to deploy cryptographic
credentials on the Web and have them adopted by users, relying parties
and credential issuers. This is true for privacy-friendly credentials
as well as for ordinary public-key certificates, both of which have a
place in the NSTIC Identity Ecosystem, as I argued in the previous
post.

I believe that this difficulty can be overcome by putting the browser
in charge of managing and presenting credentials, by supporting
cryptographic credentials in the core Web protocols, viz. HTTP and
TLS, and by providing a simple and automated process for issuing
credentials.

Browsers should manage and present credentials

For credentials to be widely adopted, users must not be required to
install additional software, let alone proprietary software that only
runs on one operating system, such as
Windows CardSpace.
Therefore credentials must be managed and presented by the browser.

The browser should allow users to set up multiple personas or
profiles and associate particular credentials with particular
personas. Many users, for example, have a personal email address and
a business email address, which could be associated with a personal
profile and a business profile respectively. The user could declare
one profile to be the “currently active persona” in a particular
browser window or tab, and thus facilitate the selection of
appropriate credentials when visiting sites in that window or tab.

People who use multiple browsers on multiple computing devices
(including desktop or laptop computers, smart phones and tablets) must
have access to the same credentials on all those devices. Credentials
can be synced between browsers through a Web service, without the
service having to be trusted, by equipping each browser with a key pair for
encryption and a key pair for signature (in the same way as email can
be sent with end-to-end confidentiality and origin authentication
using S/MIME or PGP). Credentials can be backed up to an untrusted
Web service similarly.

Cryptographic credentials should be supported by HTTP and TLS

HTTP should provide a way for the relying party to ask for particular
credentials or attributes, and TLS should provide a way for the
browser to present one or multiple credentials. Within TLS, the
mechanism for presenting credentials should be separate from and
subsequent to, the handshake, to benefit from the confidentiality and
integrity offered by the TLS connection after it has been secured.

Credentials should be issued automatically to the browser, through TLS

Privacy-friendly credentials have cryptographically complex
interactive issuance protocols. Paradoxically, this suggests a way of
simplifying the issuance process, for both PKI certificates and
privacy-friendly credentials.

Since the process is interactive, it should be run directly on a
transport layer connection, to avoid HTTP and application overhead.
That connection should be secure to protect the confidentiality of the
attributes being certified. To reduce the latency due to the
cryptographic computations, the protocol interactions should be
interleaved with the transmission of other data. And the
cryptographic similarity of issuance and presentation protocols
suggests that they should be run over the same kind of connection.

All this leads to the idea of running issuance protocols, like
presentation protocols, directly over a TLS connection. TLS has a
record layer specification that could be extended to define two new
kinds of records, one for issuance protocol messages, the other for
presentation protocol messages. TLS would then automatically
interleave protocol interactions with transmission of other data.
(Another benefit of TLS is that its PRF facility could be readily used
to generate the common reference string used by some cryptographic
protocols.)

Since TLS is universally supported by server middleware, implementing
issuance protocols directly over TLS would allow servers to issue
credentials automatically without installing additional software. In
particular, it would make it easy for any Web site to issue a PKI
certificate as a result of the user registration process, for use in
subsequent logins.

User Experiences

Once credentials are handled by browsers and directly supported by the
core protocols of the Web, smooth and painless user experiences become
possible.

For example, a user can open a bank account online as follows. The
user accepts terms and conditions and clicks on an account creation
button. The bank asks the browser for a social security
credential and a driver’s license credential. The browser presents
the credentials to the bank after asking the user for permission. The
bank checks the user’s credit ratings and automatically creates an
account and issues a PKI certificate binding the account number to the
public key component of a new key pair generated by the browser on the
fly. On a return visit, the user clicks on a login button and the
bank asks the browser for the certificate. The user may allow the
browser to present the certificate without asking for permission each
time. Two-factor authentication can be achieved, for example, by
keeping the private key and certificate in a password-protected smart
card.

As a second example, suppose a user visits a site to sign up for
receiving coupons by email. The user accepts terms and conditions and
clicks on a sign-up button. The site asks the browser for a
verified email-address certificate (issued by an email service
provider) and a number of self-asserted attributes, such as zip code,
gender, age group, and set of shopping preferences. The browser finds
in its certificate store (or in a connected smart card) an email
address certificate and a personal-data credential associated with the
currently active persona. The personal-data credential is a
privacy-friendly credential featuring unlinkability and selective
disclosure. The browser presents simultaneously the email certificate
and the personal-data credential, disclosing only the personal-data
attributes requested by the site. The browser may or may not ask the
user for permission to present the credentials, depending on user
preferences that may be persona-dependent.
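Selective disclosure can be illustrated, in a much weaker form than the privacy-friendly credential described above, with salted hash commitments: the issuer signs commitments to all attributes, and the browser opens only the ones the site asked for. This is a minimal sketch with made-up attribute values; unlike Idemix-style credentials it provides no unlinkability, since the commitments themselves are the same at every show.

```python
import hashlib
import secrets

def commit(value):
    """Salted SHA-256 commitment to a single attribute value."""
    salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + value.encode()).digest()

# Issuer side: commit to each attribute. In a real system the issuer would
# sign the list of commitments, and that signed list would be the credential.
attrs = {"zip": "94301", "gender": "F", "age_group": "25-34",
         "email": "a@example.com"}
openings = {k: commit(v) for k, v in attrs.items()}
credential = [openings[k][1] for k in attrs]  # commitments only

# Show: the site asked only for zip and age group, so the browser opens
# just those two commitments; the other attributes stay undisclosed.
disclosed = {k: (attrs[k], openings[k][0]) for k in ("zip", "age_group")}

# Verifier side: recompute each opened commitment and check that it
# appears in the (issuer-signed) credential.
for k, (value, salt) in disclosed.items():
    assert hashlib.sha256(salt + value.encode()).digest() in credential
```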

Conclusions

In this series of posts I have argued that new privacy-enhancing
technologies should be developed to fill the gaps in currently
implemented systems and to take advantage of new techniques developed
by cryptographers over the last 10 or 15 years. I have also argued
that the NSTIC Identity Ecosystem should accommodate both
privacy-friendly credentials and ordinary PKI certificates, because
different use cases call for different kinds of credentials. Finally
I have sketched above two examples of user experiences that can be
provided if credentials are handled by browsers and directly supported
by the core protocols of the Web.

Of course this requires major changes to the Web infrastructure, including:
extensions to HTTP; a revamp of TLS to allow for the presentation of
privacy-friendly credentials, the simultaneous presentation of
multiple credentials, and the issuance of credentials; support of the
protocol changes in browsers and server middleware; and implementation
of browser facilities for managing credentials.

These changes may seem daunting. The private sector by itself could
not carry them out, especially given the current reliance of technology
companies on business models based on advertising, which benefit from
reduced user privacy. But I hope NSTIC will make them possible.

Are Privacy-Enhancing Technologies Really Needed for NSTIC?


This is the third of a series of posts on the prospects for using
privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the first two posts we’ve looked at two, or rather three,
privacy-enhancing authentication technologies: U-Prove, Idemix, and
the Idemix Java card. The credentials provided by these technologies
have some or all of the privacy features called for by NSTIC, but they
have various practical drawbacks, the most serious of which is that
they are not revocable by the credential issuer.

Given these drawbacks, it is natural to ask the question: are
privacy-friendly credentials really necessary for NSTIC? My answer
is: they are not needed in many important use cases, and they are
useful but not indispensable in other important cases; but they are
essential in cases that are key to the success of NSTIC.

Use Cases Where Privacy-Enhancing Technologies Are Not Needed

The most common use case of Web authentication is the case where a
user registers anonymously with a Web site and later logs in as a
returning user. Traditionally, the user registers a username and a
password with the site and later uses them as credentials to log in.
Today, third-party login is becoming popular as a way of mitigating
the proliferation or reuse of passwords: the user logs in with
username and password to a third-party identity provider, and is then
redirected to the Web site, which plays the role of relying party.
But there is a way of avoiding passwords altogether: the Web site can
issue a cryptographic credential to the user upon registration, which
the user can submit back to the Web site upon login. In that case
there is no third party involvement and no privacy issues. The
cryptographic credential can therefore be an ordinary PKI certificate.
No privacy-enhancing technologies are needed.

Update.

The PKI certificate binds the newly created user account to the
public key component of a key pair that the browser generates on the
fly.

Other cases where privacy-enhancing technologies are not needed are
those where a credential demonstrates that the user possesses an
attribute whose value uniquely identifies the user, and the relying
party needs to know the value of that attribute. (One example of such
an attribute is an email address.) Privacy-enhancing technologies are
not useful in such cases because a uniquely-identifying attribute
communicated to relying parties can be used to track the user no
matter what type of credential is used to communicate the attribute.

Use Cases Where Privacy-Enhancing Technologies Are Useful but not Essential

Privacy-enhancing technologies are useful but not essential when the
attributes certified by a credential do not uniquely identify the
user, and the user has a choice of credential issuers. They are
useful in such cases because they prevent the issuer from tracking the
user’s activities by sharing data with the relying parties. They are
not essential, however, because the user may be able to choose a
credential issuer that she trusts. (Most privacy-enhancing
technologies also prevent relying parties from collectively tracking
the user by sharing their login information, without involvement of
the credential issuer, but the risk of this happening may be more
remote.)

Examples of non-identifying attributes are demographic attributes
(city of residence, gender, age group), shopping interests, hobby
interests, etc.; such attributes are usually self-asserted, but they
can be supplied by an identity provider, chosen by the user, as a
matter of convenience, so that the user does not have to re-enter
them and keep them up to date at each relying party. Examples of sites
that may ask for such attributes are dating sites, shopping deal
sites, hobbyist sites, etc.

Of course, a credential that contains non-identifying attributes will
not by itself allow a user to log in to a site. But it can be used in
addition to a PKI certificate issued by the site itself to recognize
repeat visitors.

Use Cases Where Privacy-Enhancing Technologies Are Necessary

Privacy-enhancing technologies are necessary when the relying party
does not require uniquely identifying information, and there is only
one credential issuer. That one credential issuer could be the
government. Non-uniquely-identifying information provided by
government-issued credentials could include assertions that the user
is old enough to buy wine, or is a resident of a particular state, or
is licensed to practice some profession in some state, or is a US
citizen, or has the right to work in the US.

I find it hard to come up with examples where people would have a
reasonable fear of being tracked through their use of
government-issued credentials. But the right to privacy is a human
right that is held dear in the United States, and has been found to be
implicitly protected by the US constitution. Government-issued
credentials will only be acceptable if they incorporate all available
privacy protections. That makes the use of privacy-enhancing
technologies essential to the success of NSTIC.

Wanted: Efficiently-Revocable Privacy-Friendly Credentials

So: privacy-friendly credentials are necessary; but, in my opinion,
the drawbacks of existing privacy-enhancing technologies make them
impractical. Therefore we need new privacy-enhancing technologies.
Those new technologies should have issue-show and multi-show
unlinkability; they should provide partial information disclosure,
including proofs of inequality relations involving numeric attributes;
and they should be efficiently revocable.

Fortunately, that’s not too much to ask. U-Prove and Idemix have been
pioneering technologies, but they are now dated. U-Prove is based on
research carried out in the mid-nineties, and the core cryptographic
scheme later used in Idemix was described in a paper written in 2001.
A lot of research has been done in cryptography since then, and
several new cryptographic schemes have been proposed that could be
used to provide privacy-friendly credentials.

I don’t think a scheme meeting all the requirements, including
efficient revocation, has been designed yet. (I would love to be
corrected if I’m wrong!) But possible ingredients for such a system
have been proposed, including methods for proving non-revocation in time
proportional to the square root of the number of revoked credentials
[1]
or even in practically constant time
[2].

Update.

Stefan Brands has told me that the cryptosystem described in [1] is considered part of the U-Prove technology, and that the revocation technique of [1] could be integrated into the existing U-Prove implementation to provide issuer-driven revocation. If that were done and the resulting system proved to be suitably efficient, the only ingredient missing from that system would be multi-show unlinkability.

Once a scheme with all the ingredients has been designed and
mathematically verified, it still needs to be implemented.
Cryptographic implementations are few and far between, but that does
not mean that they are difficult. Recently, for example, three
different systems of privacy-friendly credentials were implemented
just for the purpose of comparing their performance
[3].

Next and Last: Usability and Deployment

To conclude the series, in the next post I’ll try to respond to a
comment made by Anthony Nadalin on the
Identity Commons mailing list:
“if it’s not useable or deployable who cares?”.

References

[1] Stefan Brands, Liesje Demuynck and Bart De Decker.
A practical system for globally revoking the unlinkable pseudonyms of unknown users.
In Proceedings of the 12th Australasian Conference on Information Security and Privacy, ACISP’07.
Springer-Verlag, 2007.
ISBN 978-3-540-73457-4.
Preconference technical report available at http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW472.pdf.

 
[2] T. Nakanishi, H. Fujii, Y. Hira and N. Funabiki.
Revocable Group Signature Schemes with Constant Costs for Signing and Verifying.
In IEICE Transactions,
volume 93-A,
number 1,
pages 50-62,
2010.

 
[3] J. Lapon, M. Kohlweiss, B. De Decker and V. Naessens.
Performance Analysis of Accumulator-Based Revocation Mechanisms.
In Proceedings of the 25th International Conference on Information Security (SEC 2010).
Springer, 2010.

 


Pros and Cons of Idemix for NSTIC


This is the second of a series of posts on the prospects for using
privacy-enhancing technologies in the NSTIC Identity Ecosystem.

In the previous post I discussed the pros and cons of U-Prove, so
naturally I should now discuss the pros and cons of Idemix,
the other privacy-enhancing technology thought to have inspired
NSTIC.
This post, like the previous one, is based on a review of the public
literature. If I’ve missed or misinterpreted something, please let me
know in a comment.

By the way, a link to the previous post that I posted to the Identity
Commons mailing list triggered a wide-ranging discussion on NSTIC
and privacy, which can be found in the mailing list archives.

Idemix is an open-source library implemented in Java. It is described
in the Idemix Cryptographic Specification
[1], and the academic paper
[2]. It is mostly based on the cryptographic techniques of
[3]. Curiously, although Idemix is provided by IBM,
the main Idemix site is located at idemix.wordpress.com and states
that it is not an official IBM site.

There is also a smart card that implements a “light-weight variant of
Idemix”. I discuss it at the end of this post.

Feature Coverage

Idemix provides all three privacy features alluded to in NSTIC
documents
[4]
[5]
and discussed in the previous post:

  1. Issuance-show unlinkability,
  2. Multi-show unlinkability, and
  3. Partial information disclosure.

The third feature includes both selective disclosure of attributes and
the ability to prove inequalities such as the value of a birthdate
attribute being less than today’s date minus 21 years without
disclosing that birthdate attribute value.

Idemix also includes other features, such as the ability to prove that
two attributes have the same value without disclosing that value, and
the ability to prove that a certain attribute certified by the issuer
has been encrypted under the public key of a third party, which may
decrypt it under some circumstances. It could be argued that Idemix
is over-engineered for the purpose of Web authentication, including
features that add complexity but are not useful for that purpose.

Performance

The richer feature set of Idemix may come at a cost in terms of
performance. From the data in Table 1 of
[2], it follows that it
would take about 12 seconds for the user to submit a credential with
one attribute to a relying party that checks for expiration, and about
28 seconds to submit a credential with 20 attributes. The paper dates
back to 2002, and the processor used was a relatively slow 1.1GHz
Pentium III. (The authors say 1.1MHz but I assume they mean 1.1GHz.)
But on the other hand the modulus size was 1024 bits, and Idemix
currently uses a 2048-bit modulus
[2]. The paper also promises
optimizations that have no doubt been implemented by now.
Unfortunately, I haven’t been able to find performance data in the
Idemix site. A search for the word performance restricted to the
site produces no results. If you know of any recent performance data,
please let me know in a comment.

Revocation

We saw in the previous post that unlinkability makes revocation
difficult. U-Prove credentials can be revoked by users because they
do not have multi-show unlinkability, but cannot be revoked by issuers
because they have issue-show unlinkability. Idemix credentials, which
have both multi-show unlinkability and issue-show unlinkability, are
revocable neither by users nor by issuers. I am not saying that
unlinkability makes revocation impossible. Cryptographic
techniques have been devised to allow revocation of unlinkable
credentials, which I will discuss later in this series of posts. But
those techniques are not used by U-Prove or Idemix.

Idemix has a credential update feature that can be used to extend the
validity period of a credential that has expired. This facilitates
the use of short-term credentials that may not need to be revoked.
But the Idemix Cryptographic Specification
[1] should not claim as it does that the
credential-update feature can be used to implement credential
revocation. Waiting for a credential to expire is not the same as
revoking it. Short term credentials are an alternative to
revocation. And, as an alternative, they have serious drawbacks: they
are costly to implement for the issuer; they impose a logistic burden
on the user agent; they may become unavailable if the issuer is down
when the validity period needs to be extended; and the user agent may
be overwhelmed by the need to renew many credentials at once if it has not
been operational for an extended period of time. If a short-term
credential is renewed on demand, just before it is used, renewal and
use of the credential may be linkable by timing correlation.

The Idemix Java Card

The Idemix Java Card was intended as a smart identity card. Its
implementation on the Java Card Open Platform (JCOP) is described in
[6].

The cryptographic system in an Idemix card is described in the Idemix
site as a “light-weight variant of Identity Mixer” (i.e. Idemix). But
it is very different from the original Idemix system. According to
[6], an implementation of the original system in a
Java card would be impractical because credential submission could
take 70 to 100 seconds. To make it less impractical, the issuer of a
credential to a Java card certifies only that it trusts the Java card.
The card is then free to present any attributes it wants to the
relying party. (A different way of handling attributes is possible
but not recommended, presumably because of the time it takes; see
Footnote 5 of
[6].) Security for the relying party depends on the
issuer downloading the correct attributes and software to the card,
and the user not being able to modify those attributes and software.
The card must therefore be tamper resistant against the user. (Or at
least tamper responsive, i.e. able to detect tampering and respond by
zeroing out storage.)

Whereas a U-Prove smart card performs only a small portion of the
cryptographic computations, the Idemix Java card is an autonomous
system that performs all the cryptographic computations by itself, without help
from the user’s computer. This takes time: 10.453 seconds for a
transaction, i.e. for submitting a credential to a relying party,
according to Table 2 of
[6]; or 11.665 seconds according to Table 3. (In
both cases, with a 1536-bit modulus, and not counting a 1.474 second
revocation check; revocation is discussed below; the discrepancy
between the two figures is not explained.) Some of the computations
in Table 2 are labeled as precomputations, but no precomputations can
take place if the card is not plugged in. The authors
of [6] consider that a 10 second transaction time
would be adequate. But I don’t think many Web users will be happy
waiting 10 seconds each time they want to log in to a site.

Update.

It is possible to implement full-blown privacy-friendly credential systems very efficiently on a smart card. A non-Microsoft implementation of U-Prove on a MULTOS smart card [8], where all the cryptographic computations are carried out by the card, achieves credential-show times close to 0.3 seconds.

The Idemix card features a revocation mechanism. A card can be
revoked by including its secret key in a revocation list. But the
secret key is generated in the card when the card is “set up”, and it is not known to the party that sets up the card,
nor to the issuers of credentials to the card, nor to the user who
owns the card. The secret key can only be obtained by breaching the
tamper protection of the card, hence can only become known to an
adversary. So the revocation feature seems useless.

Where does the peculiar idea of listing secret keys in a revocation
list come from? It turns out that the cryptographic system of the
Idemix card is derived from the cryptographic system of
[7] which was designed for media copyright
protection, e.g. to authenticate the Trusted Platform Module (TPM) in
a DVD player before downloading a protected movie to the player. Apparently
hackers extract secret keys of TPMs and publish them on the Web.
Copyright owners find those secret keys and blacklist them. Blacklisting
secret keys makes sense for copyright protection, but not as a
revocation technique for smart cards.

Coming Next…

After reading this post and the previous post, you may be wondering
whether privacy-enhancing technologies are really a good idea. I will
try to answer that question in the next post.

References

[1] IBM Research, Zurich.
Specification of the Identity Mixer Cryptographic Library Version 2.3.1.
December 7, 2010.
Available at http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.

 
[2] Jan Camenisch and Els Van Herreweghen.
Design and Implementation of the Idemix Anonymous Credential System.
In Proceedings of the 9th ACM conference on Computer and Communications Security.
2002.

 
[3] J. Camenisch and A. Lysyanskaya.
Efficient Non-Transferable Anonymous Multi-Show Credential System with Optional Anonymity Revocation.
In Theory and Application of Cryptographic Techniques, EUROCRYPT,
2001.

 
[4] The White House.
National Strategy for Trusted Identities in Cyberspace.
April 2011.
Available at http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.

 
[5] Howard A. Schmidt.
The National Strategy for Trusted Identities in Cyberspace and Your Privacy.
April 26, 2011.
White House blog post, available at http://www.whitehouse.gov/blog/2011/04/26/national-strategy-trusted-identities-cyberspace-and-your-privacy.

 
[6] P. Bichsel, J. Camenisch, T. Groß and V. Shoup.
Anonymous Credentials on a Standard Java Card.
In ACM Conference on Computer and Communications Security,
2009.

 
[7] E. Brickell, J. Camenisch and L. Chen.
Direct anonymous attestation.
In Proceedings of the 11th ACM conference on Computer and Communications Security,
2004.

 
[8] Update.
W. Mostowski and P. Vullers.
Efficient U-Prove Implementation for Anonymous Credentials on Smart Cards.
Available at http://www.cs.ru.nl/~pim/publications/2011_securecomm.pdf.

 

Pros and Cons of U-Prove for NSTIC


This is the first of a series of posts on the prospects for using
privacy-enhancing technologies in the NSTIC Identity Ecosystem.

NSTIC calls for the use of
privacy-friendly credentials, and NSTIC documents
[1]
[2] refer to the existence of privacy-enhancing
technologies that can be used to implement such credentials. Although
those technologies are not named, they are widely understood to be
U-Prove and Idemix.

There is confusion regarding the capabilities of privacy-enhancing
technologies and the contributions that they can make to NSTIC. For
example, I sometimes hear the opinion that “U-Prove has been
oversold”, but without technical arguments to back it up. To help
clear some of the confusion, I’m starting a series of posts on the
prospects for using privacy-enhancing technologies in the NSTIC
Identity Ecosystem. This first post is on the pros and cons of
U-Prove, the second one will be on the pros and cons of Idemix, and
there will probably be two more after that.

U-Prove is described in the U-Prove Cryptographic Specification V1.1
[3] and the U-Prove Technology Overview
[4]. It is based on cryptographic techniques
described in Stefan Brands’s book
[5].

Privacy Feature Coverage

Three features of privacy-friendly credentials are informally
described in NSTIC documents:

  1. Issuance of a credential cannot be linked to a use, or “show,” of
    the credential even if the issuer and the relying party share
    information, except as permitted by the attributes certified by the
    issuer and shown to the relying party.
  2. Two shows of the same credential to the same or different relying
    parties cannot be linked together, even if the relying parties share
    information.
  3. The user agent can disclose partial information about the
    attributes asserted by a credential. For example, it can prove that
    the user is over 21 years of age based on a birthdate attribute,
    without disclosing the birthdate itself.

Here I will not discuss how desirable these features are; I
leave that for a later post in the series. In this post I will only
discuss the extent to which U-Prove provides these features.

U-Prove provides the first feature, which is
called untraceability in the U-Prove Technology Overview
[4]. A U-Prove credential consists of a private key,
a public key, a set of attributes, and a signature by the credential
issuer. The signature is jointly computed by the issuer and the user
agent in the course of a three-turn interactive protocol,
the issuance protocol, where the issuer sees the attributes but
not the public key nor the signature itself. Therefore the issue of a
credential can be linked to a show of the credential only on the basis
of the attribute information disclosed during the show.
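The joint-signature issuance protocol itself is beyond the scope of this post, but the basic idea of an issuer certifying something it never sees can be illustrated with a Chaum-style blind RSA signature. To be clear, this is not U-Prove's actual issuance protocol, and the toy parameters and message are illustrative assumptions only.

```python
import hashlib
import secrets

# Toy RSA issuer key (textbook parameters; real keys would be >= 2048 bits).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# The value to be certified (e.g. a hash of the credential public key).
m = int.from_bytes(hashlib.sha256(b"credential public key").digest(), "big") % n

# User picks a blinding factor invertible modulo n.
while True:
    r = secrets.randbelow(n - 2) + 2
    if r % p and r % q:
        break

blinded = (m * pow(r, e, n)) % n          # user blinds the message
blinded_sig = pow(blinded, d, n)          # issuer signs without seeing m
sig = (blinded_sig * pow(r, -1, n)) % n   # user unblinds the signature

# The result is a valid signature on m that the issuer never saw,
# so it cannot link issuance to a later show of the signature.
assert pow(sig, e, n) == m
```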

U-Prove, on the other hand, does not provide the second
feature, because all relying parties see the same public key and the
same signature. Stefan Brands acknowledges this in Section 2.2 of
[6], where he compares the system of his book
[5] to the system of Camenisch and Lysyanskaya,
i.e. U-Prove to Idemix, acknowledging that the latter provides
multi-show unlinkability but the former does not.

Unfortunately, the U-Prove Technology Overview
[4] is less candid. It does discuss the fact that
multiple shows of the same U-Prove credential (U-Prove token) are
linkable, in Section 4.2, but the section is misleadingly
entitled Unlinkability. It starts as follows:

Similarly, the use of a U-Prove token cannot inherently
be correlated to uses by the same Prover of other U-Prove tokens, even
if the Issuer identified the Prover and issued all of the Prover’s
U-Prove tokens at the same time.

This is saying that different U-Prove tokens are not linkable!
Which is a vacuous feature: why would different tokens be
linkable? The section goes on to argue that the issuer should
issue many tokens to the user agent (the Prover) with the same
attributes, one for each relying party (each Verifier). On the Web,
this is utterly impractical. There are millions of possible relying
parties: how many tokens should be issued? How can all those tokens
be stored on a smart card? What if the user agent runs out of tokens?
And how does the user agent know if two parties are different or the
same? (Is example.com the same relying party
as xyz.example.com?)

Update.
Stefan Brands has pointed out that a U-Prove token does not take up any storage in a smart card if the private key splitting technique featured by U-Prove (which I refer to below) is used.

As for the third feature of privacy-friendly credentials, partial
information disclosure, U-Prove provides it to a certain extent. When
showing a credential, the user agent can disclose only some of the
attributes in the credential, proving to the relying party that those
attributes were certified by the credential issuer without disclosing
the other attributes. However, U-Prove does not support the “age over
21” example found in several NSTIC documents. That would require the
ability to prove that a value is contained in an interval without
disclosing the value. Appendix 1 of the U-Prove Technology Overview
[4] lists the ability to perform such a proof as one
of the “U-Prove features” that have not been included in Version 1.1,
suggesting that it could be included in a future version. In Section
3.7 of his book
[5], Stefan Brands does suggest a method for proving
that a secret is contained in an interval. However, he dismisses it
as involving “a serious amount of overhead”, because it requires
executing many auxiliary proofs of knowledge. (I believe that proving
“age over 21” would require at least 30 auxiliary proofs, which is
clearly impractical.)
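For a rough sense of where such an estimate comes from: a bit-decomposition interval proof needs on the order of one auxiliary proof per bit of the secret. The YYYYMMDD integer encoding of a birthdate below is a hypothetical assumption, used only to get an order of magnitude.

```python
# Back-of-the-envelope count of auxiliary proofs for proving "age over 21"
# by bit decomposition, assuming (hypothetically) that the birthdate is
# encoded as the integer YYYYMMDD.
cutoff = 19900615              # e.g. "today minus 21 years" around 2011
bits = cutoff.bit_length()     # roughly one auxiliary proof per bit
print(bits)                    # 25, plus a few more for the comparison itself
```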

An interesting feature of U-Prove is the ability to split the private
key of a credential between the user agent and a device such as a
smart card. The device must then be present for the credential to be
usable, thus providing two-factor authentication; but the device only
has to perform a limited amount of cryptographic computations, most of
the cryptographic computations being carried out by the user agent.
This makes it possible to use slower, and hence cheaper, devices than
if all the cryptographic computations were carried out by the device
(as is the case, for example, in an Idemix smart card).

Update.

A non-Microsoft implementation of U-Prove on a MULTOS smart card, where all the cryptographic computations are carried out by the card with impressive performance (close to 0.3 seconds in some cases), can be found in [8].

Revocation

The ability to revoke credentials is usually taken for granted. In
the case of privacy-friendly credentials, however, it is difficult to
achieve. An ordinary CRL (Certificate Revocation List) cannot be
used, since it would require some kind of credential identifier known
to both the issuer and the relying parties, which would defeat
unlinkability.

U-Prove credentials have a Token Identifier, which is a hash of the
public key and the signature. Because U-Prove does not provide
multi-show unlinkability, the Token Identifier, like the public key
and the signature, is known to all the relying parties. The user
agent could therefore revoke the credential by including the Token
Identifier in a CRL. However, because U-Prove provides issue-show
unlinkability, the credential issuer does not know the Token
Identifier, nor the public key or the signature, and therefore cannot
use it to revoke the credential.
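User-driven revocation via the Token Identifier might look as follows in outline. The byte encodings and the choice of SHA-256 here are assumptions for illustration, not the exact hash construction of the U-Prove specification.

```python
import hashlib

def token_identifier(public_key, signature):
    """Illustrative token identifier: a hash of public key and signature."""
    return hashlib.sha256(public_key + signature).digest()

# Every relying party sees the same public key and signature, so each of
# them can compute the same identifier and check it against a CRL.
crl = set()
tid = token_identifier(b"pubkey-bytes", b"issuer-signature-bytes")
crl.add(tid)  # the *user* revokes the token by publishing its identifier

# A relying party's revocation check; the issuer, never having seen the
# public key or signature, cannot compute tid and so cannot do this.
assert token_identifier(b"pubkey-bytes", b"issuer-signature-bytes") in crl
```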

Section 5.2 of the U-Prove Technology Overview
[4] says that an identifier could be included in a
special attribute called the Token Information Field for the
purpose of revocation, and “blacklisted using the same methods that
are available for X.509 certificates”; this, however, would destroy
the only unlinkability feature of U-Prove credentials, viz.
issuance-show unlinkability (which
[4] calls untraceability).

Section 5.2 of
[4] also suggests using on-demand credentials.
However that does not seem practical: the user agent would have to
authenticate somehow to the issuer, then conduct a three-turn
interactive issuance protocol with the issuer to obtain the token,
then conduct a presentation protocol with the relying party. The
latency of all these computations and interactions would be too high; and
since the issuance computations would have to be carried out each time
the credential is used, the cost for the issuer would be staggering.
Furthermore, on-demand credentials may allow linking of issuance to
show by timing correlation.

A workaround to the revocation problem is suggested in Section 6.2 of
[4], for cases where the credential is protected by a
device such as a smart card by splitting the private key between the
user agent and the device. In those cases the Issuer could revoke the
device rather than a particular credential protected by the device, by
adding an identifier of the device to a revocation list. However this
would require downloading the revocation list to the device in a
Device Message when the credential is used, so that the device
can check if its own identifier is in the list. Since a revocation
list can have hundreds of thousands of entries (e.g. the state of West
Virginia revokes about 90,000 driver licenses per year
[7]), downloading it to a smart card each time the
smart card is used is not a viable option.
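A back-of-the-envelope calculation shows why. Only the 90,000-per-year revocation count comes from [7]; the 20-byte entry size and the ~9.6 kbit/s contact-card interface speed are assumed figures for illustration.

```python
# Rough size and transfer-time estimate for downloading a full device
# revocation list to a smart card at credential-use time.
entries     = 90_000                     # WV revocations per year, from [7]
entry_bytes = 20                         # assumed identifier size
size_bytes  = entries * entry_bytes      # 1,800,000 bytes, i.e. ~1.8 MB
seconds     = size_bytes / (9_600 / 8)   # ~1500 s: about 25 minutes per use
print(size_bytes, seconds)
```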

Update.

Stefan Brands has pointed out that only a revocation list increment needs to be downloaded to the card.

Appendix 1 of
[4] includes an “issuer-driven revocation” feature in
a list of U-Prove features not yet implemented:

An Issuer can revoke one or more tokens of a known Prover by
blacklisting a unique attribute encoded in these tokens (even if it is
never disclosed at token use time).

How this can be achieved is not explained in the
appendix, nor in Brands’s book
[5]. However it is explained in
[6],
where Brands proposes a new system and compares it to his previous
system of
[5], i.e. to U-Prove. He says that
[5] “allows an issuer to invisibly encode into all of
a user’s digital credentials a unique number that the issuer can
blacklist in order to revoke that user’s credentials” adding that the
blacklist technique “consists of repeating a NOT-proof for each
blacklist element”. In other words, the idea is to prove a formula
stating that the unique number is NOT equal to the first element of
the blacklist, and NOT equal to the second element, and NOT equal to
the third element, etc., without revealing the unique number. That
can be done but, as Brands further says in
[6], it is “not practical for large blacklists”.
Indeed, based on Section 3.6 of
[5], proving a formula with multiple negated
subformulas requires proving separately each negated subformula. So
if the blacklist has 100,000 elements, 100,000 proofs would have to be
performed each time a credential is used.
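A quick order-of-magnitude estimate makes the impracticality concrete; the one-millisecond figure per NOT-proof is a hypothetical assumption (each NOT-proof costs at least a few modular exponentiations).

```python
# Order-of-magnitude prover cost of blacklisting by repeated NOT-proofs.
blacklist_size   = 100_000   # blacklist elements, as in the text's example
ms_per_not_proof = 1.0       # assumed prover time per NOT-proof
total_seconds    = blacklist_size * ms_per_not_proof / 1000
print(total_seconds)         # 100.0 seconds of proving per credential show
```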

By the way, the system of
[6] is substantially different from U-Prove
and not well suited for use on the Web, since it requires the set of
relying parties to be known at system-setup time.

Update.

Stefan Brands has told me that the cryptosystem described in [6] is considered part of the U-Prove technology, and that the revocation technique of [6] could be integrated into the existing U-Prove implementation to provide issuer-driven revocation.

Conclusions

I will save my conclusions for the last post in the series, but of
course any comments are welcome now.

References

[1] The White House.
National Strategy for Trusted Identities in Cyberspace.
April 2011.
Available at http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.

 
[2] Howard A. Schmidt.
The National Strategy for Trusted Identities in Cyberspace and Your Privacy.
April 26, 2011.
White House blog post, available at http://www.whitehouse.gov/blog/2011/04/26/national-strategy-trusted-identities-cyberspace-and-your-privacy.

 
[3] Christian Paquin.
U-Prove Cryptographic Specification V1.1 Draft Revision 1.
February 2011.
No http URL seems to be available for this document, but it can be
downloaded from the Specifications and Documentation page,
which itself is available at http://www.microsoft.com/u-prove.

 
[4] Christian Paquin.
U-Prove Technology Overview V1.1 Draft Revision 1.
February 2011.
No http URL seems to be available for this document, but it can be
downloaded from the Specifications and Documentation page,
which itself is available
at http://www.microsoft.com/u-prove.

 
[5] Stefan Brands.
Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy.
MIT Press, Cambridge, MA, USA, 2000.
ISBN 0262024918.
Available for free download at http://www.credentica.com/the_mit_pressbook.php

 
[6] Stefan Brands, Liesje Demuynck and Bart De Decker.
A practical system for globally revoking the unlinkable pseudonyms of unknown users.
In Proceedings of the 12th Australasian Conference on Information Security and Privacy, ACISP’07.
Springer-Verlag, 2007.
ISBN 978-3-540-73457-4.
Preconference technical report available at http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW472.pdf.

 
[7] West Virginia Department of Transportation, Division of Motor Vehicles.
Annual Report 2010.
Available at http://www.transportation.wv.gov/business-manager/Finance/Financial%20Reports/DMV_AR_2010.pdf.

 
[8] Update.
W. Mostowski and P. Vullers.
Efficient U-Prove Implementation for Anonymous Credentials on Smart Cards.
Available at http://www.cs.ru.nl/~pim/publications/2011_securecomm.pdf.