Comments on the Recommended Use of Biometrics in the New Digital Identity Guidelines, NIST SP 800-63-3

NIST is working on the third revision of SP 800-63, formerly called the Electronic Authentication Guideline and now renamed the Digital Identity Guidelines. An important change in the current draft of the third revision is a much expanded scope for biometrics. The following are comments by Pomcor on that aspect of the new guidelines, and more specifically on Section 5.2.3 of Part B; we have sent them to NIST in response to a call for public comments.

The draft is right to recommend the use of presentation attack detection (PAD). We think it should go further and make PAD a mandatory requirement right away, rather than deferring the requirement to a future edition as a note in the draft suggests.

But the draft only considers PAD performed at the sensor.

Remote Identity Proofing Discussed at the Internet Identity Workshop

This is Part 3 of a series of posts presenting results of a project sponsored by an SBIR Phase I grant from the US Department of Homeland Security. These posts do not necessarily reflect the position or the policy of the US Government.

To get community feedback on our remote identity proofing project we made a presentation two days ago at the 23rd Internet Identity Workshop in Mountain View. The slides can be found here. We were gratified that the feedback was positive and there were in-depth discussions with identity experts both during and after the presentation.

We started by explaining the goal of the project. Remote identity proofing has often relied on asking the subject multiple-choice “knowledge questions” (e.g. which of the following zip codes did you live in five years ago?). This method is terrible for privacy, since it relies on the identity proofing service gathering and using troves of personal information about people. Furthermore, due to the proliferation of personal data available online, it has now become ineffective.

Revocable Biometrics Discussion at the Internet Identity Workshop

One thing I like about the Internet Identity Workshop (IIW) is its unconference format, which allows for impromptu sessions. A discussion during one session can raise an issue that deserves its own session, and an impromptu session can be called the same day or the following day to discuss it. A good example of this happened at the last IIW (IIW XXII), which was held on April 26-28, 2016 at the Computer History Museum in Mountain View, California.

During the second day of the workshop, a participant in a session drew attention to one of the dangers of using biometrics for authentication, viz. the fact that biometrics are not revocable. This is true in the sense that you cannot change the biometric features of your body at will, and it is a strong reason for using biometrics sparingly; but I pointed out that there is something called “revocable biometrics”.

Biometrics and Derived Credentials

This is Part 4 of a series discussing the public comments on Draft NIST SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials and the final version of the publication. Links to all the posts in the series can be found here.

As reviewed in Part 3, a PIV card carries two fingerprint templates for off-card comparison, and may also carry one or two additional fingerprint templates for on-card comparison, one or two iris images, and an electronic facial image. These biometrics may be used in a variety of ways, by themselves or in combination with cryptographic credentials, for authentication to a Physical Access Control System (PACS) or a local workstation. The fingerprint templates for on-card comparison can also be used to activate private keys used for authentication, email signing, and email decryption.
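As a rough illustration of these data elements, the biometric payload of a PIV card could be modeled as follows. This is only a sketch for orientation, not the actual PIV data model, and the field names are made up.

```python
# Illustrative sketch of the biometric data elements a PIV card may carry,
# as summarized above. Not the actual PIV data model; names are made up.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PivBiometrics:
    # Two fingerprint templates for off-card comparison.
    off_card_fingerprint_templates: tuple[bytes, bytes]
    # Optionally, one or two additional templates for on-card comparison,
    # one or two iris images, and an electronic facial image.
    on_card_fingerprint_templates: list[bytes] = field(default_factory=list)
    iris_images: list[bytes] = field(default_factory=list)
    facial_image: Optional[bytes] = None
```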

By contrast, neither the draft version nor the final version of SP 800-157 considers the use of any biometrics analogous to those carried in a PIV card for activation or authentication. Actually, they “implicitly forbid” the storage of such biometrics by the Derived PIV Application that manages the Derived PIV Credential, according to NIST’s response to comment 30 by Precise Biometrics.

But several comments requested or suggested the use of biometrics by the Derived PIV Application. In this post I review those comments, and other comments expressing concern for biometric privacy. Then I draw attention to privacy-preserving biometric techniques that should be considered for possible use in activating derived credentials.

Biometrics in PIV Cards

This is Part 3 of a series discussing the public comments on Draft NIST SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials and the final version of the publication. Links to all the posts in the series can be found here.

Following Part 1 and Part 2, I intended to devote this Part 3 to the comments received by NIST regarding possible uses of biometrics in connection with derived credentials. But that requires explaining how biometrics are used in PIV cards, and as I delved into the details I realized that the topic deserves a blog post of its own, one that may be of independent interest. So in this post I will begin by reviewing the security and privacy issues raised by the use of biometrics, and then recap the biometrics carried in a PIV card and how they are used.

Biometric security

When used for user authentication, biometrics are sometimes characterized as “something you are”, while a password or PIN is “something you know” and a private key stored in a smart card or computing device is “something you have”, “you” being the cardholder. However, this characterization is only accurate when a biometric sample is known to come from the cardholder or device user, which in practice requires the sample to be taken by, or at least in the presence of, a human attendant. The ease with which the fingerprint sensors in Apple’s iPhone (as demonstrated in this video) and Samsung’s Galaxy S5 (as demonstrated in this video) were duped with spoofed fingerprints shows how difficult it is to verify that a biometric sample is live.

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove tokens, Idemix anonymous credentials, or credentials based on group signatures) is that they are not supported by TLS. We noticed this when we looked at privacy-enhancing credentials in the context of NSTIC, and we proposed an architecture for the NSTIC ecosystem that included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over satellite links due to the additional roundtrips and the transmission of certificate chains during the handshake. Client and attribute certificates, when used, are sent in the clear. And there has been a long list of TLS vulnerabilities, some of which have not been addressed, while others are addressed in TLS versions and extensions that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites supported TLS 1.1, which dates back to April 2006, only 20.7% supported TLS 1.2, which dates back to August 2008, and only 30.6% had server-side protection against the BEAST attack, which requires either TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due to the age of the protocol and the large number of versions and extensions that it has accumulated during its long life. Changing the configuration of a TLS implementation to protect against vulnerabilities without shutting out a large portion of the user base is a complex task that IT personnel are no doubt loath to tackle.
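To make the configuration burden concrete, here is a minimal sketch of the kind of version restriction an administrator has to decide on without knowing how many clients it would shut out. It uses the present-day Python ssl module purely as an illustration, and the certificate file names are hypothetical.

```python
# Sketch: restricting a server to TLS 1.2 or later with Python's ssl module.
# Shown only to illustrate the trade-off discussed above; whether a site can
# afford this restriction depends on how many of its clients would be shut out.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2      # refuse TLS 1.0/1.1 clients
context.load_cert_chain("server.crt", "server.key")   # hypothetical file names
```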

So perhaps it is time to start from scratch, designing a new transport layer security protocol — actually, two of them, one for connections and the other for datagrams — that will incorporate the lessons learned from TLS — and DTLS — while discarding the heavy baggage of old code and backward compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits that a redesign of transport layer security would bring to the military, which has particular reasons to want better transport layer security protocols. The military should be interested in better performance over satellite and radio links, for obvious reasons. It should be interested in increased security, because so much is at stake in the security of military networks. And I would argue that it should also be interested in increased privacy, because what is viewed as privacy on the Internet may be viewed as resistance to traffic analysis in military networks.

Surveillance and Internet Identity

Last week I attended IIW 17, the 17th meeting of the Internet Identity Workshop, which is held twice a year in Mountain View, California. As usual it was a great opportunity to exchange ideas and meet people, with its unconference format, its many sessions, its rotating demos, its wide space for discussions, and its two free dinners with free drinks.

For me, however, it was tinged with sadness, because of what has happened since the first IIW I attended, IIW 12, in May 2011. IIW 12 was the first IIW after the launch of NSTIC. IIW 17 was the first IIW after Snowden.

The NSTIC Strategy Document, released in April 2011 with a preface signed by President Obama, repeatedly emphasized the goal of enhancing privacy as a key element of the “vision” and “guiding principles” of NSTIC. The document explicitly stated that the Identity Ecosystem would use privacy-enhancing technology and policies to inhibit the ability of service providers to link an individual’s transactions, thus ensuring that no one service provider could gain a complete picture of an individual’s life in cyberspace. At the time, Facebook Connect was threatening to inject Facebook as a middleman in all or most Internet activities, and I was happy to see that the US Government seemingly wanted to prevent such a massive invasion of privacy; I even convened a session at IIW 12 proposing a technique for achieving the privacy goals of NSTIC in the short term. Little did I know that the government was busy building a massive surveillance apparatus that would give it a complete picture of an individual’s life in cyberspace, by means including bulk collection of data from service providers.

The Internet, given to the world by the US Department of Defense, was a world-wide forum for free-flowing, spontaneous exchange of ideas. Now the NSA, part of the same Department of Defense, has taken that away. People know that they are being tracked and identified when they post an anonymous comment. People know that their conversations are being recorded. Therefore people must think twice about what they say.

I don’t know if Congress will be able to rein in the NSA. It should be clear that spying on US citizens is unconstitutional, but some politicians think that it is the NSA’s job to spy on everybody else on the planet. They don’t seem to consider or care that, if the US Government insists on a God-given right to spy on everybody else, other countries or regions may develop their own national or regional networks, separated from the US Internet by an air gap.

Fortunately, the technical community has reacted strongly against the NSA’s attacks on Internet privacy. And thanks to Snowden’s revelations, many of the attack techniques are known. It may therefore be possible to protect Internet privacy by technical means.

Coming back to the subject of the workshop, Internet Identity, I would argue that the first thing to do to protect Internet privacy is to get rid of the pernicious technology variously known as third-party login, social login or federated login. To be precise, I am referring to authentication techniques where the user authenticates to a third-party identity provider, which then provides identity and/or attribute information to a relying party, using a protocol such as OAuth or OpenID Connect. (These are the techniques in Group 2 of the taxonomy proposed in the paper Privacy Postures of Authentication Technologies.)

The only intrinsic advantage of federated login is that it allows the identity provider to collect vast amounts of information about the user, since the identity provider learns not only the user’s identity and/or attributes, but also what relying parties the user logs in to. The identity provider uses the information to sell ads that target the user accurately. We now know that the information is also shared with the government, which makes it available to thousands of analysts and IT personnel who use it for legal or illegal government or personal purposes.

There are no other intrinsic advantages to federated login.

The government and the identity providers argue that federated login is more secure than direct authentication to the relying party with username and password, but the opposite is true.

Security is supposedly increased because federated login reduces password reuse. But password reuse will not be substantially reduced unless a large majority of web sites worldwide force their users to use federated login with one of a small number of global identity providers such as Google or Facebook, something that will hopefully not come to pass.

Security is also supposedly increased because a large identity provider supposedly does a better job of protecting the user’s password. But I see no reason why a large identity provider would provide better protection against hackers, since large companies are not known for great security. And I do know that a password entrusted to a large identity provider may become available to thousands of employees of the government, of government contractors, and of the identity provider itself. And the capture of a password used at an identity provider, which provides access to multiple web sites, is more damaging to the user than the capture of a password used at a single web site.

There is an alternative to authenticating to a web site with username and password that provides both security and privacy: namely, authentication with a cryptographic key pair automatically generated on the user’s machine when the user registers with the site. The site stores the hash of the public key component of the key pair in its database, and uses it to locate the user’s account when the user visits the site again and demonstrates knowledge of the private key component.
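Here is a minimal sketch of how such a scheme could work, assuming the pyca/cryptography library and Ed25519 keys; the function names and the challenge-response details are illustrative, not part of any existing standard.

```python
# Sketch of key-pair-based login: the key pair is generated on the user's
# machine at registration, and the site stores only the hash of the public key.
# Client and server sides are collapsed into single functions for brevity.
import hashlib
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

accounts = {}  # server side: hash of public key -> account record

def raw_public_bytes(private_key):
    return private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def register(username):
    private_key = Ed25519PrivateKey.generate()   # generated on the user's machine
    key_hash = hashlib.sha256(raw_public_bytes(private_key)).hexdigest()
    accounts[key_hash] = {"username": username}  # site stores the hash, not a password
    return private_key

def login(private_key):
    challenge = os.urandom(32)                   # fresh challenge from the server
    signature = private_key.sign(challenge)      # client proves knowledge of the key
    public_bytes = raw_public_bytes(private_key)
    # Server locates the account by the hash of the presented public key and
    # verifies the signature (verify() raises InvalidSignature on failure).
    account = accounts[hashlib.sha256(public_bytes).hexdigest()]
    Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, challenge)
    return account

key = register("alice")
print(login(key))  # {'username': 'alice'}
```

In this sketch the private key never leaves the user’s machine, and the site stores only the hash of the public key, which it uses to locate the account at the next visit.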

Another claimed advantage of federated login is that the user can register at a new site with a single click if logged in to the identity provider, any personal data required by the site being provided by the identity provider. This is a real advantage, but not an intrinsic one. The same benefit could be easily obtained by storing the personal data in the browser, and specifying a protocol by which the browser would supply selected personal data items to a web site upon demand by the site and approval by the user. Such a protocol would be much simpler than any of the federated login protocols and would provide more security and more privacy.
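Purely as a hypothetical sketch (no such protocol currently exists, and all names below are made up), the exchange could be as simple as the following, with the browser releasing only the items the user approves.

```python
# Hypothetical sketch of a browser-mediated attribute release: the site names
# the attributes it wants, the user approves a subset, and only those items
# leave the browser. All names are illustrative; no such standard exists.

site_request = {"requested_attributes": ["name", "email", "shipping_address"]}

browser_profile = {                      # personal data stored locally in the browser
    "name": "Alice Example",
    "email": "alice@example.com",
    "shipping_address": "1 Main St",
    "date_of_birth": "1980-01-01",       # never sent unless requested and approved
}

def respond(request, profile, approved_by_user):
    """Return only the attributes that were both requested and approved."""
    return {k: profile[k] for k in request["requested_attributes"]
            if k in approved_by_user and k in profile}

print(respond(site_request, browser_profile, approved_by_user={"name", "email"}))
# {'name': 'Alice Example', 'email': 'alice@example.com'}
```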

Yet another claimed advantage of federated login is that the identity provider could provide the relying party with a user’s identity and/or attributes verified by an identity proofing procedure; however, such verified identity and/or attributes could equally well be provided by a certificate authority using a public key certificate (or by multiple authorities providing a combination of a certificate binding a public key to an identity and one or more certificates binding the identity to various attributes), without the certificate authority having to be informed of what relying parties the certificate is submitted to.

It is sometimes argued (cf. the NSTIC 101 session at last week’s IIW) that using public key cryptography for authentication would be expensive and would require the user to carry a separate dongle or smartcard for every credential. This is not true. There is no need for special hardware to store a cryptographic credential, and if special hardware is desired for some reason, there is no need to use different pieces of hardware for different credentials.

Two sessions at IIW 17 gave me hope that Internet privacy is not a lost cause.

One of them was convened by Tim Bray of Google to report on the comments he received in response to a blog post arguing to developers that they should use federated login rather than login with username and password. The comments, which he referred to as a “bloodbath,” showed that neither developers nor end-users like federated login. I hope that such pushback will eventually force companies like Google to give up on federated login.

The other one was convened by Kazue Sako of NEC to discuss anonymous credentials and their possible uses. The room was overflowing and the level of engagement of the audience was high, showing that technical people are interested in privacy-enhancing authentication technologies even if large companies are not.

Feedback on the Paper on Privacy Postures of Authentication Technologies

Many thanks to everyone who provided feedback on the paper on privacy postures of authentication technologies, which was announced in the previous blog post. The paper was discussed on the Identity Commons mailing list, and we also received feedback at the ID360 conference, where we presented the paper, and at IIW 16, where we showed a poster summarizing the paper. In this post I will recap the feedback that we have received and the revisions that we have made to the paper based on that feedback.

Steven Carmody pointed out that SWITCH, the Swiss counterpart of InCommon, has developed an extension of Shibboleth called uApprove that allows the identity or attribute provider to ask the user for consent before disclosing attributes to the relying party. Ken Klingenstein told us that the Scalable Privacy NSTIC pilot is developing a privacy manager that will let the user choose what attributes will be disclosed to the relying party by the Shibboleth identity provider. We have added references to these Shibboleth extensions to Section 4.2 of the paper.

The original paper explained that, although a U-Prove token does not provide multishow unlinkability, the user may obtain multiple tokens from the issuer, and present different tokens to different relying parties. Christian Paquin said that a U-Prove credential is defined as a batch of such tokens, created simultaneously by an efficient parallel procedure. We have added this definition of a U-Prove credential to Section 4.3.

Christian Paquin also pointed out that a U-Prove token is a mathematical concept that can be embodied in a variety of technologies. He sent me a link to the WS-Trust embodiment, which was used in CardSpace. We have explained this and included the link in Section 4.3.

Tom Jones said that what we call anonymity is called pseudonymity by others. In fact, column 9, labeled “Anonymity”, covers both pseudonymity, as provided, e.g., by an Idemix pseudonym or an uncertified key pair or a combination of a user ID and a password when the user ID is freely chosen by the user, and full anonymity, as provided when a relying party learns only attributes that do not uniquely identify the user. I think it is not unreasonable to view anonymity (the service provider does not learn the user’s “name”) as encompassing pseudonymity (the service provider learns a pseudonym instead of the “real name”).

Nat Sakimura provided a lot of feedback, for which we are grateful. He said that Google and Yahoo implemented OpenID Pairwise Pseudonymous Identifiers (PPID), i.e. different identifiers for the same user provided to different relying parties, before ICAM specified its OpenID profile. We have noted this in Section 4.2 of the revised paper and changed the label of row 8 to “OpenID (without PPID)”.

He also said that OpenID Connect supports an ephemeral identifier, which provides anonymity. I was able to find a discussion of an ephemeral identifier in the archives of the OpenID Connect mailing list, but no mention of it in any of the OpenID Connect specifications; so ephemeral identifiers may be added in the future, but they are not there yet.

Nat also argued that OpenID Connect provides multishow unlinkability by different parties and by the same party. I disagree, however. The Subject Identifier in the ID Token makes OpenID Connect authentication events linkable. Furthermore, OpenID Connect is built on top of OAuth, whose purpose is to provide the relying party with access to resources owned by the user by means of an access token. In a typical use case the relying party gets access to the user’s account at a social network such as Facebook, Twitter or Google+. It is unlikely that two relying parties who share information cannot determine that they are both accessing the same account, or that a relying party cannot determine that it has accessed the same account on two different occasions.

Nat said that OpenID Connect can be used for two-party authentication using a “Self-Issued OpenID Provider”. We have added a checkmark to row 11, column 1 of the table to indicate this, and an explanation to Section 4.2.

He also said that OpenID Connect provides group 4 functionality by allowing the relying party to obtain attributes from “distributed attribute providers”. We have mentioned this in Section 4.4 of the revised version of the paper.

Finally, Nat said:

Just by reading the paper, I was not very clear what is the requirement for Issue-show unlinkability. By issuance, I imagine it means the credential issuance. I suppose then it means that the credential verifier (in ISO 29115 | ITU-T X.1254 sense) cannot tell which credential was used though it can attest that the user has a valid credential. Is that correct? If so, much of the technology in group 2 should have n/a in the column because they are independent of the actual authentication itself. They could very well use anonymous authentication or partially anonymous authentication (ISO 29191).

The technologies in group 2 are recursive authentication technologies. The relying party directs the browser to the identity or attribute provider, which recursively authenticates the user and provides a bearer credential to the relying party based on the result of the inner authentication. In full generality there may be multiple inner authentications, since the identity or attribute provider may require multiple credentials. So the authentication process may consist of a tree of nested authentications, with internal nodes of the tree involving group 2 technologies and leaf nodes involving other technologies. However, rows 5-11 (group 2) are only concerned with the usual case where the user authenticates to the identity or attribute provider as a returning user, with a user ID and a password or some other form of two-party authentication; we have now made that clear in Section 4.2 of the revised paper. In that case there is no issue-show unlinkability.
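As an illustration of that structure (the class and names below are ours, not part of the paper), the usual case just described corresponds to a two-level tree, with a group 2 technology at the root and a password authentication at the leaf.

```python
# Illustrative sketch of a tree of nested authentications: internal nodes use
# group 2 (third-party closed-loop) technologies, leaves use other technologies.
from dataclasses import dataclass, field

@dataclass
class Authentication:
    technology: str                                  # e.g. "OpenID Connect", "password"
    inner: list["Authentication"] = field(default_factory=list)

# The usual case: the relying party uses OpenID Connect, and the identity
# provider authenticates the returning user with a user ID and password.
tree = Authentication("OpenID Connect", [Authentication("password")])

def leaf_technologies(node):
    """Collect the technologies used at the leaves of the authentication tree."""
    if not node.inner:
        return [node.technology]
    return [t for child in node.inner for t in leaf_technologies(child)]

print(leaf_technologies(tree))  # ['password']
```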

We have also made a couple of other improvements to the paper, motivated in part by the feedback:

  • We have replaced the word possession with the word ownership in the definition of closed-loop authentication (Section 2), so that it now reads: authentication is closed-loop when the credential authority that issues or registers a credential is later responsible for verifying ownership of the credential at authentication time. The motivation for this change is that, in group 2, the credential is the information that the identity or attribute provider has about the user, and is thus kept by the identity or attribute provider rather than by the user.
  • We have added a distinction between two forms of multishow unlinkability, a strong form that holds even if the credential authority colludes and shares information with the relying parties, and a weak form that holds only if there is no such collusion. The technologies in group 2 that provide multishow unlinkability provide the weak form, whereas Idemix anonymous credentials provide the strong form.

Comparing the Privacy Features of Eighteen Authentication Technologies

This blog post motivates and elaborates on the paper Privacy Postures of Authentication Technologies, which we presented at the recent ID360 conference.

There is a great variety of user authentication technologies, and some of them are very different from each other. Consider, for example, one-time passwords, OAuth, Idemix, and ICAM’s Backend Attribute Exchange: any two of them have little in common.

Different authentication technologies have been developed by different communities, which have created their own vocabularies to describe them. Furthermore, some of the technologies are extremely complex: U-Prove and Idemix are based on mathematical theories that may be impenetrable to non-specialists; and OpenID Connect, which is an extension of OAuth, adds seven specifications to a large number of OAuth specifications. As a result, it is difficult to compare authentication technologies to each other.

This is unfortunate because decision makers in corporations and governments need to decide what technologies or combinations of technologies should replace passwords, which have been rendered even more inadequate by the shift from traditional personal computers to smart phones and tablets. Decision makers need to evaluate and compare the security, usability, deployability, interoperability and, last but not least, privacy, provided by the very large number of very different authentication technologies that are competing in the marketplace of technology innovations.

But all these technologies are trying to do the same thing: authenticate the user. So it should be possible to develop a common conceptual framework that makes it possible to describe them in functional terms without getting lost in the details, to compare their features, and to evaluate their adequacy to different use cases.

The paper that we presented at the recent ID360 conference can be viewed as a step in that direction. It focuses on privacy, an aspect of authentication technology which I think is in need of particular attention. It surveys eighteen technologies, including: four flavors of passwords and one-time passwords; the old Microsoft Passport (of historical interest); the browser SSO profile of SAML; Shibboleth; OpenID; the ICAM profile of OpenID; OAuth; OpenID Connect; uncertified key pairs; public key certificates; structured certificates; Idemix pseudonyms; Idemix anonymous credentials; U-Prove tokens; and ICAM’s Backend Attribute Exchange.

The paper classifies the technologies along four different dimensions or facets, and builds a matrix indicating which of the technologies provide seven privacy features: unobservability by an identity or attribute provider; free choice of identity or attribute provider; anonymity; selective disclosure; issue-show unlinkability; multishow unlinkability by different parties; and multishow unlinkability by the same party. I will not try to recap the details here; instead I will elaborate on observations made in the paper regarding privacy enhancements that have been used to improve the privacy postures of some closed-loop authentication technologies.

Privacy Enhancements for Closed-Loop Authentication

One of the classification facets that the paper considers for authentication technologies is the distinction between closed-loop and open-loop authentication, which I discussed in an earlier post. Closed-loop authentication means that the credential authority that issues or registers a credential is later responsible for verifying possession of the credential at authentication time. Closed-loop authentication may involve two parties, or may use a third-party as a credential authority, which is usually referred to as an identity provider. Examples of third-party closed-loop authentication technologies include the browser SSO profile of SAML, Shibboleth, OpenID, OAuth, and OpenID Connect.

I’ve pointed out before that third-party closed-loop authentication lacks unobservability by the identity provider. Most third-party closed-loop authentication technologies also lack anonymity and multishow unlinkability. However, some of them implement privacy enhancements that provide anonymity and a form of multishow unlinkability. There are two such enhancements, suitable for two different use cases.

The first enhancement consists of omitting the user identifier that the identity provider usually conveys to the relying party. The credential authority is then an attribute provider rather than an identity provider: it conveys attributes that do not necessarily identify the user. This enhancement provides anonymity, and multishow unlinkability assuming no collusion between the attribute provider and the relying parties. It is useful when the purpose of authentication is to verify that the user is entitled to access a service without necessarily having an account with the service provider. This functionality is provided by Shibboleth, which can be used, e.g., to allow a student enrolled in one educational institution to access the library services of another institution without having an account at that other institution.

The core OpenID 2.0 specification specifies how an identity provider conveys an identifier to a relying party. Extensions of the protocol such as the Simple Registration Extension specify methods by which the identity provider can convey user attributes in addition to the user identifier; and the core specification hints that the identifier could be omitted when extensions are used. It would be interesting to know whether any OpenID server or client implementations allow the identifier to be omitted. Any comments?

The second enhancement consists of requiring the identity provider to convey different identifiers for the same user to different relying parties. The identity provider can meet the requirement without allocating large amounts of storage by computing a user identifier specific to a relying party as a cryptographic hash of a generic user identifier and an identifier of the relying party such as a URL. This privacy enhancement is required by the ICAM profile of OpenID. It achieves user anonymity and multishow unlinkability by different parties assuming no collusion between the identity provider and the relying parties; but not multishow unlinkability by the same party. It is useful for returning user authentication.
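For concreteness, here is a sketch of the kind of computation described above; the choice of hash function, separator and input encoding are assumptions of the sketch, not something mandated by the ICAM profile.

```python
# Sketch: deriving a relying-party-specific identifier from a generic user
# identifier, so the identity provider needs no per-relying-party storage.
import hashlib

def pairwise_identifier(generic_user_id: str, relying_party_url: str) -> str:
    data = generic_user_id.encode() + b"|" + relying_party_url.encode()
    return hashlib.sha256(data).hexdigest()

# The same user gets different identifiers at different relying parties, which
# cannot link them without the identity provider's cooperation.
print(pairwise_identifier("user-12345", "https://rp1.example.com"))
print(pairwise_identifier("user-12345", "https://rp2.example.com"))
```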

NSTIC Is Not Low-Hanging Fruit

In a recent tweet, Ian Glazer quoted Patrick Gallagher, director of NIST, saying at a recent White House meeting on NSTIC that the “current suite of technologies we rely on are insufficient”.

The identity technologies used today both in federal agencies and on the Web at large are indeed insufficient:

  • SSL client certificates have failed to displace passwords for Web authentication since they were introduced 17 years ago.
  • Credentials in PIV cards have failed to displace passwords in federal agencies eight years after HSPD-12; a GAO report does a good job of documenting the many obstacles faced by agencies in implementing the directive, ranging from the fact that some categories of agency employees do not have PIV cards, to the desire by employees to use Apple Mac computers and mobile devices that lack card readers. I’m glad that we don’t live in the Soviet Union and heads of agencies are not sent to the Gulag when they ignore unreasonable orders.
  • Third-party login solutions such as OpenID, as currently used on the Web, not only do not eliminate passwords, they make the password security problem worse, by facilitating phishing attacks. They also impinge on the user’s privacy, because the identity provider is told what relying parties the user logs in to.
  • Social login solutions based on OAuth, e.g. “login with Facebook”, worsen the privacy drawback of third party login by limiting the user’s choice of identity providers to those that the relying party has registered with, and by broadcasting the user’s activities to the user’s social graph. Eric Sachs of Google said at the last Internet Identity Workshop that users participating in usability testing were afraid of logging in via Facebook or Google+ because “their friends would be spammed”.

But some proponents of NSTIC do not seem to realize that. In a recent interview, Howard Schmidt went so far as to say that NSTIC is “low-hanging fruit”, because “the technology is there”. What technology would that be? A blog post that he wrote last year, shortly after the launch of NSTIC, made it clear that the technology he had in mind for NSTIC was privacy-enhancing cryptography, used by Microsoft in U-Prove and by IBM in Idemix. He used the words “privacy-enhancing” in the interview, so he may have been referring to that technology there as well.

(Credentials based on privacy-enhancing cryptography provide selective disclosure and unlinkability. Selective disclosure refers to the ability to combine multiple attributes in a credential but disclose only some of them when presenting the credential. Unlinkability, in the case of U-Prove, refers to the impossibility of linking the use of a credential to its issuance; Idemix also makes it impossible to link multiple uses of the same credential.)

But Idemix has never been deployed commercially, and an attempt at deploying U-Prove within the Information Cards framework failed when Microsoft discontinued CardSpace, two months before the launch of NSTIC.

Credentials based on privacy-enhancing cryptography, sometimes called anonymous credentials, have inherent drawbacks. One of them is that unlinkability makes revocation of such credentials harder than revocation of public key certificates, as I pointed out in a blog post on U-Prove and another blog post on Idemix. The difficulty of revoking credentials based on privacy-enhancing cryptography has led ABC4Trust, which can be viewed as the European counterpart of NSTIC, to propose arresting users for the purpose of revoking their credentials! See page 23, end of last paragraph, of the ABC4Trust document Architecture for Attribute-based Credential Technologies.

Another inherent drawback is that it is difficult to keep the owner of an anonymous credential from making it available for use online by others who are not entitled to it. For example, it would be difficult to prevent the owner of a proof-of-drinking-age anonymous credential (a use case often cited by proponents of anonymous credentials) from letting minors use it for a fee.

The mistaken belief that “the technology is there” explains why the NSTIC NPO has made little effort to improve on existing technology. Instead of requesting funding for research, it requested funding for pilots; but a pilot is usually intended to demonstrate the usability of a newly developed technology, and it presupposes that the technology already exists. After the launch of NSTIC, the NPO announced three workshops, on governance, privacy and technology. The first two were held, but the workshop on technology, which was supposed to take place in September of last year, was postponed by six months and merged with the yearly NIST IDtrust workshop, which took place in March of this year. The IDtrust workshop usually includes a call for papers. But this year there was none: new ideas were not solicited.

The NSTIC NPO has been trying to “bring relying parties to the table”. Ian Glazer dubbed the recent White House meeting the NSTIC Relying Party Event. The meeting was about getting a bigger table, according to the NPO blog post on the event, and about “getting people to volunteer”, according to Senator Mikulski as quoted in the blog post. Earlier, Jim Sheire of the NPO had convened a session entitled “NSTIC: How do we bring relying parties to the table?” at the last Internet Identity Workshop.

One idea mentioned in the report on the IIW session for bringing relying parties to the table is to target 100 “top relying parties” in the hope of creating a snowball effect. But it’s not clear what it would mean for those 100 relying parties, and any additional ones caught in the snowball, to “come to the table”. What would they do at the table? What technology would they use? OpenID? OAuth? Smart cards? Information cards? Anonymous credentials? NSTIC has not proposed any specific technology. Or would they come to the table just to talk?

There are many millions of Web sites that use passwords for user authentication. The goal should be to get all those sites to adopt an identity solution that eliminates the security risk of passwords. Web site developers will do that on their own initiative once a solution is available that is more secure and as easy to deploy as password authentication.

While the technology is not there, various technology ingredients are there, and missing ingredients could be developed. It is not difficult to conceive a roadmap that could lead to one or more good identity solutions. But success would require a concerted effort by many different parties: not only relying parties and identity and attribute providers, but also standards bodies, browser vendors, vendors of desktop and mobile operating systems, vendors of smart cards and other hardware tokens, perhaps biometric vendors, and the providers of the middleware, software libraries, and software development tools used on the Web. When I first heard of NSTIC I hoped that it would provide the impetus and the forum needed for such a concerted effort. But that has yet to happen.