New Research on Mobile Authentication

This is the first of a series of posts discussing the paper A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

In the next few posts I will be reporting on research that we have been doing over the last six months related to cryptographic and biometric authentication, focused on mobile devices. I have held off from writing while we were doing the research, but now I have a lot to say, so stay tuned.

By the way, in the last six months we have also moved from San Diego to San Jose. I used to work in Silicon Valley, so it’s nice to be back here and renew old friendships. If you are interested in cryptographic and/or biometric authentication and you are based in Silicon Valley or passing by, let me know; I would be happy to meet for coffee and chat.

The starting point of this latest research was the work we presented at the NIST Cryptographic Key Management workshop last September (Key Management Challenges of Derived Credentials and Techniques for Addressing Them) and at the Internet Identity Workshop last October (New Authentication Method for Mobile Devices), and wrote up in the paper Strong and Convenient Multi-Factor Authentication on Mobile Devices.

In that early work we devised a mobile authentication architecture in which the user authenticates with an uncertified key pair, together with a method for regenerating an RSA key pair from a PIN and/or a biometric key. The architecture facilitates implementation by encapsulating the complexities of cryptography and biometrics in a Prover Black Box located in the device and a Verifier Black Box located in the cloud, while the key pair regeneration method protects the credential against an adversary who captures the user’s mobile device by preventing an offline attack against the PIN and/or the biometric key. The architecture was primarily intended for mobile devices but could be adapted for use in traditional PCs by means of browser extensions.
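To make the division of labor concrete, here is a minimal sketch of what the two black-box interfaces might look like. The class and method names are hypothetical (they do not come from the paper), and the method bodies are intentionally omitted, since the key pair regeneration technique itself is the subject of the paper rather than of this sketch.

```python
# Hypothetical interface sketch of the two black boxes; names are assumptions,
# and method bodies are omitted on purpose.

class ProverBlackBox:
    """Runs on the mobile device and encapsulates cryptography and biometrics."""

    def regenerate_key_pair(self, pin: str | None, biometric_key: bytes | None):
        """Regenerate the user's uncertified RSA key pair from the PIN and/or
        biometric key at authentication time; nothing stored on the device
        should allow an offline attack against the PIN or biometric key."""
        ...

    def sign_challenge(self, challenge: bytes, pin: str | None,
                       biometric_key: bytes | None) -> bytes:
        """Sign the verifier's challenge with the regenerated private key."""
        ...


class VerifierBlackBox:
    """Runs in the cloud and verifies authentication on behalf of the application."""

    def issue_challenge(self, user_handle: str) -> bytes:
        """Return a fresh random challenge for the given user."""
        ...

    def verify_response(self, user_handle: str, challenge: bytes,
                        signature: bytes) -> bool:
        """Check the signature against the public key registered for the user."""
        ...
```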

The early work left three questions open:

  1. Can the key pair regeneration method be adapted to cryptosystems other than RSA? This question is practically important because RSA can be used for encryption, and is therefore subject to export controls. The export restrictions have been relaxed a lot since the nineties, but they are so complex that consultation with a lawyer may be required to figure out whether and to what extent they are applicable to a particular product.
  2. Can the mobile authentication architecture accommodate credentials other than uncertified key pairs, including public key certificates and privacy-enhancing credentials such as U-Prove tokens and Idemix anonymous credentials? Uncertified key pairs are ideal for returning-user authentication, but they cannot be used to provide evidence that the user is entitled to attributes asserted by authoritative third parties.
  3. Does the architecture support single sign-on (SSO)? SSO is an essential usability feature when multiple frequently used applications require multifactor authentication.

I am happy to report that we have found good answers to all three questions. First, we have found efficient regeneration methods for DSA and ECDSA key pairs; since DSA and ECDSA can only be used for digital signatures, they are not subject to export restrictions. Second, we have found a way of extending the architecture to accommodate a variety of credentials, including public key certificates and privacy-enhancing credentials, without giving up the strong security properties of the original architecture. Third, we have found two different ways of providing SSO, one well suited for web-wide consumer SSO and the other for enterprise SSO, both applicable to a mix of web-based apps and apps with native front-ends.

An unanticipated result of the research was the discovery of a defense against an adversary who has succeeded in spoofing a TLS server certificate. Spoofing a certificate is difficult, but not unheard of. The defense, which relies on a form of mutual cryptographic authentication, prevents a man-in-the-middle attack and helps the user detect that a server controlled by the adversary is masquerading as a legitimate server using the spoofed certificate.

We have written all this up in a technical whitepaper, A Comprehensive Approach to Cryptographic and Biometric Authentication from a Mobile Perspective.

The paper is quite long, because we thought it was important to describe everything in one place and show how it all fits together. It would be difficult to discuss the entire paper at once, so in the next few posts I will go over some of its topics one by one; hopefully that will make each topic easier to discuss. Watch for the next post in a few days.

One-Click OpenID: A Solution to the NASCAR Problem

OpenID allows the user to choose any identity provider, even one that the relying party has never heard of. This freedom of choice is, in my opinion, the most valuable feature of OpenID. Unfortunately, this feature comes with a difficult challenge: how to provide the relying party with the information it needs to interact with the identity provider.

OAuth does not have this problem because the relying party has to preregister with the identity provider, typically a social site, and therefore must know of the identity provider. An OAuth relying party displays one or a few buttons labeled with the logos of the social sites it supports, e.g. Facebook and Twitter, and the user chooses a site by clicking on a button. But of course freedom of choice is lost: the user’s identity provider must be one of the social sites supported by the relying party.

The traditional OpenID user interface consists of an input box where the user types in an OpenID identifier, which serves as the starting point of an identity provider discovery process. To compete with the simplicity of picking a social site by clicking on a button, some OpenID relying parties present the user with many buttons labeled by logos of popular OpenID identity providers, in addition to the traditional input box; but this user interface has been deemed ugly and confusing to the user. The many logos have been compared to the many ads on a race car, hence the term NASCAR problem, which is used to refer to the OpenID user interface challenge.

To solve the challenge we propose to let the browser keep track of the identity provider(s) that the user has signed up with. The list of identity providers will be maintained by the browser as a user preference.

An identity provider will be added to the list by explicit declaration. While the user is visiting the identity provider’s site, the provider will offer its identity service to the user. The user will accept the offer by clicking on a button or link. In the HTTP response that follows the HTTP request triggered by this action, the identity provider will include an ad-hoc HTTP header containing identity provider data, including the OP Endpoint URL. The browser will ask the user for permission to add the identity provider to the list and will store the identity provider data.
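As a concrete illustration, here is a minimal sketch of the identity-provider side of this declaration. The header name X-OpenID-Provider and the JSON encoding of the identity provider data are assumptions made for the sake of the example; the proposal leaves these details open.

```python
# Sketch of an identity provider's HTTP response declaring itself to the browser.
# The header name "X-OpenID-Provider" and the JSON encoding are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class IdpDeclarationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Identity provider data that the browser, with the user's permission,
        # would add to its stored list of identity providers.
        idp_data = {
            "name": "Example IdP",
            "op_endpoint": "https://idp.example.com/openid/endpoint",
        }
        body = b"<html><body>You can now log in elsewhere with Example IdP.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Ad-hoc header carrying the identity provider data.
        self.send_header("X-OpenID-Provider", json.dumps(idp_data))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), IdpDeclarationHandler).serve_forever()
```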

A relying party will use a login form containing a single button, with a label such as Login with OpenID. There need not be any input box for entering an OpenID identifier, nor any buttons with logos of particular identity providers. The form will contain a new ad-hoc non-visual element <idp>. When the form is submitted, the browser will choose one identity provider from the list and send its data to the relying party as the value of the <idp> element.
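For concreteness, here is a minimal sketch of the relying-party side, under the assumption (mine, not part of the proposal) that the browser fills the <idp> element with a JSON encoding of the stored identity provider data and that it arrives as an ordinary form field named idp.

```python
# Sketch of a relying party extracting the identity provider data that the
# browser submits via the hypothetical <idp> element.
import json
from urllib.parse import parse_qs, urlencode

LOGIN_FORM = """
<form method="post" action="/login">
  <idp></idp>  <!-- non-visual element filled in by the browser on submission -->
  <button type="submit">Login with OpenID</button>
</form>
"""

def handle_login_submission(request_body: bytes) -> str:
    """Return the OP Endpoint URL where the OpenID authentication request
    should be sent, taken from the identity provider data chosen by the browser."""
    fields = parse_qs(request_body.decode())
    idp_data = json.loads(fields["idp"][0])
    return idp_data["op_endpoint"]

# Example: simulate what the browser might submit for the IdP declared earlier.
idp = {"name": "Example IdP", "op_endpoint": "https://idp.example.com/openid/endpoint"}
body = urlencode({"idp": json.dumps(idp)}).encode()
print(handle_login_submission(body))  # -> https://idp.example.com/openid/endpoint
```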

Which identity provider is chosen is up to the browser. It could be the default, or an identity provider that has previously been used for the relying party, as recorded by the browser, or an identity provider explicitly chosen by the user from a menu presented by the browser. The browser could choose to permanently display a menu showing the user’s list of identity providers as part of the browser chrome.
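The selection logic could be as simple as the following sketch; the function name and data structures are hypothetical.

```python
# Sketch of the browser-side choice among stored identity providers.
# idp_list is the user's stored list (first entry treated as the default),
# per_site_memory maps a relying party to the IdP last used there, and
# user_choice is set when the user picks an IdP from a browser menu.
def choose_identity_provider(idp_list, per_site_memory, relying_party, user_choice=None):
    if user_choice is not None:
        return user_choice                     # explicit pick from the menu
    if relying_party in per_site_memory:
        return per_site_memory[relying_party]  # IdP previously used for this site
    return idp_list[0]                         # otherwise the default IdP
```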

We arrived at this solution while thinking about the NSTIC pilot that we plan to propose. In the planned proposal the identity provider issues a certificate to the browser, which the browser imports automatically. A natural extension is to let the identity provider download to the browser other data besides the certificate, such as the OP Endpoint URL. Also, a browser user interface for selecting an identity provider is akin to the user interface for selecting a client certificate that browsers already have. We realize that existing user interfaces for certificate selection are less than optimal, but we believe that this is due to lack of attention by browser manufacturers to a rarely used feature, and that better interfaces can be designed.

Welcome to the Pomcor Blog

Welcome to the Pomcor blog! This will be a continuation of the Noflail Search blog previously hosted on wordpress.com, extended to also cover our research on other areas of Web technology, including security and social login.

We have recently concluded Phase I of an ambitious NSF SBIR project and we will soon be reporting on several exciting results from that work. Stay tuned!