A Demonstration of Two-Factor Cryptographic Authentication with a Familiar User Experience

I have just published a GitHub repository demonstrating a method of two-factor cryptographic authentication with a fusion credential, which provides the same user experience as traditional authentication with username and password, but with strong security. Developers with an Amazon AWS account can use a script provided in the repository to install the demo on an EC2 instance of their own. A live demo running on a Pomcor server is also available at demo.pomcor.com.

Security benefits of credential fusion

By analogy with biometric fusion, where two biometric modalities are combined in a manner that provides higher accuracy than if they were used separately, credential fusion combines authentication factors in a manner that provides stronger security than if they were used independently of each other.

In the demo, a password is fused with a cryptographic credential comprising a key pair extended with a secret salt. To authenticate, the frontend of the relying party (RP) hashes the user’s password with the secret salt, signs a challenge with the private key, and sends the public key, the signature, and the salted password to the backend. The backend verifies the signature with the public key, then computes a hash of the salted password with the public key, called the fusion hash, and verifies it against a registered version of that hash. The public key and the salted password are deleted after authentication, and only the fusion hash is stored in the backend.
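As a rough illustration, here is what the frontend side might look like with the Web Crypto API and a credential kept in localStorage. The demo has its own code and formats, so the credential layout, the salted-password construction (SHA-256 over salt and password) and the /login endpoint shown here are assumptions, not a description of the demo.

```javascript
// Hypothetical frontend sketch: hash the password with the secret salt,
// sign the server challenge, and send everything to the backend.
// The credential layout in localStorage, the hash construction and the
// /login endpoint are assumptions made for illustration.
async function authenticate(password, challengeBase64) {
  const cred = JSON.parse(localStorage.getItem('fusionCredential'));
  // cred = { privateKeyJwk, publicKeyJwk, saltHex }

  // Salted password: here simply SHA-256 over salt || password.
  const saltedPassword = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(cred.saltHex + password)
  );

  // Sign the challenge with the private key of the credential.
  const privateKey = await crypto.subtle.importKey(
    'jwk', cred.privateKeyJwk,
    { name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign']
  );
  const signature = await crypto.subtle.sign(
    { name: 'ECDSA', hash: 'SHA-256' },
    privateKey,
    Uint8Array.from(atob(challengeBase64), c => c.charCodeAt(0))
  );

  // Send the public key, the signature and the salted password to the backend.
  const toBase64 = (buf) => btoa(String.fromCharCode(...new Uint8Array(buf)));
  await fetch('/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      publicKeyJwk: cred.publicKeyJwk,
      signature: toBase64(signature),
      saltedPassword: toBase64(saltedPassword)
    })
  });
}
```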

If the password and the extended key pair were used separately, the password would provide protection against device theft, and the key pair would provide protection against a man-in-the-middle (MITM) phishing attack, in which the phishing site would relay messages between the legitimate site and the user’s browser and capture the session cookie after the user logs in. Such an attack would be thwarted because the frontend of the phishing site would not have access to the private key, which is protected by the same-origin policy of the web, enforced by the browser. But the password would still be vulnerable to phishing attacks, reuse at malicious sites, and backend breaches.

In the fusion credential, on the other hand, the password and the cryptographic credential protect each other as follows:

  1. The password is protected against capture by a phishing site, because it is not sent in the clear.
  2. The password is protected against reuse at malicious sites: at sites that use traditional authentication by username and password, because the password is not sent in the clear; and at sites that use a fusion credential as in the present authentication method, because different sites would use different secret salts.
  3. The password is protected against backend breaches because neither the password nor any value derived from the password that could be used in a dictionary attack is stored in the backend. In traditional authentication with username and password, by contrast, a salted password is stored in the password database, along with the salt itself. The salt prevents dictionary entries from being tried against all the salted passwords at once, but does not prevent them from being tried against the salted passwords one at a time. In the present authentication method the password is hashed with a salt, but, like the private key, the salt is a secret that never leaves the user’s browser, and neither the salted password nor the salt is stored in the backend.
  4. The key pair is protected against cryptanalytic and postquantum attacks, because the public key is not stored in the backend. In traditional cryptographic authentication with a key pair, the public key is registered with the RP and stored in the backend database. An attacker who breaches the backend might be able to derive the private key from the public key, either by exploiting a weakness of the signature cryptosystem or, in a perhaps not so distant future, by using a quantum computer. But in the present authentication method, only the fusion hash is stored in the backend; a backend-side sketch of the verification follows this list.
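Here is a correspondingly hedged sketch of the backend-side verification in Node.js, recomputing the fusion hash from the salted password and the public key and comparing it with the registered hash; the exact hash construction, key encoding and user record layout are again assumptions.

```javascript
// Hypothetical backend sketch (Node.js). The fusion-hash construction, the
// key encoding and the user record layout are assumptions made for illustration.
const crypto = require('crypto');

function verifyFusionCredential(user, { publicKeyJwk, signature, saltedPassword }, challenge) {
  // 1. Verify the signature on the challenge with the received public key.
  const publicKey = crypto.createPublicKey({ key: publicKeyJwk, format: 'jwk' });
  const signatureOk = crypto.verify(
    'sha256',
    Buffer.from(challenge, 'base64'),
    { key: publicKey, dsaEncoding: 'ieee-p1363' },
    Buffer.from(signature, 'base64')
  );
  if (!signatureOk) return false;

  // 2. Recompute the fusion hash from the salted password and the public key.
  const fusionHash = crypto.createHash('sha256')
    .update(Buffer.from(saltedPassword, 'base64'))
    .update(JSON.stringify(publicKeyJwk))
    .digest();

  // 3. Compare with the registered fusion hash. The public key and the salted
  //    password are discarded after this check; only user.fusionHash is stored.
  return crypto.timingSafeEqual(fusionHash, Buffer.from(user.fusionHash, 'base64'));
}
```

Nothing derived from the password alone, and no public key, survives the check; only the fusion hash is ever stored.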
Continue reading “A Demonstration of Two-Factor Cryptographic Authentication with a Familiar User Experience”

Passwordless Authentication for the Consumer Space

This is part 1 of a series on cryptographic authentication. Part 2 and Part 3 are now available.

FIDO adoption lags in spite of general availability

In a white paper issued in March 2022, the FIDO Alliance candidly announced that FIDO authentication based on the FIDO2 standards, which include the Client-To-Authenticator Protocol of the FIDO Alliance and the companion Web Authentication API (WebAuthn) of the W3C, “has not attained large-scale adoption in the consumer space”.

FIDO2 is a cryptographic authentication solution for the web, which uses a key pair managed by an authenticator and is advertised by the FIDO Alliance as being “passwordless”. The key pair may be stored in the authenticator, or, equivalently from a security viewpoint, it may be encrypted under a symmetric key stored in the authenticator, and exported to play the role of a “credential ID”. The authenticator may be a “roaming authenticator” carried in a “security key”, or a “platform authenticator” provided by the OS of the user’s smartphone or laptop.
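For concreteness, this is roughly what a web app sees of FIDO2 registration: a single WebAuthn call that delegates key-pair generation to the authenticator. The relying-party ID, user handle and other parameters below are placeholders, and the server-side processing of the returned attestation, normally done with the help of a FIDO Server, is omitted.

```javascript
// Illustrative WebAuthn registration call. The relying-party ID, user handle
// and other values are placeholders; processing of the returned attestation
// (normally done with the help of a FIDO Server) is omitted.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally supplied by the server
    rp: { id: 'example.com', name: 'Example RP' },
    user: {
      id: new TextEncoder().encode('user-handle-1234'),
      name: 'alice@example.com',
      displayName: 'Alice'
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }],   // ES256
    authenticatorSelection: { authenticatorAttachment: 'platform' }
  }
});
// credential.rawId plays the role of the "credential ID" mentioned above.
```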

Early authenticators were security keys, which few web users had. Today most smartphones and laptops have platform authenticators, and that makes FIDO2 a generally available web technology. But the announcement by the FIDO Alliance shows that general availability has not translated into general adoption.

The white paper attributes this to challenges that consumers face with platform authenticators: “having to re-enroll each new device”, and having “no easy ways to recover from a lost or stolen device” as the credentials managed by the platform authenticator of the device are lost. To address the loss-of-credential problem, Apple, Google and Microsoft have announced a joint effort to devise solutions that are expected to become available “in the course of the coming year” and that, according to the white paper, will involve “multi-device credentials”.

Another contributing factor to the lack of adoption, however, is no doubt the complexity and cost of the FIDO2 authentication solution. Implementing the solution in a web app requires FIDO Server software provided by a company certified to provide such software by the FIDO Alliance. A team from the certified company must work with a team from the company that is developing the app to integrate the solution into the app. By contrast, an ordinary 2FA solution is implemented by the app developers themselves, possibly by a single developer, without any integration effort.

Thus FIDO faces two obstacles to widespread adoption: usability and cost.

Two working demonstrations of cryptographic authentication on GitHub

But cryptographic authentication need not be complicated, costly or challenging to the consumer. It can be implemented simply by storing a key pair in persistent browser storage (localStorage or IndexedDB), registering the public key, and authenticating by proof of possession of the private key. I will refer to this as the browser storage solution to cryptographic authentication, while referring to the use of a FIDO authenticator as the FIDO solution, or the authenticator storage solution, glossing over the fact that the private key may be exported under encryption rather than physically kept in the authenticator.

The browser storage solution can easily overcome the two obstacles that FIDO faces in the consumer space. To demonstrate this I have published on GitHub two demo web apps that implement passwordless, phishing-resistant cryptographic authentication with a key pair credential, without relying on an authenticator. In both of them the key pair is generated in the browser by the JavaScript frontend of the app, and kept in the localStorage facility provided by the Web Storage API. One of them uses a “nosql” (MongoDB) backend database to register the public key and store the user data, while the other uses an “sql” database for that purpose.
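To make the browser storage solution concrete, here is a minimal sketch using the Web Crypto API; the demo apps generate and use the key pair with their own frontend code, so this is an illustration of the idea rather than the demo code itself.

```javascript
// Minimal sketch of the browser storage solution with the Web Crypto API.
// The demo apps use their own key-generation code; this is illustrative only.
async function getOrCreateKeyPair() {
  let stored = localStorage.getItem('keyPair');
  if (!stored) {
    const keyPair = await crypto.subtle.generateKey(
      { name: 'ECDSA', namedCurve: 'P-256' },
      true,                                   // extractable, so it can go into localStorage
      ['sign', 'verify']
    );
    stored = JSON.stringify({
      privateKeyJwk: await crypto.subtle.exportKey('jwk', keyPair.privateKey),
      publicKeyJwk: await crypto.subtle.exportKey('jwk', keyPair.publicKey)
    });
    localStorage.setItem('keyPair', stored);  // persists across visits, same origin only
  }
  return JSON.parse(stored);
}

// Authenticate by signing a server challenge with the stored private key.
async function signChallenge(challengeBytes) {
  const { privateKeyJwk } = await getOrCreateKeyPair();
  const privateKey = await crypto.subtle.importKey(
    'jwk', privateKeyJwk, { name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign']
  );
  return crypto.subtle.sign({ name: 'ECDSA', hash: 'SHA-256' }, privateKey, challengeBytes);
}
```

The key pair is created once per browser, persists across visits, and is kept out of reach of other origins by the same-origin policy.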

Continue reading “Passwordless Authentication for the Consumer Space”

Pomcor Releases PJCL 1.0.0 on GitHub and npm and Deprecates the Beta Versions of the Pomcor JavaScript Cryptographic Library

In 2018 we published a series of beta versions of the Pomcor JavaScript Cryptographic Library (PJCL), which we called 0.9.0, 0.9.0r1, 0.9.1, 0.9.1r1 and 0.9.1r2. (We shall use semantic versioning to name future versions.) We then had to put the PJCL work on hold, but have now been able to resume development. We have refactored the library as an ES6 module and released version 1.0.0 on GitHub at https://github.com/fcorella/pjcl.git, and on npm.

While testing the refactored version 1.0.0 we found two bugs that we tracked back to version 0.9.1r2. Specifically, we found a bug in function pjclPBKDF2_SHA256 and a bug in function pjclFFCValidateG_256, which caused the JavaScript interpreter to throw exceptions. We fixed the bugs in a maintenance release 0.9.1r3, but we have now archived and deprecated the beta versions and will no longer be maintaining them.

If you want to stay informed about PJCL in the future, please subscribe to GitHub notifications about the pjcl repository.

Pomcor Granted Patent on Frictionless Cardholder Authentication

This post has been updated to include the patent number.

Pomcor has been granted US patent 10,825,025 on the method of cardholder authentication that I have discussed before on this blog. The Cardholder Authentication page has links to earlier materials. Actually, the patent was granted on November 3, 2020, but I was busy working on TLS traffic visibility at the time.

3-D Secure 2 purports to reduce the friction created by 3-D Secure 1, which redirects the cardholder’s browser to the card issuer’s site for authentication. But it does so by omitting cardholder authentication altogether for transactions deemed low risk, and may increase friction for other transactions.

Eliminating friction with a Service Worker

We eliminate friction instead, for all transactions, by using a Service Worker, registered with the browser by the issuer, to intercept the redirected request and authenticate the user by proof of possession of a private key. The private key is used to sign a description of the transaction, thus providing the merchant with a defense against transaction repudiation.
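A bare-bones sketch of the interception is shown below; the authentication path, the message format and the signWithStoredPrivateKey helper are placeholders, not part of 3-D Secure or of the actual implementation.

```javascript
// Hypothetical Service Worker sketch, registered by the issuer's web app.
// The path, the message format and the signWithStoredPrivateKey helper are
// placeholders, not part of 3-D Secure or of the actual implementation.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === '/3ds/authenticate') {
    // Intercept the redirected authentication request and answer it locally,
    // without a round trip to the issuer's site.
    event.respondWith(handleAuthentication(event.request));
  }
});

async function handleAuthentication(request) {
  const transaction = await request.text();                       // description of the transaction
  const signature = await signWithStoredPrivateKey(transaction);  // e.g. private key kept in IndexedDB
  return new Response(JSON.stringify({ transaction, signature }), {
    headers: { 'Content-Type': 'application/json' }
  });
}
```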

The private key is associated with a credit card certificate that contains the corresponding public key and a cryptographic hash of the data printed on the credit card. The card data itself is not included in the certificate, to avoid exposing it to an attacker who uses malware or physical capture of the cardholder’s device to obtain the certificate. When the merchant’s site or native app receives the certificate along with the signature, it verifies the hash against the printed card data entered by the cardholder.
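The hash check on the merchant side might look like the following sketch, where the canonical encoding of the printed card data and the encoding of the hash in the certificate are assumptions made for illustration.

```javascript
// Hypothetical merchant-side check (Node.js): hash the printed card data
// entered by the cardholder and compare it with the hash carried in the
// credit card certificate. The canonical encoding is an assumption.
const crypto = require('crypto');

function cardDataMatchesCertificate(enteredCardData, certCardDataHashBase64) {
  const canonical = [
    enteredCardData.pan,
    enteredCardData.expiry,
    enteredCardData.cardholderName
  ].join('|');
  const hash = crypto.createHash('sha256').update(canonical, 'utf8').digest();
  const expected = Buffer.from(certCardDataHashBase64, 'base64');
  return hash.length === expected.length && crypto.timingSafeEqual(hash, expected);
}
```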

We originally used the idea of intercepting an authentication request with a Service Worker in connection with Rich Credentials, then presented it in general terms at ICMC 2017.

Are Service Workers still usable?

People have been telling us recently that Service Workers can no longer be used because WebKit deletes Service Worker registrations, along with first-party data stored in the browser, after seven days of non-use. This is an unfortunate complication, but it does not mean that Service Workers and first-party data can no longer be used. It just means that the web app that registers the Service Worker, in this case the issuer’s web app, must be added to the home screen, as explained in this WebKit blog post.

A New Tool Against the Surge of Application Fraud

This blog post has been coauthored with Karen Lewison

In recent posts we have been concerned with online credit card fraud and how to fight it using cardholder authentication. In this post we are concerned with another kind of financial fraud, known as application fraud or new account fraud. Both kinds of fraud have been rising since the introduction of chip cards, for reasons mentioned by Elizabeth Lasher in her article The Surge of Application Fraud:

“Due to the high volume of data breaches, Social Security numbers, mailing addresses, passwords, health history, even the name of our first pet is all for sale on the Dark Web. When you combine this phenomenon with the economic pressure applied on fraudsters to find a new cash cow after chip and signature plugged a gap in card-present fraud in the US, there is a perfect storm.”

The term “application fraud” refers to the creation of a financial account, such as a bank account or a mortgage account, with the intention to commit fraud. Application fraud can be first-party fraud, where the account is opened under the fraudster’s own identity, or third-party fraud, where the fraudster uses a stolen identity. Here we are primarily concerned with the latter.

Continue reading “A New Tool Against the Surge of Application Fraud”

Online Cardholder Authentication without Accessing the Card Issuer’s Site

One of the saddest failings of Internet technology is the lack of security for online credit card transactions. In in-store transactions, the cardholder authenticates by presenting the card, and card counterfeiting has been made much more difficult by the addition of a chip to the card. But in online transactions, the cardholder is still authenticated by his or her knowledge of credit card and cardholder data, a weak secret known by many.

Credit card networks have been trying to provide security for online transactions for a long time. In the nineties they proposed a complicated cryptographic protocol called SET (Secure Electronic Transactions) that was never deployed. Then they came up with a simpler protocol called 3-D Secure, where the merchant redirects the cardholder’s browser to the issuing bank, which asks the cardholder to authenticate with a password. 3-D Secure is rarely used in the US and unevenly used in other countries, due to the friction that it causes and the risk of transaction abandonment; lately some issuers have been asking for a second authentication factor, adding more friction. Now the networks have come up with version 2 of 3-D Secure, which removes friction for low risk transactions by introducing a “frictionless flow”. But the frictionless flow does not authenticate the cardholder. Instead, the merchant sends device and cardholder data to the issuer through a back channel, potentially violating the cardholder’s privacy.

Last August we wrote a blog post and a paper proposing a scheme for authenticating the cardholder without friction using a cryptographic payment credential consisting of a public key certificate and the associated private key. We have recently written a revised version of the paper with major improvements to the scheme. The paper will be presented next month at HCII 2019 in Orlando.

Continue reading “Online Cardholder Authentication without Accessing the Card Issuer’s Site”

Frictionless Secure Web Payments without Giving up on Cardholder Authentication

The 3-D Secure protocol version 1.0, marketed under different names by different payment networks (Verified by Visa, MasterCard SecureCode, American Express SafeKey, etc.), aims at reducing online credit card fraud by authenticating the cardholder. To that end, the merchant’s web site redirects the cardholder’s browser to the issuing bank, which typically authenticates the cardholder by asking for a static password and/or a one-time password delivered to a registered phone number. 3-D Secure was introduced by Visa in 1999, but it is still unevenly used in European countries and rarely used in the United States. One reason for the limited deployment of 3-D Secure is the friction caused by requiring users to remember and enter a password and/or retrieve and enter a one-time password. Consumers “hate” 3-D Secure 1.0, and merchants are wary of transaction abandonment. Another reason may be that it facilitates phishing attacks by asking for a password after redirection, as discussed here, here, and here.

3-D Secure 2.0 aims at reducing that friction. When 3-D Secure 2.0 is deployed, it will introduce a frictionless flow that will eliminate cardholder authentication friction for 95% of transactions deemed to be low risk. But it will do so by eliminating cardholder authentication altogether for those transactions. The merchant will send contextual information about the intended transaction to the issuer, including the cardholder’s payment history with the merchant. The issuer will use that information, plus its own information about the cardholder and the merchant, to assess the transaction’s risk, and will communicate the assessment to the merchant, who will redirect the browser to the issuer for high risk transactions but omit authentication for low risk ones.

This new version of 3-D Secure has serious drawbacks. It is privacy invasive for the cardholder. It puts the merchant in a bind: the merchant has to keep customer information for the sake of 3-D Secure while minimizing and protecting such information to comply with privacy regulations. It is complex for the issuer, who has to set up an AI “self-learning” risk assessment system. It requires expensive infrastructure: the contextual information that the merchant sends to the issuer goes through no less than three intermediate servers—a 3DS Server, a Directory Server and an Access Control Server. And it provides little or no security benefit for low risk transactions: the cardholder is not authenticated, and the 3-D Secure risk assessment that the issuer performs before the merchant submits the transaction to the payment network is redundant with the risk assessment that the issuer performs later, before authorizing or declining the submitted transaction forwarded by the payment network.

There is a better way. In a Pomcor technical report we propose a scheme for securing online credit card payments with two-factor authentication of the cardholder without adding friction.

Continue reading “Frictionless Secure Web Payments without Giving up on Cardholder Authentication”

Random Bit Generation with Full Entropy and Configurable Prediction Resistance in a Node.js Application

This is the fourth and last post of a series describing a proof-of-concept web app that implements cryptographic authentication using Node.js with a MongoDB back-end. Part 1 described the login process. Part 2 described the registration process. Part 3 described login session maintenance. The proof-of-concept app, called app-mongodb.js, or simply app-mongodb, can be found in a zip file downloadable from the cryptographic authentication page.

In app-mongodb, random bits are used on the server side for generating registration and login challenges to be signed by the browser, and for generating login session IDs. On the client side, they are used for generating key pairs and computing randomized signatures on server challenges.

In quest of full entropy

There are established methods for obtaining random bits to be used in web apps. On the client side, random bits can be obtained from crypto.getRandomValues, which is part of the W3C Web Crypto API. On the server side, /dev/urandom can be used on Linux, macOS and most flavors of Unix. However, neither of these methods guarantees full entropy.
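For reference, the two established methods mentioned above look like this; the byte counts are illustrative, and, as just noted, neither method guarantees full entropy.

```javascript
// The two established methods mentioned above; byte counts are illustrative,
// and neither method guarantees full entropy.

// Server side (Node.js): crypto.randomBytes reads from the operating system's
// source of randomness, which is /dev/urandom on Linux and most Unix flavors.
const { randomBytes } = require('crypto');
const loginChallenge = randomBytes(32);   // e.g. a 256-bit challenge to be signed

// Client side (browser): crypto.getRandomValues from the W3C Web Crypto API.
const clientRandom = crypto.getRandomValues(new Uint8Array(32));
```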

Continue reading “Random Bit Generation with Full Entropy and Configurable Prediction Resistance in a Node.js Application”

Login Session Maintenance in Node.js using Express and Handlebars

This is part 3 of a series of posts describing a proof-of-concept web app that implements cryptographic authentication using Node.js with a MongoDB back-end. Part 1 described the login process. Part 2 described the registration process. This Part 3 is concerned with login session maintenance in a broader scope than cryptographic authentication. Part 4, concerned with random bit generation, is now available. The proof-of-concept app, called app-mongodb.js, can be found in a zip file downloadable from the cryptographic authentication page.

Update. The name of the constant securityStrength has been changed to rbgSecurityStrength as noted in the last post of the series and reflected in one of the snippets below.

At first glance it may seem that there is no need for login session maintenance in a web app that implements cryptographic authentication with a key pair. Every HTTP request can be authenticated on its own without linking it to a session, by sending the public key to the back-end and proving possession of the private key, as in the login process described in Part 1. That login process relied on the user supplying the username in order to locate the user record, but this is not essential, since the user record could be located in the database by searching for the public key, which is unique with overwhelming probability.

But login sessions provide important login/logout functionality, allowing the user to choose whether to authenticate or not. A member of a site accessible to both members and non-members, for example, may choose to visit the site without authenticating in order to see what information is made available by the site to non-members. Also, the proof of possession of the private key has a latency cost for the user due to the need to retrieve the challenge from the server, and a computational cost for the server and the browser. These costs are insignificant if incurred once per session, but may not be insignificant if incurred for every HTTP request.

The app discussed in this series, app-mongodb.js, implements login sessions in the traditional way, using session cookies. Having said that, I could stop here. But the Express framework used in the app provides interesting ways of implementing traditional login sessions, which are worth discussing.
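As a point of reference, the traditional pattern looks roughly like the following sketch built on the express-session middleware; app-mongodb.js may implement sessions differently, so the endpoint names and options shown here are illustrative.

```javascript
// Minimal sketch of traditional cookie-based login sessions in Express using
// the express-session middleware; endpoint names and options are illustrative,
// and app-mongodb.js may implement sessions differently.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({
  secret: process.env.SESSION_SECRET,            // signs the session cookie
  resave: false,
  saveUninitialized: false,
  cookie: { httpOnly: true, secure: true, sameSite: 'strict' }
}));

// After a successful proof of possession of the private key, mark the session
// as logged in (signature verification omitted here).
app.post('/login-public-key', (req, res) => {
  req.session.username = req.body.username;
  res.redirect('/');
});

// Middleware protecting member-only pages.
function requireLogin(req, res, next) {
  if (req.session.username) return next();
  res.redirect('/');
}

app.post('/logout', (req, res) => {
  req.session.destroy(() => res.redirect('/'));
});
```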

Continue reading “Login Session Maintenance in Node.js using Express and Handlebars”

Credential Registration for Cryptographic Authentication with Node.js and MongoDB

This is part 2 of a series of posts describing a proof-of-concept web app that implements cryptographic authentication using Node.js, Express, Handlebars, MongoDB and Mongoose. All parts are now available. Part 1 describes the login process. This Part 2 describes the registration process. Part 3 describes login session maintenance. Part 4 is concerned with random bit generation.

Update. The name of the constant securityStrength has been changed to rbgSecurityStrength as noted in the last post of the series and reflected in the snippets below.

Part 1 of this series described the login process of a proof-of-concept Node.js application that implements cryptographic authentication using a MongoDB database back-end. The app, called app-mongodb.js, can be found in a zip file downloadable from the cryptographic authentication page, where it is bundled together with a simpler app that has the same functionality and the same front-end but emulates the database using JavaScript objects, provided for comparison.

This post describes the registration process of app-mongodb.js. The app has a registration page reachable from a link found under a top-of-page login form in the public pages of the app. The registration page has a form where the user enters a username, a first name and a last name, but no password. The first and last names are representative of any info that the user may be asked to provide in a full-fledged application.

The registration process of app-mongodb.js has a structure similar to that of the login process described in Part 1. The browser sends an HTTP POST request to the /register-username endpoint of the server, conveying the username, first name and last name. The server creates a user record, called a “user document” in MongoDB terminology, and responds with a JavaScript POST redirection. The JavaScript POST redirection consists of downloading a script that generates a key pair, signs a server challenge with the private key, and sends the public key and the signature to the /register-public-key endpoint in a second HTTP POST request. The server cryptographically validates the public key, verifies the signature, and adds the public key to the user document.
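Before looking at the app’s own code for the first request, here is a rough, hypothetical sketch of how the second request might be handled at the /register-public-key endpoint. The User model, the field names and the PEM key encoding are assumptions, and app-mongodb.js uses its own cryptographic code rather than Node’s crypto module.

```javascript
// Hypothetical sketch of the /register-public-key handler. The User model,
// the field names and the PEM key encoding are assumptions; app-mongodb.js
// uses its own cryptographic code rather than Node's crypto module.
const express = require('express');
const crypto = require('crypto');
const User = require('./models/user');     // Mongoose model (illustrative)

const app = express();
app.use(express.json());

app.post('/register-public-key', async (req, res) => {
  const { username, publicKeyPem, signature } = req.body;

  const user = await User.findOne({ username });
  if (!user || !user.registrationChallenge) {
    return res.status(400).send('Unknown user or missing challenge');
  }

  // Verify the signature on the challenge issued with the first request.
  const verifier = crypto.createVerify('SHA256');
  verifier.update(Buffer.from(user.registrationChallenge, 'base64'));
  if (!verifier.verify(publicKeyPem, Buffer.from(signature, 'base64'))) {
    return res.status(400).send('Signature verification failed');
  }

  // Add the public key to the user document and discard the challenge.
  user.publicKey = publicKeyPem;
  user.registrationChallenge = undefined;
  await user.save();
  res.redirect('/');
});
```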

The following code snippet shows how the server processes the first HTTP POST request, received at the /register-username endpoint.

Continue reading “Credential Registration for Cryptographic Authentication with Node.js and MongoDB”