3-D Secure 2 May Allow the Merchant to Impersonate the Cardholder

3-D Secure is a protocol that provides security for online credit card payments by redirecting the cardholder’s browser to the web site of the bank that issued the credit card, where the cardholder is authenticated by methods such as a username and password or a one-time password. 3-D Secure is rarely used in the US because the cardholder authentication creates friction that may cause transaction abandonment, but it is used more frequently in other countries. The credit card networks have been working on a new version of the protocol, called 3-D Secure 2, where the issuing bank assesses fraud risk based on information received from the merchant through a back channel and waives authentication for low-risk transactions.

In a paper presented at HCII 2019 we showed that 3-D Secure 2 has serious privacy and usability issues and we proposed an alternative protocol that provides strong security without friction for all transactions by cryptographically authenticating the cardholder. We have now looked in more detail at a particular configuration of 3-D Secure 2 where the cardholder uses a native app instead of a browser to access the merchant’s site, and we have found security flaws, described in detail in a technical report, that may allow a malicious merchant to impersonate the cardholder.

Continue reading “3-D Secure 2 May Allow the Merchant to Impersonate the Cardholder”

Online Cardholder Authentication without Accessing the Card Issuer’s Site

One of the saddest failings of Internet technology is the lack of security for online credit card transactions. In in-store transactions, the cardholder authenticates by presenting the card, and the addition of a chip to the card has made counterfeiting much more difficult. But in online transactions, the cardholder is still authenticated by his or her knowledge of credit card and cardholder data, a weak secret known by many.

Credit card networks have been trying to provide security for online transactions for a long time. In the nineties they proposed a complicated cryptographic protocol called SET (Secure Electronic Transactions) that was never deployed. Then they came up with a simpler protocol called 3-D Secure, where the merchant redirects the cardholder’s browser to the issuing bank, which asks the cardholder to authenticate with a password. 3-D Secure is rarely used in the US and unevenly used in other countries, due to the friction that it causes and the risk of transaction abandonment; lately some issuers have been asking for a second authentication factor, adding more friction. Now the networks have come up with version 2 of 3-D Secure, which removes friction for low-risk transactions by introducing a “frictionless flow”. But the frictionless flow does not authenticate the cardholder. Instead, the merchant sends device and cardholder data to the issuer through a back channel, potentially violating the cardholder’s privacy.

Last August we wrote a blog post and a paper proposing a scheme for authenticating the cardholder without friction using a cryptographic payment credential consisting of a public key certificate and the associated private key. We have recently written a revised version of the paper with major improvements to the scheme. The paper will be presented next month at HCII 2019 in Orlando.

Continue reading “Online Cardholder Authentication without Accessing the Card Issuer’s Site”

A Formal Proof of Omission-Tolerant Integrity Protection

This is the fourth and last part of a series on omission-tolerant integrity protection and related topics. See also part 1, part 2, and part 3. A technical report on the topic is available on this site and in the IACR ePrint Archive.

In part 3 we saw how the system parameterization concept introduced by Boneh and Shoup in their Graduate Course in Applied Cryptography makes it possible to provide a formal definition of collision resistance for keyless hash functions. In this last post I will explain how that definition, and the security and system parameters used in the definition, are used in the proof of Theorem 3 of the technical report, which states that the root label of a typed hash tree can serve as an omission-tolerant checksum of an encoding of a set of key-value pairs.

Continue reading “A Formal Proof of Omission-Tolerant Integrity Protection”

Mapping a Formal Definition of Collision Resistance to Existing Implementations of Hash Functions

This is part 3 of a series on omission-tolerant integrity protection and related topics. See also part 1 and part 2. A technical report on the topic is available on this site and in the IACR ePrint Archive.

When a cryptographic hash is used as a checksum of a bit string, security depends on the collision resistance of the hash function. When the root label of a typed hash tree is used as an omission-tolerant checksum of a set of key-value pairs, security also depends on the collision resistance of a hash function, in this case the hash function that is used to construct the typed hash tree. (It may also depend on the preimage resistance of the hash function, depending on how the set is encoded, as we shall see in another post of this series.) The formal proof of security in the technical report is therefore based on a formal definition of collision resistance. But, surprising as this may seem, there is no standard, widely used formal definition of collision resistance for the keyless, or unkeyed, hash functions such as SHA-256 that are used in practice.

Collision resistance is hard to define

The concept of a hash function collision is simple and clear enough: it is a pair of distinct elements of the domain of the hash function that have the same image. The concept of collision resistance, on the other hand, is difficult to define. The reason for this, as explained for example by Katz and Lindell in their Introduction to Modern Cryptography (Chapman & Hall/CRC, 2nd edition, 2014, bottom of page 155), is that collisions exist, and for any collision there exists a trivial algorithm that hardcodes the collision and outputs it in constant time.
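
As a rough illustration of the difficulty (a sketch of my own, not taken from Katz and Lindell): for a fixed function such as SHA-256, some colliding pair of inputs exists by a counting argument, even though nobody has ever exhibited one, so the following constant-time “adversary” would succeed with probability 1 under a naive definition.

```typescript
// Illustration only: the strings below are placeholders standing in for an
// actual SHA-256 collision, which exists by a counting argument even though
// no such pair has ever been exhibited.
const m1 = 'placeholder for one message of an (unknown) SHA-256 collision';
const m2 = 'placeholder for the other message of that collision';

// An adversary that hardcodes the pair outputs a collision in constant time,
// so a naive definition ("no efficient algorithm outputs a collision") is
// unsatisfiable for a keyless hash function, even though nobody knows how
// to write this adversary down explicitly.
function trivialCollisionFinder(): [string, string] {
  return [m1, m2];
}
```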

Continue reading “Mapping a Formal Definition of Collision Resistance to Existing Implementations of Hash Functions”

Using an Omission-Tolerant Checksum for Selective Disclosure of Attributes Asserted by a Public Key Certificate

This is part 2 of a series on omission-tolerant integrity protection and related topics. A technical report on the topic is available on this site and in the IACR ePrint Archive.

The first post of this series introduced a new technical report that describes the concept of an omission-tolerant checksum in a broader context than the rich credentials paper and includes a formal proof of security in an asymptotic security setting. In this post I give an example showing how an omission-tolerant checksum implemented using a typed hash tree can provide selective disclosure of attributes asserted by a public key certificate.

Typed hash trees vs. Merkle trees

A typed hash tree is a hash tree where every node has a type in addition to a label, and the label of an internal node is a cryptographic hash of an encoding of the types and labels of its children. A distinguished type is assigned to all the internal nodes, and may also be assigned to some of the leaf nodes.

Hash trees were first proposed by Merkle. In a Merkle tree, each internal node is labeled by a hash of the concatenation of the labels of its children, and each leaf node is labeled by a hash of a data block. In both a typed hash tree and a Merkle tree, a subtree can be pruned without modifying the root label of the tree. (By “pruning a subtree” we mean removing the descendants of an internal node.) But in the case of a Merkle tree this is a “bug” to be mitigated if the tree is to provide integrity protection, while in the case of a typed hash tree it is a “feature” that allows the root label of the tree to be used as an omission-tolerant checksum.
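
As a rough illustration of the difference, here is a minimal TypeScript sketch of a typed hash tree (the node encoding is my own, chosen for brevity, and not necessarily the encoding specified in the technical report). Pruning the subtree rooted at an internal node, i.e. replacing that node by a leaf that carries the distinguished type and the same label, leaves the root label unchanged.

```typescript
import { createHash } from 'node:crypto';

// Every node has a type and a label; the label of an internal node is a
// hash of an encoding of the types and labels of its children.  The
// distinguished type marks internal nodes and pruned subtrees.
const INTERNAL = 'internal';

type TreeNode =
  | { type: string; label: string }                  // leaf
  | { type: typeof INTERNAL; children: TreeNode[] }; // internal node

function label(node: TreeNode): string {
  if ('label' in node) return node.label;
  const h = createHash('sha256');
  for (const child of node.children) {
    // Illustrative encoding of (type, label) pairs; the report uses its own.
    h.update(JSON.stringify([child.type, label(child)]));
  }
  return h.digest('hex');
}

// A small tree: two attribute leaves grouped under an internal node,
// plus a third attribute leaf.
const attrName = { type: 'name', label: 'hash-of-name-value' };
const attrDob  = { type: 'dob',  label: 'hash-of-dob-value' };
const group: TreeNode = { type: INTERNAL, children: [attrName, attrDob] };
const attrAddr = { type: 'addr', label: 'hash-of-addr-value' };
const root: TreeNode = { type: INTERNAL, children: [group, attrAddr] };

// Prune the subtree rooted at `group`: keep its type and label, drop its
// descendants.  The root label of the tree does not change.
const prunedGroup: TreeNode = { type: INTERNAL, label: label(group) };
const prunedRoot: TreeNode = { type: INTERNAL, children: [prunedGroup, attrAddr] };

console.log(label(root) === label(prunedRoot)); // prints true
```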

Continue reading “Using an Omission-Tolerant Checksum for Selective Disclosure of Attributes Asserted by a Public Key Certificate”

An Omission-Tolerant Cryptographic Checksum

This is part 1 of a series on omission-tolerant integrity protection and related topics. A technical report on the topic is available on this site and in the IACR ePrint Archive.

Broadly speaking, an omission-tolerant cryptographic checksum is a checksum on data that does not change when items are removed from the data but makes it infeasible for an adversary to modify the data in other ways without invalidating the checksum.

We discovered the concept of omission-tolerant integrity protection while working on rich credentials. A rich credential includes subject attributes and verification data stored in a typed hash tree. We noted in an interim report that the root label of the tree could be viewed as an “omission-tolerant cryptographic checksum”. Prof. Phil Windley, who read the report, told us that he had not seen the concept before, and asked if we had invented it. We then added a section on typed hash trees and omission-tolerant integrity protection to the final report.

We’ve now written a new technical report that discusses omission-tolerant checksums and omission-tolerant integrity protection in a broader context than rich credentials. The main contributions of the new paper are a formal definition of omission-tolerant integrity protection, a method of computing an omission-tolerant checksum on a bit-string encoding of a set of key-value pairs, and a formal proof of security in an asymptotic security setting that uses the system parameterization concept introduced by Boneh and Shoup in their online book.

I have not said much in this blog about omission-tolerant integrity protection, and there is a lot to say: how an omission-tolerant checksum can be used to implement selective disclosure of subject attributes in public key certificates; how public key certificates with selective disclosure could easily provide security and privacy for client authentication in TLS; what’s special about Boneh and Shoup’s system parameterization concept and how we use it in our definitions and proofs; how a typed hash tree can provide omission-tolerant integrity protection whereas a Merkle tree cannot; and a number of narrower but no less interesting topics. This is the first of a series of posts on these topics.

Pomcor Contributes Biometrics Chapter to HCI and Cybersecurity Handbook

Karen Lewison and I have contributed the chapter on Biometrics to the book Human-Computer Interaction and Cybersecurity Handbook, published by Taylor & Francis in the CRC Press series on Human Factors and Ergonomics. The editor of the handbook, Abbas Moallem, has received the SJSU 2018 Author and Artist Award for the book.

Biometrics is a very complex topic because there are many biometric modalities, and different modalities use different technologies that require different scientific backgrounds for in-depth understanding. The chapter focuses on biometric verification and packs a lot of knowledge into only 20 pages, organizing it by identifying general concepts, matching paradigms and security architectures before diving into the details of fingerprint, iris, face and speaker verification, briefly surveying other modalities, and discussing several methods of combining modalities in biometric fusion. It emphasizes presentation attacks and mitigation methods that can be used in what will always be an arms race between impersonators and verifiers, and discusses the security and privacy implications of biometric technologies.

Feedback or questions about the chapter would be very welcome as comments on this post.

New Conference to Address the Human Aspects of Cybersecurity and Cryptography

Human factors are an essential aspect of cybersecurity. Take for example credit card payments on the web. A protocol for reducing fraud by authenticating the cardholder, 3-D Secure, was introduced by VISA in 1999 and adopted by other payment networks, but has seen limited deployment because of poor usability. Now 3-D Secure 2.0 attempts to reduce friction by asking the merchant to share privacy-sensitive customer information with the bank and giving up on cardholder authentication for transactions deemed low-risk based on that data. A protocol with better usability would provide better security without impinging on cardholder privacy.

But human factors are not limited to the usability of cybersecurity defenses. In biometric authentication, human factors are the very essence of the defense. Human factors are also of the essence in cybersecurity attacks such as phishing and social engineering attacks, and play a role in enabling or spreading attacks that exploit technical vulnerabilities.

The 1st International Conference on HCI for Cybersecurity, Privacy and Trust (HCI-CPT) recognizes the multifaceted role played by human factors in cybersecurity, and intends to promote research that views Human-Computer Interaction (HCI) as “a fundamental pillar for designing more secure systems”. A call for participation can be found here.

Continue reading “New Conference to Address the Human Aspects of Cybersecurity and Cryptography”

Frictionless Secure Web Payments without Giving up on Cardholder Authentication

The 3-D Secure protocol version 1.0, marketed under different names by different payment networks (Verified by Visa, MasterCard SecureCode, American Express SafeKey, etc.), aims at reducing online credit card fraud by authenticating the cardholder. To that end, the merchant’s web site redirects the cardholder’s browser to the issuing bank, which typically authenticates the cardholder by asking for a static password and/or a one-time password delivered to a registered phone number. 3-D Secure was introduced by Visa in 1999, but it is still unevenly used in European countries and rarely used in the United States. One reason for the limited deployment of 3-D Secure is the friction caused by requiring users to remember and enter a password and/or retrieve and enter a one-time password. Consumers “hate” 3-D Secure 1.0, and merchants are wary of transaction abandonment. Another reason may be that it facilitates phishing attacks by asking for a password after redirection, as discussed here, here, and here.

3-D Secure 2.0 aims at reducing that friction. When 3-D Secure 2.0 is deployed, it will introduce a frictionless flow that will eliminate cardholder authentication friction for 95% of transactions deemed to be low risk. But it will do so by eliminating cardholder authentication altogether for those transactions. The merchant will send contextual information about the intended transaction to the issuer, including the cardholder’s payment history with the merchant. The issuer will use that information, plus its own information about the cardholder and the merchant, to assess the transaction’s risk, and will communicate the assessment to the merchant, who will redirect the browser to the issuer for high-risk transactions but omit authentication for low-risk ones.
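
Schematically, and leaving out the intermediaries, the decision logic of the frictionless flow looks something like the following sketch. The data fields and function names are hypothetical and do not reflect the actual 3-D Secure 2.0 message formats.

```typescript
// Schematic sketch of the frictionless flow described above.  The field
// and function names are hypothetical; the actual 3-D Secure 2.0 messages
// travel through a 3DS Server, a Directory Server and an Access Control
// Server.

interface ContextualData {
  cardNumber: string;
  amount: number;
  deviceInfo: Record<string, string>; // data about the cardholder's device
  paymentHistory: string[];           // cardholder's history with the merchant
}

type RiskAssessment = 'low' | 'high';

// Issuer side: assess risk from the merchant-supplied contextual data.
function assessRisk(data: ContextualData): RiskAssessment {
  // Stand-in for the issuer's "self-learning" risk assessment system,
  // which also uses the issuer's own data about cardholder and merchant.
  return data.amount < 100 && data.paymentHistory.length > 0 ? 'low' : 'high';
}

// Merchant side: act on the issuer's assessment.
function checkout(data: ContextualData): void {
  if (assessRisk(data) === 'low') {
    // Frictionless flow: the cardholder is not authenticated at all.
    submitToPaymentNetwork(data);
  } else {
    // Challenge flow: redirect the browser to the issuer, which
    // authenticates the cardholder before the transaction proceeds.
    redirectToIssuer(data);
  }
}

function submitToPaymentNetwork(data: ContextualData): void {
  // Submit the transaction to the payment network for authorization.
}

function redirectToIssuer(data: ContextualData): void {
  // Redirect the cardholder's browser to the issuing bank.
}
```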

This new version of 3-D Secure has serious drawbacks. It is privacy invasive for the cardholder. It puts the merchant in a bind: the merchant has to keep customer information for the sake of 3-D Secure while minimizing and protecting such information to comply with privacy regulations. It is complex for the issuer, who has to set up an AI “self-learning” risk assessment system. It requires expensive infrastructure: the contextual information that the merchant sends to the issuer goes through no fewer than three intermediate servers—a 3DS Server, a Directory Server and an Access Control Server. And it provides little or no security benefit for low-risk transactions, as the cardholder is not authenticated, and the 3-D Secure risk assessment that the issuer performs before the merchant submits the transaction to the payment network is redundant with the risk assessment that the issuer performs later, before authorizing or declining the transaction forwarded by the payment network.

There is a better way. In a Pomcor technical report we propose a scheme for securing online credit card payments with two-factor authentication of the cardholder without adding friction.

Continue reading “Frictionless Secure Web Payments without Giving up on Cardholder Authentication”

Random Bit Generation with Full Entropy and Configurable Prediction Resistance in a Node.js Application

This is the fourth and last post of a series describing a proof-of-concept web app that implements cryptographic authentication using Node.js with a MongoDB back-end. Part 1 described the login process. Part 2 described the registration process. Part 3 described login session maintenance. The proof-of-concept app, called app-mongodb.js, or simply app-mongodb, can be found in a zip file downloadable from the cryptographic authentication page.

In app-mongodb, random bits are used on the server side for generating registration and login challenges to be signed by the browser, and for generating login session IDs. On the client side, they are used for generating key pairs and computing randomized signatures on server challenges.
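
As an illustration of the client-side operations (a sketch using the W3C Web Crypto API, not the code that app-mongodb actually runs), a key pair can be generated in the browser and used to produce a randomized ECDSA signature on a challenge received from the server.

```typescript
// Browser-side sketch using the W3C Web Crypto API (not the actual
// app-mongodb code): generate a key pair and compute a randomized ECDSA
// signature on a challenge received from the server.
async function signChallenge(challenge: Uint8Array): Promise<{
  publicKeyJwk: JsonWebKey;
  signature: ArrayBuffer;
}> {
  const keyPair = await crypto.subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    false, // the private key is not extractable and never leaves the browser
    ['sign', 'verify'],
  );
  const signature = await crypto.subtle.sign(
    { name: 'ECDSA', hash: 'SHA-256' },
    keyPair.privateKey,
    challenge,
  );
  // The public key is exported (e.g. as a JWK) and sent to the server at
  // registration time, so that later signatures can be verified.
  const publicKeyJwk = await crypto.subtle.exportKey('jwk', keyPair.publicKey);
  return { publicKeyJwk, signature };
}
```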

In quest of full entropy

There are established methods for obtaining random bits to be used in web apps. On the client side, random bits can be obtained from crypto.getRandomValues, which is part of the W3C Web Crypto API. On the server side, /dev/urandom can be used in Linux, MacOS and most flavors of Unix. However, neither of these methods guarantees full entropy.
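
For reference, here is how those two established sources are typically used, one on each side (a generic sketch, not the full-entropy method that this post describes).

```typescript
// Generic sketch of the two established methods mentioned above
// (not the full-entropy method that this post is about).

// Server side (Node.js): crypto.randomBytes draws from a CSPRNG seeded by
// the operating system, e.g. from /dev/urandom on Linux.
import { randomBytes } from 'node:crypto';

const loginChallenge = randomBytes(32);            // 256-bit challenge to be signed
const sessionId = randomBytes(16).toString('hex'); // login session ID

// Client side (in a separate, browser-side script): crypto.getRandomValues
// from the W3C Web Crypto API.
const clientRandom = crypto.getRandomValues(new Uint8Array(32));
```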

Continue reading “Random Bit Generation with Full Entropy and Configurable Prediction Resistance in a Node.js Application”