Will Cardholder Authentication Ever Come to the US?

This blog post has been coauthored with Karen Lewison

You may have heard that the EU is struggling to implement the Strong Customer Authentication (SCA) requirements of Payment Services Directive 2 (PSD2). The directive was issued four years ago, Regulatory Technical Standards (RTS) followed two years later, and the SCA requirements went into effect on September 14. But on October 16 the European Banking Authority (EBA) had to postpone enforcement until December 31, 2020, due to pushback from the National Competent Authorities (NCAs) of the EU member countries. In an opinion announcing the postponement, the EBA cited as a reason for the pushback the fact that 3-D Secure 2 (3DS2) is not ready.

The problems that the EBA is having with the SCA requirements have more to do with the bureaucratic formulation of the requirements in PSD2 than with the technical difficulty of providing strong security. We will discuss this in another post, but first we want to ask whether cardholder authentication will ever come to the US.

Continue reading “Will Cardholder Authentication Ever Come to the US?”

3-D Secure 2 May Allow the Merchant to Impersonate the Cardholder

3-D Secure is a protocol that provides security for online credit card payments by redirecting the cardholder’s browser to the web site of the bank that has issued the credit card, where the cardholder is authenticated by methods such as username-and-password or a one-time password. 3-D Secure is rarely used in the US because the cardholder authentication creates friction that may cause transaction abandonment, but it is used more frequently in other countries. The credit card networks have been working on a new version of the protocol, called 3-D Secure 2, where the issuing bank assesses fraud risk based on information received from the merchant through a back channel and waives authentication for low-risk transactions.
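To make the risk-based approach concrete, here is a toy TypeScript sketch of the kind of decision an issuer might make after receiving transaction data through the back channel. Actual issuer risk engines are proprietary and far more elaborate; every field name, weight and threshold below is made up for illustration.

```typescript
// Hypothetical signals a merchant might send to the issuer through the 3DS2
// back channel; these field names are illustrative, not taken from the spec.
interface TransactionData {
  amount: number;                  // transaction amount in the card's currency
  shippingMatchesBilling: boolean; // shipping address equals billing address
  knownDevice: boolean;            // device seen before for this cardholder
  cardholderHistoryScore: number;  // 0 (new customer) .. 1 (long good history)
}

// Toy risk score in [0, 1]; a real risk engine would be far more elaborate.
function riskScore(tx: TransactionData): number {
  let score = 0;
  if (tx.amount > 500) score += 0.35;
  if (!tx.shippingMatchesBilling) score += 0.25;
  if (!tx.knownDevice) score += 0.2;
  score += 0.2 * (1 - tx.cardholderHistoryScore);
  return score;
}

// Low-risk transactions take the "frictionless" flow, i.e. authentication is
// waived; the rest get a challenge such as a one-time password.
function decide(tx: TransactionData): "frictionless" | "challenge" {
  return riskScore(tx) < 0.5 ? "frictionless" : "challenge";
}

console.log(decide({
  amount: 60,
  shippingMatchesBilling: true,
  knownDevice: true,
  cardholderHistoryScore: 0.9,
})); // "frictionless"
```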

In a paper presented at HCII 2019 we showed that 3-D Secure 2 has serious privacy and usability issues and we proposed an alternative protocol that provides strong security without friction for all transactions by cryptographically authenticating the cardholder. We have now looked in more detail at a particular configuration of 3-D Secure 2 where the cardholder uses a native app instead of a browser to access the merchant’s site, and we have found security flaws, described in detail in a technical report, that may allow a malicious merchant to impersonate the cardholder.

Continue reading “3-D Secure 2 May Allow the Merchant to Impersonate the Cardholder”

Pomcor Contributes Biometrics Chapter to HCI and Cybersecurity Handbook

Karen Lewison and I have contributed the chapter on Biometrics to the book Human-Computer Interaction and Cybersecurity Handbook, published by Taylor & Francis in the CRC Press series on Human Factors and Ergonomics. The editor of the handbook, Abbas Moallem, has received the SJSU 2018 Author and Artist Award for the book.

Biometrics is a very complex topic because there are many biometric modalities, and different modalities use different technologies that require different scientific backgrounds for in-depth understanding. The chapter focuses on biometric verification and packs a lot of knowledge into only 20 pages, organized by identifying general concepts, matching paradigms and security architectures before diving into the details of fingerprint, iris, face and speaker verification, briefly surveying other modalities, and discussing several methods of combining modalities in biometric fusion. It emphasizes presentation attacks and mitigation methods that can be used in what will always be an arms race between impersonators and verifiers, and discusses the security and privacy implications of biometric technologies.
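As one concrete illustration of the fusion methods the chapter surveys, here is a minimal TypeScript sketch of score-level fusion, where each modality's matcher produces a raw score that is normalized and combined into a single decision. The normalization, weights and threshold are illustrative choices, not taken from the chapter.

```typescript
// Each modality's matcher reports a raw score on its own scale; min and max
// bound that scale, and weight reflects how much the verifier trusts the modality.
interface ModalityScore {
  score: number;
  min: number;
  max: number;
  weight: number;
}

// Score-level fusion: min-max normalize each score to [0, 1], then take a
// weighted average across modalities.
function fuse(scores: ModalityScore[]): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const s of scores) {
    const normalized = (s.score - s.min) / (s.max - s.min);
    weightedSum += s.weight * normalized;
    totalWeight += s.weight;
  }
  return weightedSum / totalWeight;
}

// Accept if the fused score clears a threshold chosen according to the
// verifier's tolerance for false accepts versus false rejects.
const fused = fuse([
  { score: 42, min: 0, max: 100, weight: 0.6 }, // fingerprint matcher
  { score: 0.8, min: 0, max: 1, weight: 0.4 },  // speaker verifier
]);
console.log({ fused, accept: fused >= 0.5 });
```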

Feedback or questions about the chapter would be very welcome as comments on this post.

New Conference to Address the Human Aspects of Cybersecurity and Cryptography

Human factors are an essential aspect of cybersecurity. Take for example credit card payments on the web. A protocol for reducing fraud by authenticating the cardholder, 3-D Secure, was introduced by VISA in 1999 and adopted by other payment networks, but has seen limited deployment because of poor usability. Now 3-D Secure 2.0 attempts to reduce friction by asking the merchant to share privacy-sensitive customer information with the bank and by giving up on cardholder authentication for transactions deemed low-risk based on that data. A protocol with better usability would provide better security without impinging on cardholder privacy.

But human factors are not limited to the usability of cybersecurity defenses. In biometric authentication, human factors are the very essence of the defense. Human factors are also of the essence in cybersecurity attacks such as phishing and social engineering attacks, and play a role in enabling or spreading attacks that exploit technical vulnerabilities.

The 1st International Conference on HCI for Cybersecurity, Privacy and Trust (HCI-CPT) recognizes the multifaceted role played by human factors in cybersecurity, and intends to promote research that views Human-Computer Interaction (HCI) as “a fundamental pillar for designing more secure systems”. A call for participation can be found here.

Continue reading “New Conference to Address the Human Aspects of Cybersecurity and Cryptography”

Storing Cryptographic Keys in Persistent Browser Storage

This blog post is a companion to a presentation made at the 2017 International Cryptographic Module Conference and refers to the presentation slides, revised after the conference. Karen Lewison is a co-author of the presentation and of this blog post.

Slide 2: Key storage in web clients

Most Web applications today use TLS, thus relying on cryptography to provide a secure channel between client and server, and to authenticate the server to the client by means of a cryptographic credential, consisting of a TLS server certificate and its associated private key. But other uses of cryptography by Web applications are still rare. Client authentication still relies primarily on traditional username-and-password, one-time passwords, proof of possession of a mobile phone, biometrics, or combinations of two or more such authentication factors. Web payments still rely on a credit card number being considered a secret. Encrypted messaging is on the rise, but is not Web-based.

A major obstacle to broader use of cryptography by Web applications is the problem of where to store cryptographic keys on the client side. Continue reading “Storing Cryptographic Keys in Persistent Browser Storage”
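For a concrete picture of one such option, here is a minimal TypeScript sketch that uses the standard Web Crypto API and IndexedDB to generate a non-extractable signing key pair and persist it in the browser. The database, store and record names are illustrative.

```typescript
// Generate a non-extractable ECDSA P-256 key pair: the private key can be used
// to sign, but its raw material can never be exported from the browser.
async function generateKeyPair(): Promise<CryptoKeyPair> {
  return (await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // extractable: false
    ["sign", "verify"]
  )) as CryptoKeyPair;
}

// Open (or create) an IndexedDB database with a single object store for keys.
function openKeyStore(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("keystore", 1);
    request.onupgradeneeded = () => request.result.createObjectStore("keys");
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// CryptoKey objects are structured-cloneable, so a key pair can be stored in
// IndexedDB directly; the private key remains non-extractable when read back.
async function storeKeyPair(pair: CryptoKeyPair): Promise<void> {
  const db = await openKeyStore();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("keys", "readwrite");
    tx.objectStore("keys").put(pair, "client-credential");
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

generateKeyPair().then(storeKeyPair);
```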

What kind of “encrypted fingerprint template” is used by MasterCard?

In a press release, MasterCard announced yesterday an EMV payment card that features a fingerprint reader. The release said that two trials had recently been concluded in South Africa and that, after additional trials, a full rollout is expected this year.

In the United States, EMV chip cards are used without a PIN. The fingerprint reader is no doubt intended to fill that security gap. But any use of biometrics raises privacy concerns. Perhaps to address such concerns, the press release stated that a fingerprint template stored in the card is “encrypted”.

That’s puzzling. If the template is encrypted, what key is used to decrypt it before use?
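To make the puzzle concrete, here is a minimal Node.js-style TypeScript sketch of one possible reading: the template is stored encrypted under a symmetric key that also resides on the card, and is decrypted just before matching. The cipher choice, parameters and matching stub are assumptions for illustration; MasterCard has not published these details.

```typescript
import { createDecipheriv } from "node:crypto";

// Stand-in for the card's real matching algorithm, which would score feature
// similarity rather than testing byte equality.
function templatesMatch(template: Buffer, liveSample: Buffer): boolean {
  return template.equals(liveSample);
}

function verifyFingerprint(
  encryptedTemplate: Buffer,
  cardKey: Buffer, // hypothetical 32-byte AES key held by the card itself
  iv: Buffer,      // hypothetical 16-byte IV stored with the ciphertext
  liveSample: Buffer
): boolean {
  const decipher = createDecipheriv("aes-256-cbc", cardKey, iv);
  const template = Buffer.concat([
    decipher.update(encryptedTemplate),
    decipher.final(),
  ]);
  // The decrypted template exists in the card's memory at this point, so under
  // this reading the encryption protects the template at rest but not in use,
  // and whoever can extract the ciphertext can presumably extract the key too.
  return templatesMatch(template, liveSample);
}
```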

Continue reading “What kind of ‘encrypted fingerprint template’ is used by MasterCard?”

Cryptographic Module Standards at a Crossroads after Snowden’s Revelations

Last week I participated in the third International Cryptographic Module Conference (ICMC), organized by the Cryptographic Module User Forum (CMUF), and concerned with the validation of cryptographic modules against government and international standards. You may think of cryptographic module validation as a dry topic, but it was quite an exciting conference, full of technical and political controversy. The technical controversy resulted from the fact that the standards are out of sync with current technology and it is not at all clear how they can be fixed. The political controversy resulted from the fact that, after Snowden’s revelations, it is not at all clear who should try to fix them. The organizers signalled that they were not afraid of controversy by inviting as keynote speakers both Phil Zimmermann, creator of PGP and co-founder of Silent Circle, and Marianne Bailey, Deputy CIO for Cybersecurity at the US Department of Defense, as well as the well-known expert Paul Kocher of SSL fame. I enjoyed an exchange between Zimmermann and Bailey on the imbalance between defense and offense at the NSA and its impact on cybersecurity. Continue reading “Cryptographic Module Standards at a Crossroads after Snowden’s Revelations”

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove tokens, Idemix anonymous credentials, or credentials based on group signatures) is that they are not supported by TLS. We noticed this when we looked at privacy-enhancing credentials in the context of NSTIC, and we proposed an architecture for the NSTIC ecosystem that included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over satellite links due to the additional roundtrips and the transmission of certificate chains during the handshake. Client and attribute certificates, when used, are sent in the clear. And there has been a long list of TLS vulnerabilities, some of which have not been addressed, while others are addressed in TLS versions and extensions that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites supported TLS 1.1, which dates back to April 2006, only 20.7% supported TLS 1.2, which dates back to August 2008, and only 30.6% had server-side protection against the BEAST attack, which requires either TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due to the age of the protocol and the large number of versions and extensions that it has accumulated during its long life. Changing the configuration of a TLS implementation to protect against vulnerabilities without shutting out a large portion of the user base is a complex task that IT personnel are no doubt loath to tackle.
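The SSL Pulse numbers come from a large-scale survey, but version support can also be probed for a single site. Here is a minimal sketch using the Node.js tls API, which postdates this survey; note that a modern OpenSSL build may itself refuse to offer older protocol versions, so a failed probe can reflect the client's policy rather than the server's.

```typescript
import * as tls from "node:tls";

// Attempt a handshake while pinning both minVersion and maxVersion to a single
// protocol version; success means the server accepted that version.
function probe(host: string, version: tls.SecureVersion): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = tls.connect(
      { host, port: 443, servername: host, minVersion: version, maxVersion: version },
      () => {
        socket.end();
        resolve(true);
      }
    );
    socket.on("error", () => resolve(false));
    socket.setTimeout(5000, () => {
      socket.destroy();
      resolve(false);
    });
  });
}

async function main() {
  for (const version of ["TLSv1.1", "TLSv1.2"] as tls.SecureVersion[]) {
    console.log(version, await probe("example.com", version));
  }
}

main();
```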

So perhaps it is time to restart from scratch, designing a new transport layer security protocol (actually, two of them, one for connections and the other for datagrams) that will incorporate the lessons learned from TLS and DTLS while discarding the heavy baggage of old code and backward compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits of redesigning transport layer security for the military, which has especially strong reasons to want better transport layer security protocols. The military should be interested in better performance over satellite and radio links, for obvious reasons. It should be interested in increased security, because so much is at stake in the security of military networks. And I would argue that it should also be interested in increased privacy, because what is viewed as privacy on the Internet may be viewed as resistance to traffic analysis in military networks.

Pomcor’s Comments on the Cybersecurity Green Paper

We have written a response to the Call for Comments on the report entitled Cybersecurity, Innovation and the Internet Economy, written by the Internet Policy Task Force of the US Department of Commerce.

In the response we call for research and development efforts aimed at improving and broadening the scope of the TLS protocol (formerly known as SSL). This would benefit NSTIC and the many IETF protocols that rely on TLS for their security.

If you have any comments on our response, please leave them below.