As the travel restrictions imposed to control the coronavirus pandemic are beginning to be relaxed in some parts of the world, it is time to start rethinking airport security in the age of COVID-19. Even if an effective vaccine is found for COVID-19, it will be out of the question to go back to long lines at security checkpoints and boarding gates, and the manual checking of identity documents and boarding passes.
In a provisional patent application that I coauthored with Karen Lewison before the pandemic and have now published, we proposed an automated method of verifying the identity of travelers that could be used in the post-pandemic world to speed up the security check and the boarding process, and to eliminate the face-to-face interaction with a security officer at the checkpoint and a flight attendant at the boarding gate. The method takes advantage of the high accuracy achieved by today’s deep neural networks for face recognition, while overcoming the privacy concerns raised by the collection and storage of facial images.
Here is a summary of the method.
Continue reading “Airport Security in the Age of COVID-19”
This blog post has been coauthored with Karen Lewison
In recent posts we have been concerned with online credit card fraud
and how to fight it using cardholder authentication. In this post we
are concerned with another kind of financial fraud, known as
application fraud or new account fraud. Both kinds of fraud have been
rising after the introduction of chip cards, for reasons mentioned by
Elizabeth Lasher in her article “The
Surge of Application Fraud”:
“Due to the high volume of data breaches, Social Security numbers,
mailing addresses, passwords, health history, even the name of our
first pet is all for sale on the Dark Web. When you combine this
phenomenon with the economic pressure applied on fraudsters to find a
new cash cow after chip and signature plugged a gap in card-present
fraud in the US, there is a perfect storm.”
The term “application fraud” refers to the creation of a
financial account, such as a bank account or a mortgage account, with
the intention to commit fraud. Application fraud can be first-party
fraud, where the account is opened under the fraudster’s own identity,
or third-party fraud, where the fraudster uses a stolen identity.
Here we are primarily concerned with the latter.
Continue reading “A New Tool Against the Surge of Application Fraud”
This blog post has been coauthored with Karen Lewison
You may have heard that the EU is struggling to implement the Strong
Customer Authentication (SCA) requirements of Payment Services
Directive 2 (PSD2). The directive was issued four years ago, Regulatory
Technical Standards followed two years later, and the SCA requirements
went into effect on September 14. But on October 16 the European
Banking Authority (EBA) had to postpone enforcement until December 31,
2020, due to pushback from the National Competent Authorities (NCAs)
of the EU member countries. In announcing the postponement, the EBA
cited as a reason for the pushback the fact that 3-D Secure 2
is not ready.
The problems that the EBA is having with the SCA requirements have
more to do with the bureaucratic formulation of the requirements in
PSD2 than with the technical difficulty of providing strong security.
We will discuss this in another post, but first we want to ask here
whether cardholder authentication will ever come to the US.
Continue reading “Will Cardholder Authentication Ever Come to the US?”
This is part 1 of a series on omission-tolerant integrity
protection and related topics.
A technical report on the topic is available
on this site and
in the IACR ePrint Archive.
Broadly speaking, an omission-tolerant cryptographic checksum
is a checksum on data that does not change when items are removed from
the data but makes it infeasible for an adversary to modify the data
in other ways without invalidating the checksum.
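As a toy illustration of the concept (this is not the typed-hash-tree construction of the technical report, and it glosses over issues the report addresses formally), one can hash each salted key-value pair, define the checksum as a hash over the sorted per-item hashes, and let a prover omit an item by presenting only its hash instead of its contents. The attribute names, values, and helper functions below are made up for the example.

```python
import hashlib
import os

def item_hash(key, value, salt):
    # Hash one salted key-value pair; the salt prevents guessing the
    # pair's content from its hash when only the hash is disclosed.
    return hashlib.sha256(salt + key.encode() + b"\x00" + value.encode()).digest()

def checksum(item_hashes):
    # Hash the sorted per-item hashes, so the checksum depends only on
    # the set of items, not on the order of disclosure.
    h = hashlib.sha256()
    for d in sorted(item_hashes):
        h.update(d)
    return h.hexdigest()

# Issuer: salt and hash each attribute, then compute the checksum.
attrs = {"name": "Alice", "dob": "1990-01-01", "ssn": "123-45-6789"}
salts = {k: os.urandom(16) for k in attrs}
hashes = {k: item_hash(k, v, salts[k]) for k, v in attrs.items()}
c = checksum(hashes.values())

# Prover: disclose name and dob; omit ssn by presenting only its hash.
disclosed = {k: (attrs[k], salts[k]) for k in ("name", "dob")}
omitted = [hashes["ssn"]]

# Verifier: recompute the checksum from what was presented.
recomputed = checksum(
    [item_hash(k, v, s) for k, (v, s) in disclosed.items()] + omitted
)
assert recomputed == c          # omission does not change the checksum

# Modifying a disclosed value does invalidate the checksum.
forged = checksum(
    [item_hash("name", "Mallory", salts["name"]),
     item_hash("dob", attrs["dob"], salts["dob"])] + omitted
)
assert forged != c
```

The checksum is unchanged when the ssn item is omitted, because the verifier can substitute the item's hash for its contents; but no disclosed pair can be altered without invalidating it.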
We discovered the concept of omission-tolerant integrity protection
while working on rich
credentials. A rich credential includes subject attributes and
verification data stored in a typed hash tree. We noted in an interim
report that the root label of the tree could be viewed as an
“omission-tolerant cryptographic checksum”. Prof. Phil
Windley, who read the report, told us that he had not seen the concept
before, and asked if we had invented it. We then added a section on
typed hash trees and omission-tolerant integrity protection to the
report. We’ve now written a new technical
report that discusses omission-tolerant checksums and
omission-tolerant integrity protection in a broader context than rich
credentials. The main contributions of the new paper are a formal
definition of omission-tolerant integrity protection, a method of
computing an omission-tolerant checksum on a bit-string encoding of a
set of key-value pairs, and a formal proof of security in an
asymptotic security setting that uses the system
parameterization concept introduced by Boneh and Shoup in their book
A Graduate Course in Applied Cryptography.
I have not said much in this blog about omission-tolerant integrity
protection, and there is a lot to say: how an omission-tolerant
checksum can be used to implement selective disclosure of subject
attributes in public key certificates; how public key certificates
with selective disclosure could easily provide security and privacy
for client authentication in TLS; what’s special about Boneh and
Shoup’s system parameterization concept and how we use it in our
definitions and proofs; how a typed hash tree can provide
omission-tolerant integrity protection whereas a Merkle tree cannot;
and a number of narrower but no less interesting topics. This is
the first of a series of posts on these topics.
Karen Lewison and I have contributed the chapter on Biometrics to the book
Human-Computer Interaction and Cybersecurity Handbook, published by Taylor &
Francis in the CRC Press series on Human Factors and Ergonomics. The
editor of the book, Abbas Moallem, has received the SJSU 2018 Author
and Artist Award for the book.
Biometrics is a very complex topic because there are many biometric
modalities, and different modalities use different technologies that
require different scientific backgrounds for in-depth understanding.
The chapter focuses on biometric verification and packs a lot of
knowledge in only 20 pages, which it organizes by identifying general
concepts, matching paradigms and security architectures before diving
into the details of fingerprint, iris, face and speaker verification,
briefly surveying other modalities, and discussing several methods of
combining modalities in biometric fusion. It emphasizes presentation
attacks and mitigation methods that can be used in what will always be
an arms race between impersonators and verifiers, and discusses the
security and privacy implications of biometric technologies.
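One widely used approach to combining modalities is score-level fusion, in which the match scores produced by different matchers are normalized to a common range and combined as a weighted sum. The sketch below is a generic illustration of that technique, not code from the chapter; the score ranges, weights, and threshold are invented values for the example.

```python
def minmax_normalize(score, lo, hi):
    # Map a raw matcher score into [0, 1] given the matcher's score range.
    return (score - lo) / (hi - lo)

def fused_score(scores, ranges, weights):
    # Score-level fusion: weighted average of normalized per-modality scores.
    total = sum(weights)
    return sum(
        w * minmax_normalize(s, lo, hi)
        for s, (lo, hi), w in zip(scores, ranges, weights)
    ) / total

# Hypothetical raw scores from a fingerprint matcher and a face matcher.
scores = [142.0, 0.83]               # fingerprint similarity, face similarity
ranges = [(0.0, 200.0), (0.0, 1.0)]  # each matcher's raw score range
weights = [0.6, 0.4]                 # fingerprint weighted more heavily

s = fused_score(scores, ranges, weights)
accept = s >= 0.7  # decision threshold would be tuned on validation data
```

In practice the weights and the acceptance threshold are chosen to trade off false accepts against false rejects for the particular matchers being fused.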
Feedback or questions about the chapter would be very welcome as
comments on this post.
Human factors are an essential aspect of cybersecurity. Take for
example credit card payments on the web. A protocol for reducing
fraud by authenticating the cardholder, 3-D Secure, was introduced by
VISA in 1999 and adopted by other payment networks, but has seen
limited deployment because of poor
usability. Now 3-D
Secure 2.0 attempts to reduce friction by asking the merchant to
share privacy-sensitive customer information with the bank and giving
up on cardholder authentication for transactions deemed low-risk based
on that data. A protocol
with better usability would provide better security without
impinging on cardholder privacy.
But human factors are not limited to the usability of
cybersecurity defenses. In biometric authentication, human factors
are the very essence of the defense. Human factors are also of the
essence in cybersecurity attacks such as phishing and social
engineering attacks, and play a role in enabling or spreading attacks
that exploit technical vulnerabilities.
The International Conference on HCI for Cybersecurity, Privacy and Trust
(HCI-CPT) recognizes the multifaceted role played by human factors
in cybersecurity, and intends to promote research that views
Human-Computer Interaction (HCI) as “a fundamental pillar
for designing more secure systems”. A call for participation
can be found here.
Continue reading “New Conference to Address the Human Aspects of Cybersecurity and Cryptography”
This blog post is a companion to a presentation made at the
2017 International Cryptographic Module Conference
and refers to the presentation
slides, revised after the
conference. Karen Lewison is a co-author of the presentation and of
this blog post.
Slide 2: Key storage in web clients
Most Web applications today use TLS, thus relying on cryptography to
provide a secure channel between client and server, and to
authenticate the server to the client by means of a cryptographic
credential, consisting of a TLS server certificate and its
associated private key. But other uses of cryptography by Web
applications are still rare. Client authentication still relies
primarily on traditional username-and-password, one-time passwords,
proof of possession of a mobile phone, biometrics, or combinations of
two or more of such authentication factors. Web payments still rely
on a credit card number being considered a secret. Encrypted
messaging is on the rise, but is not Web-based.
A major obstacle to broader use of cryptography by Web applications is
the problem of where to store cryptographic keys on the client side.
Continue reading “Storing Cryptographic Keys in Persistent Browser Storage”
In a press
release, MasterCard announced yesterday an EMV payment card that
features a fingerprint reader. The release said that two trials had
recently been concluded in South Africa and that, after additional
trials, a full rollout was expected this year.
In the United States, EMV chip cards are used without a PIN. The
fingerprint reader is no doubt intended to fill that security gap.
But any use of biometrics raises privacy concerns. Perhaps to address
such concerns, the press release stated that a fingerprint template
stored in the card is “encrypted”.
That’s puzzling. If the template is encrypted, what key is used to
decrypt it before use?
Continue reading “What kind of “encrypted fingerprint template” is used by MasterCard?”
NIST is working on the third revision of SP 800-63, which used to be
called the Electronic Authentication Guideline and has now
been renamed the Digital Identity Guidelines. An important
change in the current draft
of the third revision is a much expanded scope for biometrics.
The following are comments by Pomcor on that aspect of the new
guidelines, and more specifically on Section
5.2.3 of Part B, which we have sent to NIST in response to a call
for public comments.
The draft is right in recommending the use of presentation attack
detection (PAD). We think it should go farther and make PAD a
mandatory requirement right away, without waiting for a future edition
as stated in a note.
But the draft only considers PAD performed at the sensor.
Continue reading “Comments on the Recommended Use of Biometrics in the New Digital Identity Guidelines, NIST SP 800-63-3”
This is Part 3 of a series of posts presenting results of a project
sponsored by an SBIR Phase I grant from the US Department of Homeland
Security. These posts do not necessarily reflect the position or the policy
of the US Government.
To get community feedback on our
identity proofing project we made a presentation two days ago at
the 23rd Internet
Identity Workshop in Mountain View. The slides can be found here. We
were gratified that the feedback was positive and there were in-depth
discussions with identity experts both during and after the
presentation.
proofing has often relied on asking the subject multiple-choice
“knowledge questions” (e.g. which of the following zip
codes did you live in five years ago?). This method is terrible for
privacy, since it relies on the identity proofing service gathering
and using troves of personal information about people. Furthermore,
due to the proliferation of personal data available online, it has
now become insecure as well.
Continue reading “Remote Identity Proofing Discussed at the Internet Identity Workshop”