Human factors are an essential aspect of cybersecurity. Take for
example credit card payments on the web. A protocol for reducing
fraud by authenticating the cardholder, 3-D Secure, was introduced by
VISA in 1999 and adopted by other payment networks, but has seen
limited deployment because of poor
usability. Now 3-D Secure 2.0 attempts to reduce friction by asking the merchant to share privacy-sensitive customer information with the bank and by forgoing cardholder authentication for transactions deemed low-risk based on that data. A protocol
with better usability would provide better security without
impinging on cardholder privacy.
But human factors are not limited to the usability of
cybersecurity defenses. In biometric authentication, human factors
are the very essence of the defense. Human factors are also of the
essence in cybersecurity attacks such as phishing and social engineering, and play a role in enabling or spreading attacks
that exploit technical vulnerabilities.
The International Conference on HCI for Cybersecurity, Privacy and Trust
(HCI-CPT) recognizes the multifaceted role played by human factors
in cybersecurity, and intends to promote research that views
Human-Computer Interaction (HCI) as “a fundamental pillar
for designing more secure systems”. A call for participation
can be found here.
Continue reading “New Conference to Address the Human Aspects of Cybersecurity and Cryptography”
This blog post is a companion to a presentation made at the
2017 International Cryptographic Module Conference
and refers to the presentation slides, which were revised after the conference. Karen Lewison is a co-author of the presentation and of this blog post.
Slide 2: Key storage in web clients
Most Web applications today use TLS, thus relying on cryptography to
provide a secure channel between client and server, and to
authenticate the server to the client by means of a cryptographic
credential, consisting of a TLS server certificate and its
associated private key. But other uses of cryptography by Web
applications are still rare. Client authentication still relies
primarily on traditional username-and-password, one-time passwords,
proof of possession of a mobile phone, biometrics, or combinations of two or more such factors. Web payments still rely
on a credit card number being considered a secret. Encrypted
messaging is on the rise, but is not Web-based.
A major obstacle to broader use of cryptography by Web applications is
the problem of where to store cryptographic keys on the client side.
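As a concrete illustration (my own sketch, not taken from the presentation), here is one way a web application can keep a client-side key in persistent browser storage: the Web Crypto API generates a non-extractable key pair, and the resulting CryptoKey handles are saved in IndexedDB. The database, object store, and record names are illustrative.

// Minimal sketch: generate a non-extractable ECDSA key pair with the Web
// Crypto API and persist the CryptoKey handles in IndexedDB. The private
// key material itself never becomes visible to JavaScript.
// Names ("key-demo", "keys", "client-credential") are illustrative.

function openDatabase(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("key-demo", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("keys");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function createAndStoreKeyPair(): Promise<void> {
  // extractable: false — the private key cannot be exported by script.
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["sign", "verify"]
  );
  const db = await openDatabase();
  // CryptoKey objects are structured-cloneable, so they can be stored
  // directly in IndexedDB and survive page reloads and browser restarts.
  const tx = db.transaction("keys", "readwrite");
  tx.objectStore("keys").put(keyPair, "client-credential");
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

On a later visit, the page can read the stored pair back from the object store and pass its privateKey to crypto.subtle.sign(), so the key is used without ever leaving the browser.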
Continue reading “Storing Cryptographic Keys in Persistent Browser Storage”
In a press
release, MasterCard yesterday announced an EMV payment card that features a fingerprint reader. The release said that two trials had recently been concluded in South Africa and that, after additional trials, a full rollout is expected this year.
In the United States, EMV chip cards are used without a PIN. The
fingerprint reader is no doubt intended to fill that security gap.
But any use of biometrics raises privacy concerns. Perhaps to address
such concerns, the press release stated that a fingerprint template
stored in the card is “encrypted”.
That’s puzzling. If the template is encrypted, what key is used to
decrypt it before use?
Continue reading “What kind of ‘encrypted fingerprint template’ is used by MasterCard?”
Last week I participated in the
third International Cryptographic
Module Conference (ICMC), organized by the Cryptographic Module
User Forum (CMUF), and concerned with the validation of cryptographic
modules against government and international standards. You may think
of cryptographic module validation as a dry topic, but it was quite an
exciting conference, full of technical and political controversy. The
technical controversy resulted from the fact that the standards are
out of sync with current technology and it is not at all clear how
they can be fixed. The political controversy resulted from the fact
that, after Snowden’s revelations, it is not at all clear who should
try to fix them. The organizers signaled that they were not afraid of controversy by inviting as keynote speakers both Phil Zimmermann, creator of PGP and co-founder of Silent Circle, and Marianne Bailey, Deputy CIO for Cybersecurity at the US Department of Defense, in addition to the well-known expert Paul Kocher of SSL fame. I enjoyed an exchange between Zimmermann and Bailey on the imbalance between defense and offense at the NSA and its impact on cybersecurity.
Continue reading “Cryptographic Module Standards at a Crossroads after Snowden’s Revelations”
One difficulty faced by privacy-enhancing credentials (such as U-Prove
tokens, Idemix anonymous credentials, or credentials based on group
signatures) is that they are not supported by TLS. We
noticed this when we looked at privacy-enhancing credentials in the
context of NSTIC, and we proposed an architecture for the NSTIC
ecosystem that included an extension of TLS to accommodate them.
Several other things are wrong with TLS. Performance is poor over
satellite links due to the additional roundtrips and the transmission
of certificate chains during the handshake. Client and attribute
certificates, when used, are sent in the clear. And there has been a
long list of TLS vulnerabilities, some of which have not been
addressed, while others are addressed in TLS versions and extensions
that are not broadly deployed.
SSL Pulse reported that only 18.2% of surveyed web sites supported TLS
1.1, which dates back to April 2006, only 20.7% supported TLS 1.2,
which dates back to August 2008, and only 30.6% had server-side
protection against the BEAST attack, which requires either TLS 1.1 or
TLS 1.2. This indicates upgrade fatigue, which may be due to
the age of the protocol and the large number of versions and
extensions that it has accumulated during its long life. Changing the
configuration of a TLS implementation to protect against
vulnerabilities without shutting out a large portion of the user base
is a complex task that IT personnel are no doubt loath to tackle.
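To make the assessment problem concrete, here is a small sketch (my own illustration, not from any of the cited documents) of how an administrator might probe which TLS protocol versions a server accepts before tightening its configuration, using Node.js and its built-in tls module; the host name is a placeholder.

import * as tls from "tls";

// Attempt a handshake pinned to a single protocol version. A "rejected"
// result can also reflect client-side policy (old versions may be disabled
// in the local OpenSSL build), so results should be read with that caveat.
function probeVersion(host: string, version: tls.SecureVersion): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = tls.connect(
      { host, port: 443, servername: host, minVersion: version, maxVersion: version },
      () => { socket.end(); resolve(true); }  // handshake succeeded
    );
    socket.on("error", () => resolve(false)); // handshake failed
  });
}

async function main(): Promise<void> {
  const host = "example.com"; // placeholder target
  for (const v of ["TLSv1", "TLSv1.1", "TLSv1.2"] as tls.SecureVersion[]) {
    console.log(`${v}: ${(await probeVersion(host, v)) ? "accepted" : "rejected"}`);
  }
}

main().catch(console.error);

A similar check run from representative client environments would show which clients a stricter server configuration would shut out, which is precisely the assessment that makes reconfiguration so burdensome.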
So perhaps it is time to restart from scratch, designing a new transport layer security protocol (actually, two of them: one for connections and the other for datagrams) that will incorporate the lessons learned from TLS and DTLS while discarding the heavy baggage of old code and backward compatibility.
We have written a
paper that recapitulates the drawbacks of TLS and discusses
ingredients for a possible replacement.
The paper emphasizes the benefits that a redesign of transport layer security would bring to the military in particular.
The military should be interested in better performance over satellite
and radio links, for obvious reasons. It should be interested in
increased security, because so much is at stake in the security of
military networks. And I would argue that it should also be
interested in increased privacy, because what is viewed as privacy on
the Internet may be viewed as resistance to traffic analysis in military networks.
We have written a response to the request for comments on the report
Innovations and the Internet Economy, written by the Internet
Policy Task Force of the US Department of Commerce.
In the response we call for research and development efforts aimed at
improving and broadening the scope of the TLS protocol (formerly known
as SSL). This would
benefit NSTIC and the many
IETF protocols that rely on TLS for their security.
If you have any comments on our response, please leave them below.