Overview of ISO/IEC 18013-5: Innovations and Vulnerabilities in the mDL Standard

Two weeks ago I gave a talk about the mobile driver’s license standard at IIW XXXVII, the 37th meeting of the Internet Identity Workshop, which took place as usual at the Computer History Museum in Mountain View.

One of the great things about IIW is that the agenda is created each day. That makes it possible for people interested in the same topic to merge their sessions. When I announced the session that I wanted to convene, Andrew Hughes “hijacked my session”, as he said, to present a progress update on the series of ISO driving license standards, which was a perfect introduction to the details of part 5 of the series that I discussed in the second half of the session. Andrew is a member of the committee that wrote ISO/IEC 18013-5, and other committee members came to the combined session. The notes of the session, taken by Dan Bachenheimer, will eventually be in the Book of Proceedings, and can now be found here. My slides were based in part on an early draft of a chapter of a book on Foundations of Cryptographic Authentication that I am coauthoring with Sukhi Chuhan and Veronica Wojnas.

The mDL standard has many interesting innovations and privacy features.

One innovation, explained in slide 26, is the inclusion of self-asserted (device-signed) and certified (issuer-signed) data elements in the same credential. One wouldn’t expect to find self-asserted claims in a driver’s license, and Section 8.3.2.1.2.2 explicitly says that the structure containing the device-signed elements may be empty. But the mDL standard is in fact a general purpose standard for mobile credentials, which competes with verifiable credentials as discussed in this UL white paper.

Both kinds of data elements are retrieved in an encrypted session established by an ECDH key agreement where both parties use ephemeral key pairs and therefore neither party is authenticated. After the session has been established, the mobile device that carries the credential authenticates as a side-effect of signing the list of self-asserted data elements requested by the reader, whether or not it is empty!

Another innovation, explained in slide 28, is a clever use of an asymmetric key pair to produce a repudiable symmetric signature (an “ECDH-agreed MAC”), and a third innovation, explained in slide 29, is a clever adaptation of OpenID Connect to a use case where it would not seem to be applicable.

Privacy features include declaration by the relying party of the intent to retain some of the data elements, data minimization using selective disclosure, and proof of age without revealing the birthdate by means of age attestations.

Selective disclosure is implemented by means of cryptographic hashing, as explained in slide 11. Full unlinkability (protection against tracking by collusion of the issuer and the relying parties) is not provided, but selective disclosure based on hashing combined with age attestations provides the key benefits of data minimization and proof of age in a simpler way than anonymous credentials. Alternative implementations of selective disclosure, based on hash functions or proofs of knowledge, are described in slides 12-23.
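The hash-based mechanism can be sketched as follows. This is a simplified illustration, not the actual mDL encoding (which uses CBOR/COSE structures); the element names and the 16-byte salt length are arbitrary choices for the sketch, and the issuer's signature over the digest list is elided:

```python
import hashlib
import secrets

# Issuer side: salt each data element and sign only the digests.
elements = {"family_name": "Doe", "birth_date": "1990-01-01", "age_over_21": "true"}
salted = {k: (secrets.token_bytes(16), v) for k, v in elements.items()}
digests = {k: hashlib.sha256(salt + v.encode()).hexdigest()
           for k, (salt, v) in salted.items()}
# The issuer's signature covers `digests` (signature itself elided here).

# Holder side: disclose only the requested elements, together with their salts.
disclosed = {"age_over_21": salted["age_over_21"]}

# Verifier side: recompute each digest and check it against the signed list.
for k, (salt, v) in disclosed.items():
    assert hashlib.sha256(salt + v.encode()).hexdigest() == digests[k]
```

The verifier learns nothing about the undisclosed elements beyond their digests, and the random salts prevent recovering an undisclosed value by exhausting its small domain (e.g. trying all plausible birthdates against a digest).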

On the other hand, the mDL standard also has privacy drawbacks and vulnerabilities to unauthorized access and man-in-the-middle attacks. The vulnerabilities are discussed in slides 30-39, with an example of a man-in-the-middle attack shown in slide 37. They are also discussed in Section 13.1.9 of the book chapter, along with proposed mitigations in the current or future versions of the standard. Privacy is discussed in slides 40-42 and in Section 13.1.10 of the book chapter.

The vulnerabilities and the privacy drawbacks have two independent root causes.

Continue reading “Overview of ISO/IEC 18013-5: Innovations and Vulnerabilities in the mDL Standard”

A Demonstration of Two-Factor Cryptographic Authentication with a Familiar User Experience

I have just published a GitHub repository demonstrating a method of two-factor cryptographic authentication with a fusion credential, which provides the same user experience as traditional authentication with username and password, but with strong security. Developers with an Amazon AWS account can use a script provided in the repository to install the demo on an EC2 instance of their own. A live demo running on a Pomcor server is also available at demo.pomcor.com.

Security benefits of credential fusion

By analogy with biometric fusion, where two biometric modalities are combined in a manner that provides higher accuracy than if they were used separately, credential fusion combines authentication factors in a manner that provides stronger security than if they were used independently of each other.

In the demo, a password is fused with a cryptographic credential comprising a key pair extended with a secret salt. To authenticate, the frontend of the relying party (RP) hashes the user’s password with the secret salt, signs a challenge with the private key, and sends the public key, the signature, and the salted password to the backend. The backend verifies the signature with the public key, then computes a hash of the salted password with the public key, called the fusion hash, and verifies it against a registered version of that hash. The public key and the secret salt are deleted after authentication, and only the fusion hash is stored in the backend.
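The flow above can be sketched in a few lines of Python. This is a minimal sketch, not the demo's implementation: the signature step is elided, random bytes stand in for the real ECDSA public key, and the function name `fusion_hash` is chosen for illustration:

```python
import hashlib
import os
import secrets

def fusion_hash(salted_password: bytes, public_key: bytes) -> bytes:
    # The fusion hash binds the salted password to the public key;
    # it is the only value the backend stores.
    return hashlib.sha256(salted_password + public_key).digest()

# Registration (frontend). The secret salt and the key pair never leave
# the browser; the public key bytes are a stand-in here.
secret_salt = secrets.token_bytes(32)
public_key = os.urandom(64)  # stand-in for the real ECDSA public key
salted_pw = hashlib.sha256(b"correct horse" + secret_salt).digest()
registered_hash = fusion_hash(salted_pw, public_key)  # stored in the backend

# Authentication. The frontend sends (public_key, signature, salted_pw);
# the backend verifies the signature (elided), then checks the fusion hash.
candidate = hashlib.sha256(b"correct horse" + secret_salt).digest()
assert fusion_hash(candidate, public_key) == registered_hash
```

Note that the backend ends up holding neither the password, nor the salt, nor the public key: only the fusion hash survives the authentication.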

If the password and the extended key pair were used separately, the password would provide protection against device theft, and the key pair would provide protection against a man-in-the-middle (MITM) phishing attack in which the phishing site relays messages between the legitimate site and the user’s browser and captures the session cookie after the user logs in. The key pair thwarts such an attack because the frontend of the phishing site does not have access to the private key, which is protected by the same-origin policy of the web, enforced by the browser. But the password would still be vulnerable to phishing attacks, reuse at malicious sites, and backend breaches.

In the fusion credential, on the other hand, the password and the cryptographic credential protect each other as follows:

  1. The password is protected against capture by a phishing site, because it is not sent in the clear.
  2. The password is protected against reuse at malicious sites that use traditional authentication by username and password because the password is not sent in the clear, and at malicious sites that use a fusion credential as in the present authentication method, because different such sites would use different secret salts.
  3. The password is protected against backend breaches because neither the password nor any value derived from it that could be used in a dictionary attack is stored in the backend. In traditional authentication with username and password, by contrast, a salted password is stored in the password database, along with the salt itself. The salt prevents dictionary entries from being tried against all the salted passwords at once, but does not prevent them from being tried against the salted passwords one at a time. In the present authentication method the password is hashed with a salt, but like the private key, the salt is a secret that never leaves the user’s browser, and neither the salted password nor the salt is stored in the backend.
  4. The key pair is protected against cryptanalytic and post-quantum attacks, because the public key is not stored in the backend. In traditional cryptographic authentication with a key pair, the public key is registered with the RP and stored in the backend database. An attacker who breaches the backend might be able to derive the private key from the public key, either by exploiting a weakness of the signature cryptosystem or, in a perhaps not-so-distant future, by using a quantum computer. But in the present authentication method, only the fusion hash is stored in the backend.
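The per-user dictionary attack described in item 3 can be made concrete with a toy example. This is illustrative only: real password databases use deliberately slow hashes such as bcrypt or Argon2, while SHA-256 is used here for brevity, and the usernames and passwords are made up:

```python
import hashlib

# A leaked traditional password database stores (salt, H(salt || password)).
leaked_db = {"alice": (b"saltA", hashlib.sha256(b"saltA" + b"password").digest())}
dictionary = [b"123456", b"password", b"letmein"]

cracked = {}
for user, (salt, stored) in leaked_db.items():
    # The stored salt lets the attacker target this one user's entry.
    for guess in dictionary:
        if hashlib.sha256(salt + guess).digest() == stored:
            cracked[user] = guess

assert cracked == {"alice": b"password"}
```

With the fusion credential, by contrast, the breached backend yields only the fusion hash; without the secret salt and the public key, which never reach the backend, there is nothing for the dictionary entries to be tried against.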
Continue reading “A Demonstration of Two-Factor Cryptographic Authentication with a Familiar User Experience”

A Brief Overview of Cryptographic Authentication with a Discussion of Three Hot Topics

Updated August 8 2023

I have just revamped the cryptographic authentication page of the Pomcor site to reflect two major changes that are happening in internet identity and authentication:

  1. It is now clear that traditional MFA is vulnerable to MITM phishing attacks and cryptographic authentication is the solution. But the technology that the industry has bet on as a replacement, FIDO authentication, faces user experience (UX) challenges that have been impeding adoption.
  2. Governments are trying to issue digital credentials usable instead of physical credentials, and some are experimenting with verifiable credentials and self-sovereign identifiers. But a UL white paper has noted that the ISO/IEC 18013-5 standard, although entitled “Mobile driving licence (mDL) application”, can be used to define any kind of credential and is in direct competition with verifiable credentials. And the arguably most successful government app in the world, the Diia app of Ukraine, described in a presentation to the Canadian CIO Strategy Council shown in this YouTube video, uses neither verifiable credentials nor the ISO/IEC 18013-5 standard.

The revamped page includes a definition of the term cryptographic authentication that manages to encompass authentication with key pairs, public key certificates, anonymous credentials, symmetric key credentials and verifiable credentials. It also includes a classification of cryptographic credentials and authentication methods, a recapitulation of the benefits and challenges of cryptographic authentication, and a discussion of three hot topics:

  1. How to use cryptographic authentication to provide effective protection against MITM phishing attacks.
  2. How to let the user authenticate on multiple devices.
  3. How to combine the cryptographic factor with additional factors for protection against theft of the device that carries the credential.

The 0-RTT Feature of TLS 1.3 Can Be Used As an Encrypted Steganographic Channel to Operate a Backdoor into an Enterprise Network

The TLS 1.3 specification in RFC 8446 allows the client to send application data to the server immediately after the ClientHello message, with zero round-trip time, and refers to that data as 0-RTT data or early data.

A server that receives early data may accept it or reject it. Rejected data is ignored by the server but seen by all routers, switches, firewalls and other network appliances in the network path from the client to the server. Therefore an attacker-controlled client can use rejected early data as a steganographic channel to communicate with any compromised network appliance situated in the network path. Furthermore, neither the server nor any of the TLS visibility solutions currently on the market that I surveyed in an earlier post attempts to decrypt rejected early data. Hence the attacker-controlled client can encrypt the channel, using a key unknown to the server but shared with the compromised appliance, without risking detection.
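To see why the channel evades detection, note that the covert payload only needs to be decryptable by the implant, which shares a key with the attacker. The following sketch is not TLS-level code: a hash-based keystream stands in for a real cipher, and the key and payload are invented for illustration:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Illustrative keystream cipher (NOT a real AEAD): SHA-256 in counter mode.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

# Key shared only between the external client and the compromised appliance.
implant_key = hashlib.sha256(b"shared with the implant, unknown to the server").digest()

c2_instruction = b"beacon every 3600s"
early_data = xor_stream(implant_key, c2_instruction)  # sent as 0-RTT data
# The server rejects and ignores these bytes; they look like random noise to
# anyone without the key. The implant, which has the key, recovers them:
assert xor_stream(implant_key, early_data) == c2_instruction
```

Because the server never possesses the key, even a visibility solution that did attempt to decrypt rejected early data with the server's TLS secrets would recover only apparently random bytes.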

An attacker who has implanted persistent malware on an enterprise network appliance can therefore use rejected early data as an encrypted steganographic channel to send command-and-control (C2) instructions from an external client to the implant in the compromised appliance and thus operate a backdoor into the enterprise network.

In this post I go over some of the details of the 0-RTT feature of TLS 1.3, describe several methods that an attacker-controlled client can use to cause rejection of early data by the server, sketch out an attack scenario and propose mitigations.

Continue reading “The 0-RTT Feature of TLS 1.3 Can Be Used As an Encrypted Steganographic Channel to Operate a Backdoor into an Enterprise Network”

Nubeva Explains How It Handles TLS 1.3 Key Updates in Response to Pomcor Blog Post

In the last post of the TLS traffic visibility series, before a survey of solutions, I drew attention to how in TLS 1.3 different kinds of traffic are protected under different keys and sometimes with different ciphers, and how client and server can update their application traffic keys at any time. I referred to this as the multiple protection state problem of TLS 1.3.

This problem means that PFS visibility solutions where a single symmetric session key per direction of traffic is sent to a passive visibility middlebox will not work for TLS 1.3 even if they work for TLS 1.2. I mentioned two such solutions in the previous post, one of them being Nubeva’s Symmetric Key Intercept (SKI), described in a presentation at a NIST workshop.

In response to the blog post, Nubeva has sent me a detailed explanation of how their SKI solution handles the multiplicity of symmetric keys in TLS 1.3. It turns out that, although the solution is called Symmetric Key Intercept and the workshop presentation referred to the extraction of symmetric keys from system memory, it is not the symmetric keys that are extracted and sent to a decryptor, but rather the TLS 1.3 traffic secrets, from which the symmetric keys are derived by the decryptor as described in Nubeva’s response.

Continue reading “Nubeva Explains How It Handles TLS 1.3 Key Updates in Response to Pomcor Blog Post”

A Survey of Existing and Proposed TLS Visibility Solutions

This is the fifth and last post of a series on providing visibility of TLS 1.3 traffic in the intranet. An index to the series and related materials can be found in the TLS Traffic Visibility page.

Update. This post has been updated in response to a clarification received from Nubeva. See the section on SKI below and the next blog post.

It is well known that TLS 1.3 has created a visibility problem for encrypted intranet traffic by removing the static RSA key exchange method. Except in PSK-only mode, TLS 1.3 traffic has forward secrecy protection and cannot be decrypted by a passive middlebox provisioned with a static private key. This is known as the PFS visibility problem, where PFS stands for “perfect” forward secrecy.

But there is no awareness yet of a second problem created by TLS 1.3 that makes it harder to solve the PFS visibility problem than is generally understood. I call it the multiple protection state problem.

TLS 1.2 has PFS cipher suites, and therefore it has its own PFS visibility problem. If a client insists on using a PFS cipher suite, a passive middlebox provisioned with a static private key won’t be able to decrypt the traffic. Some existing TLS visibility solutions provide the middlebox with the symmetric keys used to protect the traffic, rather than with the private key used to perform the key exchange. Such solutions are being successfully deployed for decrypting TLS 1.2 traffic. But the multiple protection state problem means that those solutions are not applicable to TLS 1.3.

I realized this as I was working on a survey of TLS visibility solutions. The problem is described in the next section and the survey can be found in the following section.

Continue reading “A Survey of Existing and Proposed TLS Visibility Solutions”

A Two-Version Visibility Solution for TLS 1.2 and TLS 1.3 based on a Handshake-Agnostic Middlebox

This is the fourth post of a series on providing visibility of TLS 1.3 traffic in the intranet. An index to the series and related materials can be found in the TLS Traffic Visibility page.

In earlier posts I have proposed a solution for the intranet visibility problem of TLS 1.3 based on the establishment of a visibility shared secret (VSS) between the TLS server and a visibility middlebox, using a long term TCP connection on the same or a different wire than the TLS connection. The visibility middlebox does not relay the TLS traffic: it uses port mirroring to observe the traffic, decrypts it (or, using TLS 1.3 terminology, deprotects it), and forwards the plaintext to a monitoring facility. The solution has a secret derivation (SD) variant where the middlebox derives the TLS 1.3 traffic secrets on its own, and a secret transmission (ST) variant where the server sends the traffic secrets to the middlebox encrypted under keys derived from VSS.

But a server that upgrades to TLS 1.3 must continue to support clients that use earlier versions of TLS. TLS 1.0 and TLS 1.1 have been deprecated, but TLS 1.2 may remain in use for many years. In this post I introduce a third variant that provides visibility for TLS 1.2 in addition to TLS 1.3. This two-version (2V) variant uses a handshake-agnostic visibility middlebox to handle all the key exchange modes of both versions of TLS, and preserves forward secrecy for those modes that provide it. At the end of this post I also describe a VSS precomputation feature, usable in all three variants, that I have mentioned in earlier posts but not discussed in detail yet.

Continue reading “A Two-Version Visibility Solution for TLS 1.2 and TLS 1.3 based on a Handshake-Agnostic Middlebox”

Extending the TLS 1.3 Visibility Solution to Include PSK and 0-RTT

This is the third post of a series on providing visibility of TLS 1.3 traffic in the intranet. An index to the series and related materials can be found in the TLS Traffic Visibility page.

Update. This post has been updated to say that, in the ST variant, the messages that convey the traffic secrets also convey the two-byte designation of the cipher suite that specifies the AEAD algorithm to be used with the keys derived from the secrets, and that the messages include the connection ID of the client-server connection as the AEAD associated data. The middlebox needs to be told what algorithm to use to decrypt early data if the early data is rejected by the server.

TLS 1.3 has created a problem for enterprises by discontinuing all key exchange methods that use static key pairs. In the first post of this series I described a solution to this problem that preserves forward secrecy, based on the establishment of an ephemeral shared secret between the TLS server and a visibility middlebox. In the second post I provided full details of the solution for the (EC)DHE-only key exchange mode of TLS 1.3. In this post I show how the solution can be extended to handle the PSK-only and PSK + (EC)DHE key exchange modes and the 0-RTT feature of TLS 1.3 by providing the PSK to the middlebox. In this post I also introduce a variant of the solution that handles the PSK modes without the middlebox having to know the PSK and provides different benefits. Both variants can be used in all three key exchange modes of TLS 1.3.

Continue reading “Extending the TLS 1.3 Visibility Solution to Include PSK and 0-RTT”

Protocol-Level Details of the TLS 1.3 Visibility Solution

This is the second post of a series on providing visibility of TLS 1.3 traffic in the intranet. An index to the series and related materials can be found in the TLS Traffic Visibility page.

TLS 1.3 has created a major problem for enterprise data centers. The new version of the protocol has discontinued the RSA cipher suites, as well as the static Diffie-Hellman (DH) and static Elliptic Curve Diffie-Hellman (ECDH) cipher suites, leaving ephemeral DH (DHE) and ephemeral ECDH (ECDHE) as the only key exchange primitives based on asymmetric cryptography. These primitives provide forward secrecy, but make it impossible to inspect TLS traffic in the intranet by provisioning a middlebox with a static RSA key, as is done for earlier versions of TLS. Since traffic inspection is necessary for essential tasks such as troubleshooting, attack detection and compliance audits, enterprises cannot migrate to TLS 1.3 without a solution to this problem.

On September 25 NIST held a workshop to discuss the problem. Before the workshop I posted a quick write-up on this blog proposing a solution that provides plaintext visibility of the TLS traffic while preserving the forward secrecy provided by TLS 1.3. This post explains the solution in more detail with reference to the specification of TLS 1.3 in RFC 8446, and includes security considerations and performance considerations.

Continue reading “Protocol-Level Details of the TLS 1.3 Visibility Solution”

Reconciling Forward Secrecy with Network Traffic Visibility in Enterprise Deployments of TLS 1.3

This is the first post of a series on providing visibility of TLS 1.3 traffic in the intranet. An index to the series and related materials can be found in the TLS Traffic Visibility page.

Update. I have corrected the post to say that the middlebox and the server must both use an ephemeral key pair for their key exchange.

Update. I said that the TLS server uses a key derivation function to derive a key pair from the secret that it shares with the middlebox. I should have said, more precisely, that it uses the secret to derive bits that are then used to generate a key pair. I’ve corrected this below, and I will write another post to provide more details.

TLS 1.3 has removed the static RSA and static Diffie-Hellman cipher suites, so that all key exchange mechanisms based on public-key cryptography now provide forward secrecy. This is great for security, but creates a problem for enterprise deployments of the TLS protocol.

As explained in the Enterprise Transport Security specification of the European Telecommunications Standards Institute (ETSI), enterprises need to inspect the network traffic inside their data centers for reasons including application health monitoring, troubleshooting, intrusion detection, detection of malware activity, detection of denial-of-service attacks, and compliance audits.

Visibility of plaintext network traffic is usually achieved by means of passive middleboxes that observe the encrypted network traffic and are able to decrypt it. When a middlebox observes a TLS 1.2 key exchange, if the server uses a static RSA or static Diffie-Hellman key pair and the middlebox is provided with a copy of the private key component of the static key pair, the middlebox can compute the session keys in the same manner as the server, and use the session keys to decrypt the subsequent traffic.

The problem is that this method cannot be used with TLS 1.3, and enterprise data centers cannot refuse to upgrade and get stuck at TLS 1.2 forever.

The above mentioned ETSI specification proposes a clever solution. The TLS client and server follow the TLS 1.3 specification, but the server cheats by using a static Diffie-Hellman key pair while pretending to use an ephemeral one, and shares the static private key with the middlebox. This solution works, but fails to achieve the security benefit of forward secrecy.

I would like to propose instead a solution, illustrated in Figure 1, that requires no cheating and achieves both forward secrecy and visibility of the traffic plaintext to the middlebox.

Figure 1

The TLS client and the TLS server fully implement TLS 1.3. When the server and the middlebox see the ClientHello message, they perform a Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH) key exchange where each side (server and middlebox) uses an ephemeral key pair whose public key component is signed by the private key component of a long-term signature key pair. The result of this (EC)DH key exchange is an ephemeral secret shared between the TLS server and the middlebox. The TLS server uses that shared secret to derive bits, by means of a key derivation algorithm such as HKDF, that it in turn uses to generate an (EC)DH key pair that it uses in the TLS key exchange. This (EC)DH key pair is ephemeral and provides forward secrecy, because it is derived from the ephemeral shared secret. The middlebox uses the shared secret to derive the same ephemeral (EC)DH key pair in the same manner as the TLS server. Then it uses that shared ephemeral key pair to compute the session keys, and uses the session keys to decrypt the subsequent traffic.
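The shared derivation step can be sketched as follows. This is an illustration under stated assumptions, not the solution's specification: the HKDF info label, the choice of curve P-256, and the bias-reduction technique (deriving 40 bytes before reducing modulo the group order) are assumptions for the sketch:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

# Order of the P-256 group; an ECDH private scalar must lie in [1, n-1].
P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def derive_ecdh_scalar(shared_secret: bytes) -> int:
    # Derive bits from the shared secret, then map them to a valid scalar.
    prk = hkdf_extract(b"", shared_secret)
    bits = hkdf_expand(prk, b"visibility key pair", 40)  # extra bytes reduce mod bias
    return int.from_bytes(bits, "big") % (P256_ORDER - 1) + 1

# The server and the middlebox hold the same ephemeral shared secret
# (a stand-in value here), so each derives the same ephemeral key pair.
shared_secret = hashlib.sha256(b"ephemeral (EC)DH output").digest()
assert derive_ecdh_scalar(shared_secret) == derive_ecdh_scalar(shared_secret)
```

Since the derivation is deterministic, the middlebox never needs to receive the TLS key pair over the wire; and since the input secret is ephemeral, discarding it after the connection preserves forward secrecy.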

Next post in this series: Protocol-Level Details of the TLS 1.3 Visibility Solution.