Biometrics in PIV Cards

This is Part 3 of a series discussing the public comments on Draft NIST SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials and the final version of the publication.

After Part 1 and Part 2, I intended to devote this Part 3 to the comments received by NIST regarding possible uses of biometrics in connection with derived credentials. But that requires explaining the use of biometrics in PIV cards and, as I delved into the details, I realized that the topic deserves a blog post of its own, which may be of independent interest. So in this post I will begin by reviewing the security and privacy issues raised by the use of biometrics, then recap the biometrics carried in a PIV card and how they are used.

Biometric security

When used for user authentication, biometrics are sometimes characterized as “something you are”, while a password or PIN is “something you know” and a private key stored in a smart card or computing device is “something you have”, “you” being the cardholder. However, this characterization is accurate only when a biometric sample is known to come from the cardholder or device user, which in practice requires the sample to be taken by, or at least in the presence of, a human attendant. How easily the fingerprint sensors in Apple’s iPhone (as demonstrated in this video) and Samsung’s Galaxy S5 (as demonstrated in this video) were duped with a spoofed fingerprint shows how difficult it is to verify that a biometric sample is live, i.e. that it comes from a human body. In the absence of an assurance of liveness, the security of biometric authentication relies on the relative secrecy of the biometric or, more precisely, on the likelihood of an adversary not having access to a sample. When the adversary captures a smart phone with a fingerprint sensor there is zero relative secrecy, since the phone owner’s fingerprints are on the phone itself; an adversary may also be able to lift latent fingerprints from a captured smart card. But there may be substantial relative secrecy for other kinds of biometric samples, such as an iris image.

Biometric privacy

The use of biometrics for user authentication raises privacy concerns for two reasons. First, if a biometric sample or template used to authenticate a transaction is provided to the party to which the user authenticates (the verifier), then that transaction can be linked to unrelated but biometrically authenticated transactions, as well as to offline activities of the user. Second, biometrics are not cancelable, so if the relative secrecy of a particular biometric, such as the fingerprint of a particular finger or the image of the iris of a particular eye, is compromised, that biometric cannot be used again securely.

These privacy concerns mean that biometric samples and templates used on a device such as a smart card, a smart phone or a laptop must not be provided to transaction verifiers, and must be protected against adversaries who may physically capture the device and malware that may be running on the device.

Biometrics in PIV cards

The specification of biometric usage in PIV cards is distributed over six different documents. As of this writing, the latest versions of these documents are as follows: FIPS 201-2 (August 2013); Revised Draft Part 1, Revised Draft Part 2 and Revised Draft Part 3 of SP800-73-4 (May 2014); SP800-76-2 (July 2013); and SP800-116 (November 2008). SP800-116 is out of date, but is useful for understanding the thinking behind the various methods of physical access control.

The biometrics that must or may be carried in a PIV card include two mandatory fingerprint templates for off-card comparison, one or two optional fingerprint templates for on-card comparison, one or two optional iris images, and a mandatory electronic facial image. The optional fingerprint templates for on-card comparison may be derived from the same enrollment fingerprints as the mandatory ones for off-card comparison, but they are encoded in different formats. The biometrics in a PIV card may be used for a variety of purposes, as explained in the following sections.

Physical access control with authentication by PIN and fingerprint

The authentication methods called BIO and BIO-A can be used for cardholder authentication to a Physical Access Control System (PACS) using one of the off-card comparison fingerprint templates. The cardholder inserts the card into a card reader, supplies a fingerprint by touching a sensor, and enters a card activation PIN on a PIN pad, which the PACS forwards to the card. After verifying the PIN, the card sends the fingerprint template to the PACS, which uses it to verify the cardholder’s fingerprint. The template carries a digital signature, which the PACS verifies and which binds the template to a card identifier. (Actually, the signature binds the template to two card identifiers: the original FASC-N identifier for federal smart cards, and the newer Card UUID suitable for PIV Interoperable (PIV-I) cards that may be issued by federal contractors to their employees.) A card equipped with a contactless interface may communicate with the reader through a secure channel established over NFC, instead of through a contact interface. BIO-A differs from BIO only in that BIO is unattended whereas in BIO-A a human guard watches the procedure.

At first glance, these methods provide three-factor authentication: one factor being the PIN (something you know), a second factor being the card (something you have) and a third factor being the fingerprint (something you are). But SP800-116 explains that, in fact, they only provide one-factor authentication. This is because the PACS does not authenticate the card in these authentication methods. A signed template obtained from the card of a legitimate cardholder could be placed in a fake card and used by an impostor to authenticate. The impostor would have to spoof the biometric, but would not need to be in possession of a valid card, nor to know a valid PIN since the PIN would be verified by the fake card manufactured by the impostor. Spoofing the biometric may be difficult in BIO-A under the attendant’s watch, but should be relatively easy in BIO if the impostor has access to a fingerprint sample (which may be available on the card itself), judging by how easy it was to dupe the iPhone and Galaxy S5 fingerprint sensors. The signed template could be obtained, for example, by malware running on a desktop equipped with a card reader, if the legitimate cardholder uses the card for remote authentication from the desktop and enters the card activation PIN via the keyboard. (The BIO and BIO-A procedures also include retrieval from the card and verification by the PACS of the Cardholder Unique Identifier (CHUID), which includes a signature binding the FASC-N and UUID to an expiration date. But the CHUID could be obtained and placed in a fake card just as well as the signed template.)
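
The fake-card attack described above can be modeled in a few lines of code. This is a deliberately simplified sketch, not PIV-conformant: an HMAC under an issuer key stands in for the issuer’s digital signature, byte-string equality stands in for biometric matching, and all names are hypothetical.

```python
# Toy model of why BIO is one-factor authentication: the PACS verifies the
# issuer's signature on the template, but nothing authenticates the card.
import hmac, hashlib

ISSUER_KEY = b"issuer-signing-key"  # stand-in for the issuer's private key

def sign(template: bytes, fasc_n: bytes) -> bytes:
    return hmac.new(ISSUER_KEY, template + fasc_n, hashlib.sha256).digest()

def pacs_verify(template, fasc_n, signature, live_fingerprint) -> bool:
    # The PACS checks the signature and matches the fingerprint against the
    # template, but has no way to tell a genuine card from a fake one.
    if not hmac.compare_digest(sign(template, fasc_n), signature):
        return False
    return live_fingerprint == template  # stand-in for biometric matching

# Legitimate enrollment: the issuer signs the cardholder's template.
template, fasc_n = b"alice-fingerprint-template", b"FASC-N-1234"
signature = sign(template, fasc_n)

# An impostor who captured (template, fasc_n, signature), e.g. via malware on
# a desktop with a card reader, loads them into a fake card. The fake card
# "verifies" any PIN itself, so the impostor needs neither the real card nor
# the PIN, only a spoofed fingerprint matching the stolen template.
spoofed_fingerprint = template
assert pacs_verify(template, fasc_n, signature, spoofed_fingerprint)
print("PACS accepted the fake card")
```

The signature check succeeds because the signed template is valid wherever it is replayed from; only authenticating the card itself (as in PKI-AUTH or CAK-AUTH) closes this gap.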

While SP800-116 candidly explains that BIO and BIO-A provide far less security than they seem to, FIPS 201-2 does not. It says that they do not provide protection against the use of a revoked card, but does not say that they do not provide protection against the use of a fake card. It assigns the highest security level to BIO-A, and rates BIO at Security Level 3 out of 4. (See Table 6-2 of FIPS 201-2, in conjunction with Table 6-1.)

SP800-116 suggests combining BIO or BIO-A with one of two authentication methods that do authenticate the card and check for revocation, called PKI-AUTH and CAK-AUTH. In PKI-AUTH the card authenticates with the PIV Authentication private key and associated PIV Authentication certificate, and in CAK-AUTH with the Card Authentication private key and associated Card Authentication certificate. (The two methods differ in that CAK-AUTH does not require card activation while PKI-AUTH does. When PKI-AUTH is combined with BIO or BIO-A, a single activation by a PIN enables both the retrieval of the signed fingerprint template and the use of the private key.) These combinations provide three-factor authentication. The combination of BIO-A with either PKI-AUTH or CAK-AUTH provides strong security if the PIV card is strongly tamper resistant.

Physical access control with authentication by PIN and iris image

In BIO and BIO-A, an iris image may be used instead of or in addition to a fingerprint, in which case the PACS retrieves one of the optional iris images from the card, signed, instead of or in addition to the mandatory off-card comparison fingerprint template. Iris recognition provides more security than fingerprint recognition for two reasons. First, iris recognition achieves a lower FAR (False Acceptance Rate) than fingerprint recognition for a given FRR (False Rejection Rate). Second, an iris image has higher relative secrecy than a fingerprint, because it is difficult to obtain a high quality iris image without cooperation from the subject while it is easy to lift a latent fingerprint from any object touched by the subject, possibly from the PIV card itself.
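
The phrase “a lower FAR for a given FRR” can be made concrete with a toy matcher: both error rates are functions of the decision threshold, so a fair comparison between modalities fixes one rate and compares the other. The match scores below are invented for illustration.

```python
# Toy FAR/FRR trade-off: sweeping the decision threshold trades false
# acceptances (impostors admitted) for false rejections (genuine users denied).
genuine = [0.90, 0.85, 0.80, 0.75, 0.70, 0.65]   # matcher scores, same person
impostor = [0.40, 0.35, 0.50, 0.30, 0.55, 0.60]  # matcher scores, different person

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejections
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false acceptances
    return far, frr

for t in (0.45, 0.58, 0.72):
    far, frr = rates(t)
    print(f"threshold={t:.2f}  FAR={far:.2f}  FRR={frr:.2f}")
```

A better modality (such as iris recognition) separates the two score distributions more widely, so that some threshold achieves a low FAR without raising the FRR.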

Physical access control with fingerprint-only authentication

A new authentication method called OCC-AUTH, introduced in version 2 of FIPS 201 dated August 2013, can be used to authenticate the cardholder to a PACS by means of a fingerprint but without a PIN. Instead of retrieving a fingerprint template from the card, the PACS sends the card a fingerprint provided by the cardholder, which the card compares against one of the optional on-card comparison fingerprint templates, reporting the success or failure of the comparison to the PACS. The specifications do not say which of the on-card comparison fingerprint templates is used when there are two of them.

The card sets a limit on the number of consecutive failures, after which the “authentication mechanism” is blocked and a reset procedure must be used (if implemented) to re-enroll the “verification data”. In the case where there are two on-card comparison fingerprint templates, it is not clear if “verification data” refers to one or both of them, nor whether there is a separate count of consecutive failures for each of them.
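
The consecutive-failure limit can be sketched as follows. Since the specifications leave the two-template case unresolved, this hypothetical model simply makes one possible interpretation explicit: a single counter shared by both templates.

```python
# Sketch of the consecutive-failure limit on on-card comparison. One possible
# interpretation: a single shared counter across all enrolled templates.
class OnCardComparator:
    def __init__(self, templates, max_failures=5):
        self.templates = templates        # enrolled on-card templates
        self.max_failures = max_failures
        self.failures = 0
        self.blocked = False

    def verify(self, sample) -> bool:
        if self.blocked:
            raise RuntimeError("authentication mechanism blocked; reset required")
        if sample in self.templates:      # stand-in for biometric matching
            self.failures = 0             # success resets the counter
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.blocked = True
        return False

card = OnCardComparator([b"left-index", b"right-index"], max_failures=3)
assert card.verify(b"left-index")
for _ in range(3):
    card.verify(b"wrong-finger")
print("blocked:", card.blocked)   # prints: blocked: True
```

Under the alternative interpretation, each template would carry its own counter, and blocking one would leave the other usable; the publications do not say which behavior is intended.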

Although FIPS 201-2, Section 6.2.2, says that OCC-AUTH can be used with contact and contactless card readers, SP800-73 Revised Draft Part 1 requires communication between the card and the reader to take place through a secure channel over NFC, relying on the secure channel establishment protocol for authentication of the card to the reader. Card authentication is of course essential, since a fake card could be programmed to ignore the fingerprint and report success to the reader.

Table 6-2 of FIPS 201-2, in conjunction with Table 6-1, assigns Security Level 4, the highest, to OCC-AUTH. Yet OCC-AUTH has several security weaknesses: there is no protection against the use of a revoked card; the presence of a human attendant is not required; and no PIN needs to be entered. An adversary who physically captures a card may find a latent fingerprint on the card, use it to make a fake finger, and use the fake finger to dupe the fingerprint sensor.

Surprisingly, the NIST publications do not allow the optional iris images stored in the PIV card to be used for on-card comparison, hence they cannot be used in OCC-AUTH. No reason is given for that.

Authentication by fingerprint to local workstation

OCC-AUTH can also be used for authentication to a local workstation (which presumably means a desktop or laptop computer). There is no requirement that the cardholder’s fingerprint be read by a sensor on the card reader, hence it may be read by a sensor attached to the workstation. Furthermore, the use of a workstation-attached sensor is suggested by the assertion in FIPS 201-2 (Section 3.1.1) that a keyboard is generally used to enter a PIN for logical access to an information system, the keyboard being analogous to the sensor. A comment by Steven Sprague on Part 1 of this series pointed out that a PIN entered through a computer keyboard is vulnerable to malware running on the computer, and the same is true of a biometric entered through a sensor attached to a computer. A biometric or PIN that travels through a computer may also leave one or more copies of itself stranded in permanent storage that has been used to store virtual memory pages, and those copies may be captured by an adversary who gains access to the computer. Capture of a biometric has not only security implications, like capture of a PIN, but also privacy implications, as discussed above.

Cryptographic credential activation by fingerprint

A fingerprint may be used instead of a PIN for card activation. The fingerprint is sent to the card and compared in the card to one of the optional fingerprint templates for on-card comparison, as in OCC-AUTH. But there is no requirement to authenticate the card before sending the fingerprint, and hence no requirement to use a secure channel over NFC for transmission of the fingerprint. Since a contactless interface is a relatively new feature in PIV cards, I suppose the contact interface is generally used to transmit the fingerprint.

There is a separate count of consecutive authentication failures for each activation method, after which a reset procedure must be used (if implemented). When two on-card comparison fingerprint templates are stored in the card it is not clear whether use of each template is considered a different activation method with its own count.

Since, as noted above, the optional iris images cannot be used for on-card comparison, an iris image cannot be used for card activation.

Activation by fingerprint is arguably less secure than activation by PIN. Since activation is an unattended procedure, security relies on the relative secrecy of the fingerprint, which is low since it may be possible to lift a latent fingerprint from any object touched by the cardholder, including the card itself. As discussed above, spoofing the fingerprint should be relatively easy once an impostor has access to a fingerprint sample, judging by how easy it was to dupe the iPhone and Galaxy S5 fingerprint sensors.

Furthermore, if the card is activated for use by a computer, and the fingerprint is scanned by a sensor attached to the computer as allowed by FIPS 201-2, the fingerprint is exposed to capture by malware running on the computer or by an adversary who later gains access to the computer and finds a copy of the fingerprint in permanent storage that has been used to store a virtual memory page, as is the case in OCC-AUTH authentication to a local workstation. So is a PIN entered through a computer keyboard; but, as discussed above, capture of a biometric has privacy implications besides the obvious security implications of capturing any activation secret.

Activation of the PIV card enables the use of the PIV Authentication private key and associated certificate for PKI-AUTH authentication to a PACS, a local workstation, or a remote information system such as an agency web site; a common way of authenticating to a web site with a PIV card is to use the PIV Authentication private key and certificate for TLS client authentication.

If the cardholder has a government-issued email account, card activation also enables the use of a digital signature key and a key management key, which are private keys used for signing and decrypting S/MIME messages.

Visual authentication with facial image for physical access control

The electronic facial image can be displayed to a guard controlling access to a facility. The facial image is signed, and its use requires card activation, usually with a PIN. However, like BIO and BIO-A, this authentication method only provides one-factor authentication, unless combined with PKI-AUTH or CAK-AUTH to authenticate the card.

Card issuance and maintenance

Biometrics carried on the PIV card (or at least, presumably, those intended for off-card comparison), can be used during card issuance and maintenance procedures, such as delivery of the PIV card to the cardholder after manufacturing and personalization, or PIN reset after a limit on consecutive PIN entry failures has been reached.


NIST Omits Encryption Requirement for Derived Credentials

This is Part 2 of a series of posts reviewing the public comments received by NIST on Draft SP800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials, their disposition, and the final version of the document.

In the first post of this series I discussed how NIST failed to address many concerns expressed in the 400+ comments that it received on the guidelines for derived credentials published in March of last year as Draft Special Publication (SP) 800-157, including concerns about insufficient discussion of business need, lack of guidance, narrow scope, lack of attention to embedded solutions, and security issues. But I postponed a discussion of what I think is the most critical security problem in SP800-157: the lack of security of the so-called software tokens, a concern that was raised in comments including 111 by the Treasury, 291, 311 and 318 by ICAMSC, 406 by PrimeKey AB, 413 by NSA, and 424 by Exponent. This post focuses on that problem.

The concept of a software token, or software cryptographic module is defined in Draft NISTIR 7981 (Section 3.2.1) as follows:

Rather than using specialized hardware to store and use PIV keys, this approach stores the keys in flash memory on the mobile device protected by a PIN or password. Authentication operations are done in software provided by the application accessing the IT system, or the mobile OS.

What does it mean for the keys to be “protected by a PIN or password”? The draft of SP800-157 added the following in the section on Activation Data for Software Implementations (Section 3.4):

For software implementations (LOA-3) of Derived PIV Credentials, a password-based mechanism shall be used to perform cryptographic operations with the private key corresponding to the Derived PIV Credential. The password shall meet the requirements of an LOA-2 memorized secret token as specified in Table 6, Token Requirements per Assurance Level, in [SP800-63].

These two statements led us to believe that the derived credentials were meant to be encrypted under the PIN or password (i.e. under a key-encryption key derived from the PIN or password). We assumed that the “password-based mechanism” specifically required for activation of derived credentials in software tokens consisted of decrypting the Derived PIV Credential (or the private key portion of the credential) using the activation password entered by the user, allowing the resulting plaintext private key to be used for performing cryptographic operations to authenticate to remote information systems of Federal Agencies. (What else could it be?)
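
The mechanism we assumed can be sketched as follows: a key-encryption key is derived from the activation password with PBKDF2 and used to unwrap the private key. This is an illustration only; the XOR “wrapping” below is not a standard key-wrap, and all parameters are hypothetical.

```python
# Sketch of the assumed password-based mechanism: derive a key-encryption key
# (KEK) from the password, use it to unwrap the stored private key.
import hashlib, os

def kek(password: str, salt: bytes) -> bytes:
    # Hypothetical parameters; PBKDF2 stretches the password into a KEK.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_wrap(key: bytes, data: bytes) -> bytes:
    # Illustrative only: XOR with a SHA-256 keystream is NOT a standard key-wrap.
    stream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)
private_key = os.urandom(16)   # stand-in for the credential's private key
wrapped = xor_wrap(kek("s3cret", salt), private_key)

# Activation: the password entered by the user unwraps the private key.
assert xor_wrap(kek("s3cret", salt), wrapped) == private_key

# The weakness: an adversary who extracts (salt, wrapped) from a software
# token can test every candidate password offline; at ~20 bits of entropy
# that is only about a million PBKDF2 evaluations, trivial for a botnet.
print(f"candidates at 20 bits of entropy: {2**20:,}")
```

Key stretching raises the cost per guess but cannot compensate for a 20-bit search space; the offline attack succeeds regardless of how the wrapping is implemented.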

But the referenced entry of Table 6 of SP800-63 only requires 20 bits of entropy for the memorized secret. In our own comments we pointed out that 20 bits of entropy provide no security against an adversary who extracts encrypted keys from a software token and carries out an offline brute-force attack against the PIN or password used to encrypt the keys, an attack that can be easily carried out with a botnet; but this portion of our comments was omitted by NIST from the list of received comments. Comment 248 by DHS also seemed to assume that the private key of a derived credential was to be stored encrypted, and also pointed out that such a private key removed from the device is vulnerable to a parallel brute-force attack. The need to protect against brute-force attack was also noted in comment 318 by ICAMSC. And the draft version of SP800-157 seemed to refer to the possibility of derived credentials being extracted from software tokens when it noted that

… as a practical matter it will often be impossible to prevent users from making copies of software tokens or porting them to other devices.

But whereas the draft version of SP800-157 seemed to require that the private keys of derived credentials be stored encrypted, the final version does not. The requirement that “a password-based mechanism shall be used to perform cryptographic operations with the private key” in a software token has been removed. Both versions of the document require a software token to be “validated to [FIPS140] Level 1 or higher”; but although FIPS 140 envisions the possibility of encrypting private keys (Section 4.7.5), it does not require encryption at any security level. The final version of SP800-157 has added a qualification of the Derived PIV Authentication private key as being a “plaintext or wrapped [i.e. encrypted] private key” (page 13, line 7), without including any requirement that the key be wrapped, in software tokens or any other kind of tokens.

Without encryption, a private key stored in a software token within a mobile device has no protection against physical capture. Data in the software token may be protected by the operating system against unauthorized access, but an adversary who steals the mobile device may be able to jailbreak or root the device to circumvent the operating system, or may physically tamper with the device to read the contents of flash memory.

In response to the comments, NIST made the following modifications to the final version of SP800-157:

  • It added a requirement for a mechanism to block the use of the Derived PIV Authentication private key after a number of consecutive failed activation attempts, and an option to implement a throttling mechanism to limit the number of attempts that may be performed over a given period of time. The draft required a blocking mechanism (also referred to as a lockout mechanism) for hardware tokens at Level of Assurance 4 (LOA-4), but not for software tokens. A blocking mechanism, however, does not mitigate the risk of the private key being extracted by an adversary who captures the mobile device, or by malware running on the device.
  • It removed the statement about the impossibility of preventing users from “making copies of software tokens or porting them to other devices”, thus hiding the risk instead of mitigating it.
  • It added a warning that “Protecting and using the Derived PIV Credential’s corresponding private key in software may potentially increase the risk that the key could be stolen or compromised”, followed by the sentence “For this reason, software-based Derived PIV Credentials cannot be issued at LOA-4”, which suggests that they can be issued at LOA-3 without saying it. But LOA-3 is defined in the Office of Management and Budget (OMB) memorandum M-04-04 on E-Authentication Guidance for Federal Agencies as providing “High confidence in the asserted identity’s validity”, which is not consistent with storing authentication credentials in smart phones, millions of which are stolen every year, without tamper resistance or encryption.
  • It added a note recommending (but not requiring) the use of a “hybrid approach”, previously mentioned in the companion Draft NISTIR 7981, instead of a software-only approach.

The note regarding the hybrid approach, in Section 3.3.2, deserves further discussion. It is phrased as follows:

Note: Many mobile devices on the market provide a hybrid approach where the key is stored in hardware, but a software cryptographic module uses the key during an authentication operation. While the hybrid approach is a LOA-3 solution, it does provide many security benefits over software-only approaches. Therefore the hybrid approach is recommended when supported by mobile devices and applications.

I don’t know of any mobile devices on the market today that come equipped with a software cryptographic module, at least not one available to applications. And I’m not sure what it means to store a key “in hardware”. Presumably this refers to the kind of “specialized hardware” other than flash memory of the above-quoted definition of a software token. But it is not clear what specialized hardware is available “in many mobile devices on the market” for implementing a hybrid approach. Perhaps it means storing the key in an OS-provided storage facility such as the iOS key chain or the Android key store. Such a facility could conceivably be implemented using tamper resistant hardware. But how the facility is implemented is proprietary information that may not be readily available from the device manufacturer to Federal Agencies implementing derived credentials; and I believe most devices use ordinary flash memory for such facilities today.

Even if the hybrid approach stored the key in tamper resistant hardware, it would provide little security against an adversary who physically captures the device. The adversary might not be able to read the key while stored in the tamper resistant storage, but would be able to read it after copying it from the tamper resistant storage to the ordinary flash memory storage. Copying may require circumventing the operating system, but the adversary may be able to do that by jailbreaking or rooting the device, or by overwriting the non-tamper resistant storage where the operating system code resides.

It is surprising that SP800-157 does not require derived credentials to be encrypted. Encryption is a standard method for protecting data at rest, and data encryption is routinely used today in mobile devices. Federal agencies were actually put under the obligation to encrypt all sensitive data on mobile devices (which, I would think, should include derived credentials) by OMB memorandum M-06-16 on Protection of Sensitive Agency Information. The memorandum, sent by the White House in 2006 to the Heads of Departments and Agencies, was followed in 2007 by the publication of NIST SP800-111, Guide to Storage Encryption Technologies for End User Devices. M-06-16 requires Federal Agencies to:

1. Encrypt all data on mobile computers/devices which carry agency data unless the data is determined to be non-sensitive, in writing, by your Deputy Secretary or an individual he/she may designate in writing;

Although M-06-16 dates back to the administration of George W. Bush, it is still considered valid by NIST. In fact, Appendix D of the final version of SP800-157 discusses an effort by NIST to obtain alternative guidance from OMB that will remove the need to comply with requirement number 2 in the same memorandum:

2. Allow remote access only with two-factor authentication where one of the factors is provided by a device separate from the computer gaining access;

It is not clear why NIST thinks that requirement 2 matters but ignores requirement 1.

A reason why NIST omitted the requirement to encrypt derived credentials in the final version of SP800-157 may be the difficulty of figuring out what key to use for encryption, and how to manage such a key. As discussed above, the naive solution to that problem hinted at in the draft version, in which the encryption key is derived from the PIN or password, provides no security because it makes the PIN or password vulnerable to an offline brute-force guessing attack.

There are solutions to the problem, however. Some solutions make use of a biometric for derived credential activation instead of a PIN or password, as suggested in several comments, and rely on the biometric for deriving or protecting the credential-encryption key. This is a big topic that I plan to discuss in the next blog post of this series.

Other solutions entrust the credential-encryption key to a trusted key storage service in the cloud. Storing a data encryption key on a server is a technique mentioned in SP800-111 (Section 4.2.1, page 4-4, lines 6-7). The activation PIN or password could be used to authenticate to the server and retrieve the encryption key; this is more secure than deriving the encryption key from the PIN or password, because it does not expose the PIN or password to an offline brute-force attack. In our comments, we proposed an even more secure method of retrieving the encryption key, in which the device authenticates to the key storage service cryptographically, using a device authentication credential that is regenerated on demand from a protocredential and the activation PIN or password, without exposing the PIN or password to an offline brute-force attack. Regrettably, NIST included in the list of comments (as comment 9) an excerpt of our proposal which, taken out of context, could be construed as meaning that the device authentication credential is to be used as the derived PIV credential. Then it rejected the proposal arguing that such a credential “could only be electronically verified by the agency that issued the credential“, rather than by all agencies. Our full comments, however, make it perfectly clear that the device authentication credential is only used to retrieve the encryption key, which is in turn used to decrypt the derived PIV credential. Our derived PIV credential is no different than the one of SP800-157, and can therefore be verified by all agencies, not just by the agency that issued it.
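
The server-side alternative can be sketched as follows, with transport security and the protocredential mechanism omitted. The essential point is that the wrapping key is generated randomly, independently of the password, and is released only after an online authentication that the server can throttle; all names and parameters are hypothetical.

```python
# Sketch of a cloud key storage service: the credential-encryption key is
# random and server-held, so a captured device yields nothing to attack
# offline, and password guesses must go through the server, which can
# rate-limit or lock out. Transport security is omitted from this model.
import hashlib, hmac, os, secrets

class KeyStorageService:
    def __init__(self):
        self.users = {}   # username -> (salt, password_verifier, wrapping_key)

    def enroll(self, user, password):
        salt = os.urandom(16)
        verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        wrapping_key = secrets.token_bytes(32)   # random, independent of password
        self.users[user] = (salt, verifier, wrapping_key)
        return wrapping_key

    def retrieve(self, user, password):
        salt, verifier, wrapping_key = self.users[user]
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if not hmac.compare_digest(attempt, verifier):
            raise PermissionError("bad password")  # server can throttle/lock out
        return wrapping_key

service = KeyStorageService()
k1 = service.enroll("alice", "1234")
assert service.retrieve("alice", "1234") == k1
# A wrong password is rejected online, where guessing can be rate-limited:
try:
    service.retrieve("alice", "0000")
except PermissionError:
    print("guess rejected by server")
```

In this simplified model the password itself authenticates to the server; the method we proposed goes further, authenticating cryptographically with a credential regenerated from a protocredential and the password, so that not even the server sees material usable for offline guessing.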


NIST Fails to Address Concerns on Derived Credentials

This is the first of a series of posts reviewing the comments received by NIST on Draft SP800-157, their disposition, and the final version of the document.

In March 2014, NIST released the drafts of two documents on derived credentials, Draft NISTIR 7981 and Draft SP800-157, and requested comments. Last month it announced that it had received more than 400 comments and released a file with comments and their dispositions.

The file is hard to read, because it contains snippets of comments rather than entire comments (and snippets of comments by the same organization are not always consecutive!). But we have made the effort to read it, and the effort was worth it. The file contains snippets from companies, individuals, industry organizations, and many US Federal government organizations, including the Consumer Financial Protection Bureau (CFPB), the Coast Guard, the Department of Justice (DOJ), the Department of the Treasury, the Department of Agriculture Mobility Program Management Office (USDA MPO), the Department of State (DOS), the Social Security Administration (SSA), the National Aeronautics and Space Administration (NASA), the Department of Homeland Security (DHS), the Air Force Public Key Infrastructure System Program Office (AF PKI SPO), the Identity, Credential, and Access Management Subcommittee (ICAMSC), the Centers for Disease Control and Prevention (CDC), the Federal Public Key Infrastructure Certificate Policy Working Group (FPKI CPWG) and the Information Assurance Directorate of the National Security Agency (NSA).

Given the number and depth of the comments, it would have been a good idea to issue a revised draft and ask for a second round of comments (as NIST has recently done, for example, in connection with NISTIR 7977), or to call a conference, or both. Instead, NIST issued a final version of SP800-157 that fails to address serious concerns expressed in the comments. As of this writing, the draft of NISTIR 7981 has not been revised. (No revised version can be found in the NISTIRs page.)

Here are some concerns that were not addressed.

1. Insufficient discussion of business need

Draft SP800-157 asserted that it is impractical to use a PIV card in conjunction with a mobile device. Comments 68 by the Treasury and 299 by ICAMSC asked for specific use cases in which such usage is deemed impractical. The ICAMSC comment said:

Please add clarification language and/or provide examples that agencies can leverage when determining if the use of a PIV Card is impractical.

NIST replied that both comments were resolved by comment 41; comment 41 is unrelated, but the response to comment 41 states that it is up to agencies to decide when the use of the PIV card is impractical.

Regarding the same assertion in Draft SP800-157, comment 22 by Precise Biometrics Inc submitted that currently available form-fitted cases for mobile devices (in which a PIV card can be inserted) are practical. NIST rejected this comment as out of scope, saying that the topic is covered in NISTIR 7981. It should be noted, however, that NISTIR 7981 and SP800-157 are companion documents: they have the same set of authors, their drafts were issued on the same day, and comments were requested for both of them, to be sent to the same email address; and Draft NISTIR 7981 does not contemplate the use of form-fitted cases.

2. Lack of guidance

The issuance, management and usage of derived credentials are related to areas of modern mobile technology including screen locking and device encryption, device and application management, and BYOD policy. Yet Draft SP800-157 provided no guidance in any of these areas. Comments including 164 by DOS, 235 by DHS, 260 by Emergent LLC, and 414-415 and 418 by NSA asked for guidance in these areas, but no guidance has been provided in the final version.

Comment 194 by the Smart Card Alliance asked whether it is the intention that the Distinguished Name (DN) for the Derived PIV authentication certificate should be the same as the DN for the original PIV authentication certificate. NIST declined to provide guidance arguing that requirements for the subject DNs in certificates are specified in Section 3.1.1 of the Common Policy. However, the requirements in Section 3.1.1 of the X.509 Certificate Policy For The U.S. Federal PKI Common Policy Framework Version 1.23 do not fully determine the DN; in particular they provide leeway on how to set the structural_container organizational unit, which might be used to distinguish the kind of token containing a credential if desired. Moreover, just as alternate OMB guidance on the Control Remote Access requirement of OMB Memorandum M-07-16 has been requested to accommodate integrated tokens, a modification of the common policy framework could be requested, if necessary to accommodate derived credentials.

Appendix A pointed out that a digital signature private key and certificate in a mobile device must be different from those in a PIV card, because the private key in the card is not exportable. In comments 205-206, NASA noted that some certificate authority products do not allow multiple active digital signature certificates for the same DN. NASA asked for guidance as to whether products should be required to allow for the issuance of multiple certificates or whether it would be acceptable to use certificates having different distinguished names or issued by different certificate authorities. NASA also asked for guidance on the challenges resulting from identity duality, and on the appropriate usage of different certificates that might be issued under different certificate policies. Comment 186 by DOS referred to the same issue. NIST declined to provide guidance in the final version of SP800-157.

Comment 206 was specifically concerned with the possible use of multiple digital signature certificates issued under different certificate policies resulting in “a mix of digital signature assurance levels being used for digital signing for the same individual”, and with “allowing an alternative signature certificate with relaxed policies”. NIST responded to comment 206 as follows:

Declined. It is up to Departments and Agencies to consider this risk as they create their digital signature certificate issuance and usage policies.

Thus NIST deemed that the issues raised in the comment create a risk that agencies should consider, but declined to mention the risk in the final version of SP800-157.

In comment 360, Giesecke &amp; Devrient noted that SP800-157 requires hardware tokens to be cryptographic modules validated to FIPS 140 Level 2 or higher, and that FIPS 140 requires a cryptographic module to perform self-tests. (Indeed, Section 4.9 of FIPS 140-2 requires self-tests to be performed when the cryptographic module is powered up and when certain critical functions are performed.) Giesecke &amp; Devrient pointed out that these self-tests might conflict with performance requirements of UICC tokens. (This is indeed a concern because UICCs are used by the mobile network operator for unrelated purposes critical to the functionality of a mobile device.) The comment suggested that “a special FIPS140 scheme for UICCs should be developed which improves the concept of self tests in terms of performance”. NIST responded by saying that “this would be an issue for the Cryptographic Module Validation Program, not for SP 800-157” and declined to mention this difficulty in the final version of SP800-157. (It should be noted that the Cryptographic Module Validation Program is concerned with validating modules, and is unlikely to be concerned with the performance impact that using a UICC to store derived credentials could have on mobile device performance.)

3. Narrow scope

Many comments complained about SP800-157 having a narrow scope and failing to take advantage of desired functionality that could be provided by derived credentials.

A PIV card includes a private key for user authentication, a private key for signing email messages, a current private key for decrypting new email messages, and a set of retired private keys for decrypting old email messages. (Private keys for decrypting email messages are called key management keys by NIST.) SP800-157 discusses the storage of an equivalent set of credentials in a mobile device (each comprising a private key and the associated public key certificate), but insists that only the user authentication credential is a “derived PIV credential”. Email-related credentials are discussed in Appendix A, which is labeled as informative rather than normative. Comment 61 by Gemalto and our own comment 6 argued that email is the most important use case for derived credentials, and hence email-related credentials should have the same status as authentication credentials. In comment 227, Apple asked: “Is the Derived Credential solely for remote user authentication and nothing else? If this is indeed restricted to just user authentication, it severely limits the scope of use and value add to the mobile device.” NIST responded to these comments by saying that email-related credentials are not less important, but “they are not within the scope of this particular publication”.

Other comments suggested uses of derived credentials besides authentication and email on a mobile device.

PIV cards are used for physical access to government facilities, and derived credentials could be used for the same purpose. This functionality was requested in comments 15 and 17 by Oberthur Technologies, 62-63 by Gemalto, 187 by SSA, 250 by DHS, and 359 by Giesecke & Devrient. But it was ruled out, as stated in the response to comment 15, because “based on current policy, the Derived PIV Credential should only be used where use of a PIV Card is not practical”.

Many comments requested a scope extension to allow derived credentials to be stored and used in devices other than the smart phones, tablets and e-readers mentioned in the definition of a mobile device in Footnote 1. Comment 212 by Wave asked for the definition in Footnote 1 to be extended to include laptops, smart glasses and smart watches. Comments 236 by DHS, 325 by CDC and 356 by Giesecke & Devrient asked for allowing derived credentials to be stored in Trusted Platform Modules within computers such as laptops, desktops or workstations, or in hardware tokens connected to such computers. The responses from NIST seem to allow derived credentials on laptops on the grounds that “it is up to the agencies to decide what types of devices fall into the mobile category” (as stated in the response to a request to disambiguate Footnote 1 made in comment 41, referenced in the responses to comments 212 and 236), but rule them out on desktops and workstations on the grounds that “in their current state Derived PIV credentials are restricted to authentication of mobile devices to remote systems” (as stated in the response to comment 15, referenced in the responses to comments 325 and 356). But NIST declined to disambiguate Footnote 1 in the final version of SP800-157, left in place other language that makes it clear that laptops are not considered mobile devices (e.g. the assertion in the Executive Summary that “mobile devices lack the integrated smart card readers found in laptop and desktop computers”), and declined to provide any indication that derived credentials could be used in laptops or other computers, thus in effect rejecting the scope extension requests.

Instead of storing derived credentials in a personal computer, they could be carried in a mobile device and used by a personal computer that would access the mobile device over a wireless connection. This functionality was requested in comment 62 by Gemalto, which also requested physical access functionality, and was rejected by the same reference to comment 15, the response to which states that “based on current policy, the Derived PIV Credential should only be used where use of a PIV Card is not practical”.

Other uses of derived credentials were requested by Apple, DHS and Giesecke & Devrient. In comment 227 Apple requested using derived credentials to protect data at rest, besides using them for authentication and email. DHS requested using them for workstation logon, in the same comment 250 where it requested physical access functionality. In comment 357 Giesecke & Devrient requested using derived credentials for encrypted voice communication, encrypted cloud storage and Windows logon. All these requests were rejected with reference to comment 15.

The above-mentioned response to comment 15 includes the following blanket prohibition on allowing the PIV Derived Application to use the NFC interface of a mobile device:

Based on current policy, the Derived PIV Credential should only be used “where use of a PIV Card is not practical.” Thus, the PIV Derived Application should not be accessed over the mobile device’s NFC interface, as any use case involving accessing the PIV Derived Application over an NFC interface (e.g., getting access to buildings) would be a use case in which it would be practical to use the PIV Card.

This prohibition precludes a convenient method of obtaining a derived credential for an NFC-enabled mobile device, by authenticating to the issuer on the mobile device itself with a PIV card through the NFC interface. This convenient issuance method was proposed by Giesecke & Devrient in comment 358, which requested examples of issuance methods. In response to the comment, NIST didn’t explicitly reject the method; but it referred to comment 83, whose response referred in turn to an appendix added to the final version of SP800-157, which contains examples of issuance methods, but not the one proposed by Giesecke & Devrient.

4. Lack of attention to embedded solutions

Section 3.3.1 of Draft SP800-157 referred to three specific kinds of removable hardware tokens: Secure Digital (SD) Cards, Universal Integrated Circuit Cards (UICC) and Universal Serial Bus (USB) tokens. Section 3.3.2, on the other hand, referred very briefly to embedded tokens as comprising hardware and software tokens, without mentioning any specific kind of hardware tokens. Removable tokens were thus emphasized in spite of the substantial drawbacks of SD Cards, UICCs and USB tokens discussed in sections 3.2.1 and 3.2.2 of Draft NISTIR 7981, and in spite of the fact that the motivation for using derived credentials is the supposed inconvenience of using PIV Cards, which are themselves removable tokens. (Additional drawbacks of removable devices were surfaced in comments 360 by Giesecke & Devrient and 410 by NSA regarding UICCs, and comment 11 by Oberthur Technologies regarding SD cards.)

The lack of attention to embedded solutions could be explained by the following puzzling statement in Draft NISTIR 7981, lines 361-363, which is clearly untrue:

While some mobile devices have a form of an embedded hardware security module, currently they are either unavailable for use or do not provide the specific set of features needed to support PKI credentials.

In comment 417, NSA strongly objected to the lack of attention paid to embedded tokens:

Overall, we are concerned by the amount of attention paid to various removable hardware token solutions compared to the level of discussion surrounding the embedded tokens. We believe that due to the costs, usability, lack of commercial market viability, and incompatibility of using hardware tokens, most agencies are going to opt for an embedded solution, and the comparative lack of guidance in this area will make this solution more difficult to implement. We recommend solutions be usable, commercially sustainable, and secure.

Many other comments asked for better guidance on embedded tokens. Comments 204 by NASA, 211, 213 and 214 by Wave, and 223-224 by Microsoft emphasized the availability and advantages of Trusted Platform Modules (TPM). Comment 232 by Apple referred to the section on embedded cryptographic tokens as being extremely weak. Comment 361 by Giesecke &amp; Devrient asked whether a Trusted Execution Environment (TEE) is an embedded hardware token. Comment 419 by Global Platform called attention to the availability of embedded Secure Elements (SE). Comments 420-421, also by Global Platform, described the advantages of a TEE, including the fact that it provides a Trusted User Interface. (In our own comments we discussed the pros and cons of a TEE for implementing derived credentials; unfortunately that portion of our comments was not included in the file of comments and dispositions released by NIST. The full version of our comments can be found on our site.)

The responses to these comments were inconsistent. On one hand, in response to comment 361, NIST stated that embedded hardware solutions are not commercially available at this time. On the other hand, the response to comment 204, which requested that TPMs be mentioned, was as follows:

Resolved by including a pointer to the NISTIR in Section 3.3.2. (TPM, TEE, OS key store, SE).

NISTIR 7981, however, makes no explicit mention of any TPM, TEE, OS key store or SE.

In the final version of SP800-157, NIST added a mention of “a hybrid approach where the key is stored in hardware, but a software cryptographic module uses the key during an authentication operation”. But it did not discuss the kind of hardware that the key is stored in, and it continued to omit any mention of a TPM, a TEE or a SE.

5. Security issues

SP800-157 allows the original PIV credential and the derived PIV credential to be issued by different organizations. This creates two security problems:

  1. An adversary could obtain a derived credential using a compromised original credential before the compromised original credential has been revoked, and, if no precautions are taken, the fraudulently issued derived credential could survive revocation of the original credential. This risk exists even if the original and derived credentials are issued by the same organization, but is much easier to mitigate in that case.
  2. If no precautions are taken, a derived credential could unintentionally remain valid after the original credential has been terminated.

To address the first problem, Draft SP800-157 required the revocation status of the original credential to be rechecked seven days after issuance of the derived credential. But, obviously, seven days is plenty of time for the adversary to carry out attacks. The requirement was changed to a recommendation (“shall” was changed to “should”) in the final version. The final version also added Footnote 9, recommending that, when a PIV card is reported lost or stolen, issuers investigate whether it might have been used to fraudulently issue derived credentials, but only in cases where the issuer of the PIV card is also the issuer of derived credentials.

To address the second problem, Draft SP800-157 required the issuer of the derived credential to track the status of the PIV card containing the original credential; and it pointed out that it is not sufficient to track the status of the PIV Authentication certificate, because the certificate is not necessarily revoked when the PIV card is terminated.

In cases where the issuer of the derived credential is different from the issuer of the PIV card, Draft SP800-157 suggested using the Federal Backend Attribute Exchange (BAE) or a Uniform Reliability and Revocation Service (URRS) to obtain the termination status of the PIV card. The final version more specifically suggested checking the status every 18 hours, after noting that Section 2.9.4 of FIPS 201-2 requires PIV Card termination to be performed within 18 hours for cases where the PIV card cannot be collected. It is not clear why the periodicity of the status check should be the same as the delay in performing termination; that may double the time during which a credential that should no longer be valid can be used fraudulently, making it 36 hours.
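The periodic termination-status check suggested by the final version can be sketched as follows. This is a hypothetical illustration only: the status-query and revocation functions below are invented names, since neither the needed BAE attribute nor any concrete API existed at the time of the comments.

```python
# Hypothetical sketch of the termination-status check suggested in the final
# version of SP800-157. The query_status and revoke callables are invented
# for illustration; SP800-157 defines no such API.

CHECK_INTERVAL_SECONDS = 18 * 60 * 60  # 18 hours, matching the maximum delay
                                       # for card termination in FIPS 201-2

def poll_card_status(card_id, query_status, revoke):
    """Check the PIV card's termination status once; revoke the derived
    credential if the card has been terminated. Returns True if revoked."""
    if query_status(card_id) == "terminated":
        revoke(card_id)
        return True
    return False
```

Run once every `CHECK_INTERVAL_SECONDS`, this yields exactly the worst case noted above: up to 18 hours for the card to be terminated plus up to 18 hours before the next poll, i.e. 36 hours of possible fraudulent use.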

Comment 309 by ICAMSC pointed out that the BAE does not maintain revocation information for PIV Credentials, and NIST acknowledged that a new attribute would have to be created to support the termination status check functionality.

The concept of a Uniform Reliability and Revocation Service was proposed in NISTIR 7817, but we have seen no indication that such a service has been implemented.

As an alternative way of tracking the status of the PIV card when the issuer of the derived credential is different from the issuer of the PIV card, the draft suggested that the latter send a notification to the former when the PIV card is terminated. (In response to comment 198 by the Smart Card Alliance, the final version added a suggestion that the issuer of the derived credential notify the card issuer when the derived credential is created.) But relying on notification of card termination raises the issue of how to deal with an attack where the adversary prevents the notification from reaching the derived credential issuer.

Comment 174 by DOS (and duplicate comment 339 by FPKI CPWG) questioned whether different issuers should be allowed to issue the PIV card and the derived credential, and comment 182 by DOS said that it is unlikely that such a situation, allowed by the document, would occur. Comments 365 and 386-387 by CertiPath argued that the issuers should be the same. Comment 380 by CertiPath added that requiring the issuers to be the same could mitigate risks associated with the issuance of multiple derived credentials mentioned in the draft. Comments 86 by the Treasury, 150 by USDA, 238 by DHS, and 376 by CertiPath questioned the efficacy of a recheck after seven days. Comment 304 by ICAMSC and 377 by CertiPath asked what action should be taken if the seven-day recheck showed that the original credential had been revoked. In spite of all these comments, the final version of SP800-157 continues to allow the original PIV credential and the derived PIV credential to be issued by different organizations.

As noted in comment 168 by DOS (and duplicate comment 335 by FPKI CPWG), the idea of allowing different organizations to issue an original credential and a derived credential can be traced to Section 5.3.5 of the Electronic Authentication Guideline (SP800-63), which includes the recommendation to recheck the status of the original credential a week after issuing the derived credential. But SP800-63 is concerned with any kind of credentials, not just PIV credentials issued to Federal employees. In that broad context, for the sake of generality, it makes sense to consider the possibility that an organization, Federal or otherwise, would issue credentials for some particular purpose derived from original credentials issued for some other purpose by a different organization. But there is no such motivation in the case of PIV credentials. In response to comment 365, NIST argued that “the capability of external issuers to issue Derived PIV Credentials allows these organizations to support other Agency employees on detail”. But PIV credentials issued by an agency, original or derived, are supposed to be verifiable by a different agency. Therefore an employee on detail to an agency should be able to use a derived credential issued by the employee’s own agency to authenticate to information systems of the agency to which the employee is detailed.

Other security problems were identified by other comments.

Comment 292 by ICAMSC and 423 by Exponent warned that malware on a computer with access to a PIV card might be able to initiate the issuance of a derived credential fraudulently. In response to comment 292, NIST stated that “verifying intent is addressed in SP 800-79-2 with issuer control # SP (DC)-1 for Derived PIV Credentials”. But Draft SP800-79-2 contains “Guidelines for the Authorization of Personal Identity Verification Card Issuers (PCI) and Derived PIV Credential Issuers (DPCI)” and control SP(DC)-1, on page 112, only says the following: “A Derived PIV Credential is issued only upon request by proper authority. Assessment Determine that: (i) the process for making a request is documented (review); (ii) A request from a valid authority is made in order to issue a Derived PIV Credential (observe)”. Thus it fails to address the specific security issue identified in the comments.

Comment 410 by NSA explained that:

While a carrier may offer a security domain on a UICC that is separate from other domains, that security domain will never be fully under the explicit control of the issuing agency. The carrier, in order to perform network operations, will control the card management key, which will allow (possibly undetected) modification of the card, the card’s firmware, and security domains on the card.

and concluded that UICC Cryptographic Modules should be removed as an acceptable solution. NIST failed to remove UICC tokens from the final version, and responded to the comment saying only that “there may need to be an SLA and level of trust involved when using an MNO’s UICC”.

But the most daunting security problems identified by the comments are those concerning software tokens. We will examine them in the next post of this series.


Virtual Tamper Resistance is the Answer to the HCE Conundrum

Host Card Emulation (HCE) is a technique pioneered by SimplyTapp and integrated by Google into Android as of 4.4 KitKat that allows an Android app running in a mobile device equipped with an NFC controller to emulate the functionality of a contactless smart card. Prior to KitKat the NFC controller routed the NFC interface to a secure element, either a secure element integrated in a carrier-controlled SIM, or a different secure element embedded in the phone itself. This allowed carriers to block the use of Google Wallet, which competes with the carrier-developed NFC payment technology that used to be called ISIS and is now called SoftCard. (I’m not sure if or how they blocked Google Wallet in devices with an embedded secure element.) Using HCE, Google Wallet can run on the host CPU where it cannot be blocked by carriers. (HCE also paves the way to the development of a variety of NFC applications, for payments or other purposes, as Android apps that do not have to be provisioned to a secure element.)

But the advantages of HCE are offset by a serious disadvantage. An HCE application cannot count on a secure element to protect payment credentials if the device is stolen, which is a major concern because more than three million phones were stolen last year in the US alone. If the payment credentials are stored in ordinary persistent storage supplied by Android, a thief who steals the device can obtain the credentials by rooting the device or, with more effort, by opening the device and probing the flash memory.

Last February Visa and MasterCard declared their support for HCE. In a Visa press release and a MasterCard press release, both payment networks referred to cloud-based applications or processing, thereby suggesting that an HCE application could store the payment credentials in the cloud. But that would require authenticating to the cloud in order to retrieve the credentials or make use of them remotely; and none of the usual authentication methods is well suited to that purpose. Authenticating with a passcode requires a high entropy passcode, and asking the user to enter such a passcode would negate any convenience gained by using a mobile device for payments. Authenticating with a credential stored in the device requires protecting the credential, which brings us back to square one. Authenticating with a credential supplied to the device, e.g. via SMS, obviously doesn’t provide security when the device has been stolen. Authenticating with a credential supplied to a different device would again negate any convenience gain.

An alternative to storing the payment credentials in the cloud would be to store them in encrypted storage within the device. But secure encryption requires a secure element to store a secret that can be used in the derivation of the encryption key. Without such a secret, the encryption key would have to be derived exclusively from a passcode entered by the user. That passcode would need to have very high entropy in order to resist an offline guessing attack with a password-cracking botnet; and asking the user to enter such a password would again negate any convenience gain. If a secure element is available to Android for storing the secret, there is no reason not to use it as well for hosting the payment application itself.
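The role of the secure-element secret in the key derivation can be sketched in a few lines. This is a minimal illustration, not a recommendation: the salt, iteration count and secret length are placeholders.

```python
import hashlib
import secrets

def derive_key(passcode: str, device_secret: bytes = b"",
               salt: bytes = b"example-salt") -> bytes:
    # PBKDF2 over the passcode; mixing in a secret held by a secure element
    # makes offline guessing infeasible without first extracting that secret
    # from the hardware. Parameters are illustrative only.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode() + device_secret,
                               salt, 100_000)

# Without a device secret, a thief who copies the encrypted storage can test
# passcode guesses offline at the full speed of a password-cracking botnet:
key_weak = derive_key("1234")

# With a 128-bit secret confined to a secure element, the same guess cannot
# be tested offline at all:
device_secret = secrets.token_bytes(16)
key_strong = derive_key("1234", device_secret)
```

The sketch also makes the closing point of the paragraph concrete: the only thing protecting `key_weak` is the entropy of the passcode itself.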

But there is a solution to this puzzle. The solution is to use what we call virtual tamper resistance to protect the payment credentials (or the HCE application, or the entire Android file system). Here is how that works. The credentials are stored in ordinary persistent storage within the device, but they are encrypted with a data protection key that is entrusted to a key storage service in the cloud. To retrieve that key, the device authenticates to the service with a cryptographic device-authentication credential. But that credential is not stored in the device. Instead, it is regenerated on demand from a PIN supplied by the user and what we call a protocredential. The protocredential is such that all PINs yield well-formed credentials. Hence a thief who steals the device has no information that could be used to test guesses of the PIN in an offline attack. A PIN can only be tested online by generating a credential and attempting to authenticate with it to the key storage service, which limits the number of attempts. Methods of regenerating the device authentication credential from a protocredential and a PIN can be found in Section 2.6 of a technical report. Methods for using a biometric sample instead of, or in addition to, a PIN can be found in Section 3 of the same report. A method for implicitly authenticating the device to the key storage service while retrieving the data protection key can be found in a recent blog post.
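The key property described above, that every PIN yields a well-formed credential so that offline guessing learns nothing, can be illustrated with a deliberately simplified sketch. The actual scheme in the technical report regenerates an asymmetric key pair; here an HMAC stands in for it, and the toy service and attempt limit are assumptions made for brevity.

```python
import hashlib
import hmac
import secrets

def regenerate_credential(protocredential: bytes, pin: str) -> bytes:
    # Any PIN produces a well-formed 32-byte credential, so the
    # protocredential alone gives a thief nothing to test PIN guesses
    # against offline. (Simplified: the real scheme regenerates an
    # asymmetric device-authentication key pair.)
    return hmac.new(protocredential, pin.encode(), hashlib.sha256).digest()

class KeyStorageService:
    """Toy cloud service that limits online PIN-guessing attempts."""
    MAX_ATTEMPTS = 5

    def __init__(self, enrolled_credential: bytes, data_protection_key: bytes):
        self._enrolled = enrolled_credential
        self._dpk = data_protection_key
        self._attempts = 0

    def retrieve_key(self, credential: bytes):
        if self._attempts >= self.MAX_ATTEMPTS:
            raise PermissionError("account locked")
        self._attempts += 1
        if hmac.compare_digest(credential, self._enrolled):
            self._attempts = 0
            return self._dpk
        return None  # wrong PIN; counted against the attempt limit
```

A wrong PIN produces a credential that looks exactly like the right one; the only way to tell them apart is to present the credential to the service, which counts the attempt.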

I have to point out that neither a secure element nor virtual tamper resistance provides full protection against malware that is able to root Android while the unsuspecting user is using the device, because such malware may be able to intercept or phish the PIN or biometric sample that is used to enable the use of the credentials if a secure element is used, or to regenerate the device authentication credential and retrieve the data protection key if virtual tamper resistance is used. Protection against such malware can be achieved by running the payment application in a Trusted Execution Environment (TEE) that features a trusted path between the user interface and the payment application. The trusted path can protect the PIN or biometric sample from being intercepted or phished by malware. Furthermore, even if malware has somehow obtained the PIN or a genuine biometric sample, the payment application can insist on the PIN or sample being submitted by the user via the trusted path, rather than by code running in the possibly infected Rich Execution Environment (REE) where ordinary apps run. On the other hand a TEE by itself does not provide full protection against physical capture of the device, because it does not usually provide physical tamper resistance. Virtual tamper resistance can be used to remedy that shortcoming.


How Apple Pay Uses 3-D Secure for Internet Payments

In a comment on an earlier post on Apple Pay where I was trying to figure out how Apple Pay works over NFC, R Stone suggested looking at the Apple Pay developer documentation (Getting Started with Apple Pay, PassKit Framework Reference and Payment Token Format Reference), guessing that Apple Pay would carry out transactions over the Internet in essentially the same way as over NFC. I followed the suggestion and, although I didn’t find any useful information applicable to NFC payments in the documentation, I did find interesting information that seems worth reporting.

It turns out that Apple Pay relies primarily on the 3-D Secure protocol for Internet payments. EMV may also be used, but merchant support for EMV is optional, whereas support for 3-D Secure is required (see the Discussion under Working with Payments in the documentation of the PKPaymentRequest class). It makes sense to rely primarily on a protocol such as 3-D Secure that was intended specifically for Internet payments rather than on a protocol intended for in-store transactions such as EMV. Merchants that only sell over the Internet should not be burdened with the complexities of EMV. But Apple Pay makes use of 3-D Secure in a way that is very different from how the protocol is traditionally used on the web. In this post I’ll try to explain how the merchant interacts with Apple Pay for both 3-D Secure and EMV transactions over the Internet, then how Apple Pay seems to be using 3-D Secure. I’ll also point out a couple of surprises I found in the documentation.

Merchant Interaction with Apple Pay for Internet Payments

A merchant app running on the phone shows an Apple Pay button on its user interface. When the user taps the button, the app makes a payment request to the Apple Pay API, specifying the amount of the payment and a description of the transaction. Apple Pay displays to the user a payment sheet including the description of the transaction, the payment amount, and a prompt to Pay with Touch ID. When the user touches the fingerprint sensor and a valid fingerprint is recognized, Apple Pay creates a payment token, which it returns to the merchant app. (This payment token is not to be confused with the payment token of the EMV Tokenisation Specification which I discussed in the previous post; the payment token of that specification is a replacement for the primary account number, whereas the payment token of the Apple Pay developer documentation is a description of a payment transaction. To disambiguate, I will refer to the payment token of the developer documentation as an iOS payment token.) The merchant app may send the iOS payment token to a merchant server, which passes it through a network API to a payment processor, which uses it to create an authorization request that it forwards to the acquiring bank. Alternatively, the merchant app may use an SDK supplied by the processor, which sends the iOS payment token directly to a processor server as described for example in the Apple Pay Getting Started Guide of Authorize.Net, which is one of the processors listed in the Apple Pay developer site.

The iOS payment token includes, among other things, a header and encrypted payment data, which are signed together by Apple Pay with an asymmetric signature. The payment data is encrypted under a symmetric key derived from an Elliptic-Curve Diffie-Hellman (ECDH) shared secret, which is itself derived from an ephemeral ECDH key pair generated by Apple Pay and a long term ECDH key pair belonging to the merchant. (Apple Pay computes the shared secret from the ephemeral private key and the merchant public key, while the merchant computes it from its private key and the ephemeral public key, which is included in the header. An encryption method that uses an ephemeral Diffie-Hellman key pair of the encryptor with a long term Diffie-Hellman key pair of the decryptor may be viewed as a variant of El Gamal encryption.)

After receiving the iOS payment token from the merchant, the processor verifies the Apple Pay asymmetric signature on the header and encrypted payment data, and decrypts the payment data using the private key of the merchant. For the latter purpose the processor may generate the ECDH key pair on behalf of the merchant and keep the private key.

The decrypted payment data may consist of an “EMV payment structure“, described as “output from the Secure Element“; unfortunately, the documentation does not provide any details about the structure, so the Apple Pay developer documentation does not shed light on the details of Apple Pay payment transactions over NFC as had been hoped by R Stone. The decrypted payment data may also consist of an “online payment cryptogram, as defined by 3-D Secure” plus an “optional ECI indicator, as defined by 3-D Secure“. Whether 3-D Secure or EMV is used, the developer documentation does not provide enough information to create an authorization request that can be submitted to the acquiring bank. Unless additional information can be obtained from other sources, the merchant will have to contract out transaction processing to one of the processors listed in the Apple Pay developer site, which will have received the necessary information from Apple.

3-D Secure in a nutshell

The 3-D Secure protocol, which is rarely used in the US but commonly used in other countries, improves security for Internet payments by authenticating the cardholder. It was developed by VISA, and it is used by VISA, MasterCard, JCB and American Express under the respective names Verified by VISA, MasterCard SecureCode, J/Secure and American Express SafeKey. The protocol is proprietary, but I have found some information about it in a Wikipedia page and in merchant implementation guides published by VISA and by MasterCard.

Ordinarily, 3-D Secure is used for web payments. The merchant site redirects the user’s (i.e. the cardholder’s) browser to an Access Control Server (ACS) operated by the issuing bank, or more commonly by a third party on behalf of the issuing bank, which authenticates the user. Redirection is often accomplished by including in a merchant web page an inline frame whose URL targets the ACS. As is usually the case for web authentication protocols that redirect the browser to an authentication server, such as OpenID, OAuth, OpenID Connect, or the SAML Browser SSO Profile, the method used to authenticate the user in 3-D Secure is up to the server (the ACS) and is not prescribed by the protocol. Typically the ACS displays a Personal Assurance Message (PAM) to authenticate itself to the user and mitigate the risk of phishing, then prompts the user for an ordinary password agreed upon when the user enrolls for 3-D Secure, or for a one-time password (OTP) that is delivered to the user or generated by the user using a method agreed upon at enrollment time.

After authenticating the user, the ACS redirects the browser back to the merchant site, passing a Payer Authentication Response (PARes) that indicates whether authentication succeeded, failed, or could not be performed, e.g., because the user has not enrolled in 3-D Secure with the issuing bank and no password or OTP generation or transmittal means have been agreed upon. The PARes is signed by the ACS with an asymmetric signature that is verified by a Merchant Plug-In (MPI) provided to the merchant by an MPI provider licensed by the payment network. The PARes comprises an authentication status, and may also comprise an Electronic Commerce Indicator (ECI) that indicates the result of the authentication process redundantly with the authentication status, and a Cardholder Authentication Verification Value (CAVV) (which MasterCard calls instead an Accountholder Authentication Value, or AAV). The CAVV includes an Authentication Tracking Number (ATN) and a cryptographic Message Authentication Code (MAC), which is a symmetric signature computed by the ACS.

After the MPI has verified the asymmetric signature in the response, if authentication succeeded, the merchant adds the CAVV and the ECI to the authorization request that it assembles using the card number, security code and cardholder data obtained from a web form. The merchant sends the authorization request to the acquiring bank, which forwards it via the payment network to the issuing bank. The issuing bank verifies the MAC in the CAVV, using the same key that was used by the ACS to compute the MAC after authenticating the user on behalf of the issuing bank.
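The symmetric-signature step above can be sketched as follows, with an HMAC standing in for the MAC and an illustrative field layout; the actual CAVV format and MAC algorithm are defined by the payment networks and are not public.

```python
# Sketch of CAVV MAC computation (ACS side) and verification (issuer side),
# using a key shared between the ACS and the issuing bank. Field layout,
# HMAC-SHA256 and 8-byte truncation are illustrative assumptions.
import hmac
import hashlib

def compute_cavv_mac(shared_key: bytes, pan: str, atn: str, auth_status: str) -> bytes:
    # The ACS computes the MAC over CAVV fields after authenticating the user.
    message = f"{pan}|{atn}|{auth_status}".encode()
    return hmac.new(shared_key, message, hashlib.sha256).digest()[:8]

def issuer_verify_cavv(shared_key: bytes, pan: str, atn: str,
                       auth_status: str, mac: bytes) -> bool:
    # The issuer recomputes the MAC with the same shared key.
    expected = compute_cavv_mac(shared_key, pan, atn, auth_status)
    return hmac.compare_digest(expected, mac)
```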

Use of 3-D Secure by Apple Pay

An Apple Pay transaction is very different from a traditional 3-D Secure transaction. It is not a web transaction. No browser is involved, and no browser redirection takes place. The user is authenticated by Apple Pay (using the fingerprint sensor, which IMO provides little security as discussed in earlier posts) rather than by the issuing bank. And Apple Pay uses tokenization, whereas 3-D Secure does not. Therefore 3-D Secure must have been modified very substantially for use with Apple Pay.

The developer documentation does not explain how 3-D Secure has been modified, but here is a guess.

After verifying the user’s fingerprint, Apple Pay generates the CAVV, without involvement by an ACS on behalf of the issuing bank or by the issuing bank itself. As discussed in earlier posts, I believe that Apple Pay shares a secret with the payment network or token service provider (here I’m referring to the token of the EMV Tokenisation Specification) that it uses to derive the symmetric key that is used to generate the token cryptogram in a tokenized EMV transaction over NFC. I suppose Apple Pay uses the same symmetric key, or a symmetric key derived from the same shared secret, to generate the MAC in the CAVV. The CAVV thus plays a role similar to that of the token cryptogram, and is verified by the payment network or a token service provider used by the payment network, just as the token cryptogram is.

In ordinary 3-D Secure, the asymmetric signature on the PARes, created by the ACS and verified by the MPI, allows the merchant to verify that the user has been successfully authenticated and it is OK to make an authorization request. In Apple Pay, the same role is played by the asymmetric signature included in the iOS payment token. That signature is verified by the payment processor, which subsumes the role played by the MPI in 3-D Secure.

Surprise: is the primary account number present in the phone?

The primary account number (PAN) is not supposed to be present in the phone. Its absence from the phone was stated in the Apple Pay announcement:

When you add a credit or debit card with Apple Pay, the actual card numbers are not stored on the device nor on Apple servers. Instead, a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your iPhone or Apple Watch

And I’ve seen it emphasized in many blog posts on Apple Pay. But the documentation of the PKPaymentPass class refers both to a deviceAccountIdentifier, described as “the unique identifier for the device-specific account number”, and a primaryAccountIdentifier, described as “an opaque value that uniquely identifies the primary account number for the payment card”. This seems to imply that the primary account number is present in the device, even though it may be hidden from the merchant app by an opaque value.

Surprise: lack of replay protection?

In the Payment Token Format Reference, the instructions on how to verify the Apple Pay signature on the header and encrypted payment data of the iOS payment token include the following step:

e. Inspect the CMS signing time of the signature, as defined by section 11.3 of RFC 5652. If the time signature and the transaction time differ by more than a few minutes, it’s possible that the token is a replay attack.

No other anti-replay precautions are mentioned. This seems to indicate that replay protection relies on the Apple Pay signature not being more than “a few minutes” old. That is obviously not an effective protection against replay attacks, nor against bugs or other glitches that may cause the iOS payment token to be sent twice. I conjecture that lack of replay protection may have contributed to the multiple charges for some purchases that have been reported.
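The freshness check quoted above amounts to something like the following sketch; the five-minute window is my reading of "a few minutes", and parsing of the CMS structure is omitted.

```python
# Sketch of the signing-time freshness check quoted from the Payment Token
# Format Reference. The MAX_SKEW window is an illustrative assumption.
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # "a few minutes"

def token_is_fresh(signing_time: datetime, transaction_time: datetime) -> bool:
    # Accept the token only if the CMS signing time and the transaction
    # time are close; otherwise the token may be a replay.
    return abs(transaction_time - signing_time) <= MAX_SKEW
```

As the post argues, this rejects only stale replays; a token replayed within the window, deliberately or through a retry bug, would pass the check.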


Making Sense of the EMV Tokenisation Specification

Apple Pay has brought attention to the concept of tokenization by storing a payment token in the user’s mobile device instead of a card number, a.k.a. a primary account number, or PAN. The Apple Pay announcement was accompanied by an announcement of a token service provided by MasterCard and a similar announcement of another token service provided by Visa.

Tokenization is not a new concept. Token services such as the TransArmor offering of First Data have been commercially available for years. But as I explained in a previous post there are two different kinds of tokenization, an earlier kind and a new kind. The earlier kind of tokenization is a private arrangement between the merchant and a payment processor chosen by the merchant, whereby the processor replaces the PAN with a token in the authorization response, returning the token to the merchant and storing the PAN on the merchant’s behalf. In the new kind of tokenization, used by Apple Pay and provided by MasterCard, Visa, and presumably American Express, the token replaces the PAN within the user’s mobile device, and is forwarded to the acquirer and the payment network in the course of a transaction. The purpose of the earlier kind of tokenization is to allow the merchant to outsource the storage of the PAN to an entity that can store it more securely. The purpose of the new kind of tokenization is to prevent cross-channel fraud or, more specifically, to prevent an account reference sniffed from an NFC channel in the course of a cryptogram-secured transaction from being used in a traditional web-form or magnetic-stripe transaction that does not require verification of a cryptogram. The new kind of tokenization has the potential to greatly improve payment security while the payment industry transitions to a stage where all transactions require cryptogram verification.

The new kind of tokenization is described in a document entitled EMV Tokenisation Specification — Technical Framework. We have looked at the document in detail and we report our findings in a white paper. The document is, to be blunt, seriously flawed. It leaves most operational details to be specified separately in the message specifications of each of the payment networks (presumably MasterCard, Visa and American Express), and it is plagued with ambiguities, inconsistencies and downright nonsense. Nevertheless, I believe we have been able to come up with an interpretation of the document that makes sense for some of the use cases. (Other use cases cannot be made to work following the approach taken in the document.)

Here are the conclusions drawn by the white paper.

Apple Pay use case. In the use case that is probably implemented by Apple Pay for both in-store and in-app transactions, a token service provider provisions a token and a shared key to the mobile device. When it comes to making a payment, the merchant sends a cryptographic nonce to the device and the device generates a cryptogram, which is a symmetric digital signature computed with the shared key on data that includes the nonce. (A cryptographic nonce is a number that is only used once in a given context.) The merchant includes the token and the cryptogram in the authorization request, which travels via the acquirer to the payment network. The payment network asks the token service provider to validate the cryptogram on behalf of the issuer and map the token to the PAN; then it forwards to the issuer a modified authorization request that includes both the token and the PAN but not the cryptogram. The role of token service provider can be fulfilled by the payment network itself without essentially altering the use case.
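The cryptogram step in this use case can be sketched as follows, with an HMAC standing in for the symmetric signature; the data elements covered by the MAC and the MAC algorithm are specified by each payment network and are assumptions here.

```python
# Sketch of cryptogram generation (device side) and validation (token
# service provider side) in the Apple Pay use case. Data fields and
# HMAC-SHA256 are illustrative assumptions.
import hmac
import hashlib
import os

def merchant_nonce() -> bytes:
    # A cryptographic nonce: a number used only once per transaction.
    return os.urandom(16)

def device_cryptogram(shared_key: bytes, token: str,
                      amount_cents: int, nonce: bytes) -> bytes:
    # The device signs transaction data, including the merchant's nonce,
    # with the key provisioned by the token service provider.
    data = token.encode() + amount_cents.to_bytes(8, "big") + nonce
    return hmac.new(shared_key, data, hashlib.sha256).digest()

def tsp_validate(shared_key: bytes, token: str, amount_cents: int,
                 nonce: bytes, cryptogram: bytes) -> bool:
    # The token service provider recomputes the MAC on behalf of the issuer.
    expected = device_cryptogram(shared_key, token, amount_cents, nonce)
    return hmac.compare_digest(expected, cryptogram)
```

Because the nonce changes with every transaction, a captured cryptogram cannot be replayed against a different nonce.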

Alternative use case with end-to-end security. As an alternative, the issuer itself can play the role of token service provider and provision the token and shared key to the mobile device, just as it provisions a shared key to a chip card in a non-tokenized transaction. (The issuer may also provision a token to a chip card; the token is then stored in the chip while the PAN is embossed on the card.) In that case the payment network forwards the authorization request to the issuer without replacing the token with the PAN. The transaction flow is essentially the same as in a non-tokenized transaction. The cryptogram is validated by the issuer, preserving the end-to-end security that is lost when the cryptogram is validated by the payment network or a third party playing the role of token service provider.

Alternative to tokenization. Instead of provisioning a token to a mobile device (or a chip card), the issuer can achieve essentially the same level of security by provisioning a secondary account number and flagging it in its own database as being intended exclusively for use in EMV transactions, which require cryptogram validation.

If you have comments on the white paper, please leave them here.


Implementing Virtual Tamper Resistance without a Secure Channel

Last week I made a presentation to the GlobalPlatform 2014 TEE Conference, co-authored with Karen Lewison, on how to provide virtual tamper resistance for derived credentials and other data stored in a Trusted Execution Environment (TEE). I’ve put the slides online as an animated PowerPoint presentation with speaker notes.

An earlier post, also available on the conference blog, summarized the presentation. In this post I want to go over a technique for implementing virtual tamper resistance that we have not discussed before. The technique is illustrated with animation in slides 9 and 10. The speaker notes explain the animation steps.

Virtual tamper resistance is achieved by storing data in a device, encrypted under a data protection key that is entrusted to a key storage service and retrieved from the service after the device authenticates to the service using a device authentication credential, which is regenerated from a protocredential and a PIN. (Some other secret or combination of secrets not stored in the device can be used instead of a PIN, including biometric samples or outputs of physical unclonable functions.) The data protection key is called “credential encryption key” in the presentation, which focuses on the protection of derived credentials. The gist of the technique is that all PINs produce well-formed device authentication credentials, so that an adversary who physically captures the mobile device cannot mount an offline guessing attack that would easily crack the PIN, because there is no way to test guesses of the PIN offline. To test a PIN, the adversary must combine it with the protocredential to produce a credential, and test the credential by trying to authenticate online against the key storage service, which limits the number of attempts.

The device authentication credential consists of a key pair pertaining to a digital signature cryptosystem, plus a record ID that uniquely identifies a device record where the key storage service keeps the data protection key. The device record is created when the device registers with the key storage service. It also contains the public key component of the key pair, and a counter of consecutive authentication failures. Methods for regenerating a credential comprising a DSA, ECDSA or RSA key pair can be found in our paper on mobile authentication, and in our more recent paper providing an example of a derived credentials architecture.

In those papers we proposed retrieving the data protection key over a secure channel between the device and the key storage service, such as a TLS connection. But a TEE may not be equipped with TLS client software or other software for establishing a secure channel. It may not be practical to implement such software in a TEE due to memory constraints; and it may not be desirable to do so for security reasons, given that the security provided by a TEE depends to some extent on TEE software being kept simple and bug-free. This motivates the technique illustrated in the presentation, which does not rely on a secure channel.

The technique requires only one roundtrip, comprising two messages. The TEE generates an ephemeral symmetric key that the key storage service will use to encrypt the data protection key for transmission to the mobile device, and it signs the ephemeral key using the private key component of the digital signature key pair in the device authentication credential. In the first message, the TEE sends the signed key to the service along with the record ID in the credential. The TEE encrypts the first message with a public key of the key storage service, and the service decrypts it with the corresponding private key.

The service uses the record ID to locate the device record, and the public key that it finds in the record to verify the signature on the ephemeral key.

Signing the ephemeral key indirectly authenticates the mobile device, and more precisely the TEE within the device, to the key storage service. The signature tells the service that the ephemeral key originates from the TEE and can be used to encrypt the data protection key for transmission to the TEE. The service encrypts the data protection key and sends it to the TEE, which uses it to decrypt the data protected by virtual tamper resistance.
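Putting the two messages together, here is a sketch of the roundtrip in Python using the pyca/cryptography package. The message encoding, cipher suite, and record layout are illustrative assumptions, not the exact formats of the presentation.

```python
# Sketch of the one-roundtrip data-protection-key retrieval protocol.
# Wire format (JSON + base64), RSA-OAEP for the hybrid layer, ECDSA for
# device authentication and AES-GCM for key wrapping are assumptions.
import os
import json
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def _b64(data: bytes) -> str:
    return base64.b64encode(data).decode()

def tee_first_message(device_private_key, record_id, service_public_key):
    # TEE side: generate and sign an ephemeral symmetric key, then encrypt
    # the whole first message under the service's public key.
    ephemeral_key = os.urandom(32)  # service will wrap the data protection key with this
    signature = device_private_key.sign(ephemeral_key, ec.ECDSA(hashes.SHA256()))
    plaintext = json.dumps({"record_id": record_id,
                            "ephemeral_key": _b64(ephemeral_key),
                            "signature": _b64(signature)}).encode()
    session_key, nonce = os.urandom(32), os.urandom(12)  # hybrid encryption layer
    message = {"wrapped_session": service_public_key.encrypt(session_key, OAEP),
               "nonce": nonce,
               "ciphertext": AESGCM(session_key).encrypt(nonce, plaintext, None)}
    return message, ephemeral_key

def service_reply(service_private_key, device_records, message):
    # Service side: decrypt, locate the device record by record ID, verify
    # the signature on the ephemeral key with the stored public key, and
    # return the data protection key encrypted under the ephemeral key.
    session_key = service_private_key.decrypt(message["wrapped_session"], OAEP)
    fields = json.loads(AESGCM(session_key).decrypt(
        message["nonce"], message["ciphertext"], None))
    record = device_records[fields["record_id"]]
    ephemeral_key = base64.b64decode(fields["ephemeral_key"])
    record["public_key"].verify(base64.b64decode(fields["signature"]),
                                ephemeral_key, ec.ECDSA(hashes.SHA256()))  # raises if invalid
    nonce = os.urandom(12)
    return {"nonce": nonce,
            "wrapped_dpk": AESGCM(ephemeral_key).encrypt(
                nonce, record["data_protection_key"], None)}
```

The TEE decrypts the reply with the ephemeral key it kept, obtaining the data protection key without any TLS stack in the TEE.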

Instead of storing the public key component of the device authentication credential in the device record, it is possible to only store a hash of the public key. In that case the TEE sends the public key along with the record ID and the signed ephemeral key. This has several advantages: it saves space in the database of device records of the key storage service; it allows the service to verify the signature before accessing the database, which may be a good thing if database access is onerous; and as a matter of defense-in-depth, it might provide protection against a cryptanalytic attack that would exploit a weakness in the digital signature cryptosystem to recover the private key of the device authentication credential from the public key. On the other hand, sending the public key takes up substantial additional bandwidth.


Which Flavor of Tokenization is Used by Apple Pay

I’ve seen a lot of confusion about how Apple Pay uses tokenization. I’ve seen it stated or implied that the token is generated dynamically, that it is merchant-specific or transaction-specific, and that its purpose is to help prevent fraudulent Apple Pay transactions. None of that is true. As the Apple Pay press release says, “a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your iPhone or Apple Watch”. That Device Account Number is the token; it is not generated dynamically, and it is not merchant-specific or transaction-specific. And as I explain below, its security purpose is other than to help prevent fraudulent Apple Pay transactions.

Some of the confusion comes from the fact that there are two very different flavors of tokenization. The two flavors are conflated in a blog post by Yoni Heisler that purports to provide “an in-depth look at what’s behind” Apple Pay. Heisler’s post references documents on both flavors, not realizing that they describe different flavors that cannot possibly both be used by Apple Pay.

In the first flavor, described on page 7 of a 2012 First Data white paper referenced in Heisler’s post, the credit card number is replaced with a token in the authorization response. The token is not used until the authorization comes back. Tokenization is the second component of a security solution whose first component is encryption of credit card data from the point of capture, which can be a magnetic stripe reader, to the data center of the processor that the merchant has contracted with to process credit and debit card transactions. The processor decrypts the card data and forwards the transaction to the issuing bank for authorization (via the acquiring bank and the payment network, although this is not mentioned). When the authorization response comes back, the processor replaces the credit card number with the token before forwarding the response to the merchant. The merchant retains the token, and uses it instead of the credit card number for settlement, returns, recurring transactions, etc.

In this first flavor, the token is specific to a merchant, or perhaps even to a transaction or sequence of recurring transactions. A security breach at the merchant, other than skimming, can only reveal tokens, which cannot be used for purchases at a different merchant.

In the flavor of tokenization used by Apple Pay, on the other hand, the card number (and expiration date) is mapped to a token (and token expiration date), which is stored in the phone instead of the card number. The same token is used for all transactions and all merchants. In the course of a transaction the token travels from the phone to the merchant, to the processor if the merchant uses one, to the acquiring bank, and to the payment network (MasterCard, VISA or American Express). The payment network maps the token to the card number and forwards the authorization request with the card number to the issuer. When the authorization response comes back from the issuer, the payment network maps the card number back to the token before forwarding the response to the acquiring bank, optional processor, and merchant.

The above explanation is based on the widely held belief that Apple Pay is based on the EMV Tokenisation Specification, also referenced in Heisler’s post. The specification is a framework that admits many variations, but in all of them the token is mapped to the card number after the acquirer forwards the authorization request to the payment network, and the card number is mapped back to the token before the payment network sends the authorization response to the acquirer. Therefore the EMV tokenization standard does not cover the tokenization solution described in the First Data white paper. Why not? Simply because that solution is a private matter between the merchant and the processor. It does not involve the acquiring bank, the payment network or the issuing bank, and therefore it requires no standardization.
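The mapping performed by the payment network can be sketched as follows. The data structures are illustrative; a real token vault is a hardened service operated by the payment network or its token service provider, not a dictionary.

```python
# Sketch of token-to-PAN mapping in the Apple Pay flavor of tokenization.
# The network detokenizes on the way to the issuer and retokenizes the
# response, so the PAN never reaches the merchant or acquirer.
class TokenVault:
    def __init__(self):
        self.token_to_pan = {}
        self.pan_to_token = {}

    def provision(self, token: str, pan: str) -> None:
        # Performed once, when the card is added to the device.
        self.token_to_pan[token] = pan
        self.pan_to_token[pan] = token

    def detokenize(self, token: str) -> str:
        return self.token_to_pan[token]

    def tokenize(self, pan: str) -> str:
        return self.pan_to_token[pan]

def network_forward_authorization(vault: TokenVault, request: dict) -> dict:
    # Replace the token with the PAN before forwarding to the issuer.
    forwarded = dict(request)
    forwarded["pan"] = vault.detokenize(forwarded.pop("token"))
    return forwarded

def network_return_response(vault: TokenVault, response: dict) -> dict:
    # Map the PAN back to the token before returning to the acquirer.
    returned = dict(response)
    returned["token"] = vault.tokenize(returned.pop("pan"))
    return returned
```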

Since all merchants see the same token, the security of Apple Pay transactions cannot hinge on tokenization. Instead, it relies on a secret that the phone shares with the issuing bank and uses to generate the dynamic security code also mentioned in the press release, from a transaction-specific challenge received from the merchant and a transaction counter. [Update (2014-10-19). Actually, it seems that the secret is shared with the token service provider rather than with the issuer. See the white paper Interpreting the EMV Tokenisation Specification.] Use of the dynamic security code is specified in the mag-stripe mode of the EMV Contactless Specification, as I explained in an earlier post. (In that post I also pointed out that the EMV Contactless Specification has an EMV mode as well, where the user’s device has a key pair certified by the issuing bank in addition to a shared secret, and I speculated that Apple Pay might already be using EMV mode or might use it in the future. A comment by Mark on that post says that it is his understanding that Apple Pay supports both modes from the outset.)

The tokenization flavor used by Apple Pay does have a security purpose, but it is not to prevent fraudulent Apple Pay transactions. It is to prevent the card number and expiration date, which could be exfiltrated from an Apple Pay transaction in the absence of tokenization, from being used in traditional online or magnetic stripe transactions that do not require a shared secret or a key pair.


Smart Cards, TEEs and Derived Credentials

This post has also been published on the blog of the GlobalPlatform TEE Conference.

Smart cards and mobile devices can both be used to carry cryptographic credentials. Smart cards are time-tested vehicles, which provide the benefits of low cost and widely deployed infrastructures. Mobile devices, on the other hand, are emerging vehicles that promise new benefits such as built-in network connections, a built-in user interface, and the rich functionality provided by mobile apps.

Derived Credentials

It is tempting to predict that mobile devices will replace smart cards, but this will not happen in the foreseeable future. Mobile devices are best used to carry credentials that are derived from primary credentials stored in a smart card. Each user may choose to carry derived credentials on zero, one or multiple devices in addition to the primary credentials in a smart card, and may obtain derived credentials for new devices as needed. The derived credentials in each mobile device are functionally equivalent to the primary credentials, and are installed into the device by a device registration process that does not need to duplicate the user proofing performed for the issuance of the primary credentials.

The term derived credentials was coined by NIST in connection with credentials carried by US federal employees in Personal Identity Verification (PIV) cards and US military personnel in Common Access Cards (CAC); but the concept is broadly applicable. Derived credentials can be used for a variety of purposes, and can be implemented by a variety of cryptographic means. A credential for signing email could consist of a private key and a certificate that binds the corresponding public key to the user’s email address, the private-public key pair pertaining to a digital signature cryptosystem. A credential to provide email confidentiality could consist of a certified public key used by senders to encrypt messages and the corresponding private key used to decrypt them. A credential for user authentication could consist of a certified or uncertified key pair pertaining to any of a variety of cryptosystems.

An important class of derived credentials is payment credentials. Credentials carried in Google Wallet, in apps that take advantage of Host Card Emulation, or in Apple Pay devices are examples of derived credentials.

Using a TEE to Protect Derived Credentials

Derived credentials carried in a mobile device must be protected against two threats: the threat of malware running on the device, and the threat of physical capture of the device.

If no precautions are taken, malware running on a mobile device may be able to exfiltrate derived credentials for use on a different device, or make malicious use of the credentials on the device itself. Malware may also be able to capture a PIN and/or a biometric sample used to authenticate the user to the device and enable credential use, and use them to surreptitiously enable the credentials and make use of them at a later time.

Mobile devices are frequently lost or stolen. More than three million smart phones were stolen in the US alone in 2013. If no precautions are taken, an adversary who captures the device may be able to physically exfiltrate the credentials for use in a different device, even if the credentials are not enabled for use in the device itself when the device is captured. The exfiltrated credentials should be revocable, but there may be a time lag before they are revoked, and a further time lag before revocation is recognized by relying parties. Moreover, some relying parties may not check for revocation, and some credential uses are not affected by revocation. For example, revocation of a key pair used for email encryption and decryption cannot prevent the private key from being used to decrypt messages sent before revocation, which may have been collected over time by the adversary.

A TEE is ideally suited to protect derived credentials against the threat of malware. Credentials stored in the TEE are protected by the Secure OS and cannot be read by malware running in the Rich Execution Environment (REE), even if such malware has taken control of the Rich OS. REE-originated requests to make use of the credentials can be subjected to user approval through a Trusted User Interface. A credential-enabling PIN can be entered through the Trusted User Interface, and a biometric sample can be entered through a sensor controlled by the TEE through a Trusted Path.

A TEE can also provide protection against physical capture by storing credentials in a Secure Element (SE) as specified in the TEE Secure Element API Specification. However, it is also possible to provide protection against physical capture without recourse to a SE, using Virtual Tamper Resistance (VTR) mediated by the credential-enabling PIN and/or biometric sample.

Virtual Tamper Resistance

PIN-mediated VTR protects credentials by encrypting them under a symmetric credential-encryption key (CEK). It would be tempting to derive the CEK from the PIN, but that does not work because an adversary who captured the device and extracted the encrypted credentials could mount an offline brute-force attack against the PIN that would easily crack it. Instead, the CEK is stored in the cloud, where it is entrusted to a key storage service. The CEK, however, must be retrieved securely. That requires authentication of the mobile device to the key storage service, using a device authentication credential (DAC) which must itself be protected. This is again a credential-protection problem, but a simpler one, because the DAC is a single-purpose authentication credential. Protection of the DAC is achieved by not storing it anywhere. Instead, it is regenerated before each use from a protocredential stored in the mobile device and the PIN. An adversary who captures the device cannot mount an offline attack against the PIN because all PINs produce well-formed credentials. Each PIN guess can only be tested by attempting to authenticate against the key storage service, which limits the number of guesses.
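Here is a minimal sketch of the regeneration step for an EC key pair; the KDF and curve are illustrative assumptions. The point to notice is that a wrong PIN yields a key pair that is just as well formed as the right one, so only the key storage service, which holds the registered public key, can tell them apart.

```python
# Sketch of DAC regeneration from a protocredential and a PIN. PBKDF2 and
# curve P-256 are illustrative assumptions; the cited papers describe
# methods for DSA, ECDSA and RSA key pairs.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ec

# Order of the P-256 group (a public curve parameter).
P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def regenerate_dac(protocredential_salt: bytes, pin: str):
    # Stretch the PIN with the salt stored in the protocredential, then map
    # the result into [1, order-1]. EVERY PIN yields a valid private scalar,
    # so a captured protocredential gives an offline attacker nothing to
    # test PIN guesses against.
    material = hashlib.pbkdf2_hmac("sha256", pin.encode(),
                                   protocredential_salt, 100_000, dklen=48)
    scalar = int.from_bytes(material, "big") % (P256_ORDER - 1) + 1
    return ec.derive_private_key(scalar, ec.SECP256R1())
```

A guess can only be tested by authenticating with the regenerated key against the key storage service, which counts consecutive failures.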

Virtual tamper resistance mediated by a biometric sample works similarly, using a biometric key instead of a PIN. The biometric key is consistently derived from a genuine-but-variable biometric sample and helper data, using a known method based on error-correction technology. The helper data is stored in the mobile device as part of the protocredential, but it reveals no biometric information to an adversary who captures the device: it is computed by performing a bitwise exclusive-or operation on a biometric feature vector and a random error-correction codeword, and the exclusive-or operation is deemed to effectively hide the biometric information in the feature vector from the adversary.
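The helper-data construction can be illustrated with a toy sketch in which a 3× repetition code stands in for a real error-correcting code; practical schemes use much stronger codes (e.g. BCH) over longer feature vectors.

```python
# Toy sketch of the fuzzy-commitment-style helper data construction:
# helper = feature_vector XOR codeword. The repetition code is a stand-in
# for a real error-correcting code and only illustrates the XOR hiding.
def _encode(key_bits):
    # Repetition code: each key bit becomes three codeword bits.
    return [b for bit in key_bits for b in (bit, bit, bit)]

def _decode(noisy_bits):
    # Majority vote over each group of three corrects single-bit errors.
    return [int(sum(noisy_bits[i:i + 3]) >= 2)
            for i in range(0, len(noisy_bits), 3)]

def enroll(feature_vector, key_bits):
    # Helper data, stored in the protocredential: the XOR with a random
    # codeword hides the biometric features from a device-capturing adversary.
    codeword = _encode(key_bits)
    return [f ^ c for f, c in zip(feature_vector, codeword)]

def regenerate_key(helper, fresh_sample):
    # XOR-ing a fresh (noisy) feature vector with the helper data yields a
    # noisy codeword, which error correction decodes back to the key bits.
    noisy_codeword = [f ^ h for f, h in zip(fresh_sample, helper)]
    return _decode(noisy_codeword)
```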

Using virtual tamper resistance instead of physical tamper resistance realizes the cost-saving benefits of a TEE by protecting the derived credentials without requiring a separate tamper-resistant chip. If desired, however, security can be maximized by combining virtual and physical tamper resistance, which have overlapping but distinct security postures. To defeat virtual tamper resistance, the adversary must capture the device, and also breach the security of the key storage service. To defeat physical tamper resistance, the adversary must reverse-engineer and circumvent physical countermeasures such as meshes and sensors that trigger zeroization circuitry, using equipment such as a Focused Ion Beam workstation. To defeat their combination the adversary must achieve three independent security breaches by capturing the device, defeating the physical countermeasures, and breaking into the online key storage service.

Beyond Derived Credentials

Virtual tamper resistance and protocredentials are versatile tools that can be used for many security purposes besides protecting derived credentials.

Virtual tamper resistance can be used to implement a cryptographic module within a TEE, protecting the keys and data kept in the module. It can also be used for general-purpose data protection within the REE, by encrypting the data under one or more keys stored in a VTR-protected cryptographic module within the TEE.

A credential regenerated within a TEE from a protocredential in conjunction with a PIN and/or a biometric sample can be used to authenticate a mobile device in the context of Mobile Device Management (MDM) or, more broadly, Enterprise Mobility Management (EMM).

A protocredential can be used in conjunction with a hardware key produced by a Physical Unclonable Function (PUF) to regenerate a device credential that an autonomous device can use for authentication in a cyberphysical system.


Apple Pay Must Be Using the Mag-Stripe Mode of the EMV Contactless Specifications

Update (2014-10-19). The discussion of tokenization in this post is based on an interpretation of the EMV Tokenisation specification that I now think is not the intended one. See the white paper Interpreting the EMV Tokenisation Specification for an alternative interpretation.

Update (2014-10-05). See Mark’s comment below, where he says that Apple Pay is already set up to use the EMV mode of the EMV Contactless Specification, in addition to the mag-stripe mode.

I’ve been trying to figure out how Apple Pay works and how secure it is. In an earlier post I assumed, based on the press release on Apple Pay, that Apple had invented a new method for making payments, which did not seem to provide non-repudiation. But a commenter pointed out that Apple Pay must be using standard EMV with tokenization, because it works with existing terminals as shown in a demonstration.

So I looked at the EMV Specifications, more specifically at Books 1-4 of the EMV 4.3 specification, the Common Payment Application Specification addendum, and the Payment Tokenisation Specification. Then I wrote a second blog post briefly describing tokenized EMV transactions. I conjectured that the dynamic security code mentioned in the Apple press release was an asymmetric signature on transaction data and other data, the signature being generated by the customer’s device and verified by the terminal as part of what is called CDA Offline Data Authentication. And I concluded that Apple Pay did provide non-repudiation after all.

But commenters corrected me again. Two commenters said that the dynamic security code is likely to be a CVC3 code, a.k.a. CVV3, and provided links to a paper and a blog post that explain how CVC3 is used. I had not seen any mention of CVC3 in the specifications because I had neglected to look at the EMV Contactless Specifications, which include a mag-stripe mode that does not appear in EMV 4.3 and makes use of CVC3. I suppose that, when EMVCo extended the EMV specifications to allow for contactless operation, it added the mag-stripe mode so that contactless cards could be used in the US without requiring major modification of the infrastructure for processing magnetic stripe transactions prevalent in the US.

The EMV contactless specifications

The EMV Contactless Specifications envision an architecture where the merchant has a POS (point-of-sale) set-up comprising a terminal and a reader, which may be separate devices, or may be integrated into a single device. When they are separate devices, the terminal may be equipped to accept traditional EMV contact cards, magnetic stripe cards, or both, while the reader has an NFC antenna through which it communicates with contactless cards, and software for interacting with the cards and the terminal.

The contactless specifications consist of Books A, B, C and D, where Book C specifies the behavior of the kernel, which is software in the reader that is responsible for most of the logical handling of payment transactions. (This kernel has nothing to do with an OS kernel.) Book C comes in seven different versions, books C-1 through C-7. According to Section 5.8.2 of Book A, the specification in book C-1 is followed by some JCB and Visa cards, specification C-2 is followed by MasterCards, C-3 by some Visa cards, C-4 by American Express cards, C-5 by JCB cards, C-6 by Discover cards, and C-7 by UnionPay cards. (Contactless MasterCards have been marketed under the name PayPass, contactless Visa cards under the name payWave, and contactless American Express cards under the name ExpressPay.) Surprisingly, the seven C book versions seem to have been written independently of each other and are very different. Their lengths vary widely, from the 34 pages of C-1 to the 546 pages of C-2.

Each of the seven C books specifies two modes of operation, an EMV mode and the mag-stripe mode that I mentioned above.

A goal of the contactless specifications is to minimize changes to existing payment infrastructures. A contactless EMV mode transaction is similar to a contact EMV transaction, and a contactless mag-stripe transaction is similar to a traditional magnetic card transaction. In both cases, while the functionality of the reader is new, those of the terminal and the issuing bank change minimally, and those of the acquiring bank and the payment network need not change at all.

The mag-stripe mode in MasterCards (book C-2)

I’ve looked in some detail at contactless MasterCard transactions, as specified in the C-2 book. C-2 is the only book in the contactless specifications that mentions CVC3. (The alternative acronym CVV3 is not mentioned anywhere.) I suppose other C books refer to the same concept by a different name, but I haven’t checked.

C-2 makes a distinction between contactless transactions involving a card and contactless transactions involving a mobile phone, both in EMV mode and in mag-stripe mode. Section 3.8 specifies what I would call a “mobile phone profile” of the specification. The profile supports the ability of the mobile phone to authenticate the customer, e.g. by requiring entry of a PIN; it allows the mobile phone to report to the POS that the customer has been authenticated; and it allows for a different (presumably higher) contactless transaction amount limit to be configured for transactions where the phone has authenticated the customer.

Mobile phone mag-stripe mode transactions according to book C-2

The following is my understanding of how mag-stripe mode transactions work according to C-2 when a mobile phone is used.

When the customer taps the reader with the phone, a preliminary exchange of several messages takes place between the phone and the POS, before an authorization request is sent to the issuer. This is of course a major departure from a traditional magnetic stripe transaction, where data from the magnetic stripe is read by the POS but no other data is transferred back and forth between the card and the terminal.

(I’m not sure what happens according to the specification when the customer is required to authenticate with a PIN into the mobile phone for a mag-stripe mode transaction, since the mobile phone has to leave the NFC field while the customer enters the PIN. The specification talks about a second tap, but in a different context. Apple Pay uses authentication with a fingerprint instead of a PIN, and seems to require the customer to have the finger on the fingerprint sensor as the card is in the NFC field, which presumably allows biometric authentication to take place during the preliminary exchange of messages.)

One of the messages in the preliminary exchange is a GET PROCESSING OPTIONS command, sent by the POS to the mobile phone. This command is part of the EMV 4.3 specification and typically includes the transaction amount as a command argument (presumably because the requested processing options depend on the transaction amount). Thus the mobile phone learns the transaction amount before the transaction takes place.

The POS also sends the phone a COMPUTE CRYPTOGRAPHIC CHECKSUM command, which includes an unpredictable number, i.e. a random nonce, as an argument. The phone computes CVC3 from the unpredictable number, a transaction count kept by the phone, and a secret shared between the phone and the issuing bank. Thus the CVC3 is a symmetric signature on the unpredictable number and the transaction count, a signature that is verified by the issuer to authorize the transaction.
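The CVC3 computation can be sketched roughly as follows. The actual C-2 specification defines its own MAC construction over specific data elements; the HMAC-SHA256, field sizes, and truncation below are illustrative stand-ins:

```python
import hashlib
import hmac
import secrets

shared_secret = secrets.token_bytes(16)          # provisioned to phone and issuer
transaction_count = 42                           # counter kept by the phone
unpredictable_number = secrets.randbelow(10**8)  # nonce sent by the POS

def compute_cvc3(secret: bytes, un: int, atc: int) -> int:
    # Symmetric signature over the unpredictable number and transaction count.
    msg = un.to_bytes(4, "big") + atc.to_bytes(2, "big")
    mac = hmac.new(secret, msg, hashlib.sha256).digest()
    # Truncate to a few decimal digits so the result fits the track's CVC field.
    return int.from_bytes(mac[:4], "big") % 1000

phone_cvc3 = compute_cvc3(shared_secret, unpredictable_number, transaction_count)
# The issuer recomputes the same value from its copy of the secret and the
# unpredictable number and transaction count carried in the track data.
issuer_cvc3 = compute_cvc3(shared_secret, unpredictable_number, transaction_count)
assert phone_cvc3 == issuer_cvc3
```

Because the code is a truncated MAC under a shared secret rather than an asymmetric signature, only the issuer (and the phone) can compute or verify it, which is the source of the non-repudiation caveat discussed below.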

After the tap, the POS sends an authorization request that travels to the issuing bank via the acquiring bank and the payment network, just as in a traditional magnetic stripe transaction. The request carries track data, where the CVC1 code of the magnetic stripe is replaced with CVC3. The unpredictable number and the transaction count are added as discretionary track data fields, so that the issuer can verify that the CVC3 code is a signature on those data items. The POS ensures that the unpredictable number in the track data is the one that it sent to the phone. The issuer presumably keeps its own transaction count and checks that it agrees with the one in the track data before authorizing the transaction. Transaction approval travels back to the POS via the payment network and the acquiring bank. Clearing takes place at the end of the day as for a traditional magnetic stripe transaction.

Notice that transaction approval cannot be reported to the phone, since the phone may no longer be in the NFC field when the approval is received by the POS. As noted in the first comment on the second post, the demonstration shows that the phone logs the transaction and shows the amount to the customer after the transaction takes place. Since the phone is not told the result of the transaction, the log entry must be based on the data sent by the POS to the phone in the preliminary exchange of messages, and a transaction decline will not be reflected in the log.

Tokenized contactless transactions

Tokenization is not mentioned in the contactless specifications. It is described instead in the separate Payment Tokenisation specification. There should be no difference between tokenization in contact and contactless transactions. As I explained in the second post, a payment token and expiration date are used as aliases for the credit card number (known as the primary account number, or PAN) and expiration date. The customer’s device, the POS, and the acquiring bank see the aliases, while the issuing bank sees the real PAN and expiration date. Translation is effected as needed by a token service provider upon request by the payment network (e.g. MasterCard or Visa). In the case of Apple Pay the role of token service provider is played by the payment network itself, according to a Bank Innovation blog post.

Implications for Apple Pay

Clearly, Apple Pay must be following the EMV contactless specifications of books C-2, C-3 and C-4 for MasterCard, Visa and American Express transactions respectively. More specifically, it must be following what I called above the “mobile phone profile” of the contactless specifications. It must be implementing the contactless mag-stripe mode, since magnetic stripe infrastructure is still prevalent in the US. It may or may not be implementing contactless EMV mode today, but will probably implement it in the future as the infrastructure for supporting payments with contact cards is phased in over the next year in the US.

The Apple press release is too vague to know with certainty what the terms it uses refer to. The device account number is no doubt the payment token. In mag-stripe mode the dynamic security code is no doubt the CVC3 code, as suggested in the comments on the second post. In EMV mode, if implemented by Apple Pay, the dynamic security code could refer to the CDA signature as I conjectured in that post, but it could also refer to the ARQC cryptogram sent to the issuer in an authorization request. (I’ve seen that cryptogram referred to as a dynamic code elsewhere.) It is not clear what the “one-time unique number” refers to in either mode.

If Apple Pay is only implementing mag-stripe mode, one of the points I made in my first post regarding the use of symmetric instead of asymmetric signatures is valid after all. In mag-stripe mode, only a symmetric signature is made by the phone. In theory, that may allow the customer to repudiate a transaction, whereas an asymmetric signature could provide non-repudiation. On the other hand, two other points related to the use of a symmetric signature that I made in the first post are not valid. A merchant is not able to use data obtained during the transaction to impersonate the customer. This is not because the merchant sees the payment token instead of the PAN, but because the merchant does not have the secret needed to compute the CVC3, which is only shared between the phone and the issuer. And an adversary who breaches the security of the issuer and obtains the shared secret is not able to impersonate the customer, assuming that the adversary does not know the payment token.

None of this alleviates the broader security weaknesses that I discussed in my third post on Apple Pay: the secrecy of the security design, the insecurity of Touch ID, the vulnerability of Apple Pay on Apple Watch to relay attacks, and the impossibility for merchants to verify the identity of the customer.

Remark: a security miscue in the EMV Payment Tokenisation specification

I said above that “an adversary who breaches the security of the issuer and obtains the shared secret is not able to impersonate the customer, assuming that the adversary does not know the payment token”. The caveat reminds me that the tokenization specification suggests, as an option, forwarding the payment token, token expiry date, and token cryptogram to the issuer. The motivation is to allow the issuer to take them into account when deciding whether to authorize the transaction. However, this decreases security instead of increasing it. As I pointed out in the second post when discussing tokenization, the issuer is not able to verify the token cryptogram because the phone signs the token cryptogram with a key that it shares with the token service provider, but not with the issuer; therefore the issuer should not trust token-related data. And forwarding the token-related data to the issuer may allow an adversary who breaches the confidentiality of the data kept by the issuer to obtain all the data needed to impersonate the customer, thus missing an opportunity to strengthen security by not storing all such data in the same place.

Update (2014-09-21). There is a small loose end above. If the customer loads the same card into several devices that run Apple Pay, there will be a separate transaction count for the card in each device where it has been loaded. Thus the issuer must maintain a separate transaction count for each instance of the card loaded into a device (plus another one for the physical card if it is a contactless card), to verify that its own count agrees with the count in the authorization request. Therefore the issuer must be told which card instance each authorization request is coming from. This could be done in one of two ways: (1) the card instance could be identified by a PAN Sequence Number, which is a data item otherwise used to distinguish multiple cards that have the same card number, and carried, I believe, in discretionary track data; or (2) each card instance could use a different payment token as an alias for the card number. Neither option fits perfectly with published info. Option (2) would require the token service provider to map the same card number to different payment tokens, based perhaps on the PAN sequence number; but the EMV Tokenization Specification does not mention the PAN sequence number. Option (1) would mean that the same payment token is used on different devices, which goes counter to the statement in the Apple Press release that there is a Device Unique Number; perhaps the combination of the payment token and the PAN sequence number could be viewed as the Device Unique Number. Option (2) provides more security, so I assume that’s the one used in Apple Pay.
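Option (2) can be sketched with a toy token vault. The class, token format, and device identifiers below are purely illustrative, not anything from the EMV Tokenisation specification:

```python
import secrets

class TokenService:
    """Toy token vault mapping (PAN, device) pairs to per-device payment tokens."""

    def __init__(self):
        self._by_card = {}   # (pan, device_id) -> token
        self._by_token = {}  # token -> (pan, device_id)

    def tokenize(self, pan: str, device_id: str) -> str:
        key = (pan, device_id)
        if key not in self._by_card:
            # A fresh 16-digit alias per card instance; same PAN on a second
            # device gets a different token.
            token = "9" + "".join(str(secrets.randbelow(10)) for _ in range(15))
            self._by_card[key] = token
            self._by_token[token] = key
        return self._by_card[key]

    def detokenize(self, token: str):
        # Translation back to the real PAN; the device identity falls out of
        # the mapping, so the issuer can keep a per-instance transaction count.
        return self._by_token[token]

svc = TokenService()
t_phone = svc.tokenize("5412345678901234", "phone")
t_watch = svc.tokenize("5412345678901234", "watch")
assert t_phone != t_watch                                         # per-device tokens
assert svc.detokenize(t_phone)[0] == svc.detokenize(t_watch)[0]   # same real PAN
```

Under this sketch the token itself identifies the card instance, so no PAN sequence number is needed in the track data, which is why option (2) fits better with the idea of a device-unique number.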
