Virtual Tamper Resistance is the Answer to the HCE Conundrum

Host Card Emulation (HCE) is a technique pioneered by SimplyTapp and integrated by Google into Android as of 4.4 KitKat that allows an Android app running on a mobile device equipped with an NFC controller to emulate the functionality of a contactless smart card. Prior to KitKat, the NFC controller routed NFC traffic to a secure element, either one integrated in a carrier-controlled SIM or one embedded in the phone itself. This allowed carriers to block the use of Google Wallet, which competes with the carrier-developed NFC payment technology that used to be called ISIS and is now called SoftCard. (I’m not sure if or how they blocked Google Wallet in devices with an embedded secure element.) Using HCE, Google Wallet can run on the host CPU, where it cannot be blocked by carriers. (HCE also paves the way for the development of a variety of NFC applications, for payments or other purposes, as Android apps that do not have to be provisioned to a secure element.)

But the advantages of HCE are offset by a serious disadvantage. An HCE application cannot count on a secure element to protect payment credentials if the device is stolen, which is a major concern because more than three million phones were stolen last year in the US alone. If the payment credentials are stored in ordinary persistent storage supplied by Android, a thief who steals the device can obtain the credentials by rooting the device or, with more effort, by opening the device and probing the flash memory.

Last February Visa and MasterCard declared their support for HCE. In a Visa press release and a MasterCard press release, both payment networks referred to cloud-based applications or processing, thereby suggesting that an HCE application could store the payment credentials in the cloud. But that would require authenticating to the cloud in order to retrieve the credentials or make use of them remotely; and none of the usual authentication methods is well suited to that purpose. Authenticating with a passcode requires a high entropy passcode, and asking the user to enter such a passcode would negate any convenience gained by using a mobile device for payments. Authenticating with a credential stored in the device requires protecting the credential, which brings us back to square one. Authenticating with a credential supplied to the device, e.g. via SMS, obviously doesn’t provide security when the device has been stolen. Authenticating with a credential supplied to a different device would again negate any convenience gain.

An alternative to storing the payment credentials in the cloud would be to store them in encrypted storage within the device. But secure encryption requires a secure element to store a secret that can be used in the derivation of the encryption key. Without such a secret, the encryption key would have to be derived exclusively from a passcode entered by the user. That passcode would need to have very high entropy in order to resist an offline guessing attack with a password-cracking botnet; and asking the user to enter such a password would again negate any convenience gain. If a secure element is available to Android for storing the secret, there is no reason not to use it as well for hosting the payment application itself.

But there is a solution to this puzzle. The solution is to use what we call virtual tamper resistance to protect the payment credentials (or the HCE application, or the entire Android file system). Here is how that works. The credentials are stored in ordinary persistent storage within the device, but they are encrypted with a data protection key that is entrusted to a key storage service in the cloud. To retrieve that key, the device authenticates to the service with a cryptographic device-authentication credential. But that credential is not stored in the device. Instead, it is regenerated on demand from a PIN supplied by the user and what we call a protocredential. The protocredential is such that all PINs yield well-formed credentials. Hence a thief who steals the device has no information that could be used to test guesses of the PIN in an offline attack. A PIN can only be tested online, by generating a credential and attempting to authenticate with it to the key storage service, which limits the number of attempts. Methods of regenerating the device authentication credential from a protocredential and a PIN can be found in Section 2.6 of a technical report. Methods for using a biometric sample instead of, or in addition to, a PIN can be found in Section 3 of the same report. A method for implicitly authenticating the device to the key storage service while retrieving the data protection key can be found in a recent blog post.
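
To make the idea more concrete, here is a minimal sketch of PIN-based credential regeneration, assuming an ECDSA key pair on curve P-256 and a PBKDF2 derivation; the actual methods are the ones described in Section 2.6 of the technical report, and the function and field names below are only illustrative.

```python
# Illustrative sketch (not the report's exact method): regenerate an ECDSA
# device-authentication key pair from a protocredential and a PIN, so that
# every PIN yields a well-formed key pair and guesses can only be tested
# online against the key storage service.

import hashlib
from cryptography.hazmat.primitives.asymmetric import ec

P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def regenerate_credential(protocredential: dict, pin: str):
    # The protocredential holds a random salt and the record ID of the device
    # record at the key storage service; nothing in it depends on the PIN.
    salt = protocredential["salt"]
    record_id = protocredential["record_id"]

    # Any PIN maps to a nonzero scalar modulo the curve order, hence to a
    # well-formed key pair, so there is nothing to test a PIN guess against
    # offline.
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    private_value = (int.from_bytes(digest, "big") % (P256_ORDER - 1)) + 1
    private_key = ec.derive_private_key(private_value, ec.SECP256R1())
    return record_id, private_key

# At registration the service stored the public key corresponding to the
# correct PIN in the device record; a wrong PIN simply produces a key pair
# whose signatures fail to verify, and the service counts failed attempts.
```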

I have to point out that neither a secure element nor virtual tamper resistance provides full protection against malware that is able to root Android while the unsuspecting user is using the device. Such malware may be able to intercept or phish the PIN or biometric sample that is used to enable the use of the credentials (if a secure element is used), or to regenerate the device authentication credential and retrieve the data protection key (if virtual tamper resistance is used). Protection against such malware can be achieved by running the payment application in a Trusted Execution Environment (TEE) that features a trusted path between the user interface and the payment application. The trusted path can protect the PIN or biometric sample from being intercepted or phished by malware. Furthermore, even if malware has somehow obtained the PIN or a genuine biometric sample, the payment application can insist on the PIN or sample being submitted by the user via the trusted path, rather than by code running in the possibly infected Rich Execution Environment (REE) where ordinary apps run. On the other hand, a TEE by itself does not provide full protection against physical capture of the device, because it does not usually provide physical tamper resistance. Virtual tamper resistance can be used to remedy that shortcoming.

How Apple Pay Uses 3-D Secure for Internet Payments

In a comment on an earlier post on Apple Pay where I was trying to figure out how Apple Pay works over NFC, R Stone suggested looking at the Apple Pay developer documentation (Getting Started with Apple Pay, PassKit Framework Reference and Payment Token Format Reference), guessing that Apple Pay would carry out transactions over the Internet in essentially the same way as over NFC. I followed the suggestion and, although I didn’t find any useful information applicable to NFC payments in the documentation, I did find interesting information that seems worth reporting.

It turns out that Apple Pay relies primarily on the 3-D Secure protocol for Internet payments. EMV may also be used, but merchant support for EMV is optional, whereas support for 3-D Secure is required (see the Discussion under Working with Payments in the documentation of the PKPaymentRequest class). It makes sense to rely primarily on a protocol such as 3-D Secure that was intended specifically for Internet payments rather than on a protocol intended for in-store transactions such as EMV. Merchants that only sell over the Internet should not be burdened with the complexities of EMV. But Apple Pay makes use of 3-D Secure in a way that is very different from how the protocol is traditionally used on the web. In this post I’ll try to explain how the merchant interacts with Apple Pay for both 3-D Secure and EMV transactions over the Internet, then how Apple Pay seems to be using 3-D Secure. I’ll also point out a couple of surprises I found in the documentation.

Merchant Interaction with Apple Pay for Internet Payments

A merchant app running on the phone shows an Apple Pay button on its user interface. When the user taps the button, the app makes a payment request to the Apple Pay API, specifying the amount of the payment and a description of the transaction. Apple Pay displays to the user a payment sheet including the description of the transaction, the payment amount, and a prompt to Pay with Touch ID. When the user touches the fingerprint sensor and a valid fingerprint is recognized, Apple Pay creates a payment token, which it returns to the merchant app. (This payment token is not to be confused with the payment token of the EMV Tokenisation Specification which I discussed in the previous post; the payment token of that specification is a replacement for the primary account number, whereas the payment token of the Apple Pay developer documentation is a description of a payment transaction. To disambiguate, I will refer to the payment token of the developer documentation as an iOS payment token.) The merchant app may send the iOS payment token to a merchant server, which passes it through a network API to a payment processor, which uses it to create an authorization request that it forwards to the acquiring bank. Alternatively, the merchant app may use an SDK supplied by the processor, which sends the iOS payment token directly to a processor server as described for example in the Apple Pay Getting Started Guide of Authorize.Net, which is one of the processors listed in the Apple Pay developer site.

The iOS payment token includes, among other things, a header and encrypted payment data, which are signed together by Apple Pay with an asymmetric signature. The payment data is encrypted under a symmetric key derived from an Elliptic-Curve Diffie-Hellman (ECDH) shared secret, which is itself derived from an ephemeral ECDH key pair generated by Apple Pay and a long term ECDH key pair belonging to the merchant. (Apple Pay computes the shared secret from the ephemeral private key and the merchant public key, while the merchant computes it from its private key and the ephemeral public key, which is included in the header. An encryption method that uses an ephemeral Diffie-Hellman key pair of the encryptor with a long term Diffie-Hellman key pair of the decryptor may be viewed as a variant of El Gamal encryption.)
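
Here is a minimal sketch of the decryption side of that ephemeral-static ECDH pattern, as it might be implemented by the merchant or its processor; the KDF and cipher shown (HKDF-SHA256 and AES-GCM) and the "info" label are placeholders, not Apple Pay’s exact parameters.

```python
# Sketch of the decrypting (merchant/processor) side of the ephemeral-static
# ECDH pattern described above. The KDF and cipher (HKDF-SHA256, AES-GCM)
# and the "info" label are placeholders, not Apple Pay's exact parameters.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_payment_data(merchant_private_key: ec.EllipticCurvePrivateKey,
                         ephemeral_public_key_bytes: bytes,
                         nonce: bytes, ciphertext: bytes) -> bytes:
    # The ephemeral public key comes from the token header; the merchant
    # (or the processor acting on its behalf) holds the long-term private key.
    ephemeral_public_key = ec.EllipticCurvePublicKey.from_encoded_point(
        ec.SECP256R1(), ephemeral_public_key_bytes)

    # Both sides arrive at the same shared secret: the encryptor combines its
    # ephemeral private key with the merchant's public key, while the decryptor
    # combines its long-term private key with the ephemeral public key.
    shared_secret = merchant_private_key.exchange(ec.ECDH(), ephemeral_public_key)

    symmetric_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                         info=b"payment-data-encryption").derive(shared_secret)
    return AESGCM(symmetric_key).decrypt(nonce, ciphertext, None)
```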

After receiving the iOS payment token from the merchant, the processor verifies the Apple Pay asymmetric signature on the header and encrypted payment data, and decrypts the payment data using the private key of the merchant. For the latter purpose, the processor may generate the ECDH key pair on behalf of the merchant and keep the private key.

The decrypted payment data may consist of an “EMV payment structure”, described as “output from the Secure Element”; unfortunately, the documentation does not provide any details about the structure, so the Apple Pay developer documentation does not shed light on the details of Apple Pay payment transactions over NFC as had been hoped by R Stone. The decrypted payment data may also consist of an “online payment cryptogram, as defined by 3-D Secure” plus an “optional ECI indicator, as defined by 3-D Secure”. Whether 3-D Secure or EMV is used, the developer documentation does not provide enough information to create an authorization request that can be submitted to the acquiring bank. Unless additional information can be obtained from other sources, the merchant will have to contract out transaction processing to one of the processors listed in the Apple Pay developer site, which will have received the necessary information from Apple.

3-D Secure in a nutshell

The 3-D Secure protocol, which is rarely used in the US but commonly used in other countries, improves security for Internet payments by authenticating the cardholder. It was developed by VISA, and it is used by VISA, MasterCard, JCB and American Express under the respective names Verified by VISA, MasterCard SecureCode, J/Secure and American Express SafeKey. The protocol is proprietary, but I have found some information about it in a Wikipedia page and in merchant implementation guides published by VISA and by MasterCard.

Ordinarily, 3-D Secure is used for web payments. The merchant site redirects the user’s (i.e. the cardholder’s) browser to an Access Control Server (ACS) operated by the issuing bank, or more commonly by a third party on behalf of the issuing bank, which authenticates the user. Redirection is often accomplished by including in a merchant web page an inline frame whose URL targets the ACS. As is usually the case for web authentication protocols that redirect the browser to an authentication server, such as OpenID, OAuth, OpenID Connect, or the SAML Browser SSO Profile, the method used to authenticate the user in 3-D Secure is up to the server (the ACS) and is not prescribed by the protocol. Typically the ACS displays a Personal Assurance Message (PAM) to authenticate itself to the user and mitigate the risk of phishing, then prompts the user for an ordinary password agreed upon when the user enrolls for 3-D Secure, or for a one-time password (OTP) that is delivered to the user or generated by the user using a method agreed upon at enrollment time.

After authenticating the user, the ACS redirects the browser back to the merchant site, passing a Payer Authentication Response (PARes) that indicates whether authentication succeeded, failed, or could not be performed, e.g., because the user has not enrolled in 3-D Secure with the issuing bank and no password or OTP generation or transmittal means have been agreed upon. The PARes is signed by the ACS with an asymmetric signature that is verified by a Merchant Plug-In (MPI) provided to the merchant by an MPI provider licensed by the payment network. The PARes comprises an authentication status, and may also comprise an Electronic Commerce Indicator (ECI) that indicates the result of the authentication process redundantly with the authentication status, and a Cardholder Authentication Verification Value (CAVV) (which MasterCard calls instead an Accountholder Authentication Value, or AAV). The CAVV includes an Authentication Tracking Number (ATN) and a cryptographic Message Authentication Code (MAC), which is a symmetric signature computed by the ACS.

After the MPI has verified the asymmetric signature in the response, if authentication succeeded, the merchant adds the CAVV and the ECI to the authorization request that it assembles using the card number, security code and cardholder data obtained from a web form. The merchant sends the authorization request to the acquiring bank, which forwards it via the payment network to the issuing bank. The issuing bank verifies the MAC in the CAVV, using the same key that was used by the ACS to compute the MAC after authenticating the user on behalf of the issuing bank.
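
The essential point is that the CAVV carries a symmetric signature computed by the ACS and verified by the issuer with the same key. The sketch below illustrates that relationship with HMAC-SHA256 as a stand-in; the real CAVV/AAV formats and MAC algorithms are defined by the payment networks, and the field layout here is invented for the example.

```python
# Illustrative sketch of the symmetric-signature relationship between the
# ACS and the issuer; HMAC-SHA256 stands in for the network-defined CAVV/AAV
# MAC algorithms, and the field layout is invented for the example.

import hmac, hashlib, os

SHARED_CAVV_KEY = os.urandom(32)  # provisioned to both the ACS and the issuer

def acs_build_cavv(pan: str, atn: str, auth_status: str) -> bytes:
    # ACS side: bind the authentication result to the card and the ATN.
    mac = hmac.new(SHARED_CAVV_KEY, f"{pan}|{atn}|{auth_status}".encode(),
                   hashlib.sha256).digest()
    return atn.encode() + b"|" + mac[:8]      # ATN plus truncated MAC

def issuer_verify_cavv(pan: str, auth_status: str, cavv: bytes) -> bool:
    # Issuer side: recompute the MAC with the same key and compare.
    atn, mac = cavv.split(b"|", 1)
    expected = hmac.new(SHARED_CAVV_KEY,
                        f"{pan}|{atn.decode()}|{auth_status}".encode(),
                        hashlib.sha256).digest()[:8]
    return hmac.compare_digest(mac, expected)
```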

Use of 3-D Secure by Apple Pay

An Apple Pay transaction is very different from a traditional 3-D Secure transaction. It is not a web transaction. No browser is involved, and no browser redirection takes place. The user is authenticated by Apple Pay (using the fingerprint sensor, which IMO provides little security as discussed in earlier posts) rather than by the issuing bank. And Apple Pay uses tokenization, whereas 3-D Secure does not. Therefore 3-D Secure must have been modified very substantially for use with Apple Pay.

The developer documentation does not explain how 3-D Secure has been modified, but here is a guess.

After verifying the user’s fingerprint, Apple Pay generates the CAVV, without involvement by an ACS on behalf of the issuing bank or by the issuing bank itself. As discussed in earlier posts, I believe that Apple Pay shares a secret with the payment network or token service provider (here I’m referring to the token of the EMV Tokenisation Specification) that it uses to derive the symmetric key that is used to generate the token cryptogram in a tokenized EMV transaction over NFC. I suppose Apple Pay uses the same symmetric key, or a symmetric key derived from the same shared secret, to generate the MAC in the CAVV. The CAVV thus plays a role similar to that of the token cryptogram, and is verified by the payment network or a token service provider used by the payment network, just as the token cryptogram is.

In ordinary 3-D Secure the asymmetric signature on the PARes, created by the ACS and verified by the MPI, allows the merchant to verify that the user has been successfully authenticated and that it is OK to make an authorization request. In Apple Pay, the same role is played by the asymmetric signature included in the iOS payment token. That signature is verified by the payment processor, which subsumes the role played by the MPI in 3-D Secure.

Surprise: is the primary account number present in the phone?

The primary account number (PAN) is not supposed to be present in the phone. Its absence from the phone was stated in the Apple Pay announcement:

When you add a credit or debit card with Apple Pay, the actual card numbers are not stored on the device nor on Apple servers. Instead, a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your iPhone or Apple Watch

And I’ve seen it emphasized in many blog posts on Apple Pay. But the documentation of the PKPaymentPass class refers both to a deviceAccountIdentifier, described as “the unique identifier for the device-specific account number”, and a primaryAccountIdentifier, described as “an opaque value that uniquely identifies the primary account number for the payment card”. This seems to imply that the primary account number is present in the device, even though it may be hidden from the merchant app by an opaque value.

Surprise: lack of replay protection?

In the Payment Token Format Reference, the instructions on how to verify the Apple Pay signature on the header and encrypted payment data of the iOS payment token include the following step:

e. Inspect the CMS signing time of the signature, as defined by section 11.3 of RFC 5652. If the signing time and the transaction time differ by more than a few minutes, it’s possible that the token is a replay attack.

No other anti-replay precautions are mentioned. This seems to indicate that replay protection relies on the Apple Pay signature not being more than “a few minutes” old. That is obviously not an effective protection against replay attacks, nor against bugs or other glitches that may cause the iOS payment token to be sent twice. I conjecture that lack of replay protection may have contributed to the multiple charges for some purchases that have been reported.
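
For reference, here is what that freshness check amounts to in code; the five-minute tolerance is my own assumption, since the documentation only says “a few minutes”.

```python
# What the quoted freshness check amounts to. The five-minute tolerance is an
# assumption; the documentation only says "a few minutes". Both datetimes are
# assumed to be timezone-aware.

from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_SKEW = timedelta(minutes=5)

def looks_like_replay(cms_signing_time: datetime,
                      transaction_time: Optional[datetime] = None) -> bool:
    transaction_time = transaction_time or datetime.now(timezone.utc)
    return abs(transaction_time - cms_signing_time) > MAX_SKEW

# Note that this check alone cannot stop a token from being replayed, or
# accidentally resubmitted, within the tolerance window; a nonce or a
# single-use transaction identifier would be needed for that.
```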

Making Sense of the EMV Tokenisation Specification

Apple Pay has brought attention to the concept of tokenization by storing a payment token in the user’s mobile device instead of a card number, a.k.a. a primary account number, or PAN. The Apple Pay announcement was accompanied by an announcement of a token service provided by MasterCard and a similar announcement of another token service provided by Visa.

Tokenization is not a new concept. Token services such as the TransArmor offering of First Data have been commercially available for years. But as I explained in a previous post there are two different kinds of tokenization, an earlier kind and a new kind. The earlier kind of tokenization is a private arrangement between the merchant and a payment processor chosen by the merchant, whereby the processor replaces the PAN with a token in the authorization response, returning the token to the merchant and storing the PAN on the merchant’s behalf. In the new kind of tokenization, used by Apple Pay and provided by MasterCard, Visa, and presumably American Express, the token replaces the PAN within the user’s mobile device, and is forwarded to the acquirer and the payment network in the course of a transaction. The purpose of the earlier kind of tokenization is to allow the merchant to outsource the storage of the PAN to an entity that can store it more securely. The purpose of the new kind of tokenization is to prevent cross-channel fraud or, more specifically, to prevent an account reference sniffed from an NFC channel in the course of a cryptogram-secured transaction from being used in a traditional web-form or magnetic-stripe transaction that does not require verification of a cryptogram. The new kind of tokenization has the potential to greatly improve payment security while the payment industry transitions to a stage where all transactions require cryptogram verification.

The new kind of tokenization is described in a document entitled EMV Tokenisation Specification — Technical Framework. We have looked at the document in detail and we report our findings in a white paper. The document is, to be blunt, seriously flawed. It leaves most operational details to be specified separately in the message specifications of each of the payment networks (presumably MasterCard, Visa and American Express), and it is plagued with ambiguities, inconsistencies and downright nonsense. Nevertheless, I believe we have been able to come up with an interpretation of the document that makes sense for some of the use cases. (Other use cases cannot be made to work following the approach taken in the document.)

Here are the conclusions drawn by the white paper.

Apple Pay use case. In the use case that is probably implemented by Apple Pay for both in-store and in-app transactions, a token service provider provisions a token and a shared key to the mobile device. When it comes to making a payment, the merchant sends a cryptographic nonce to the device and the device generates a cryptogram, which is a symmetric digital signature computed with the shared key on data that includes the nonce. (A cryptographic nonce is a number that is only used once in a given context.) The merchant includes the token and the cryptogram in the authorization request, which travels via the acquirer to the payment network. The payment network asks the token service provider to validate the cryptogram on behalf of the issuer and map the token to the PAN; then it forwards to the issuer a modified authorization request that includes both the token and the PAN but not the cryptogram. The role of token service provider can be fulfilled by the payment network itself without essentially altering the use case.
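
Here is a minimal sketch of that flow, showing who holds which key; HMAC-SHA256 stands in for the EMV-defined cryptogram, the token service provider’s vault is just a dictionary, and all values are made up.

```python
# Minimal sketch of the use case above, showing who holds which key.
# HMAC-SHA256 stands in for the EMV-defined cryptogram, and the token
# service provider's (TSP's) vault is just a dictionary; all values are made up.

import hmac, hashlib, os

shared_key = os.urandom(16)                               # provisioned to the device by the TSP
token_to_pan = {"4111000011112222": "4000123412341234"}   # TSP's token vault

def device_generate_cryptogram(token: str, nonce: bytes, amount: str) -> bytes:
    # Symmetric signature over data that includes the merchant's nonce.
    return hmac.new(shared_key, nonce + f"|{token}|{amount}".encode(),
                    hashlib.sha256).digest()

def tsp_validate_and_map(token: str, nonce: bytes, amount: str,
                         cryptogram: bytes) -> str:
    # The TSP validates the cryptogram on behalf of the issuer...
    expected = hmac.new(shared_key, nonce + f"|{token}|{amount}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(cryptogram, expected):
        raise ValueError("cryptogram validation failed")
    # ...and maps the token to the PAN for the modified authorization request.
    return token_to_pan[token]
```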

Alternative use case with end-to-end security. As an alternative, the issuer itself can play the role of token service provider and provision the token and shared key to the mobile device, just as it provisions a shared key to a chip card in a non-tokenized transaction. (The issuer may also provision a token to a chip card; the token is then stored in the chip while the PAN is embossed on the card.) In that case the payment network forwards the authorization request to the issuer without replacing the token with the PAN. The transaction flow is essentially the same as in a non-tokenized transaction. The cryptogram is validated by the issuer, preserving the end-to-end security that is lost when the cryptogram is validated by the payment network or a third party playing the role of token service provider.

Alternative to tokenization. Instead of provisioning a token to a mobile device (or a chip card), the issuer can achieve essentially the same level of security by provisioning a secondary account number and flagging it in its own database as being intended exclusively for use in EMV transactions, which require cryptogram validation.

If you have comments on the white paper, please leave them here.

Implementing Virtual Tamper Resistance without a Secure Channel

Last week I made a presentation to the GlobalPlatform TEE Conference, co-authored with Karen Lewison, on how to provide virtual tamper resistance for derived credentials and other data stored in a Trusted Execution Environment (TEE). I’ve put the slides online as an animated PowerPoint presentation with speaker notes.

An earlier post, also available on the conference blog, summarized the presentation. In this post I want to go over a technique for implementing virtual tamper resistance that we have not discussed before. The technique is illustrated with animation in slides 9 and 10. The speaker notes explain the animation steps.

Virtual tamper resistance is achieved by storing data in a device, encrypted under a data protection key that is entrusted to a key storage service and retrieved from the service after the device authenticates to the service using a device authentication credential, which is regenerated from a protocredential and a PIN. (Some other secret or combination of secrets not stored in the device can be used instead of a PIN, including biometric samples or outputs of physical unclonable functions.) The data protection key is called “credential encryption key” in the presentation, which focuses on the protection of derived credentials. The gist of the technique is that all PINs produce well-formed device authentication credentials, so that an adversary who physically captures the mobile device cannot mount an offline guessing attack that would easily crack the PIN, because there is no way to test guesses of the PIN offline. To test a PIN, the adversary must combine it with the protocredential to produce a credential, and test the credential by trying to authenticate online against the key storage service, which limits the number of attempts.

The device authentication credential consists of a key pair pertaining to a digital signature cryptosystem, plus a record ID that uniquely identifies a device record where the key storage service keeps the data protection key. The device record is created when the device registers with the key storage service. It also contains the public key component of the key pair, and a counter of consecutive authentication failures. Methods for regenerating a credential comprising a DSA, ECDSA or RSA key pair can be found in our paper on mobile authentication, and in our more recent paper providing an example of a derived credentials architecture.

In those papers we proposed retrieving the data protection key over a secure channel between the device and the key storage service, such as a TLS connection. But a TEE may not be equipped with TLS client software or other software for establishing a secure channel. It may not be practical to implement such software in a TEE due to memory constraints; and it may not be desirable to do so for security reasons, given that the security provided by a TEE depends to some extent on TEE software being kept simple and bug-free. This motivates the technique illustrated in the presentation, which does not rely on a secure channel.

The technique requires only one roundtrip, comprising two messages. The TEE generates an ephemeral symmetric key that the key storage service will use to encrypt the data protection key for transmission to the mobile device, and it signs the ephemeral key using the private key component of the digital signature key pair in the device authentication credential. In the first message, the TEE sends the signed key to the service along with the record ID in the credential. The TEE encrypts the first message with a public key of the key storage service, and the service decrypts it with the corresponding private key.

The service uses the record ID to locate the device record, and the public key that it finds in the record to verify the signature on the ephemeral key.

Signing the ephemeral key indirectly authenticates the mobile device, and more precisely the TEE within the device, to the key storage service. The signature tells the service that the ephemeral key originates from the TEE and can be used to encrypt the data protection key for transmission to the TEE. The service encrypts the data protection key and sends it to the TEE, which uses it to decrypt the data protected by virtual tamper resistance.
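
Here is a sketch of the two-message exchange, with placeholder primitives (RSA-OAEP for the first message, ECDSA for the device signature, AES-GCM for the reply); the presentation does not prescribe particular algorithms, so these choices are only illustrative.

```python
# Sketch of the two-message exchange, with placeholder primitives: RSA-OAEP
# for the first message (assuming a 4096-bit service key so the payload
# fits), ECDSA for the device signature, and AES-GCM for the reply.

import os, json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- message 1: TEE -> key storage service ---------------------------------
def tee_build_request(record_id: str, device_priv: ec.EllipticCurvePrivateKey,
                      service_pub: rsa.RSAPublicKey):
    ephemeral_key = AESGCM.generate_key(bit_length=128)
    signature = device_priv.sign(ephemeral_key, ec.ECDSA(hashes.SHA256()))
    payload = json.dumps({"record_id": record_id,
                          "ephemeral_key": ephemeral_key.hex(),
                          "signature": signature.hex()}).encode()
    return ephemeral_key, service_pub.encrypt(payload, OAEP)

# --- message 2: key storage service -> TEE ----------------------------------
def service_handle_request(request: bytes, service_priv: rsa.RSAPrivateKey,
                           device_records: dict):
    payload = json.loads(service_priv.decrypt(request, OAEP))
    record = device_records[payload["record_id"]]
    ephemeral_key = bytes.fromhex(payload["ephemeral_key"])
    # Verifying the signature on the ephemeral key authenticates the TEE;
    # verify() raises if the signature does not match the stored public key.
    record["public_key"].verify(bytes.fromhex(payload["signature"]),
                                ephemeral_key, ec.ECDSA(hashes.SHA256()))
    nonce = os.urandom(12)
    reply = AESGCM(ephemeral_key).encrypt(nonce, record["data_protection_key"], None)
    return nonce, reply

def tee_recover_key(ephemeral_key: bytes, nonce: bytes, reply: bytes) -> bytes:
    return AESGCM(ephemeral_key).decrypt(nonce, reply, None)
```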

Instead of storing the public key component of the device authentication credential in the device record, it is possible to only store a hash of the public key. In that case the TEE sends the public key along with the record ID and the signed ephemeral key. This has several advantages: it saves space in the database of device records of the key storage service; it allows the service to verify the signature before accessing the database, which may be a good thing if database access is onerous; and as a matter of defense-in-depth, it might provide protection against a cryptanalytic attack that would exploit a weakness in the digital signature cryptosystem to recover the private key of the device authentication credential from the public key. On the other hand, sending the public key takes up substantial additional bandwidth.

Which Flavor of Tokenization is Used by Apple Pay

I’ve seen a lot of confusion about how Apple Pay uses tokenization. I’ve seen it stated or implied that the token is generated dynamically, that it is merchant-specific or transaction-specific, and that its purpose is to help prevent fraudulent Apple Pay transactions. None of that is true. As the Apple Pay press release says, “a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your iPhone or Apple Watch”. That Device Account Number is the token; it is not generated dynamically, and it is not merchant-specific or transaction-specific. And as I explain below, its security purpose is other than to help prevent fraudulent Apple Pay transactions.

Some of the confusion comes from the fact that there are two very different flavors of tokenization. The confusion between the two flavors is apparent in a blog post by Yoni Heisler that purports to provide “an in-depth look at what’s behind” Apple Pay. Heisler’s post references documents on both flavors, without realizing that they describe different flavors that cannot possibly both be used by Apple Pay.

In the first flavor, described on page 7 of a 2012 First Data white paper referenced in Heisler’s post, the credit card number is replaced with a token in the authorization response. The token is not used until the authorization comes back. Tokenization is the second component of a security solution whose first component is encryption of credit card data from the point of capture, which can be a magnetic stripe reader, to the data center of the processor that the merchant has contracted with to process credit and debit card transactions. The processor decrypts the card data and forwards the transaction to the issuing bank for authorization (via the acquiring bank and the payment network, although this is not mentioned). When the authorization response comes back, the processor replaces the credit card number with the token before forwarding the response to the merchant. The merchant retains the token, and uses it instead of the credit card number for settlement, returns, recurring transactions, etc.

In this first flavor, the token is specific to a merchant, or perhaps even to a transaction or sequence of recurring transactions. A security breach at the merchant, other than skimming, can only reveal tokens, which cannot be used for purchases at a different merchant.

In the flavor of tokenization used by Apple Pay, on the other hand, the card number (and expiration date) is mapped to a token (and token expiration date), which is stored in the phone instead of the card number. The same token is used for all transactions and all merchants. In the course of a transaction the token travels from the phone to the merchant, to the processor if the merchant uses one, to the acquiring bank, and to the payment network (MasterCard, VISA or American Express). The payment network maps the token to the card number and forwards the authorization request with the card number to the issuer. When the authorization response comes back from the issuer, the payment network maps the card number back to the token before forwarding the response to the acquiring bank, optional processor, and merchant.
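
In other words, the payment network (or its token service provider) performs a simple substitution in each direction, along these lines (the numbers below are made up):

```python
# Minimal sketch of the token <-> PAN substitution performed at the payment
# network; real token vaults are of course far more elaborate, and the
# numbers below are made up.

token_vault = {"4111000011112222": "4000123412341234"}        # token -> PAN
reverse_vault = {pan: tok for tok, pan in token_vault.items()}

def forward_to_issuer(auth_request: dict) -> dict:
    # Inbound from the acquirer: replace the token with the real PAN.
    request = dict(auth_request)
    request["pan"] = token_vault[request.pop("token")]
    return request

def return_to_acquirer(auth_response: dict) -> dict:
    # Outbound from the issuer: put the token back before responding.
    response = dict(auth_response)
    response["token"] = reverse_vault[response.pop("pan")]
    return response
```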

The above explanation is based on the widely held belief that Apple Pay is based on the EMV Tokenisation Specification, also referenced in Heisler’s post. The specification is a framework that admits many variations, but in all of them the token is mapped to the card number after the acquirer forwards the authorization request to the payment network, and the card number is mapped back to the token before the payment network sends the authorization response to the acquirer. Therefore the EMV tokenization standard does not cover the tokenization solution described in the First Data white paper. Why not? Simply because that solution is a private matter between the merchant and the processor. It does not involve the acquiring bank, the payment network or the issuing bank, and therefore it requires no standardization.

Since all merchants see the same token, the security of Apple Pay transactions cannot hinge on tokenization. Instead, it relies on a secret that the phone shares with the issuing bank and uses to generate the dynamic security code also mentioned in the press release, from a transaction-specific challenge received from the merchant and a transaction counter. [Update (2014-10-19). Actually, it seems that the secret is shared with the token service provider rather than with the issuer. See the white paper Interpreting the EMV Tokenisation Specification.] Use of the dynamic security code is specified in the mag-stripe mode of the EMV Contactless Specification, as I explained in an earlier post. (In that post I also pointed out that the EMV Contactless Specification also has an EMV mode, where the user’s device has a key pair certified by the issuing bank in addition to a shared secret, and I speculated that Apple Pay might also be using EMV mode now or might use it in the future. A comment by Mark on that post says that it is his understanding that Apple Pay supports both modes from the outset.)

The tokenization flavor used by Apple Pay does have a security purpose, but it is not to prevent fraudulent Apple Pay transactions. It is to prevent the card number and expiration date, which could be exfiltrated from an Apple Pay transaction in the absence of tokenization, from being used in traditional online or magnetic stripe transactions that do not require a shared secret or a key pair.

Smart Cards, TEEs and Derived Credentials

This post has also been published on the blog of the GlobalPlatform TEE Conference.

Smart cards and mobile devices can both be used to carry cryptographic credentials. Smart cards are time-tested vehicles, which provide the benefits of low cost and widely deployed infrastructures. Mobile devices, on the other hand, are emerging vehicles that promise new benefits such as built-in network connections, a built-in user interface, and the rich functionality provided by mobile apps.

Derived Credentials

It is tempting to predict that mobile devices will replace smart cards, but this will not happen in the foreseeable future. Mobile devices are best used to carry credentials that are derived from primary credentials stored in a smart card. Each user may choose to carry derived credentials on zero, one or multiple devices in addition to the primary credentials in a smart card, and may obtain derived credentials for new devices as needed. The derived credentials in each mobile device are functionally equivalent to the primary credentials, and are installed into the device by a device registration process that does not need to duplicate the user proofing performed for the issuance of the primary credentials.

The term derived credentials was coined by NIST in connection with credentials carried by US federal employees in Personal Identity Verification (PIV) cards and US military personnel in Common Access Cards (CAC); but the concept is broadly applicable. Derived credentials can be used for a variety of purposes, and can be implemented by a variety of cryptographic means. A credential for signing email could consist of a private key and a certificate that binds the corresponding public key to the user’s email address, the private-public key pair pertaining to a digital signature cryptosystem. A credential to provide email confidentiality could consist of a certified public key used by senders to encrypt messages and the corresponding private key used to decrypt them. A credential for user authentication could consist of a certified or uncertified key pair pertaining to any of a variety of cryptosystems.

An important class of derived credentials is payment credentials. Credentials carried in Google Wallet, in apps that take advantage of Host Card Emulation, or in Apple Pay devices, are examples of derived credentials.

Using a TEE to Protect Derived Credentials

Derived credentials carried in a mobile device must be protected against two threats: the threat of malware running on the device, and the threat of physical capture of the device.

If no precautions are taken, malware running on a mobile device may be able to exfiltrate derived credentials for use on a different device, or make malicious use of the credentials on the device itself. Malware may also be able to capture a PIN and/or a biometric sample used to authenticate the user to the device and enable credential use, and use them to surreptitiously enable the credentials and make use of them at a later time.

Mobile devices are frequently lost or stolen. More than three million smart phones were stolen in the US alone in 2013. If no precautions are taken, an adversary who captures the device may be able to physically exfiltrate the credentials for use in a different device, even if the credentials are not enabled for use in the device itself when the device is captured. The exfiltrated credentials should be revocable, but there may be a time lag before they are revoked, and a further time lag before revocation is recognized by relying parties. Moreover, some relying parties may not check for revocation, and some credential uses are not affected by revocation. For example, revocation of a key pair used for email encryption and decryption cannot prevent the private key from being used to decrypt messages sent before revocation, which may have been collected over time by the adversary.

A TEE is ideally suited to protect derived credentials against the threat of malware. Credentials stored in the TEE are protected by the Secure OS and cannot be read by malware running in the Rich Execution Environment (REE), even if such malware has taken control of the Rich OS. REE-originated requests to make use of the credentials can be subjected to user approval through a Trusted User Interface. A credential-enabling PIN can be entered through the Trusted User Interface, and a biometric sample can be entered through a sensor controlled by the TEE through a Trusted Path.

A TEE can also provide protection against physical capture by storing credentials in a Secure Element (SE) as specified in the TEE Secure Element API Specification. However, it is also possible to provide protection against physical capture without recourse to a SE, using Virtual Tamper Resistance (VTR) mediated by the credential-enabling PIN and/or biometric sample.

Virtual Tamper Resistance

PIN-mediated VTR protects credentials by encrypting them under a symmetric credential-encryption key (CEK). It would be tempting to derive the CEK from the PIN, but that does not work because an adversary who captured the device and extracted the encrypted credentials could mount an offline brute-force attack against the PIN that would easily crack it. Instead, the CEK is stored in the cloud, where it is entrusted to a key storage service. The CEK, however, must be retrieved securely. That requires authentication of the mobile device to the key storage service, using a device authentication credential (DAC) which must itself be protected. This is again a credential-protection problem, but a simpler one, because the DAC is a single-purpose authentication credential. Protection of the DAC is achieved by not storing it anywhere. Instead, it is regenerated before each use from a protocredential stored in the mobile device and the PIN. An adversary who captures the device cannot mount an offline attack against the PIN because all PINs produce well-formed credentials. Each PIN guess can only be tested by attempting to authenticate against the key storage service, which limits the number of guesses.

Virtual tamper resistance mediated by a biometric sample works similarly, using a biometric key instead of a PIN. The biometric key is consistently derived from a genuine-but-variable biometric sample and helper data, using a known method based on error correction technology. The helper data is stored in the mobile device as part of the protocredential, but it does not reveal biometric information to an adversary who captures the mobile device, because it is computed by performing a bitwise exclusive-or operation on a biometric feature vector and a random error-correction codeword, which effectively hides the biometric information in the feature vector from the adversary.
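
Here is a toy sketch of that construction (a fuzzy commitment), with a 3× repetition code standing in for a real error-correcting code; it is meant only to show how the helper data hides the feature vector while still allowing the biometric key to be regenerated from a slightly different sample.

```python
# Toy sketch of the helper-data construction (a fuzzy commitment), with a 3x
# repetition code standing in for a real error-correcting code. The feature
# vector is assumed to have a length that is a multiple of 3.

import os, hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(message: bytes) -> bytes:            # repetition code: each byte x3
    return bytes(b for b in message for _ in range(3))

def decode(noisy_codeword: bytes) -> bytes:     # majority vote per 3-byte group
    out = bytearray()
    for i in range(0, len(noisy_codeword), 3):
        group = noisy_codeword[i:i + 3]
        out.append(max(set(group), key=group.count))
    return bytes(out)

def enroll(feature_vector: bytes):
    codeword = encode(os.urandom(len(feature_vector) // 3))
    helper_data = xor(feature_vector, codeword)         # stored in the device
    biometric_key = hashlib.sha256(codeword).digest()   # never stored
    return helper_data, biometric_key

def regenerate(helper_data: bytes, fresh_feature_vector: bytes) -> bytes:
    # A slightly different sample yields a noisy codeword; error correction
    # recovers the original codeword, and hence the same biometric key.
    message = decode(xor(fresh_feature_vector, helper_data))
    return hashlib.sha256(encode(message)).digest()
```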

Using virtual tamper resistance instead of physical tamper resistance realizes the cost-saving benefits of a TEE by protecting the derived credentials without requiring a separate tamper-resistant chip. If desired, however, security can be maximized by combining virtual and physical tamper resistance, which have overlapping but distinct security postures. To defeat virtual tamper resistance, the adversary must capture the device, and also breach the security of the key storage service. To defeat physical tamper resistance, the adversary must reverse-engineer and circumvent physical countermeasures such as meshes and sensors that trigger zeroization circuitry, using equipment such as a Focused Ion Beam workstation. To defeat their combination the adversary must achieve three independent security breaches by capturing the device, defeating the physical countermeasures, and breaking into the online key storage service.

Beyond Derived Credentials

Virtual tamper resistance and protocredentials are versatile tools that can be used for many security purposes besides protecting derived credentials.

Virtual tamper resistance can be used to implement a cryptographic module within a TEE, protecting the keys and data kept in the module. It can also be used for general-purpose data protection within the REE, by encrypting the data under one or more keys stored in a VTR-protected cryptographic module within the TEE.

A credential regenerated within a TEE from a protocredential in conjunction with a PIN and/or a biometric sample can be used to authenticate a mobile device in the context of Mobile Device Management (MDM) or, more broadly, Enterprise Mobility Management (EMM).

A protocredential can be used in conjunction with a hardware key produced by a Physical Unclonable Function (PUF) to regenerate a device credential that an autonomous device can use for authentication in a cyberphysical system.

Apple Pay Must Be Using the Mag-Stripe Mode of the EMV Contactless Specifications

Update (2014-10-19). The discussion of tokenization in this post is based on an interpretation of the EMV Tokenisation specification that I now think is not the intended one. See the white paper Interpreting the EMV Tokenisation Specification for an alternative interpretation.

Update (2014-10-05). See Mark’s comment below, where he says that Apple Pay is already set up to use the EMV mode of the EMV Contactless Specification, in addition to the mag-stripe mode.

I’ve been trying to figure out how Apple Pay works and how secure it is. In an earlier post I assumed, based on the press release on Apple Pay, that Apple had invented a new method for making payments, which did not seem to provide non-repudiation. But a commenter pointed out that Apple Pay must be using standard EMV with tokenization, because it works with existing terminals as shown in a demonstration.

So I looked at the EMV Specifications, more specifically at Books 1-4 of the EMV 4.3 specification, the Common Payment Application Specification addendum, and the Payment Tokenisation Specification. Then I wrote a second blog post briefly describing tokenized EMV transactions. I conjectured that the dynamic security code mentioned in the Apple press release was an asymmetric signature on transaction data and other data, the signature being generated by the customer’s device and verified by the terminal as part of what is called CDA Offline Data Authentication. And I concluded that Apple Pay did provide non-repudiation after all.

But commenters corrected me again. Two commenters said that the dynamic security code is likely to be a CVC3 code, a.k.a. CVV3, and provided links to a paper and a blog post that explain how CVC3 is used. I had not seen any mention of CVC3 in the specifications because I had neglected to look at the EMV Contactless Specifications, which include a mag-stripe mode that does not appear in EMV 4.3 and makes use of CVC3. I suppose that, when EMVCo extended the EMV specifications to allow for contactless operation, it added the mag-stripe mode so that contactless cards could be used in the US without requiring major modification of the infrastructure for processing magnetic stripe transactions prevalent in the US.

The EMV contactless specifications

The EMV Contactless Specifications envision an architecture where the merchant has a POS (point-of-sale) set-up comprising a terminal and a reader, which may be separate devices, or may be integrated into a single device. When they are separate devices, the terminal may be equipped to accept traditional EMV contact cards, magnetic stripe cards, or both, while the reader has an NFC antenna through which it communicates with contactless cards, and software for interacting with the cards and the terminal.

The contactless specifications consist of Books A, B, C and D, where Book C specifies the behavior of the kernel, which is software in the reader that is responsible for most of the logical handling of payment transactions. (This kernel has nothing to do with an OS kernel.) Book C comes in seven different versions, books C-1 through C-7. According to Section 5.8.2 of Book A, the specification in book C-1 is followed by some JCB and Visa cards, specification C-2 is followed by MasterCards, C-3 by some Visa cards, C-4 by American Express cards, C-5 by JCB cards, C-6 by Discover cards, and C-7 by UnionPay cards. (Contactless MasterCards have been marketed under the name PayPass, contactless Visa cards under the name payWave, and contactless American Express cards under the name ExpressPay.) Surprisingly, the seven C book versions seem to have been written independently of each other and are very different. Their lengths vary widely, from the 34 pages of C-1 to the 546 pages of C-2.

Each of the seven C books specifies two modes of operation, an EMV mode and the mag-stripe mode that I mentioned above.

A goal of the contactless specifications is to minimize changes to existing payment infrastructures. A contactless EMV mode transaction is similar to a contact EMV transaction, and a contactless mag-stripe transaction is similar to a traditional magnetic card transaction. In both cases, while the functionality of the reader is new, those of the terminal and the issuing bank change minimally, and those of the acquiring bank and the payment network need not change at all.

The mag-stripe mode in MasterCards (book C-2)

I’ve looked in some detail at contactless MasterCard transactions, as specified in the C-2 book. C-2 is the only book in the contactless specifications that mentions CVC3. (The alternative acronym CVV3 is not mentioned anywhere.) I suppose other C books refer to the same concept by a different name, but I haven’t checked.

C-2 makes a distinction between contactless transactions involving a card and contactless transactions involving a mobile phone, both in EMV mode and in mag-stripe mode. Section 3.8 specifies what I would call a “mobile phone profile” of the specification. The profile supports the ability of the mobile phone to authenticate the customer, e.g. by requiring entry of a PIN; it allows the mobile phone to report to the POS that the customer has been authenticated; and it allows for a different (presumably higher) contactless transaction amount limit to be configured for transactions where the phone has authenticated the customer.

Mobile phone mag-stripe mode transactions according to book C-2

The following is my understanding of how mag-stripe mode transactions work according to C-2 when a mobile phone is used.

When the customer taps the reader with the phone, a preliminary exchange of several messages takes place between the phone and the POS, before an authorization request is sent to the issuer. This is of course a major departure from a traditional magnetic stripe transaction, where data from the magnetic stripe is read by the POS but no other data is transferred back and forth between the card and the terminal.

(I’m not sure what happens according to the specification when the customer is required to authenticate with a PIN into the mobile phone for a mag-stripe mode transaction, since the mobile phone has to leave the NFC field while the customer enters the PIN. The specification talks about a second tap, but in a different context. Apple Pay uses authentication with a fingerprint instead of a PIN, and seems to require the customer to have the finger on the fingerprint sensor as the card is in the NFC field, which presumably allows biometric authentication to take place during the preliminary exchange of messages.)

One of the messages in the preliminary exchange is a GET PROCESSING OPTIONS command, sent by the POS to the mobile phone. This command is part of the EMV 4.3 specification and typically includes the transaction amount as a command argument (presumably because the requested processing options depend on the transaction amount). Thus the mobile phone learns the transaction amount before the transaction takes place.

The POS also sends the phone a COMPUTE CRYPTOGRAPHIC CHECKSUM command, which includes an unpredictable number, i.e. a random nonce, as an argument. The phone computes CVC3 from the unpredictable number, a transaction count kept by the phone, and a secret shared between the phone and the issuing bank. Thus the CVC3 is a symmetric signature on the unpredictable number and the transaction count, a signature that is verified by the issuer to authorize the transaction.

After the tap, the POS sends an authorization request that travels to the issuing bank via the acquiring bank and the payment network, just as in a traditional magnetic stripe transaction. The request carries track data, where the CVC1 code of the magnetic stripe is replaced with CVC3. The unpredictable number and the transaction count are added as discretionary track data fields, so that the issuer can verify that the CVC3 code is a signature on those data items. The POS ensures that the unpredictable number in the track data is the one that it sent to the phone. The issuer presumably keeps its own transaction count and checks that it agrees with the one in the track data before authorizing the transaction. Transaction approval travels back to the POS via the payment network and the acquiring bank. Clearing takes place at the end of the day as for a traditional magnetic stripe transaction.
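
The following sketch captures the symmetric-signature relationship between the phone and the issuer described in the last two paragraphs; HMAC-SHA256 with a decimal truncation stands in for the actual CVC3 derivation of book C-2, which uses different primitives and operates on track data.

```python
# Stand-in for the CVC3 computation and verification described above: a
# symmetric signature over the unpredictable number and the application
# transaction counter (ATC), keyed with the secret shared between the phone
# and the issuer. The real derivation in book C-2 is different.

import hmac, hashlib

def compute_cvc3(shared_key: bytes, unpredictable_number: int, atc: int) -> int:
    data = unpredictable_number.to_bytes(4, "big") + atc.to_bytes(2, "big")
    mac = hmac.new(shared_key, data, hashlib.sha256).digest()
    # Truncate to the handful of decimal digits that fit in the track data.
    return int.from_bytes(mac[:4], "big") % 100_000

def issuer_verify_cvc3(shared_key: bytes, unpredictable_number: int,
                       atc: int, cvc3: int) -> bool:
    return compute_cvc3(shared_key, unpredictable_number, atc) == cvc3
```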

Notice that transaction approval cannot be reported to the phone, since the phone may no longer be in the NFC field when the approval is received by the POS. As noted in the first comment on the second post, the demonstration shows that the phone logs the transaction and shows the amount to the customer after the transaction takes place. Since the phone is not told the result of the transaction, the log entry must be based on the data sent by the POS to the phone in the preliminary exchange of messages, and a transaction decline will not be reflected in the log.

Tokenized contactless transactions

Tokenization is not mentioned in the contactless specifications. It is described instead in the separate Payment Tokenisation specification. There should be no difference between tokenization in contact and contactless transactions. As I explained in the second post, a payment token and expiration date are used as aliases for the credit card number (known as the primary account number, or PAN) and expiration date. The customer’s device, the POS, and the acquiring bank see the aliases, while the issuing bank sees the real PAN and expiration date. Translation is effected as needed by a token service provider upon request by the payment network (e.g. MasterCard or Visa). In the case of Apple Pay the role of token service provider is played by the payment network itself, according to a Bank Innovation blog post.

Implications for Apple Pay

Clearly, Apple Pay must be following the EMV contactless specifications of books C-2, C-3 and C-4 for MasterCard, Visa and American Express transactions respectively. More specifically, it must be following what I called above the “mobile phone profile” of the contactless specifications. It must be implementing the contactless mag-stripe mode, since magnetic stripe infrastructure is still prevalent in the US. It may or may not be implementing contactless EMV mode today, but will probably implement it in the future as the infrastructure for supporting payments with contact cards is phased in over the next year in the US.

The Apple press release is too vague to know with certainty what the terms it uses refer to. The device account number is no doubt the payment token. In mag-stripe mode the dynamic security code is no doubt the CVC3 code, as suggested in the comments on the second post. In EMV mode, if implemented by Apple Pay, the dynamic security code could refer to the CDA signature as I conjectured in that post, but it could also refer to the ARQC cryptogram sent to the issuer in an authorization request. (I’ve seen that cryptogram referred to as a dynamic code elsewhere.) It is not clear what the “one-time unique number” refers to in either mode.

If Apple Pay is only implementing mag-stripe mode, one of the points I made in my first post regarding the use of symmetric instead of asymmetric signatures is valid after all. In mag-stripe mode, only a symmetric signature is made by the phone. In theory, that may allow the customer to repudiate a transaction, whereas an asymmetric signature could provide non-repudiation. On the other hand, two other points related to the use of a symmetric signature that I made in the first post are not valid. A merchant is not able to use data obtained during the transaction to impersonate the customer. This is not because the merchant sees the payment token instead of the PAN, but because the merchant does not have the secret needed to compute the CVC3, which is only shared between the phone and the issuer. And an adversary who breaches the security of the issuer and obtains the shared secret is not able to impersonate the customer, assuming that the adversary does not know the payment token.

None of this alleviates the broader security weaknesses that I discussed in my third post on Apple Pay: the secrecy of the security design, the insecurity of Touch ID, the vulnerability of Apple Pay on Apple Watch to relay attacks, and the impossibility for merchants to verify the identity of the customer.

Remark: a security miscue in the EMV Payment Tokenisation specification

I said above that “an adversary who breaches the security of the issuer and obtains the shared secret is not able to impersonate the customer, assuming that the adversary does not know the payment token”. The caveat reminds me that the tokenization specification suggests, as an option, forwarding the payment token, token expiry date, and token cryptogram to the issuer. The motivation is to allow the issuer to take them into account when deciding whether to authorize the transaction. However, this decreases security instead of increasing it. As I pointed out in the second post when discussing tokenization, the issuer is not able to verify the token cryptogram because the phone signs the token cryptogram with a key that it shares with the token service provider, but not with the issuer; therefore the issuer should not trust token-related data. And forwarding the token-related data to the issuer may allow an adversary who breaches the confidentiality of the data kept by the issuer to obtain all the data needed to impersonate the customer, thus missing an opportunity to strengthen security by not storing all such data in the same place.

Update (2014-09-21). There is a small loose end above. If the customer loads the same card into several devices that run Apple Pay, there will be a separate transaction count for the card in each device where it has been loaded. Thus the issuer must maintain a separate transaction count for each instance of the card loaded into a device (plus another one for the physical card if it is a contactless card), to verify that its own count agrees with the count in the authorization request. Therefore the issuer must be told which card instance each authorization request is coming from. This could be done in one of two ways: (1) the card instance could be identified by a PAN Sequence Number, which is a data item otherwise used to distinguish multiple cards that have the same card number, and carried, I believe, in discretionary track data; or (2) each card instance could use a different payment token as an alias for the card number. Neither option fits perfectly with published info. Option (2) would require the token service provider to map the same card number to different payment tokens, based perhaps on the PAN sequence number; but the EMV Tokenization Specification does not mention the PAN sequence number. Option (1) would mean that the same payment token is used on different devices, which runs counter to the statement in the Apple press release that the Device Account Number is unique to the device; perhaps the combination of the payment token and the PAN sequence number could be viewed as the unique Device Account Number. Option (2) provides more security, so I assume that’s the one used in Apple Pay.
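For what it’s worth, the bookkeeping implied by option (2) can be sketched in a few lines of Python. The data structures and names below are hypothetical; they only illustrate that the issuer needs one counter per card instance, not one per card number.

# Hypothetical issuer-side bookkeeping for option (2): one transaction counter
# per payment token, each token being an alias for the same underlying card
# number on a different device.
issuer_counters = {}   # payment token -> last counter value seen

def check_counter(payment_token, counter_in_request):
    """Accept a request only if its counter has advanced past the last one seen
    for this card instance. (A real issuer would tolerate small gaps and apply
    other velocity checks.)"""
    last = issuer_counters.get(payment_token, 0)
    if counter_in_request <= last:
        return False            # replayed or out-of-order counter
    issuer_counters[payment_token] = counter_in_request
    return True

# The same card loaded into a phone and a watch gets two tokens, hence two counters.
print(check_counter("token-on-phone", 1))   # True
print(check_counter("token-on-watch", 1))   # True
print(check_counter("token-on-phone", 1))   # False: the counter did not advance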


Security Weaknesses of Apple Pay for In-Store Transactions

In an earlier post I raised concerns about the security of Apple Pay based on the scant information provided in Apple’s press release. In a comment on that post, Brendon Wilson pointed out that Apple Pay must be using standard EMV with Tokenization rather than a new payment protocol, because it works with existing terminals. After looking in some detail at the EMV specifications, I tried to explain in my last post how Apple Pay could be implemented without departing from the specifications. As part of that explanation, I conjectured that an Apple Pay device may be using both a symmetric signature verified by the issuer and an asymmetric signature verified by the merchant’s terminal. That would eliminate one of the security concerns in my original post. In his comment, Brendon also referred to a MacRumors blog post that provided new details on how Apple Pay is used with Apple Watch. In this post I’d like to recap my remaining concerns on the security of Apple Pay for in-store transactions. (I don’t have enough information yet to discuss web transactions.)

Secrecy

The security design of Apple Pay is secret. This is a weakness in and of itself. Submitting security designs to public scrutiny has been a standard best practice for decades. Without public scrutiny, security flaws will not be caught by friendly researchers, but may be found by adversaries who have enough to gain from reverse-engineering the design and exploiting the flaws that they find.

Insecurity of Touch ID

When Apple Pay is used on the iPhone, the user has to authenticate to the phone for each transaction. This is a good thing, but authentication relies on Touch ID, which only provides security against casual attackers.

Shortly after the introduction of Touch ID, it was shown that it is possible to lift a fingerprint from the iPhone itself, use the fingerprint to make fake skin reproducing the fingerprint ridges, place the fake skin on a finger, and use the finger with the fake skin to authenticate with Touch ID. Three different techniques for making the fake skin were reported, two of them here, a third one here.

Relay attacks against Apple Pay on Apple Watch

Apple’s press release touts the security provided by Touch ID, but adds that Apple Pay will also work with Apple Watch without explaining how the customer will authenticate to Apple Watch, which does not have a fingerprint sensor, and can be used with the iPhone 5 and 5c, which do not have fingerprint sensors either.

A MacRumors blog post explains that the user will authenticate with a PIN, and remain authenticated while the watch detects continuing skin contact using sensors on the back of the watch. The user has to reenter the PIN after contact is interrupted.

A PIN is more secure than Touch ID as long as either the watch or a chip within the watch provides sufficient tamper resistance to protect the hash of the PIN that is presumably used to verify it. (If an adversary who captures the watch is able to extract the hash, then he or she can easily crack the PIN by a brute-force offline attack; the sketch below shows just how easily.) But the scheme described by MacRumors is vulnerable to a relay attack.
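To quantify that parenthetical claim: a four-digit PIN gives at most 10,000 candidates, so an extracted hash falls to an exhaustive search in a fraction of a second. The sketch below assumes, purely for illustration, that the watch stores an unsalted SHA-256 hash of the PIN; whatever Apple actually stores is not public.

import hashlib

def hash_pin(pin):
    # Assumed storage format, for illustration only; the real scheme is not public.
    return hashlib.sha256(pin.encode()).digest()

stored_hash = hash_pin("4071")   # what an attacker might extract from a non-tamper-resistant watch

# Offline attack: try every four-digit PIN. This completes in milliseconds.
recovered = next(pin for pin in (format(i, "04d") for i in range(10000))
                 if hash_pin(pin) == stored_hash)
print(recovered)   # 4071

Salting or a slow hash barely helps, because the PIN space itself is tiny; only tamper resistance or an online-only check with a retry limit changes the picture.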

A relay attack involves two attackers, a first attacker located near a merchant’s contactless terminal, and a second attacker located near an unwitting customer, whom we shall refer to as the victim. The first attacker has an NFC device that interacts with the terminal, and the second attacker has an NFC device that interacts with the victim’s contactless card or mobile device, masquerading as a terminal. The attackers’ devices communicate with each other using a fast link, relaying data between the terminal and the victim’s device. The first attacker can thus make a purchase and have it charged to the victim’s card or device. The attack does not work if a customer has to authenticate by performing some action such as entering a PIN or touching a sensor, but it works if all a customer needs to do is put his or her device within NFC reach of the terminal, as is the case for Apple Pay on Apple Watch.
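In software terms, the relay is nothing more than two proxies copying bytes in both directions over a fast link. The sketch below uses plain TCP sockets as stand-ins for the two NFC links; a real attack needs NFC hardware on both ends and has to stay within the protocol’s timing tolerances.

import socket, threading

def pump(src, dst):
    # Copy bytes in one direction until either side closes the connection.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(terminal_side, victim_side):
    """The first attacker's device faces the merchant's terminal, the second
    attacker's device faces the victim's card or phone; this merely shuttles
    the command/response traffic between the two."""
    threading.Thread(target=pump, args=(terminal_side, victim_side), daemon=True).start()
    pump(victim_side, terminal_side)

The point of the sketch is that nothing cryptographic is broken: the victim’s device answers honestly, just to a terminal that is farther away than it thinks.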

Gerhard Hancke demonstrated a relay attack in 2005 and claimed that it was an easy attack against ISO 14443A cards and terminals. More recently, Michael Roland demonstrated a different kind of relay attack against Google Wallet. In Roland’s attack, the second attacker was replaced with malware running on the victim’s Android phone. Google countered the attack by restricting access from the main operating system of the phone to the NFC chip containing the payment credentials. (It would be interesting to check whether Host Card Emulation happens to reenable the attack.) Google’s countermeasure, however, only prevents the attacker-plus-malware attack, not the two-attacker attack.

Lack of customer ID verification

Tokenization means that merchants will not be able to verify the identity of their customers. This is good for customer privacy, but leaves merchants defenseless against criminals who steal phones and defeat Touch ID, and against relay attacks on Apple Watch. Millions of smart phones are stolen every year in the US alone, and once Touch ID can be used for payments, criminal organizations will no doubt perfect the Touch ID hacking techniques originally developed by researchers.


Apple Pay, EMV and Tokenization

Update (2014-10-19). The discussion of tokenization in this post is based on an interpretation of the EMV Tokenisation specification that I now think is not the intended one. See the white paper Interpreting the EMV Tokenisation Specification for an alternative interpretation.

Update (2014-09-24). Apple Pay must be using the EMV contactless specifications, which are a substantial departure from the EMV 4.3 specifications. PLEASE SEE THIS MORE RECENT POST.

After reading Apple’s press release on Apple Pay, I naively believed that Apple had invented a new protocol for credit and debit card payments. In my previous post I speculated on how Apple Pay might be using the device account number, one-time unique number and dynamic security code mentioned in the press release. But in a comment, Brendon Wilson pointed out that Apple Pay must be using standard EMV with Tokenization, since it uses existing contactless terminals, as shown in a demonstration that he sent a link to. I agree, and after spending some time looking at the EMV specifications, I believe that the device account number, one-time unique number and dynamic security code of the press release are fanciful names for standard data items in the specifications.

I’ve seen a Bank Innovation blog post that tries to explain how Apple Pay works in terms of EMV and Tokenization. But that post is inconsistent, saying sometimes that the terminal generates a transaction-specific cryptogram, and other times that the cryptogram is already stored in the iPhone when the consumer walks up to a checkout counter.

One way of explaining how EMV-plus-Tokenization works is to consider the evolution from magnetic stripe cards to cards with EMV chips, and then to tokenization.

Magnetic stripe transactions

In a magnetic stripe transaction, the terminal reads the credit or debit card number (a.k.a. the Primary Account Number, or PAN) and the expiration date from the card and assembles a transaction authorization request that contains the card number and expiration date in addition to other data, including the transaction amount. The terminal sends the request to the acquiring bank, which forwards it to the issuing bank through a payment network such as Visa or MasterCard. If appropriate, the issuer returns an approval code, which reaches the merchant via the payment network and the acquiring bank.

At the end of the day, the merchant sends a batch of approval codes to the acquiring bank for clearing. The acquiring bank forwards each approval code to the appropriate issuing bank via the payment network, and credits the merchant’s account once the issuing bank accepts the charge and the acquiring bank receives the transaction amount from the issuing bank.
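As a minimal illustration of that data flow, here is a Python sketch; the field names are made up, and real authorization messages follow formats such as ISO 8583.

from dataclasses import dataclass

@dataclass
class AuthorizationRequest:
    pan: str              # card number read from the magnetic stripe
    expiration: str       # also read from the stripe
    amount_cents: int     # supplied by the terminal
    merchant_id: str

@dataclass
class AuthorizationResponse:
    approved: bool
    approval_code: str    # kept by the merchant and sent back at clearing time

def issuer_decide(request):
    # Stand-in for the issuing bank's authorization decision.
    return AuthorizationResponse(approved=True, approval_code="A1B2C3")

# Terminal -> acquiring bank -> payment network -> issuing bank, and back along the same path.
response = issuer_decide(AuthorizationRequest("4111111111111111", "12/16", 2599, "MERCHANT-42"))

# At the end of the day the merchant batches the approval codes for clearing.
clearing_batch = [response.approval_code]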

EMV transactions

When an EMV chip is used instead of a magnetic stripe, the transaction process changes as follows. The terminal sends transaction data including the transaction amount to the chip in the card, which returns a response indicating whether the transaction is to be rejected, accepted offline, or processed online by submitting it to the issuer for approval.

(In the context of EMV, an “online transaction” is an in-store transaction that is approved by the issuing bank reached over some network. To avoid confusion, I will use the term “web transaction” or “web payment” to refer to a transaction where the user enters credit card data into a web form.)

The chip’s response includes a cryptogram. Confusingly, the term cryptogram has two different meanings in the EMV specifications. Formally, a cryptogram is a symmetric signature, which takes the form of a message authentication code (MAC) calculated with a key shared between the chip and the issuing bank. (Strictly speaking, each MAC is computed with a different key derived from a permanent shared key and a transaction counter.) But informally, the term cryptogram is also used to refer to a message containing the MAC, such as the response from the chip to the terminal, and a cryptogram is said to be of a particular type, indicated by an acronym such as AAC, TC, ARQC or ARPC, determined by a Cryptogram Information Data byte included in the message.
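The formal sense of the term can be made concrete with a sketch: derive a per-transaction session key from the permanent key shared by the chip and the issuer plus the transaction counter, then MAC the transaction data with it. The real EMV algorithms use 3DES or AES with specific derivation, padding and formatting rules; the HMAC-based code below only conveys the structure.

import hmac, hashlib

def session_key(master_key, transaction_counter):
    """Per-transaction key derived from the permanent shared key and the
    transaction counter (conceptual, not the EMV derivation)."""
    return hmac.new(master_key, transaction_counter.to_bytes(2, "big"), hashlib.sha256).digest()

def application_cryptogram(master_key, transaction_counter, data):
    """The MAC that the chip returns to the terminal. Note that the card number
    and expiration date are not part of the signed data."""
    return hmac.new(session_key(master_key, transaction_counter), data, hashlib.sha256).digest()[:8]

key_shared_with_issuer = b"permanent key personalized into the chip"
transaction_data = b"amount=25.99|currency=USD|unpredictable_number=1A2B3C4D|terminal_data=..."
arqc = application_cryptogram(key_shared_with_issuer, 42, transaction_data)
print(arqc.hex())

The issuer, which holds the same permanent key and tracks the counter, recomputes the MAC to verify it; nobody else can.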

To indicate that the transaction is to be accepted offline, the chip sends the terminal a Transaction Certificate (TC) cryptogram, while to indicate that an authorization request is to be sent to the issuer, the chip sends the terminal an Authorization Request Cryptogram (ARQC). In both cases the MAC is computed on data that includes the transaction amount and other transaction data as well as terminal and application data. However, the card number and expiration date are not included in the MAC computation.

If it receives an ARQC cryptogram, the terminal sends an authorization request including the cryptogram (i.e. the MAC) to the acquiring bank, which forwards it to the issuing bank via the payment network. The issuer responds with a message that follows the same route back to the merchant and includes an Authorization Response Cryptogram (ARPC), signed with the same key as the ARQC cryptogram. The terminal forwards the ARPC cryptogram to the chip, which sends back a TC cryptogram.
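Putting the pieces together, the online exchange boils down to three MACs under the same shared key, computed only by the two parties that hold it (the chip and the issuer) and merely forwarded by everyone in between. As before, the constructions and data layout below are simplified placeholders rather than the EMV-defined ones.

import hmac, hashlib

SESSION_KEY = b"session key shared by the chip and the issuer"   # placeholder

def cryptogram(kind, data):
    # Simplified stand-in for an application cryptogram of a given type.
    return hmac.new(SESSION_KEY, kind.encode() + data, hashlib.sha256).digest()[:8]

transaction_data = b"amount=25.99|..."

arqc = cryptogram("ARQC", transaction_data)            # chip -> terminal -> acquirer -> network -> issuer
assert cryptogram("ARQC", transaction_data) == arqc    # the issuer recomputes and verifies it
arpc = cryptogram("ARPC", arqc)                        # issuer -> network -> acquirer -> terminal -> chip
assert cryptogram("ARPC", arqc) == arpc                # the chip verifies the issuer's response
tc = cryptogram("TC", transaction_data)                # chip -> terminal; kept for the clearing request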

Whether the transaction is authorized offline or online, the merchant includes the TC cryptogram received from the chip in the funding request that it sends to the acquiring bank at clearing time. The TC plays the role played by the approval code in magnetic strip processing. It is forwarded by the acquiring bank to the issuer via the payment network.

Tokenized transactions

Tokenization replaces the credit or debit card number and expiration date with numeric codes of the same length, called a payment token and a token expiry date respectively. Separate ranges of numeric codes are allocated so that no payment token can be confused with a card number. A Token Service Provider maintains the mapping between card numbers and their expiration dates on one side, and payment tokens and their expiry dates on the other.
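At its core, the token service provider implements a reversible mapping, with token numbers drawn from ranges that cannot collide with real card numbers. The toy in-memory vault below illustrates the idea; the spec also defines controls, such as domain restrictions, that are omitted here.

import itertools

class TokenVault:
    """Toy token service provider: maintains the mapping between card data
    (PAN and expiration date) and token data (payment token and token expiry).
    Tokens are issued from a dedicated numeric range so that they cannot be
    confused with real card numbers."""

    def __init__(self, token_range_prefix="490000"):
        self._counter = itertools.count(1)
        self._prefix = token_range_prefix
        self._card_data_by_token = {}

    def tokenize(self, pan, expiration):
        token = self._prefix + format(next(self._counter), "010d")   # 16 digits, like a PAN
        token_expiry = expiration                                    # may differ in practice
        self._card_data_by_token[(token, token_expiry)] = (pan, expiration)
        return token, token_expiry

    def detokenize(self, token, token_expiry):
        return self._card_data_by_token[(token, token_expiry)]

vault = TokenVault()
token, token_expiry = vault.tokenize("4111111111111111", "12/16")
print(token, vault.detokenize(token, token_expiry))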

Reliance on the token service provider means that tokenization can only be used for online transactions. [Update. As explained in Shaun’s comment, there is no reason why offline transactions cannot use tokenization.] Only the issuer and the payment network see the true card number and expiration date. The acquiring bank, the merchant and the user’s device (which may be a card with a chip, or a mobile device) only see the payment token and expiration date. Back-and-forth translation between card data and token data is effected by the token service provider upon request by the payment network. Translation does not invalidate cryptograms, because cryptograms do not include the card number and expiration date.

At transaction time, the authorization request with the ARQC cryptogram includes token data as it travels from the user’s device to the merchant’s terminal, the acquiring bank, and the payment network. The payment network sends a “de-tokenization” request to the token service provider. The token service provider returns the card data, which the payment network adds to the request before forwarding it to the issuing bank. The response from the issuing bank, which carries the ARPC cryptogram, includes card data but no token data. The response goes first to the payment network, which replaces the card data with token data obtained from the token service provider, before sending the response along to the acquiring bank, the merchant, and the user’s device.

At clearing time, the merchant sends token data along with the TC cryptogram to the acquiring bank, which forwards them to the payment network. The payment network asks the token service provider to de-tokenize the data, then forwards the card data and the TC cryptogram to the issuing bank.

The tokenization spec allows the role of token service provider to be played by the issuing bank, the payment network, or a party that plays no other role in transaction processing. According to the Bank Innovation post, it is the payment network that plays the token service provider role in Apple Pay.

The tokenization spec mentions a token cryptogram. This cryptogram is different from the others, and does not replace any of the others. Its purpose is to help the token service provider decide whether it is OK to respond to a de-tokenization request and reveal card data. It is computed with a symmetric key derived from data shared between the user’s device and the token service provider. It is sent along with a transaction authorization request from the user’s device to the merchant’s terminal, the acquiring bank and the payment network, which includes it in the de-tokenization request to the token service provider.
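Conceptually, the token cryptogram is just another MAC, but under a key shared only by the user’s device and the token service provider, which verifies it before honoring a de-tokenization request. The construction below is a placeholder of my own, not the one defined in the spec.

import hmac, hashlib

key_shared_with_tsp = b"key shared by the device and the token service provider"

def token_cryptogram(key, payment_token, transaction_data):
    # Placeholder construction: MAC over the payment token and transaction data.
    return hmac.new(key, payment_token.encode() + transaction_data, hashlib.sha256).digest()[:8]

def detokenize_if_valid(vault, key, payment_token, token_expiry, transaction_data, cryptogram):
    """The token service provider reveals card data only if the cryptogram
    verifies; the issuer, which does not hold the key, cannot perform this check."""
    expected = token_cryptogram(key, payment_token, transaction_data)
    if not hmac.compare_digest(expected, cryptogram):
        raise ValueError("invalid token cryptogram")
    return vault.detokenize(payment_token, token_expiry)

Used together with the TokenVault sketch above, detokenize_if_valid is the check that the payment network’s de-tokenization request would trigger.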

According to the EMV specs, the token cryptogram may also be forwarded by the payment network to the issuer, which can take it into account when deciding whether to authorize the transaction. However, the issuer cannot verify the authenticity of the token cryptogram, since it is signed with a key that the issuer does not have.

Offline data authentication

Now I need to go back to the pre-tokenization EMV specs and describe the concept of offline data authentication, which refers to the direct authentication by the terminal of data sent by the card, as part of an offline or online transaction. The EMV specifications require cards that can perform offline transactions to support offline data authentication, while such support is optional for cards that only perform online transactions. Offline data authentication takes place when both the card and the terminal support it.

Offline data authentication comes in three flavors, called SDA, DDA, and CDA.

In Static Data Authentication (SDA), the card provides the terminal with an asymmetric signature on static card data. The signature is computed once and for all by the issuer when the card is issued, and is stored in the card. The issuer has an RSA key pair. The private key is used to compute the signature, and the public key is included in a certificate issued by a certificate authority (CA) to the card-issuing bank. The issuer’s certificate is also stored in the card and sent to the terminal along with the signature. (The issuer’s private key is not stored in the card, of course.) The terminal uses the public key in the certificate to verify the signature, and the public key of the CA, which it is configured with, to verify the CA’s signature in the issuer’s certificate. Notice that the card does not have a key pair.
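Here is a minimal sketch of the SDA chain of trust using RSA via the Python cryptography package. The real EMV certificate formats, signature scheme and data-recovery rules are different; in particular, the terminal recovers the issuer’s public key from the issuer certificate, whereas the sketch simply models the certificate as a CA signature over that key.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key pairs for the CA and the issuer; in SDA the card itself has no key pair.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Simplified "issuer certificate": the CA signs the issuer's public key.
issuer_public_bytes = issuer_key.public_key().public_numbers().n.to_bytes(256, "big")
issuer_certificate_sig = ca_key.sign(issuer_public_bytes, padding.PKCS1v15(), hashes.SHA256())

# At personalization time, the issuer signs the static card data once and for all;
# the signature is stored in the card, the private key is not.
static_card_data = b"PAN=4111111111111111|expiry=12/16|..."
sda_signature = issuer_key.sign(static_card_data, padding.PKCS1v15(), hashes.SHA256())

# The terminal, configured with the CA's public key, verifies the issuer certificate
# and then uses the issuer's public key to verify the static signature.
ca_key.public_key().verify(issuer_certificate_sig, issuer_public_bytes, padding.PKCS1v15(), hashes.SHA256())
issuer_key.public_key().verify(sda_signature, static_card_data, padding.PKCS1v15(), hashes.SHA256())
print("issuer certificate and SDA static signature verified")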

In Dynamic Data Authentication (DDA), the card has its own key pair, which is stored in the card when the card is issued. (Cryptographic best practice calls for a key pair to be generated within the cryptographic module where it will be used, but card firmware may not have key-pair generation functionality.) The card provides the terminal with an asymmetric signature computed with the card’s private key on data including a transaction-specific random challenge sent by the terminal. The card sends the signature to the terminal together with a certificate for the card’s public key signed by the issuer and backed by the issuer’s certificate, which the card also sends to the terminal.

In Combined DDA / Application Cryptogram Generation (CDA), the data signed by the card additionally includes the cryptogram that the card sends to the terminal.
[Update: The data that is signed also includes transaction data. Transaction data is thus signed twice, with a symmetric signature (the cryptogram) and an asymmetric signature. The CDA asymmetric signature provides non-repudiation, although non-repudiation is not discussed in the EMV specifications.]
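Continuing the sketch, DDA adds a card key pair and a signature over a fresh terminal challenge, and CDA extends the signed data to include the cryptogram and transaction data. As before, the data layout and signature scheme are simplifications, not the EMV-defined formats.

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# In DDA and CDA the card has its own key pair, certified by the issuer
# (the certificate chain is omitted here; see the SDA sketch above).
card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# DDA: the card signs data that includes a fresh challenge from the terminal.
terminal_challenge = os.urandom(4)                 # the terminal's unpredictable number
dda_data = b"card dynamic data|" + terminal_challenge
dda_signature = card_key.sign(dda_data, padding.PKCS1v15(), hashes.SHA256())

# CDA: the signed data additionally includes the application cryptogram (the MAC)
# and transaction data, so the transaction ends up signed both symmetrically and
# asymmetrically.
application_cryptogram = bytes.fromhex("0011223344556677")   # placeholder MAC
cda_data = dda_data + b"|" + application_cryptogram + b"|amount=25.99|..."
cda_signature = card_key.sign(cda_data, padding.PKCS1v15(), hashes.SHA256())

# The terminal verifies with the card's public key, taken from the card's certificate.
card_key.public_key().verify(cda_signature, cda_data, padding.PKCS1v15(), hashes.SHA256())
print("CDA signature verified by the terminal")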

Offline data authentication in a tokenized transaction

The tokenization spec does not mention offline data authentication. Recall that tokenized transactions are necessarily online transactions, and the EMV spec does not require cards that only perform online transactions to support offline data authentication.

However, nothing prevents the use of offline data authentication in a tokenized online transaction. In a non-tokenized transaction, the asymmetric signature in any of the three flavors of offline data authentication is computed on data that includes the card number and expiration date. In a tokenized transaction, it will be computed on data that includes instead the payment token and token expiry date.

Explaining the Apple press release terminology

Based on the above, the terms in the Apple press release can be understood as follows:

Device account number. The press release says:

When you add a credit or debit card with Apple Pay, the actual card numbers are not stored on the device nor on Apple servers. Instead, a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your iPhone or Apple Watch.

Clearly, the device account number must be what the Tokenization spec calls the payment token.

One-time unique number. The press release also says:

Each transaction is authorized with a one-time unique number using your Device Account Number …

The one-time unique number must be the ARQC cryptogram that is sent to the issuer as part of an authorization request.

Dynamic security code. The press release goes on to say:

… and instead of using the security code from the back of your card, Apple Pay creates a dynamic security code to securely validate each transaction.

This is puzzling, since the card’s security code is not used for in-store transactions, is not encoded in the magnetic stripe, and is not stored in an EMV chip. It is only used for phone or web payments. So nothing can be used instead of the security code for in-store transactions.

I conjecture that the term dynamic security code has been invented by an imaginative security-marketing guru to refer to an asymmetric CDA signature sent by the user’s device to the merchant’s terminal. We have seen above that CDA is not precluded by the EMV spec for online transactions. It would make sense for Apple Pay devices to provide CDA signatures to merchant terminals, because that would increase security and could be useful to merchants. A merchant could use a CDA signature as evidence when contesting a chargeback, because an asymmetric signature provides non-repudiation. The signature would be on data including the payment token rather than the card number, but in a repudiation dispute the token service provider could supply the card number.

If Apple Pay devices implement CDA signatures, and if all terminals used with Apple Pay make use of them, then the concerns about the use of symmetric instead of asymmetric signatures that I raised in the previous post are eliminated. But other security concerns remain. In the next post I will restate those remaining concerns, taking into account new information in a MacRumors blog post on Apple Watch that was also referenced by Brendon Wilson in his comment. (Thank you, Brendon!)


On the Security of Apple Pay

Update (2014-09-15). As pointed out in the comments, it seems that Apple Pay is based on existing standards. In my next post I try to explain how it may follow the EMV specifications with Tokenization, and in the following one I update the security concerns taking into account additional information on Apple Watch.

Yesterday’s Apple announcements shed light on a surprising contrast between the attitudes of the company towards product design on one hand, and towards security on the other. Tim Cook took pride not only in the design of the Apple Watch, but also in the process of designing it, the time and effort it took, the attention to detail, and the reliance on a broad range of disciplines ranging from metallurgy to astronomy. The contrast could not be sharper with the lack of attention paid to the security of Apple Pay.

I doubt if any cryptographers were consulted on the design of Apple Pay. If they were, they should have insisted on publishing the design so that it could benefit from the scrutiny of a broad range of security experts. Submission to public scrutiny has been recognized as a best practice in the design of cryptographic protocols for many decades.

The press release on Apple Pay mentions a Device Account Number, a one-time transaction authorization number, and a dynamic security code. Since the security design is secret, it is impossible to tell for sure how these numbers and codes are used. But since no mention is made of public key cryptography, I surmise that the Device Account Number is a shared secret between the device and the credit card issuing bank, and the dynamic security code is a symmetric signature on the transaction record. If so, by using a symmetric signature instead of an asymmetric one, Apple is 36 years behind the state of the art in cryptography. By contrast, asymmetric signatures are used routinely by smartcards for in-store payments in accordance with the EMV specifications. The same techniques could be adapted for in-store or online payments with credentials stored in a mobile device.

Symmetric signatures lack non-repudiation. And if the Device Account Number is used as a symmetric key, then it may be vulnerable to insider attacks and security breaches at the issuing bank, while a private key used for asymmetric signatures would only be stored in the user’s device and would be immune to such vulnerabilities. Worse, for all we know, the Device Account Number may be made available to the merchant’s terminal; the press release says nothing to the contrary. If so, it would be vulnerable to capture by point-of-sale malware, after which it could be used online to commit fraud just like a credit card number.

A surprising aspect of Apple Pay is its dependence on Touch ID, which only provides security against casual attackers. But wait, does Apple Pay security really depend on Touch ID? Although this is not mentioned in the Apple Pay press release, it was stated at the Apple event in Cupertino that payments can be made using an Apple Watch, which itself can be used in conjunction with an iPhone 5 or 5C. Neither the Apple Watch, nor the iPhones 5 and 5C have Touch ID sensors; and use of the Touch ID sensor in an iPhone 5S, 6 or 6+ may not be required when the terminal is tapped by an Apple Watch used in conjunction with those phones. So it seems that Apple Pay does not really require user authentication.

The press release says that, “when you’re using Apple Pay … cashiers will no longer see your name, credit card number or security code, helping to reduce the potential for fraud”. This may reduce the potential for fraud against the customer, but certainly not for fraud against the merchant. And while customers have little if any liability for fraud, at least in the US, merchants are fully liable. Without knowing the customer’s name, merchants cannot verify the customer’s identity and are defenseless against a thief who steals an Apple Watch and its companion iPhone and goes shopping online, or in stores, without having to show an ID.

But while Apple Pay puts merchants at risk of financial loss, it puts users at an even greater risk. I don’t like to dramatize, but I don’t know how else to say this. People have been killed by smart phone thieves. Somebody wearing an Apple Watch will be parading a valuable watch, and advertising that a valuable smart phone is being carried along with the watch. Furthermore, a thief who steals the watch and the phone can then go on a shopping spree. The press release says that “if your iPhone is lost or stolen, you can use Find My iPhone to quickly suspend payments from that device”; but this will be a powerful incentive for the thief to kill the victim, since a dead victim cannot suspend payments. If Apple does not find some other way of discouraging theft, wearers of Apple watches will be putting their lives at risk.
Update. I got carried away and didn’t think it through. It’s unlikely that a murderer will risk using the victim’s phone and watch for purchases. Even though the merchant does not know the identity of the owner of the mobile device, a forensic investigation will no doubt be able to link the murder to the shopping and may allow the murderer to be identified by surveillance cameras.
