NIST Omits Encryption Requirement for Derived Credentials

This is Part 2 of a series of posts reviewing the public comments received by NIST on Draft SP800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials, their disposition, and the final version of the document. Links to all the posts in the series can be found here.

In the first post of this series I discussed how NIST failed to address many concerns expressed in the 400+ comments that it received on the guidelines for derived credentials published in March of last year as Draft Special Publication (SP) 800-157, including concerns about insufficient discussion of business need, lack of guidance, narrow scope, lack of attention to embedded solutions, and security issues. But I postponed a discussion of what I think is the most critical security problem in SP800-157: the lack of security of the so-called software tokens, a concern that was raised in comments including 111 by the Treasury, 291, 311 and 318 by ICAMSC, 406 by PrimeKey AB, 413 by NSA, and 424 by Exponent. This post focuses on that problem.

The concept of a software token, or software cryptographic module, is defined in Draft NISTIR 7981 (Section 3.2.1) as follows:

Rather than using specialized hardware to store and use PIV keys, this approach stores the keys in flash memory on the mobile device protected by a PIN or password. Authentication operations are done in software provided by the application accessing the IT system, or the mobile OS.

What does it mean for the keys to be “protected by a PIN or password”?
The draft of SP800-157 added the following in the section on Activation Data for Software Implementations (Section 3.4):

For software implementations (LOA-3) of Derived PIV Credentials, a password-based mechanism shall be used to perform cryptographic operations with the private key corresponding to the Derived PIV Credential. The password shall meet the requirements of an LOA-2 memorized secret token as specified in Table 6, Token Requirements per Assurance Level, in [SP800-63].

These two statements led us to believe that the derived credentials were meant to be encrypted under the PIN or password (i.e. under a key-encryption key derived from the PIN or password). We assumed that the “password-based mechanism” specifically required for activation of derived credentials in software tokens consisted of decrypting the Derived PIV Credential (or the private key portion of the credential) using the activation password entered by the user, allowing the resulting plaintext private key to be used for performing cryptographic operations to authenticate to remote information systems of Federal Agencies. (What else could it be?)
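
To make this concrete, here is a minimal sketch of what we took such a password-based mechanism to be. SP800-157 does not specify any algorithms, so the choice of PBKDF2 to derive a key-encryption key and AES-GCM to wrap the private key, as well as the function names, parameters and iteration count, are illustrative assumptions of ours rather than anything prescribed by the document.

```python
# Illustrative sketch only: SP800-157 does not prescribe these algorithms.
# PBKDF2, AES-GCM, and all names and parameters are assumptions made here
# for concreteness.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

ITERATIONS = 100_000  # illustrative PBKDF2 work factor

def wrap_private_key(private_key_der: bytes, password: str):
    """Encrypt (wrap) the Derived PIV private key under a key derived from the password."""
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS, dklen=32)
    nonce = os.urandom(12)
    ciphertext = AESGCM(kek).encrypt(nonce, private_key_der, None)
    return salt, nonce, ciphertext  # stored in flash memory instead of the plaintext key

def unwrap_private_key(salt: bytes, nonce: bytes, ciphertext: bytes, password: str) -> bytes:
    """Recover the plaintext private key at activation time, when the user enters the password."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS, dklen=32)
    return AESGCM(kek).decrypt(nonce, ciphertext, None)  # raises InvalidTag if the password is wrong
```

In such a scheme the only thing standing between an adversary who obtains the stored data and the plaintext private key is the password itself, which brings us to the entropy requirement.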

But the referenced entry of Table 6 of SP800-63 only requires 20 bits of entropy for the memorized secret. In our own comments we pointed out that 20 bits of entropy provide no security against an adversary who extracts the encrypted keys from a software token and carries out an offline brute-force attack against the PIN or password used to encrypt them, an attack that can easily be carried out with a botnet; but this portion of our comments was omitted by NIST from the list of received comments. Comment 248 by DHS also seemed to assume that the private key of a derived credential was to be stored encrypted, and pointed out that such a private key, once removed from the device, is vulnerable to a parallel brute-force attack. The need to protect against brute-force attacks was likewise noted in comment 318 by ICAMSC. And the draft version of SP800-157 seemed to refer to the possibility of derived credentials being extracted from software tokens when it noted that

… as a practical matter it will often be impossible to prevent users from making copies of software tokens or porting them to other devices.
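
To see why 20 bits of entropy provide no protection once the wrapped key has been copied off the device, consider the following sketch of an offline attack, which reuses the hypothetical unwrap_private_key function from the sketch above. A secret with 20 bits of entropy has only about 2^20, i.e. roughly one million, plausible values, so the attack terminates quickly even with a substantial key-derivation work factor per guess.

```python
# Offline guessing sketch, continuing the hypothetical wrapping example above.
from cryptography.exceptions import InvalidTag

def offline_attack(salt, nonce, ciphertext, candidate_passwords):
    """Try every candidate password against a copy of the wrapped key."""
    for guess in candidate_passwords:      # ~2**20 candidates for a 20-bit memorized secret
        try:
            private_key = unwrap_private_key(salt, nonce, ciphertext, guess)
            return guess, private_key      # authenticated decryption succeeded: password found
        except InvalidTag:
            continue                       # wrong guess, keep trying
    return None
```

No activation counter or throttling mechanism on the device helps against this attack, because it runs on hardware the adversary controls, against a copy of the stored data.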

But whereas the draft version of SP800-157 seemed to require that the private keys of derived credentials be stored encrypted, the final version does not. The requirement that “a password-based mechanism shall be used to perform cryptographic operations with the private key” in a software token has been removed. Both versions of the document require a software token to be “validated to [FIPS140] Level 1 or higher”; but although FIPS 140 envisions the possibility of encrypting private keys (Section 4.7.5), it does not require encryption at any security level. The final version of SP800-157 has added a qualification of the Derived PIV Authentication private key as being a “plaintext or wrapped [i.e. encrypted] private key” (page 13, line 7), without including any requirement that the key be wrapped, in software tokens or any other kind of token.

Without encryption, a private key stored in a software token within a mobile device has no protection against physical capture. Data in the software token may be protected by the operating system against unauthorized access, but an adversary who steals the mobile device may be able to jailbreak or root the device to circumvent the operating system, or may physically tamper with the device to read the contents of flash memory.

In response to the comments, NIST made the following modifications to the final version of SP800-157:

  • It added a requirement for a mechanism to block the use of the Derived PIV Authentication private key after a number of consecutive failed activation attempts, and an option to implement a throttling mechanism to limit the number of attempts that may be performed over a given period of time (see the sketch after this list). The draft required a blocking mechanism (also referred to as a lockout mechanism) for hardware tokens at Level of Assurance 4 (LOA-4), but not for software tokens. A blocking mechanism, however, does not mitigate the risk of the private key being extracted by an adversary who captures the mobile device, or by malware running on the device.
  • It removed the statement about the impossibility of preventing users from “making copies of software tokens or porting them to other devices”, thus hiding the risk instead of mitigating it.
  • It added a warning that “Protecting and using the Derived PIV Credential’s corresponding private key in software may potentially increase the risk that the key could be stolen or compromised”, followed by the sentence “For this reason, software-based Derived PIV Credentials cannot be issued at LOA-4”, which suggests that they can be issued at LOA-3 without saying so. But LOA-3 is defined in the Office of Management and Budget (OMB) memorandum M-04-04 on E-Authentication Guidance for Federal Agencies as providing “High confidence in the asserted identity’s validity”, which is not consistent with storing authentication credentials, without tamper resistance or encryption, in smart phones, millions of which are stolen every year.
  • It added a note recommending (but not requiring) the use of a “hybrid approach”, previously mentioned in the companion Draft NISTIR 7981, instead of a software-only approach.
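
Regarding the first of these modifications, here is a minimal sketch of what a combined blocking and throttling mechanism might look like. SP800-157 states the requirement but does not specify an implementation; the class and parameter names below, and the exponential backoff policy, are hypothetical choices of ours.

```python
# Minimal sketch of a block-plus-throttle activation counter; the design and
# all names are hypothetical, since SP800-157 states the requirement without
# specifying an implementation.
import time

class ActivationGuard:
    def __init__(self, max_failures=10, throttle_after=3, base_delay=1.0):
        self.failures = 0                     # consecutive failed activation attempts
        self.max_failures = max_failures      # block the key after this many failures
        self.throttle_after = throttle_after  # start delaying after this many failures
        self.base_delay = base_delay          # initial delay, in seconds
        self.not_before = 0.0                 # earliest time the next attempt is allowed

    def attempt(self, try_activation) -> bool:
        """try_activation is a callable returning True on successful activation."""
        if self.failures >= self.max_failures:
            raise RuntimeError("Derived PIV private key blocked; reset or reissuance required")
        if time.monotonic() < self.not_before:
            raise RuntimeError("Too many failed attempts; try again later")
        if try_activation():                  # e.g. the hypothetical unwrap_private_key succeeded
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.throttle_after:
            # Exponential backoff between further attempts: 1s, 2s, 4s, ...
            self.not_before = time.monotonic() + self.base_delay * 2 ** (self.failures - self.throttle_after)
        return False
```

As noted in the first bullet above, such a counter only limits guessing on the device itself; an adversary who extracts the stored data, or who can tamper with the counter, is not affected by it.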

The note regarding the hybrid approach, in Section 3.3.2, deserves further discussion. It is phrased as follows:

Note: Many mobile devices on the market provide a hybrid approach where the key is stored in hardware, but a software cryptographic module uses the key during an authentication operation. While the hybrid approach is a LOA-3 solution, it does provide many security benefits over software-only approaches. Therefore the hybrid approach is recommended when supported by mobile devices and applications.

I don’t know of any mobile devices on the market today that come equipped with a software cryptographic module, at least not one available to applications. And I’m not sure what it means to store a key “in hardware”. Presumably this refers to the kind of “specialized hardware”, other than flash memory, mentioned in the above-quoted definition of a software token. But it is not clear what specialized hardware is available “in many mobile devices on the market” for implementing a hybrid approach. Perhaps it means storing the key in an OS-provided storage facility such as the iOS keychain or the Android keystore. Such a facility could conceivably be implemented using tamper-resistant hardware. But how the facility is implemented is proprietary information that may not be readily available from the device manufacturer to Federal Agencies implementing derived credentials; and I believe most devices use ordinary flash memory for such facilities today.

Even if the hybrid approach stored the key in tamper-resistant hardware, it would provide little security against an adversary who physically captures the device. The adversary might not be able to read the key while it is stored in the tamper-resistant storage, but would be able to read it after copying it from the tamper-resistant storage to ordinary flash memory. Copying may require circumventing the operating system, but the adversary may be able to do that by jailbreaking or rooting the device, or by overwriting the non-tamper-resistant storage where the operating system code resides.

It is surprising that SP800-157 does not require derived credentials to be encrypted. Encryption is a standard method for protecting data at rest, and data encryption is routinely used today in mobile devices. Federal Agencies were actually put under an obligation to encrypt all sensitive data on mobile devices (which, I would think, should include derived credentials) by OMB memorandum M-06-16 on Protection of Sensitive Agency Information. The memorandum, sent by the White House in 2006 to the Heads of Departments and Agencies, was followed in 2007 by the publication of NIST SP800-111, Guide to Storage Encryption Technologies for End User Devices. M-06-16 requires Federal Agencies to:

1. Encrypt all data on mobile computers/devices which carry agency data unless the data is determined to be non-sensitive, in writing, by your Deputy Secretary or an individual he/she may designate in writing;

Although M-06-16 dates back to the administration of George W. Bush, it is still considered valid by NIST. In fact, Appendix D of the final version of SP800-157 discusses an effort by NIST to obtain alternative guidance from OMB that would remove the need to comply with requirement number 2 in the same memorandum:

2. Allow remote access only with two-factor authentication where one of the factors is provided by a device separate from the computer gaining access;

It is not clear why NIST thinks that requirement 2 matters but ignores requirement 1.

A reason why NIST omitted the requirement to encrypt derived credentials in the final version of SP800-157 may be the difficulty of figuring out what key to use for encryption, and how to manage such a key. As discussed above, the naive solution to that problem hinted at in the draft version, in which the encryption key is derived from the PIN or password, provides no security, because it makes the PIN or password vulnerable to an offline brute-force guessing attack.

There are solutions to the problem, however. Some solutions make use of a biometric for derived credential activation instead of a PIN or password, as suggested in several comments, and rely on the biometric for deriving or protecting the credential-encryption key. This is a big topic that I plan to discuss in the next blog post of this series.

Other solutions entrust the credential-encryption key to a trusted key storage service in the cloud. Storing a data encryption key on a server is a technique mentioned in SP800-111 (Section 4.2.1, page 4-4, lines 6-7). The activation PIN or password could be used to authenticate to the server and retrieve the encryption key; this is more secure than deriving the encryption key from the PIN or password, because it does not expose the PIN or password to an offline brute-force attack. In our comments, we proposed an even more secure method of retrieving the encryption key, in which the device authenticates to the key storage service cryptographically, using a device authentication credential that is regenerated on demand from a protocredential and the activation PIN or password, without exposing the PIN or password to an offline brute-force attack.

Regrettably, NIST included in the list of comments (as comment 9) an excerpt of our proposal which, taken out of context, could be construed as meaning that the device authentication credential is to be used as the derived PIV credential. It then rejected the proposal, arguing that such a credential “could only be electronically verified by the agency that issued the credential”, rather than by all agencies. Our full comments, however, make it perfectly clear that the device authentication credential is only used to retrieve the encryption key, which is in turn used to decrypt the derived PIV credential. Our derived PIV credential is no different from the one specified in SP800-157, and can therefore be verified by all agencies, not just by the agency that issued it.
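
To clarify the role of the device authentication credential, here is a highly simplified sketch of the activation flow in this kind of solution. The names and interfaces are hypothetical, and the device-authentication step, which in our proposal regenerates the device credential from a protocredential and the activation PIN, is abstracted away as a placeholder; the point of the sketch is only that the retrieved key decrypts an ordinary SP800-157 derived PIV credential, which any agency can verify.

```python
# Highly simplified sketch of activating a derived credential whose private key
# is wrapped under a credential-encryption key held by a key storage service.
# All names and interfaces are hypothetical.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def authenticate_to_key_service(protocredential, pin):
    """Placeholder: regenerate a device authentication credential from the
    protocredential and the PIN, and authenticate cryptographically to the
    key storage service; the protocol details are outside this sketch."""
    raise NotImplementedError

def activate_derived_credential(protocredential, pin, wrapped_piv_key, nonce):
    # 1. Authenticate to the key storage service. A wrong PIN fails here,
    #    where the service can throttle or block guessing attempts, so the
    #    PIN is not exposed to an offline brute-force attack.
    session = authenticate_to_key_service(protocredential, pin)

    # 2. Retrieve the credential-encryption key, which is never stored on the device.
    cek = session.get_credential_encryption_key()

    # 3. Decrypt the Derived PIV private key locally; it can then be used to
    #    authenticate to any agency's information system, like any SP800-157 credential.
    return AESGCM(cek).decrypt(nonce, wrapped_piv_key, None)
```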
