Highlights of the NIST Workshop on PIV-Related Special Publications

This is Part 5 of a series discussing the public comments on Draft NIST SP 800-157, Guidelines for Derived Personal Identity Verification (PIV) Credentials and the final version of the publication. Links to all the posts in the series can be found here.

On March 3-4, NIST held a Workshop on Upcoming Special Publications Supporting FIPS 201-2. The FIPS 201 standard, Personal Identity Verification (PIV) of Federal Employees and Contractors, leaves out many details to be specified in a large number of Special Publications (SPs). The purpose of the workshop was to discuss SPs being added or revised to achieve alignment with version 2 of the standard, FIPS 201-2, which was issued in September 2013. An agenda with links to the presentations and an archived webcast of the workshop are now available.

I attended the workshop, via webcast, mostly because some of the topics to be discussed were related to derived credentials. In this post I report on some of those topics, plus on three other topics that were quite interesting even though not directly related to derived credentials: (i) the resolution of a controversy on whether to use a pairing code to authenticate a computer or physical access terminal to the PIV card; (ii) the security of methods for physical access control, including new methods to be introduced in the next version of SP 800-116; and (iii) the difficulties caused by having to certify cryptographic modules to FIPS 140. I will leave two other topics for the next two posts in this series: (i) why it takes four seconds to gain physical access using the asymmetric Card Authentication Key, and what to do about it; and (ii) whether Bluetooth has a role to play in connection with derived credentials.

The pairing code controversy

FIPS 201-2 introduced “secure messaging” as an optional feature of contactless PIV cards. Secure messaging (SM) refers to the establishment of a cryptographically secure channel between the card and a Physical Access Control System (PACS) terminal, a workstation, or a mobile device, over an underlying NFC channel. The key establishment phase of the secure channel protocol is based on “a simplified profile of OPACITY with Zero Key Management” according to Revised Draft SP 800-73-4, Section 4.1.

A shared secret is established between the card and the terminal (or computer) using the Elliptic Curve Cryptography Cofactor Diffie-Hellman (ECC CDH) primitive specified in SP 800-56A, Section 5.7.1.2. The card uses a static (i.e. long-term) Diffie-Hellman (DH) key pair, while the terminal or computer uses an ephemeral key pair; this means that the key establishment phase authenticates the card to the terminal, not the terminal to the card, and that there is no forward secrecy. The static public key of the card is certified by a Card Verifiable (CV) Certificate. CV certificates, standardized in Part 8 of ISO/IEC 7816, are easier to parse than X.509 certificates, and are used for other purposes, notably in ePassports. Notice that, although it is “card verifiable”, the certificate is verified by the terminal rather than the card.
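To make the asymmetry of the exchange concrete, here is a minimal Python sketch using the `cryptography` package: the card side holds a static key pair while the terminal side generates an ephemeral one, and both compute the same shared secret. This is only an illustration of the Diffie-Hellman pattern, not the actual SP 800-73-4 cipher suite or key schedule; in particular, the HKDF step stands in for the key derivation specified there, and for a curve such as P-256, whose cofactor is 1, plain ECDH coincides with the cofactor variant.

```python
# Sketch of the asymmetric key establishment in SM: the card contributes a
# static (long-term) key pair, the terminal an ephemeral one. Only the card
# is authenticated, and there is no forward secrecy with respect to the
# card's static key. Illustrative only, not the SP 800-73-4 key schedule.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Card side: static key pair, its public key certified by a CV certificate
# (certificate handling omitted here).
card_static_priv = ec.generate_private_key(ec.SECP256R1())
card_static_pub = card_static_priv.public_key()

# Terminal side: fresh ephemeral key pair for every connection.
terminal_eph_priv = ec.generate_private_key(ec.SECP256R1())
terminal_eph_pub = terminal_eph_priv.public_key()

# Both sides compute the same shared secret (for P-256 the cofactor is 1,
# so plain ECDH coincides with the cofactor variant).
z_terminal = terminal_eph_priv.exchange(ec.ECDH(), card_static_pub)
z_card = card_static_priv.exchange(ec.ECDH(), terminal_eph_pub)
assert z_terminal == z_card

# Derive session keys from the shared secret (HKDF used here purely for
# illustration; SP 800-73-4 specifies its own KDF and key hierarchy).
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative SM session key").derive(z_terminal)
```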

(OPACITY, which stands for Open Protocol for Access Control, Identification, and Ticketing with privacY, has been registered as an ISO/IEC 24727-6 authentication protocol, and is specified in INCITS 504, a.k.a. ANSI 504 or GICS, but does not seem to be in actual use. The key establishment protocol of SM is a simplification of the one in OPACITY in that it omits a session resumption feature known as “persistent binding”. OPACITY has a vulnerability, described in the paper A Cryptographic Analysis of OPACITY by Dagdelen et al. (see the 2nd and 3rd paragraphs of Appendix A.1). The NIST announcement of Revised Draft SP 800-73-4 stated that the key establishment protocol in the initial draft was modified for purposes including addressing security issues raised in the paper; however, the vulnerability involves persistent binding, and thus does not affect the NIST SM protocol. Actually, the NIST SM protocol was modified so that the card would send the GUID card identifier in the clear rather than encrypted under a session key. Since the terminal does not authenticate, such encryption would be useless, because an attacker could trivially decrypt the identifier with the session key after establishing a session. Comments NSA-2 and NSA-3 allude to this in the table of comments on the initial draft of SP 800-73-4.)

Table 2 in the Revised Draft of SP 800-73-4 Part 1 (pages 14-15) lists the “access rules” for reading data stored in a PIV card. The entry “PIN” in a row for a data item means that the card must be activated by entering a PIN in order to read the item. The entry “PIN or OCC” means that the card must be activated by entering a PIN or a fingerprint used in an on-card comparison (OCC) to a template stored in the card. The entry “Always” means that the item can be read even if the card is not activated. The table shows that all the X.509 certificates in the card can be read without activating the card. Those certificates include the PIV Authentication certificate, the Card Authentication certificate, the Digital Signature certificate, the Key Management certificate used for email encryption, and up to 20 Retired Key Management certificates whose associated private keys are used for decrypting older emails. Except for the Card Authentication certificate, all these X.509 certificates contain personally identifiable information (PII). To prevent that PII from being read over NFC through mere proximity of a reader to the card, NIST added a mechanism for authenticating the NFC reader to the card after the SM key establishment phase. Without such a mechanism, an attacker could read certificates from a card carried without a protective sleeve in a federal employee’s pocket, simply by stepping into close proximity of the employee, something that could be done without raising suspicion in a crowded public space such as the Washington DC Metro.
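As a rough illustration of how such access rules work, here is a hypothetical Python sketch; the item names and rule assignments are illustrative placeholders rather than a transcription of Table 2, whose point of interest here is that every certificate is readable under the “Always” rule.

```python
# Hypothetical sketch of per-item read access rules. The entries below are
# illustrative, not the exact contents of Table 2 of SP 800-73-4 Part 1.
READ_ACCESS_RULES = {
    "PIV Authentication certificate": "Always",
    "Card Authentication certificate": "Always",
    "Digital Signature certificate": "Always",
    "Key Management certificate": "Always",
    "Fingerprint templates": "PIN",
    "Printed information": "PIN or OCC",
}

def may_read(item: str, pin_verified: bool, occ_verified: bool) -> bool:
    """Return True if the data item may be read in the current card state."""
    rule = READ_ACCESS_RULES[item]
    if rule == "Always":
        return True          # readable even without card activation
    if rule == "PIN":
        return pin_verified
    if rule == "PIN or OCC":
        return pin_verified or occ_verified
    return False
```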

The added mechanism for achieving mutual authentication is a “pairing code”, consisting of 8 decimal digits, which is generated at random by the card issuer and stored in the card. The cardholder pairs the card to a mobile device by entering the pairing code into the device, where it may be cached indefinitely and used to authenticate the device to the card after each SM connection establishment. (It is not clear how the pairing code would be entered into an NFC terminal near a door or turnstile.) When the pairing code is used in conjunction with SM to achieve a secure channel over NFC with mutual authentication, the contactless interface of the card is referred to as the “Virtual Contact Interface (VCI)”. (Confusingly, the term “contact interface” is sometimes used to refer to both the physical and the virtual contact interfaces. This is the case, in particular, in the above-mentioned Table 2 of SP 800-73, according to Footnote 11.)

The pairing code cannot be changed by the cardholder. To mitigate the burden of having to memorize a random 8-digit pairing code in addition to the 6-to-8 digit card activation PIN, SP 800-73 suggests printing the pairing code on the back of the card. There is no prohibition against letting the cardholder use the pairing code as the PIN, which less security-conscious cardholders may be tempted to do.

The introduction of the pairing code in the first draft of SP 800-73-4 drew a very strong negative reaction from the federal agencies, as noted in the announcement of the revised draft and as seen in a large number of strongly worded public comments on the draft. In response to these comments, NIST announced at the workshop that it is planning to make a further change to SP 800-73, discussed in Slide 9 of Hildegard Ferraiolo’s presentation PIV Card Specification Update (SP 800-73-4). The card issuer will be allowed to remove the pairing code requirement. This will require approval by a Designated Approving Authority and unspecified compensating controls.

I may be missing something, but I don’t see why X.509 certificates should be readable without card activation. Card activation is required for using the private keys associated with the certificates, so why not require it for reading the certificates? Any comments suggesting an explanation would be appreciated. If card activation were required for reading certificates, the pairing code might not be necessary.

Physical access security

As I pointed out in Part 3 of this series, SP 800-116, A Recommendation for the Use of PIV Credentials in Physical Access Control Systems (PACS), is out of date; it has not been revised since it was issued in November 2008. At the workshop, a talk by Ketan Mehta with slides by David Cooper discussed changes that NIST is planning to make to this publication, including the addition of new authentication mechanisms and deprecation of insecure ones.

A level-of-assurance (LOA) table in slide 3 summarizes and classifies no fewer than 13 mechanisms that are or will be available in PIV cards for use in physical access control. The table greatly overestimates the security provided by some of these mechanisms. In particular, the last row of the table, labeled “VERY HIGH confidence” (in the identity of the cardholder), includes seven mechanisms with widely different security postures, some of them quite weak.

The first row of the LOA table, labeled “LITTLE or NO confidence”, includes two mechanisms, VIS and CHUID, in a dim font that suggests that they are or will be deprecated. CHUID was indeed deprecated by FIPS 201-2. Presumably VIS will be deprecated by the upcoming version of SP 800-116. In CHUID, the PACS reads and verifies the signature on a signed data structure of the same name, which contains two identifiers (FASC-N, and an encoding of the Card UUID known as GUID) and the card’s expiration date. The CHUID cannot be fabricated because of the signature, but it can be cloned by reading it from a valid card and storing it in a fake card. VIS refers to the visual inspection of the card by a guard.
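The following Python sketch, with illustrative field names rather than the exact SP 800-73 data model, shows why CHUID verification provides little or no confidence: the check is purely over signed data, with no challenge-response proving that the card presenting the CHUID is the card it was issued to, so a cloned CHUID verifies just as well as the original.

```python
# Sketch of CHUID verification: signature and expiration are checked, but
# nothing proves that the card presenting the CHUID is genuine, so a copied
# CHUID on a fake card passes. Field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Chuid:
    fasc_n: bytes          # agency/credential identifier
    guid: bytes            # encoding of the Card UUID
    expiration: date
    signature: bytes       # issuer signature over the fields above

def verify_chuid(chuid: Chuid, verify_issuer_signature) -> bool:
    """Accept the CHUID if the issuer signature verifies and it is unexpired.

    Note: this is a pure data check, with no challenge-response involving
    the card, which is why a cloned CHUID is indistinguishable from the
    original.
    """
    if chuid.expiration < date.today():
        return False
    signed_fields = (chuid.fasc_n + chuid.guid
                     + chuid.expiration.isoformat().encode())
    return verify_issuer_signature(signed_fields, chuid.signature)
```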

If VIS is deprecated, then BIO-A, listed in the last row of the LOA table as providing two-factor authentication, should be downgraded to one-factor authentication. In BIO-A the PACS reads a signed fingerprint template after the cardholder activates the card by entering a PIN, and compares the template off-card to a fingerprint entered by the cardholder in the presence of a human attendant; BIO is like BIO-A, without the attendant. As I pointed out in Part 3, the current version of SP 800-116 classifies BIO as one-factor authentication because the card is not authenticated and, consequently, the PIN provides no security if the card is fake. BIO-A is classified as two-factor because the attendant will visually inspect the card in addition to collecting the fingerprint; but if VIS is deprecated, such visual inspection should not be deemed to provide additional security, and may not even be performed.

The LOA table includes four authentication mechanisms that are not in the current version of SP 800-116: SM-AUTH, SM-AUTH + PIN, SM-AUTH + BIO, and OCC-AUTH.

SM-AUTH is a new mechanism that is not mentioned in FIPS 201-2. It relies on the authentication of the card that takes place during the SM key establishment phase. The PACS terminal tries to establish an SM connection to the card, and deems the card to be authenticated if the connection is successfully established. The cardholder is indirectly identified by the GUID encoding of the Card UUID, which is present in the CV certificate that binds the card’s static DH public key. In SM-AUTH + PIN the cardholder activates the card with a PIN over SM. In SM-AUTH + BIO the terminal retrieves a fingerprint template or an iris image from the card over SM, after the card has been activated with a PIN, for off-card comparison to a fingerprint or iris image collected from the cardholder.
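Here is a hypothetical sketch of the SM-AUTH decision logic at a PACS terminal, based on the description above; the helper objects (`terminal`, `channel`, the registration database) are assumptions made for illustration, not part of any NIST specification.

```python
# Hypothetical sketch of SM-AUTH at a PACS terminal: the card counts as
# authenticated if SM establishment succeeds, because success implies the
# card holds the private key matching its CV-certified static DH public key.
# The cardholder is then identified by the GUID in the CV certificate.
def sm_auth(terminal, card, registered_cardholders):
    """Return the registered cardholder record for the card, or None."""
    try:
        channel = terminal.establish_secure_messaging(card)  # SM key establishment
    except ConnectionError:
        return None  # card not authenticated
    guid = channel.cv_certificate.guid
    # Drawback discussed below: no revocation checking is defined for the
    # CV certificate, so a local registration lookup is the only gate.
    return registered_cardholders.get(guid)
```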

Two drawbacks of SM-AUTH (and its extensions SM-AUTH + PIN and SM-AUTH + BIO) were pointed out by Mehta and Ferraiolo. Slide 6 of Mehta’s presentation on physical access control says that SM-AUTH does not include revocation checking. It is not clear why this is the case. SM uses a CV certificate rather than an X.509 certificate, but I don’t see why a CV certificate would not be revocable. Slide 7 of Ferraiolo’s PIV card specification update states that, because SM-AUTH is optional, it is not a candidate for use at an inter-agency access point; i.e., it cannot be used to allow entrance to a building of one federal agency by employees of other federal agencies.

In OCC-AUTH, a PACS terminal establishes an SM channel to the card, thus authenticating the card, and sends a fingerprint collected from the cardholder to the card, where it is compared against a template. I discussed the weaknesses of OCC-AUTH in Part 3: there is no revocation checking, no PIN is required, no human attendant is required, and only an easy-to-spoof fingerprint biometric can be used. (There may be an iris biometric in the card, but it cannot be used.) OCC-AUTH cannot be said to provide very high security, contrary to what is suggested by its classification in slide 3 of the physical access control presentation. More generally, any authentication method that does not include a revocation check should not be deemed to provide very high security. That includes BIO-A, OCC-AUTH, SM-AUTH + PIN, SYM-CAK + BIO and SM-AUTH + BIO, all of which appear in the last row of the LOA table.

It should be noted that some authentication methods may provide high confidence in the identity of the cardholder, and yet provide little security. An example is SM-AUTH + BIO which provides three-factor authentication by PIN, fingerprint, and proof of possession of a cryptographic credential, and is included in the last row of the LOA table as providing very high confidence in the identity of the cardholder. Another example is SM-AUTH + BIO-A, not mentioned in the LOA table, which provides even higher confidence because it is harder to spoof a fingerprint in the presence of a guard. These mechanisms provide little security because they do not check for revocation, and thus allow any former federal employee who has refused to surrender his or her PIV card to continue entering government buildings for an indefinite period of time.

Confidence in the identity of the cardholder is not an appropriate security criterion for access control. Access control security requires both authentication security and authorization security, and authorization security requires a check for credential revocation.
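To spell out the distinction, here is a minimal sketch of an access decision that separates the two requirements; the CRL-style revocation check is a generic PKI pattern used for illustration, not a mechanism currently defined for the CV certificates used by SM-AUTH.

```python
# Sketch of the point made above: an access decision needs both
# authentication and a revocation/authorization check. The serial-number
# revocation list is a generic PKI pattern, shown only for illustration.
def grant_access(authenticated_identity, certificate_serial,
                 revoked_serials: set, access_list: set) -> bool:
    if authenticated_identity is None:
        return False                                  # authentication failed
    if certificate_serial in revoked_serials:
        return False                                  # credential revoked (e.g. ex-employee)
    return authenticated_identity in access_list      # authorization policy
```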

Coping with FIPS 140

FIPS 140, Security Requirements for Cryptographic Modules, is a NIST standard that is used by NIST-accredited laboratories to certify cryptographic modules used in products sold to the federal government. The standard and the certification program have been so successful that they are widely used internationally to certify products not necessarily intended for the US government.

But FIPS 140 is an old standard, which has not been revised since FIPS 140-2 was issued in May 2001. The evolution of technology over the last 14 years has caused not just the details but also the underlying philosophy of the standard to become out of date.

FIPS standards are supposed to be revised every five years. An effort to revise FIPS 140-2 was initiated in 2005 and led to a draft of FIPS 140-3 being published in 2009, but the effort was later abandoned. There are rumors that ISO 19790:2012 is a candidate to succeed FIPS 140-2. But neither the draft of FIPS 140-3 nor ISO 19790:2012 seems to embody the fundamental rethinking needed to catch up with current technologies. Meanwhile, in the absence of a revised version of FIPS 140, adherence to the old FIPS 140-2 is causing difficulties for the implementation of both derived credentials and PIV cards.

One difficulty caused by FIPS 140-2 is that it requires power-up and run-time self-tests. In measurements made by the General Services Administration (GSA) and reported at the workshop in a presentation by Chi Hickey of GSA (see slide 5, “Crypto Pre-Checks”) and another presentation by Apostol Vassilev of NIST (see the entry “FIPS 140-2 POST” in slide 2, where POST stands for Power-On Self-Test), those self-tests contributed between 0.6 and 1.0 seconds to the four seconds it takes to gain physical access using the asymmetric Card Authentication Key stored in a PIV card. (The four-second delay is a topic that was extensively debated at the workshop and that I plan to discuss in the next post.)
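For readers unfamiliar with what a power-up self-test involves, here is a minimal Python sketch of a known-answer test of the kind FIPS 140-2 requires before a module may perform cryptographic operations; an actual module runs a battery of such tests for each approved algorithm, which is what adds up to the 0.6 to 1.0 seconds measured by GSA.

```python
# Sketch of a FIPS 140-2 power-up self-test: a known-answer test (KAT)
# compares an algorithm's output against a precomputed expected value
# before the module will perform any cryptographic operation.
import hashlib

# Known-answer vector: SHA-256 of the empty string (a published value).
_SHA256_EMPTY = bytes.fromhex(
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")

def power_on_self_test() -> None:
    """Raise if the module's SHA-256 implementation fails its KAT."""
    if hashlib.sha256(b"").digest() != _SHA256_EMPTY:
        raise RuntimeError("FIPS self-test failed: SHA-256 KAT mismatch")

power_on_self_test()
```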

Self-tests also affect derived credentials in several ways. One of the comments on Draft SP 800-157, comment 360 by Giesecke & Devrient, noted that they could conflict with UICC performance requirements, if derived credentials were to be stored in a UICC chip within a mobile device; that point was also made in a presentation by Giesecke & Devrient at the workshop, discussed below. Frequent computationally expensive self-tests of cryptographic algorithms, of doubtful security value, could also severely affect battery life. Self-tests need to be thoroughly rethought in preparation for the long-overdue revision of FIPS 140-2.

A second difficulty caused by FIPS 140-2 is the need to recertify a cryptographic module after every minor change. Another one of the comments on Draft SP 800-157, comment 252 by Emergent LLC, pointed out that this is infeasible for derived credentials, as the rate of change in mobile device hardware and software far exceeds the rate at which recertification is possible. The same point was made in a comment during a question-and-answer exchange at the workshop, and in a USDA pilot presentation, discussed below. But recertification has also become a problem for PIV cards, as the rate of card firmware changes accelerates. This was addressed at the workshop by Ketan Mehta in his talk on INCITS 504, a draft standard of the InterNational Committee for Information Technology Standards, also known as ANSI 504 or GICS. Slide 8 proposes, as “Use Case 1” for the standard, to “Maintain FIPS 140-2 Certification”. The idea is to implement a PIV card by installing a PIV application on a GICS platform implemented in the card and certified against FIPS 140-2. It seems that this makes it allowable to modify the card without having to recertify it, but I did not understand how. Any comments explaining that would be appreciated.

A third difficulty caused by FIPS 140-2 is specific to derived credentials. FIPS 140-2 relies exclusively on physical tamper resistance to protect a cryptographic module in a mobile device against physical capture of the device by an adversary. By contrast, mobile devices protect data today by encrypting it. The next version of FIPS 140 should allow encryption as an alternative or supplement to physical tamper resistance for the protection of sensitive security parameters and data kept in a cryptographic module. I argued in Part 2 that derived credentials should be encrypted.
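As a sketch of that alternative, the following Python code wraps a derived credential's private key with a key that is not stored alongside it, for example a key derived from a passcode or fetched from an external key store as discussed in Part 2; AES-GCM from the `cryptography` package is used purely for illustration and is not prescribed by any of the publications discussed here.

```python
# Sketch of protecting a derived credential's private key by encryption
# rather than (or in addition to) physical tamper resistance: only the
# wrapped blob is stored on the device, and the wrapping key comes from
# elsewhere (passcode derivation or an external key store). Illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_private_key(private_key_der: bytes, wrapping_key: bytes) -> bytes:
    """Encrypt the serialized private key; only the ciphertext is stored.

    wrapping_key must be 16, 24, or 32 bytes (e.g. derived from a passcode).
    """
    nonce = os.urandom(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, private_key_der, b"")

def unwrap_private_key(blob: bytes, wrapping_key: bytes) -> bytes:
    """Recover the private key when the wrapping key is made available."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(wrapping_key).decrypt(nonce, ciphertext, b"")
```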

Derived Credentials

Most of the second day of the workshop was dedicated to derived credentials. Sadly, there was no discussion of the main outstanding issue concerning derived credentials: the obvious security gap created by storing them in “software tokens” without encryption or physical tamper resistance. But the workshop did discuss other topics related to derived credentials, and provided interesting information.

The portion of the workshop concerned with derived credentials started with a presentation on SP 800-157 by Hildegard Ferraiolo, and a presentation on a forthcoming certificate policy for derived credentials by Matt King and Wendy Brown of the Federal Public Key Infrastructure Policy Authority (FPKIPA). The latter was not one of the presentations ignoring the narrow-scope policy. On the contrary, it emphasized that only some uses of derived credentials are allowed and specifically highlighted uses that are prohibited.

The presentation listed in the agenda as Derived PIV Credential Test Requirements and Conformance, by Ramaswamy Chandramouli, was a status update on a new NIST Special Publication, SP 800-166, whose first draft is expected to be published on April 24, 2015. The publication will provide guidelines for the testing of the Derived PIV Applications that will store and manage derived credentials, but will only be concerned with “non-embedded tokens” such as microSD, USB or UICC tokens, the testing of “embedded tokens” (which include software tokens, TEE tokens, and Embedded-Secure-Element tokens) being deemed a hard problem.

The presentation listed as Derived PIV Credential Issuer Accreditation, also by Ramaswamy Chandramouli, provided an overview of Draft SP 800-79-2, published in June 2014. (The slides say June 2015, but that must be a typo.) This revision of SP 800-79 adds 74 “controls”, 53 of them specifically related to derived credentials. Publication of the final version of SP 800-79-2 is expected soon.

The presentation listed as NCCoE Proof of Concept for Derived PIV Credential, by Jeffrey Cichonsky, described plans for a proof of concept implementation of derived credentials. NCCoE is the National Cybersecurity Center of Excellence. The slides include a Derived PIV Lifecycle chart (slide 9) and a workflow for the issuance of derived credentials at Level Of Assurance (LOA) 4 (slide 10).

The NCCoE talk was followed by two presentations describing ongoing derived credential pilots at the Department of Defense (DoD) and the Department of Agriculture (USDA).

Greg Youst of the Defense Information Systems Agency (DISA) discussed a DoD iOS Soft Certificate Pilot involving 14 users. The pilot uses software tokens, NSA having told DISA that keeping derived credentials in the key store is “good enough for iOS and Samsung”. Initially, credentials were generated, certified and installed manually, but that took 2.5 hours per device and required a 70-slide presentation for training Registration Authority (RA) personnel. An over-the-air (OTA) provisioning flow is now being implemented for iOS, to be followed by OTA provisioning flows for Samsung devices, and “down the road” for Windows and Blackberry devices. The presentations listed on the agenda as Side Loading Software Certificates on CMDs and Purebred were not presented; I believe the first one describes the initial provisioning method and the second one the OTA method.

The USDA pilot was described by Adam Zeimet. The talk included a wealth of information that is not in the slides, so I recommend watching the presentation in the archived webcast if interested (at the end of Day 2, Part 2). A summary follows.

The number of mobile devices in use at the USDA has doubled over the last year to more than 15,000. Mobile devices allow USDA employees doing field work to spend up to 4 days in the field, instead of 1-2 days in the field followed by 3-4 days of data entry in the office. But when using mobile devices they have to log in to USDA servers with a username and password rather than a PIV card.

The initial goal of the pilot was to provide secure authentication on mobile devices, for three purposes: to access Exchange ActiveSync servers; to log in to an eAuthentication portal that provides access to more than 450 USDA web apps; and for native apps to access their remote backends. But another important goal has emerged: to use a smart phone as a PIV backup for user authentication on a laptop or desktop. This would help achieve full PIV compliance, and would address recurring complaints of employees whose card is lost, stolen, or forgotten when leaving for a field trip, and who are then unable to do their work for several days. (Notice that this is one of the “prohibited” uses of derived credentials.)

The pilot uses a software implementation of derived credentials. The first phase of the pilot is a proof of concept in a lab environment. 75-80% of target use cases have been successfully tested, but authentication to an Exchange ActiveSync server is not working yet. A second phase will extend testing to USDA subagencies or bureaux, and a third phase will reassess the technology before deploying it throughout the agency.

Challenges include: integrating many native apps that are not PIV-enabled or even PKI-enabled today; finding device-agnostic solutions that work across iOS, Android and Windows devices; and keeping FIPS 140 level 1 validation up-to-date in a rapidly changing mobile environment. On the other hand the PIV backup work looks promising. It relies on a Bluetooth connection between the phone and the laptop or desktop, using dongles that provide “extra layers of encryption” on top of Bluetooth.

The pilot presentations were followed by two private-sector talks on Derived PIV Credentials token form factors. In his presentation, Christopher Goyet of Oberthur compared the pros and cons of PIV cards and derived credentials stored in embedded security elements, UICC chips, and microSD cards. He made the interesting point that a card features a fallback authentication mechanism for physical access (visual inspection of the card by a guard) when electronic verification becomes unavailable. In his presentation, Werner Ness of Giesecke & Devrient recapitulated the concept of Derived PIV Credentials and the form factors that are being considered for carrying them, including a software key store, a Trusted Execution Environment, a USB token, a UICC chip, a secure microSD card, and an embedded secure element. Then he described the products and activities of Giesecke & Devrient related to derived credentials. Both speakers mentioned a variety of use cases for derived credentials, including use cases such as physical access that are prohibited by NIST and FPKIPA, as discussed above.

The workshop concluded with a most interesting talk by Bill Burr on Future Tokens for Derived PIV Credentials, including an in-depth discussion of Bluetooth and whether it should play a role in connection with derived credentials. But I leave that for the blog post after next. In the next post I intend to discuss the four-second physical access problem.

2 thoughts on “Highlights of the NIST Workshop on PIV-Related Special Publications”

  1. Thanks for all the effort you put into this report.
    Are there any efforts ongoing to have only an identifier on/in the token/device and store the PIV in a highly protected “credential store”? This would hopefully make dissemination of PIV more restricted and attributes more up to date. It also means that I, as a person, can have several identities/identifiers and control the use of my PIV, similar to what happens when I install an app on my smartphone and it asks for permission to use certain functionalities.

  2. The Backend Attribute Exchange (BAE) allowed a US federal agency to look up attributes of a PIV cardholder based on the FASC-N or UUID identifier in the PIV card. This allows attributes to be kept up-to-date in a backend service without having to update the card as the attributes change, as you suggest in your comment. However, work on the BAE does not seem to have progressed beyond the pilot stage.
    In your comment, you seem to suggest storing a PKI credential (consisting of a public key certificate and the associated private key) in an external credential store, e.g. in the cloud, and fetching it into a mobile device as needed. But storing the private key in the cloud would expose it to capture and would forfeit non-repudiation. It would be better to store the credential in the device, encrypted under an encryption key that can itself be stored in an external key store. I discussed that solution in an earlier post of this series.
