Invited Talk at the University of Utah

I’ve been so busy that I haven’t had time to write for more than three
months, which is a pity because things have been happening and there
is much to report. I’m trying to catch up today.

The first thing to report is that Prof. Gopalakrishnan of the
University of Utah invited Karen Lewison and me to give a joint talk
at the University on May 29. We talked about the need to replace TLS,
which I’ve discussed earlier on this blog. The slides can be found at
the usual location for papers and presentations, at the bottom of each
page of this web site.

The University of Utah has a renowned School of Computing and it was
quite stimulating to meet with faculty and discuss research after the
talk. We were happy to discover common research interests, and we have
been exploring the possibility of doing joint research work with
Profs. Ganesh Gopalakrishnan, Sneha Kasera, and Tammy Denning; we are
thrilled that the prospects look promising.

Other news to report: we had papers accepted at the forthcoming M2MSec
workshop and the forthcoming GlobalPlatform TEE conference. I will
report on those in the next two posts.

It’s Time to Redesign Transport Layer Security

One difficulty faced by privacy-enhancing credentials (such as U-Prove
tokens, Idemix anonymous credentials, or credentials based on group
signatures) is the fact that they are not supported by TLS. We noticed
this when we looked at privacy-enhancing credentials in the context of
NSTIC, and we proposed an architecture for the NSTIC ecosystem that
included an extension of TLS to accommodate them.

Several other things are wrong with TLS. Performance is poor over
satellite links due to the additional roundtrips and the transmission
of certificate chains during the handshake. Client and attribute
certificates, when used, are sent in the clear. And there has been a
long list of TLS vulnerabilities, some of which have not been
addressed, while others are addressed in TLS versions and extensions
that are not broadly deployed.

The November SSL Pulse reported that only 18.2% of surveyed web sites
supported TLS 1.1, which dates back to April 2006, only 20.7%
supported TLS 1.2, which dates back to August 2008, and only 30.6% had
server-side protection against the BEAST attack, which requires either
TLS 1.1 or TLS 1.2. This indicates upgrade fatigue, which may be due
to the age of the protocol and the large number of versions and
extensions that it has accumulated during its long life. Changing the
configuration of a TLS implementation to protect against
vulnerabilities without shutting out a large portion of the user base
is a complex task that IT personnel are no doubt loath to tackle.

So perhaps it is time to restart from scratch, designing a new
transport layer security protocol — actually, two of them, one
for connections and the other for datagrams — that will
incorporate the lessons learned from TLS — and DTLS —
while discarding the heavy baggage of old code and backward
compatibility requirements.

We have written a new white paper that recapitulates the drawbacks of
TLS and discusses ingredients for a possible replacement.

The paper emphasizes the benefits of redesigning transport layer
security for the military, which has particularly strong reasons to
want better transport layer security protocols. The military should be
interested in better performance over satellite and radio links, for
obvious reasons. It should be interested in increased security,
because so much is at stake in the security of military networks. And
I would argue that it should also be interested in increased privacy,
because what is viewed as privacy on the Internet may be viewed as
resistance to traffic analysis in military networks.

Feedback on the Paper on Privacy Postures of Authentication Technologies

Many thanks to everyone who provided feedback on the paper on privacy
postures of authentication technologies, which was announced in the
previous blog post. The paper was discussed on the Identity Commons
mailing list, and we also received feedback at the ID360 conference,
where we presented the paper, and at IIW 16, where we showed a poster
summarizing the paper. In this post I will recap the feedback that we
have received and the revisions that we have made to the paper based
on that feedback.

Steven Carmody pointed out that SWITCH, the Swiss InCommon
federation, has developed an extension of Shibboleth called uApprove
that allows the identity or attribute provider to ask the user for
consent before disclosing attributes to the relying party. Ken
Klingenstein told us that the Scalable Privacy NSTIC pilot is
developing a privacy manager that will let the user choose what
attributes will be disclosed to the relying party by the Shibboleth
identity provider. We have added references to these Shibboleth
extensions to Section 4.2 of the paper.

The original paper explained that, although a U-Prove token does not
provide multishow unlinkability, the user may obtain multiple tokens
from the issuer, and present different tokens to different relying
parties. Christian Paquin said that a U-Prove credential is defined
as a batch of such tokens, created simultaneously by an efficient
parallel procedure. We have added this definition of a U-Prove
credential to Section 4.3.

Christian Paquin also pointed out that a U-Prove token is a
mathematical concept that can be embodied in a variety of
technologies. He sent me a link to the WS-Trust embodiment, which was
used in CardSpace. We have explained this and included the link in
Section 4.3.

Tom Jones said that what we call anonymity is
called pseudonymity by others. In fact, column 9, labeled
“Anonymity”, covers both pseudonymity, as provided, e.g., by an Idemix
pseudonym or an uncertified key pair or a combination of a user ID and
a password when the user ID is freely chosen by the user, and full
anonymity, as provided when a relying party learns only attributes
that do not uniquely identify the user. I think it is not
unreasonable to view anonymity (the service provider does not learn
the user’s “name”) as encompassing pseudonymity (the service provider
learns a pseudonym instead of the “real name”).

Nat Sakimura provided a lot of feedback, for which we are grateful.
He said that Google and Yahoo implemented OpenID Pairwise Pseudonymous
Identifiers (PPID), i.e. different identifiers for the same user
provided to different relying parties, before ICAM specified its
OpenID profile. We have noted this in Section 4.2 of the revised paper
and changed the label of row 8 to “OpenID (without PPID)”.

He also said that OpenID Connect supports an ephemeral identifier,
which provides anonymity. I was able to find a discussion of an
ephemeral identifier in the archives of the OpenID Connect mailing
list, but no mention of it in any of the OpenID Connect
specifications; so ephemeral identifiers may be added in the future,
but they are not there yet.

Nat also argued that OpenID Connect provides multishow unlinkability
by different parties and by the same party. I disagree, however. The
Subject Identifier in the ID Token makes OpenID Connect authentication
events linkable. Furthermore, OpenID Connect is built on top of OAuth,
whose purpose is to provide the relying party with access to resources
owned by the user by means of an access token. In a typical use case
the relying party gets access to the user’s account at a social
network such as Facebook, Twitter or Google+. It is unlikely that two
relying parties who share information cannot determine that they are
both accessing the same account, or that a relying party cannot
determine that it has accessed the same account on two different
occasions.

Nat said that OpenID Connect can be used for two-party authentication
using a “Self-Issued OpenID Provider”. We have added a checkmark to
row 11, column 1 of the table to indicate this, and an explanation to
Section 4.2.

He also said that OpenID Connect provides group 4 functionality by
allowing the relying party to obtain attributes from “distributed
attribute providers”. We have mentioned this in Section 4.4 of the
revised version of the paper.

Finally, Nat said:

Just by reading the paper, I was not very clear what is the
requirement for Issue-show unlinkability. By issuance, I imagine it
means the credential issuance. I suppose then it means that the
credential verifier (in ISO 29115 | ITU-T X.1254 sense) cannot tell
which credential was used though it can attest that the user has a
valid credential. Is that correct? If so, much of the technology in
group 2 should have n/a in the column because they are independent of
the actual authentication itself. They could very well use anonymous
authentication or partially anonymous authentication (ISO 29191).

The technologies in group 2 are recursive authentication technologies.
The relying party directs the browser to the identity or attribute
provider, which recursively authenticates the user and provides a
bearer credential to the relying party based on the result of the
inner authentication. In all generality there may be multiple inner
authentications, as the identity or attribute provider may require
multiple credentials. So the authentication process may consist of a
tree of nested authentications, with internal nodes of the tree
involving group 2 technologies, and leaf nodes other technologies.
However, rows 5-11 (group 2) are only concerned with the usual case
where the user authenticates to the identity or attribute provider as
a returning user with a user ID and a password or some other form of
two-party authentication; we have now made that clear in Section 4.2
of the revised paper. In that case there is no issue-show
unlinkability.

We have also made a couple of other improvements to the paper,
motivated in part by the feedback:

  • We have replaced the word possession with the word ownership in
    the definition of closed-loop authentication (Section 2), so that
    it now reads: authentication is closed-loop when the credential
    authority that issues or registers a credential is later
    responsible for verifying ownership of the credential at
    authentication time. The motivation for this change is that, in
    group 2, the credential is the information that the identity or
    attribute provider has about the user, and is thus kept by the
    identity or attribute provider rather than by the user.
  • We have added a distinction between two forms of multishow
    unlinkability, a strong form that holds even if the credential
    authority colludes and shares information with the relying parties,
    and a weak form that holds only if there is no such collusion. The
    technologies in group 2 that provide multishow unlinkability provide
    the weak form, whereas Idemix anonymous credentials provide the strong
    form.

Comparing the Privacy Features of Eighteen Authentication Technologies

This blog post motivates and elaborates on the paper
Privacy Postures of Authentication Technologies,
which we presented at the recent ID360 conference.

There is a great variety of user authentication technologies, and some
of them are very different from each other. Consider, for example,
one-time passwords, OAuth, Idemix, and ICAM’s Backend Attribute
Exchange: any two of them have little in common.

Different authentication technologies have been developed by different
communities, which have created their own vocabularies to describe
them. Furthermore, some of the technologies are extremely complex:
U-Prove and Idemix are based on mathematical theories that may be
impenetrable to non-specialists; and OpenID Connect, which is an
extension of OAuth, adds seven specifications to a large number of
OAuth specifications. As a result, it is difficult to compare
authentication technologies to each other.

This is unfortunate because decision makers in corporations and
governments need to decide what technologies or combinations of
technologies should replace passwords, which have been rendered even
more inadequate by the shift from traditional personal computers to
smart phones and tablets. Decision makers need to evaluate and compare
the security, usability, deployability, interoperability and, last but
not least, privacy, provided by the very large number of very
different authentication technologies that are competing in the
marketplace of technology innovations.

But all these technologies are trying to do the same thing:
authenticate the user. So it should be possible to develop a common
conceptual framework that makes it possible to describe them in
functional terms without getting lost in the details, to compare their
features, and to evaluate their adequacy to different use cases.

The paper that we presented at the recent ID360 conference can be
viewed as a step in that
direction. It focuses on privacy, an aspect of authentication
technology which I think is in need of particular attention. It
surveys eighteen technologies: four flavors of passwords and one-time
passwords; the old Microsoft Passport (of historical interest); the
browser SSO profile of SAML; Shibboleth; OpenID; the ICAM profile of
OpenID; OAuth; OpenID Connect; uncertified key pairs; public key
certificates; structured certificates; Idemix pseudonyms; Idemix
anonymous credentials; U-Prove tokens; and ICAM’s Backend Attribute
Exchange.

The paper classifies the technologies along four different dimensions
or facets, and builds a matrix indicating which of the technologies
provide seven privacy features: unobservability by an identity or
attribute provider; free choice of identity or attribute provider;
anonymity; selective disclosure; issue-show unlinkability; multishow
unlinkability by different parties; and multishow unlinkability by the
same party. I will not try to recap the details here; instead I will
elaborate on observations made in the paper regarding privacy
enhancements that have been used to improve the privacy postures of
some closed-loop authentication technologies.

Privacy Enhancements for Closed-Loop Authentication

One of the classification facets that the paper considers for
authentication technologies is the distinction between closed-loop and
open-loop authentication, which I discussed in an earlier post.
Closed-loop authentication means that the credential authority that
issues or registers a credential is later responsible for verifying
possession of the credential at authentication time. Closed-loop
authentication may involve two parties, or may use a third party as a
credential authority, which is usually referred to as an identity
provider. Examples of third-party closed-loop authentication
technologies include the browser SSO profile of SAML, Shibboleth,
OpenID, OAuth, and OpenID Connect.

I’ve pointed out before that third-party closed-loop authentication
lacks unobservability by the identity provider. Most third-party
closed-loop authentication technologies also lack anonymity and
multishow unlinkability. However, some of them implement privacy
enhancements that provide anonymity and a form of multishow
unlinkability. There are two such enhancements, suitable for two
different use cases.

The first enhancement consists of omitting the user identifier that
the identity provider usually conveys to the relying party. The
credential authority is then an attribute provider rather than an
identity provider: it conveys attributes that do not necessarily
identify the user. This enhancement provides anonymity, and multishow
unlinkability assuming no collusion between the attribute provider and
the relying parties. It is useful when the purpose of authentication
is to verify that the user is entitled to access a service without
necessarily having an account with the service provider. This
functionality is provided by
Shibboleth, which can be used,
e.g., to allow a student enrolled in one educational institution to
access the library services of another institution without having an
account at that other institution.

The core OpenID 2.0 specification specifies how an identity provider
conveys an identifier to a relying party. Extensions of the protocol
such as the Simple Registration Extension specify methods by which the
identity provider can convey user attributes in addition to the user
identifier; and the core specification hints that the identifier could
be omitted when extensions are used. It would be interesting to know
whether any OpenID server or client implementations allow the
identifier to be omitted. Any comments?

The second enhancement consists of requiring the identity provider to
convey different identifiers for the same user to different relying
parties. The identity provider can meet the requirement without
allocating large amounts of storage by computing a user identifier
specific to a relying party as a cryptographic hash of a generic user
identifier and an identifier of the relying party such as a URL.
This privacy enhancement is required by the ICAM profile of OpenID.
It achieves user anonymity and multishow unlinkability by different
parties assuming no collusion between the identity provider and the
relying parties; but not multishow unlinkability by the same party. It
is useful for returning user authentication.
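
As an illustration, here is a minimal sketch of the storage-free
computation in Python. The choice of SHA-256 and of the separator is
mine; the description above only calls for some cryptographic hash.

    import hashlib

    def pairwise_identifier(generic_user_id: str, relying_party_url: str) -> str:
        """Derive a relying-party-specific identifier from a generic one.

        No per-pair storage is needed: the identity provider can
        recompute the identifier on every authentication request.
        """
        material = generic_user_id.encode() + b"|" + relying_party_url.encode()
        return hashlib.sha256(material).hexdigest()

    # Two relying parties receive different identifiers for the same user:
    print(pairwise_identifier("alice@idp.example", "https://rp1.example/"))
    print(pairwise_identifier("alice@idp.example", "https://rp2.example/"))

A keyed hash (e.g. HMAC with a secret known only to the identity
provider) would additionally prevent anyone who knows the generic
identifier from computing the pairwise identifiers; the description
above leaves that choice open.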

One-Click OpenID: A Solution to the NASCAR Problem

OpenID allows the user to choose any identity provider, even one
that the relying party has never heard of. This freedom of choice is,
in my opinion, the most valuable feature of OpenID. Unfortunately,
this feature comes with a difficult challenge: how to provide the
relying party with the information it needs to interact with the
identity provider.

OAuth does not have this problem because the relying party has to
preregister with the identity provider, typically a social site, and
therefore must know of the identity provider. An OAuth relying party
displays one or a few buttons labeled with the logos of the social
sites it supports, e.g. Facebook and Twitter, and the user chooses a
site by clicking on a button. But of course freedom of choice is
lost: the user can only use as identity provider a social site
supported by the relying party.

The traditional OpenID user interface consists of an input box
where the user types in an OpenID identifier, which serves as the
starting point of an identity provider discovery process. To compete
with the simplicity of picking a social site by clicking on a button,
some OpenID relying parties present the user with many buttons labeled
by logos of popular OpenID identity providers, in addition to the
traditional input box; but this user interface has been deemed ugly
and confusing to the user. The many logos have been compared to the
many ads on a race car, hence the term NASCAR problem that is
used to refer to the OpenID user interface challenge.

To solve the challenge we propose to let the browser keep track of
the identity provider(s) that the user has signed up with. The list
of identity providers will be maintained by the browser as a user
preference.

An identity provider will be added to the list by explicit
declaration. As the user is visiting the identity provider’s site,
the provider will offer its identity service to the user. The user
will accept the offer by clicking on a button or link. In the HTTP
response to the browser that follows the HTTP request triggered by
this action, the identity provider will include an ad-hoc HTTP header
containing identity provider data, including the OP Endpoint URL. The
browser will ask the user for permission to add the identity provider
to the list and store the identity provider data.
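
To make the declaration step concrete, here is a sketch of such a
response, written as a Python WSGI handler. The header name
X-OpenID-Provider and the JSON encoding of the identity provider data
are hypothetical, since no format for the ad-hoc header has been
specified.

    import json

    def offer_accepted(environ, start_response):
        """Respond to the user's acceptance of the identity service offer.

        The X-OpenID-Provider header name and its JSON value are made
        up; the proposal only calls for an ad-hoc header carrying
        identity provider data, including the OP Endpoint URL.
        """
        idp_data = json.dumps({
            "name": "Example Identity Provider",
            "op_endpoint": "https://idp.example/openid/endpoint",
        })
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            # The browser stores this, after asking the user for permission.
            ("X-OpenID-Provider", idp_data),
        ])
        return [b"Your identity provider can now be added to your browser."]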

A relying party will use a login form containing a single button,
with a label such as Login with OpenID. There need not be any
input box for entering an OpenID identifier, nor any buttons with
logos of particular identity providers. The form will contain a new
ad-hoc non-visual element <idp>. When the form is submitted,
the browser will choose one identity provider from the list and send
its data to the relying party as the value of the <idp> element.
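
Continuing in the same hypothetical vein, a relying party might read
the submitted identity provider data as follows; the assumption that
the <idp> value travels as an ordinary form field named idp, holding
JSON, is mine.

    import json
    from urllib.parse import parse_qs

    def handle_login(environ, start_response):
        """Relying-party handler for the single-button OpenID login form."""
        length = int(environ.get("CONTENT_LENGTH") or 0)
        fields = parse_qs(environ["wsgi.input"].read(length).decode())
        idp = json.loads(fields["idp"][0])  # data the browser stored earlier
        # The OP Endpoint URL replaces the discovery process that would
        # otherwise start from a user-typed OpenID identifier. (A real
        # implementation would append the OpenID checkid parameters.)
        start_response("302 Found", [("Location", idp["op_endpoint"])])
        return [b""]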

Which identity provider is chosen is up to the browser. It could
be the default, or an identity provider that has previously been used
for the relying party, as recorded by the browser, or an identity
provider explicitly chosen by the user from a menu presented by the
browser. The browser could choose to permanently display a menu
showing the user’s list of identity providers as part of the browser
chrome.

We arrived at this solution while thinking about the NSTIC pilot that
we plan to propose. In the planned proposal the
identity provider issues a certificate to the browser, which the
browser imports automatically. A natural extension is to let the
identity provider download to the browser other data besides the
certificate, such as the OP Endpoint URL. Also, a browser user
interface for selecting an identity provider is akin to the user
interface for selecting a client certificate that browsers already
have. We realize that existing user interfaces for certificate
selection are less than optimal, but we believe that this is due to
lack of attention by browser manufacturers to a rarely used feature,
and that better interfaces can be designed.

OpenID Providers Invited to Join in an NSTIC Pilot Proposal

NSTIC has announced funding for pilot projects. Preliminary proposals
are due by March 7 and full proposals by April 23. There will be a
proposer’s conference on February 15, which will be webcast live.

We are planning to submit a proposal and are inviting OpenID identity
providers to join us. The proposed pilot will demonstrate a
completely password-free method of user authentication where the
relying party is an ordinary OpenID relying party. The identity
provider will issue a public key certificate to the user, and later
use it to authenticate the user upon redirection from the relying
party. The relying party will not see the certificate.
Since the certificate will be verified by the same party that
issued it, there will be no need for certificate revocation lists.
Certificate issuance will be automatic, using an extension of the
HTML5 keygen mechanism that Pomcor will implement in an extension of
the open source Firefox browser.

There will be two privacy features:

  1. The identity provider will supply different identifiers to
    different relying parties, as in the ICAM OpenID 2.0 Profile.
  2. Before authenticating the user, the identity provider will inform
    the user of the value of the DNT (Do Not Track) header sent by the
    browser, and will not track the user if the value of the header is
    1 (a sketch of this check follows the list).
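
Here is a minimal sketch of the second feature, assuming a Python
WSGI identity provider; what “not tracking” means operationally (here,
skipping a login record) is of course an interpretation.

    def record_login(user: str, relying_party: str) -> None:
        """Stand-in for whatever per-login record the provider would keep."""
        print(f"login: {user} -> {relying_party}")

    def dnt_aware_login(environ, user: str, relying_party: str) -> str:
        """Honor the DNT header: tell the user what value the browser
        sent, and keep no record of the transaction when it is 1."""
        dnt = environ.get("HTTP_DNT")  # WSGI's name for the DNT request header
        if dnt != "1":
            record_login(user, relying_party)
        return f"Your browser sent DNT={dnt!r} with this login request."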

The identity provider will:

  1. Implement a facility for issuing certificates to users, taking
    advantage of the keygen element of HTML5. The identity
    provider will obtain a public key from keygen, create a certificate
    that binds the public key to the user’s local identity, and download
    the certificate in an ad-hoc HTTP header. Pomcor will supply a
    Firefox extension that will import the certificate automatically.
  2. Use the certificate to authenticate the user upon redirection
    from the relying party. The browser will submit the certificate as
    a TLS client certificate. The mod_ssl module of Apache supports the
    use of a client certificate and makes data from the certificate
    available to high-level server-side programming environments such
    as PHP via environment variables (a sketch of this step follows the
    list).
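
As an illustration of step 2, here is how server-side code can read
the client certificate data that mod_ssl exposes. The sketch is in
Python (WSGI) rather than PHP, and assumes Apache is configured with
SSLVerifyClient and SSLOptions +StdEnvVars so that the variables are
populated; the idea is the same in PHP, where the variables arrive in
$_SERVER.

    from typing import Optional

    def authenticated_user(environ) -> Optional[str]:
        """Return the user's local identity from the TLS client
        certificate, or None if no valid certificate was presented."""
        # mod_ssl sets SSL_CLIENT_VERIFY to "SUCCESS" when the client
        # certificate chain verifies against the configured CA.
        if environ.get("SSL_CLIENT_VERIFY") != "SUCCESS":
            return None
        # The subject common name is one place the identity provider
        # could put the user's local identity when issuing the certificate.
        return environ.get("SSL_CLIENT_S_DN_CN")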

For additional information you may write to us using the contact page
of this site.

Do-Not-Track and Third-Party Login

Recently the World Wide Web Consortium (W3C) launched a Tracking
Protection Working Group, following several recent proposals for
Do-Not-Track mechanisms, and more specifically in response to a
W3C-member submission by Microsoft. A useful list of links to
proposals and discussions related to Do-Not-Track can be found on the
working group’s home page.

The Microsoft submission was concerned with tracking by third-party
content embedded in a Web page via cookies and other means of
providing information to the third party. It proposed a Do-Not-Track
setting in the browser, to be sent to Web sites in an HTTP header and
made available to Javascript code as a DOM property. It also proposed
a mechanism allowing the user to specify a white list of third party
content that the browser would allow in a Web page and/or a black list
of third party content that the browser would block. The browser
would filter the requests made by a Web page for downloading
third-party content, allowing some and rejecting others.

(The specific filtering mechanism proposed by Microsoft would allow
third-party content that is neither in the white list nor in the black
list. This would be ineffective, since the third party could
periodically change the domain name it uses to avoid being
blacklisted. I trust that the W3C working group will come up with a
more effective filtering mechanism.)

A Do-Not-Track setting and a filtering mechanism are good ideas, but
they only deal with the traditional way of tracking a user. Today
there is another way of tracking a user, which can be used whenever
the user logs in to a Web site with authentication provided by a third
party, such as Facebook, Google or Yahoo.

Third-party login uses a double-redirection protocol. When the user
wants to log in to a Web site, the user’s browser is redirected to a
third party, which plays the role of “identity provider.” The identity
provider authenticates the user and redirects the browser back to the
Web site, which plays the role of “relying party.” The identity
provider is told who each relying party is, and can therefore track
the user without any need for cookies. The identity provider can link
the user’s logins to relying parties to the information in the user’s
account at the identity provider, which in the case of Facebook
includes the user’s real name and much other real identity
information.
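
To see concretely how the identity provider learns who the relying
party is, consider the authentication request that the relying party
sends through the browser. In OpenID 2.0, for instance, the request
carries an openid.return_to parameter naming the relying party. An
abbreviated sketch in Python, with made-up URLs (a real request
carries additional openid.* parameters):

    from urllib.parse import urlencode

    # The relying party redirects the browser to the identity provider
    # with a request that names the relying party itself.
    auth_request = "https://idp.example/openid?" + urlencode({
        "openid.mode": "checkid_setup",
        "openid.return_to": "https://rp.example/openid/return",
    })
    # The openid.return_to value tells the identity provider, on every
    # login, which relying party the user is visiting: no cookies needed.
    print(auth_request)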

Privacy-enhancing technologies, which I discussed in a recent series
of blog posts (starting with the one on U-Prove), may eventually make
it possible to log in with a third-party credential without the
identity provider being able to track the user; but in the meantime,
means must be found of protecting against tracking via third-party
login. The W3C Tracking Protection working group could provide such
protection by broadening the scope of the Do-Not-Track setting so that
it would apply to both the traditional method of tracking via embedded
content and the new method of tracking via third-party login. An
identity provider who receives a Do-Not-Track header while
participating in a double-redirection protocol would be required to
forget the transaction after authenticating the user.

The scope of the filtering mechanism could also be broadened
so that it would apply to redirection requests in
addition to third-party content embedding. This could
mitigate a security weakness that affects third-party login protocols
such as OpenID and OAuth. Such protocols are highly vulnerable to a
phishing attack that captures the user’s password for an identity
provider: the attacker sets up a malicious relying party that
redirects the browser to a site masquerading as the identity provider.
A filtering mechanism that would block redirection
by default could prevent
the attack based on the fact that the site masquerading as the
identity provider would not be whitelisted (while the legitimate identity
provider would be).

Benefits of TLS for Issuing and Presenting Cryptographic Credentials

In comments on the previous post at the Identity Commons mailing
list, and in comments at the session on deployment and usability of
cryptographic credentials at the Internet Identity Workshop, people
have questioned the advantages of running cryptographic protocols for
issuing and presenting credentials inside TLS, and have argued in
favor of running them instead over HTTP.
I believe running such protocols inside TLS removes several obstacles
that have hindered the deployment of cryptographic credentials. So in
this post I will try to answer those comments.

Here are three advantages of running issuance and presentation
protocols inside TLS over running them outside TLS:

  1. TLS is ubiquitous. It is implemented by all browsers and all
    server middleware. If issuance and presentation protocols were
    implemented inside TLS, then users could use cryptographic credentials
    without having to install any applications or browser plugins, and
    developers of RPs and IdPs would not have to install and learn
    additional SDKs.
  2. The PRF facility of TLS is very useful for implementing
    cryptographic protocols. For example, in the U-Prove presentation
    protocol [1], when U-Prove is used for user authentication, the
    verifier must send a nonce to the prover; if the protocol were run
    inside TLS, that step could be avoided because the nonce could be
    independently generated by the prover and the verifier using the
    PRF (a sketch follows the list). The PRF can also be used to
    provide common pseudo-random material for protocols based on the
    common reference string (CRS) model [2]. (Older cryptosystems such
    as U-Prove [1] and Idemix [3] rely on the Fiat-Shamir heuristic [4]
    to eliminate interactions, but more recent cryptosystems based on
    Groth-Sahai proofs [5] rely instead on the CRS model, which is more
    secure in some sense [6].)
  3. Inside TLS, an interactive cryptographic protocol can be run in a
    separate TLS layer, allowing the underlying TLS record layer to
    interleave protocol messages with application data (and possibly with
    messages of other protocol runs), thus mitigating the latency impact
    of protocol interactions.
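
To illustrate the nonce example in item 2: a TLS keying-material
exporter, which is built on the TLS PRF, lets the prover and the
verifier each derive the same nonce locally instead of exchanging it.
Here is a sketch using pyOpenSSL, whose Connection objects expose
export_keying_material; the label string is my own choice, and both
sides would have to agree on it.

    from OpenSSL import SSL  # pyOpenSSL

    def presentation_nonce(conn: SSL.Connection, length: int = 32) -> bytes:
        """Derive a nonce from the keying material of a TLS session.

        Because the output is bound to the session and computable by
        both endpoints, the verifier no longer needs to send a nonce
        to the prover: each side calls this function and obtains the
        same value.
        """
        # The label must not collide with labels used for other
        # purposes; this one is made up.
        return conn.export_keying_material(b"u-prove presentation nonce", length)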

And here are two advantages of running protocols either inside or directly
on top of TLS, over running them on top of HTTP:

  1. Simplicity.
    Running a protocol over HTTP would require specifying how protocol messages
    are encapsulated inside HTTP requests and responses, i.e. it would require
    defining an HTTP-level protocol.
  2. Performance.
    Running a protocol over HTTP would add the overhead of sending HTTP
    headers, and, possibly, of establishing different TLS connections for
    different HTTP messages if TLS connections cannot be kept alive for
    some reason.

As always, comments are welcome.

References

[1] Christian Paquin. U-Prove Cryptographic Specification V1.1 Draft
Revision 1, February 2011. Downloadable from
http://www.microsoft.com/u-prove.

[2] M. Blum, P. Feldman and S. Micali. Non-Interactive Zero-Knowledge
and Its Applications (Extended Abstract). In Proceedings of the
Twentieth Annual ACM Symposium on Theory of Computing (STOC 1988).

[3] Jan Camenisch et al. Specification of the Identity Mixer
Cryptographic Library, Version 2.3.1. December 2010. Available at
http://www.zurich.ibm.com/~pbi/identityMixer_gettingStarted/ProtocolSpecification_2-3-2.pdf.

[4] A. Fiat and A. Shamir. How to Prove Yourself: Practical Solutions
to Identification and Signature Problems. In Proceedings on Advances
in Cryptology (CRYPTO 86), Springer-Verlag.

[5] J. Groth and A. Sahai. Efficient Non-Interactive Proof Systems
for Bilinear Groups. In Theory and Applications of Cryptographic
Techniques (EUROCRYPT 08), Springer-Verlag.

[6] R. Canetti, O. Goldreich and S. Halevi. The Random Oracle
Methodology, Revisited. Journal of the ACM, vol. 51, no. 4, 2004.

Pomcor’s Comments on the Cybersecurity Green Paper

We have written a response to the Call for Comments on the report
entitled Cybersecurity, Innovations and the Internet Economy, written
by the Internet Policy Task Force of the US Department of Commerce.

In the response we call for research and development efforts aimed at
improving and broadening the scope of the TLS protocol (formerly known
as SSL). This would benefit NSTIC and the many IETF protocols that
rely on TLS for their security.

If you have any comments on our response, please leave them below.
If you have any comments on our response, please leave then below.