This is the fourth and last post of a series describing a proof-of-concept web app that implements cryptographic authentication using Node.js with a MongoDB back-end. Part 1 described the login process. Part 2 described the registration process. Part 3 described login session maintenance. The proof-of-concept app, called app-mongodb.js, or simply app-mongodb, can be found in a zip file downloadable from the cryptographic authentication page.
In app-mongodb, random bits are used on the server side for generating registration and login challenges to be signed by the browser, and for generating login session IDs. On the client side, they are used for generating key pairs and computing randomized signatures on server challenges.
In quest of full entropy
There are established methods for obtaining random bits to be used in web apps. On the client side, random bits can be obtained from crypto.getRandomValues, which is part of the W3C Web Crypto API. On the server side, /dev/urandom can be used in Linux, MacOS and most flavors of Unix. However, neither of these methods guarantees full entropy.
N bits are said to have full entropy if they have been generated by a process whose entropy is at least N (or almost N, according to the first of these definitions). A cryptographically secure pseudo-random bit generator or, in NIST terminology, deterministic random bit generator (DRBG) has entropy N if it has been seeded with N bits of entropy; thus bits obtained from a DRBG have full entropy if the DRBG has been seeded with at least as much entropy as the number of bits obtained.
A DRBG has a security strength equal to its entropy. For example, a DRBG seeded with 128 bits of entropy has a security strength of 128 bits, which means that it is cryptographically secure against an adversary who is not able to break a cryptosystem such as AES-128.
Full entropy is essential for security. If a key pair is generated with insufficient entropy, its security strength cannot be greater than the effective entropy. In DSA (and ECDSA), the private key can be computed from the signature and a per-message secret that randomizes the signature. If the per-message secret is generated with insufficient entropy, the private key becomes guessable in 2^N trials, where N is the effective entropy. If the registration or login challenge is generated by the server with insufficient entropy, an adversary may be able to guess the challenge and use it to mount a replay attack.
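To make the DSA/ECDSA point concrete, recall the ECDSA signing equation (in standard notation, which does not appear in app-mongodb itself): a signature (r, s) on a message with hash z, computed with private key d and per-message secret k, satisfies

s = k^-1 (z + r·d) mod n

so an adversary who learns or successfully guesses k can recover the private key as

d = r^-1 (s·k − z) mod n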
To achieve full entropy in app-mongodb we make use of /dev/random, a companion to /dev/urandom that provides bits with full entropy. On the server side we seed a DRBG with 128 bits obtained from /dev/random.
Waiting for /dev/random
The price to pay for full entropy is having to wait until enough entropy is available. In Linux, /dev/random “blocks” a request for bits if it has less entropy than the number of bits requested, making the requester wait until an entropy counter maintained by /dev/random reaches a value equal to the number of bits requested. But /dev/random is a special file in the Linux file system, and the fs module of Node.js provides an asynchronous interface to the file system, which we take advantage of so that a blocked read of /dev/random does not block the Node.js server itself.
Besides blocking, /dev/random may provide fewer bits than requested when it does not have enough entropy. We have observed this while testing the proof-of-concept app. Perhaps /dev/random gives up on providing the requested number of bits if entropy is taking too long to accumulate.
The following three snippets of app-mongodb.js (where the line numbers are relative to the beginning of the file) show how we use /dev/random to seed the server-side DRBG. (We also use /dev/random for reseeding, as shown below in connection with prediction resistance.) In the first snippet, at line 87, note that we have renamed the constant securityStrength as rbgSecurityStrength, to be consistent with the fact that there are different security strengths in the application, as discussed in Part 2.
var rbgStateObject = new Object();
var rbgStateInitialized = false;
const rbgSecurityStrength = 128;
function getDevRandomBits(bitLength, f) {
    var byteLength = bitLength / 8;
    var buf = new Buffer(byteLength);
    (function fillBuf(bufPos) {
        var remaining = byteLength - bufPos;
        if (remaining == 0) {
            f(pjclHex2BitArray(buf.toString('hex')));
            return;
        }
        fs.open('/dev/random', 'r', function(err, fd) {
            if (err) throw new Error(err);
            fs.read(fd, buf, bufPos, remaining, 0, function(err, bytesRead) {
                if (err) throw new Error(err);
                bufPos += bytesRead;
                fs.close(fd, function(err) {
                    if (err) throw new Error(err);
                    fillBuf(bufPos);
                });
            });
        });
    })(0);
}

getDevRandomBits(rbgSecurityStrength, function(bitArray) {
    pjclRBG128Instantiate(rbgStateObject, bitArray);
    rbgStateInitialized = true;
    reseedPeriodically(reseedPeriod);
});
app.use(function(req, res, next) {
    if (!rbgStateInitialized) {
        res.status(503).send('SERVER BUSY, TRY AGAIN LATER');
    } else {
        next();
    }
});
The function getDevRandomBits, defined at lines 92–113 and shown in the second snippet, is used to get random bits from /dev/random. It takes as arguments the number of bits to be obtained (a multiple of 8) and a callback function that becomes the value of the callback parameter f.
The nested function fillBuf defined at lines 95–112 is a recursive function that fills a Node.js buffer with bytes by calling itself as needed until enough bytes have been obtained. Obtaining one set of bytes involves three asynchronous operations. At line 101, fs.open opens the special file /dev/random and passes a file descriptor fd to a callback. At line 103 that callback calls fs.read and registers a second callback, which will be called once /dev/random has produced the requested number of bytes or given up and provided fewer bytes, the number of bytes read being passed as an argument. At line 106, the second callback calls fs.close to close the file descriptor and avoid a memory leak. Then a third callback recursively calls fillBuf.
At line 98, once the buffer has been filled, its contents are converted to a bit array and passed to the callback parameter f.
The function getDevRandomBits is used at line 115 to seed the server-side DRBG. Once it has obtained the desired number of bits from /dev/random, here 128, it calls the callback defined at lines 115–119, which initializes the DRBG, or “instantiates” it in NIST terminology. Initialization is performed by pjclRBG128Instantiate, which takes as arguments a reference to storage for the state of the DRBG, and entropy provided as a bit array. The server-side DRBG stores its state in an ordinary JavaScript object, created at line 85 as shown in the first snippet. The state will therefore persist as long as the Node.js server process is running, and will have to be created and initialized again if the server is restarted.
The line-115 callback also sets the global flag rbgStateInitialized to true, and calls the function reseedPeriodically, which will provide configurable prediction resistance as explained below. The anonymous function defined at lines 138–145 and registered with app.use as a middleware uses the flag rbgStateInitialized to send an HTTP 503 response to any requests that arrive before the server-side DRBG has been initialized.
Client-side random bit generation
As mentioned above, random bits are produced on the client side by a DRBG that obtains entropy from the browser and from the server-side DRBG. The client-side DRBG stores its state in localStorage, protected by the same-origin policy of the browser. The same-origin policy prevents JavaScript code with a different origin than app-mongodb from reading or writing the state of the DRBG, but allows other web apps with the same origin to share the same DRBG. This is good for security, since multiple apps sharing the DRBG will contribute more entropy to it than a single app.
In app-mongodb, the server downloads entropy to the browser when the user registers or logs in. In both cases, this is done by obtaining an array of 128 bits from the server-side DRBG, converting it to a hex string, and including the string as a JavaScript literal assigned to a variable in the POST redirection script that conveys the registration or login challenge to the browser.
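A minimal server-side sketch of this step follows. The output and conversion functions pjclRBG128Gen and pjclBitArray2Hex, and the variable challengeHex, are hypothetical names assumed for illustration; the post does not show the actual generation calls in app-mongodb.js:

// Hypothetical sketch: pjclRBG128Gen and pjclBitArray2Hex are assumed
// names; app-mongodb.js may use different calls for these steps.
var serverEntropyBits = pjclRBG128Gen(rbgStateObject, 128); // 128 bits from the server-side DRBG
var serverEntropy = pjclBitArray2Hex(serverEntropyBits);    // hex string for the script literal
res.render('reg-redir', {
    serverEntropy: serverEntropy, // becomes a JavaScript literal in the POST redirection script
    challenge: challengeHex       // registration or login challenge (hypothetical variable)
});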
Browser entropy is obtained using the function pjclBrowserEntropy128Bits defined in the static file browser-entropy.js as follows:
pjclBrowserEntropy128Bits = function() {
    var ui32Array = new Uint32Array(4);
    var cryptoObject = self.crypto || self.msCrypto; // for IE
    cryptoObject.getRandomValues(ui32Array);
    var bitArray = pjclUI32Array2BitArray(ui32Array);
    return bitArray;
}
Browser and server entropy are concatenated and used to instantiate or reseed the client-side DRBG, as shown for example in this snippet of the reg-redir.handlebars view that is used for server-side rendering of the POST redirection script used to convey the registration challenge:
var entropy = pjclBrowserEntropy128Bits().concat(pjclHex2BitArray(serverEntropy));
pjclRBG128InstantiateOrReseed(localStorage, entropy);
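Once the DRBG has been instantiated or reseeded, the script can draw bits from it, for example to generate the per-message secret that randomizes the ECDSA signature on the challenge. A hypothetical one-line sketch, with pjclRBG128Gen again an assumed name for the output function rather than one shown in the post:

// Hypothetical sketch: pjclRBG128Gen is an assumed name for the function
// that outputs bits from the DRBG whose state is kept in localStorage.
var perMessageSecretBits = pjclRBG128Gen(localStorage, 256); // bit array for the per-message secret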
Configurable prediction resistance
Prediction resistance is a term used by NIST that refers to the ability to recover from a compromise of the state of a DRBG so that an adversary who has gained knowledge of that state becomes unable to use it to predict anything useful about future DRBG output. (In what follows I will simply say “predict” to mean “predict anything useful”.) A discussion of prediction resistance can be found in Section 8.8 of SP 800-90A. Prediction resistance can be achieved by reseeding the DRBG with an amount of entropy at least equal to its security strength.
In app-mongodb we set a configurable limit on the amount of time that it takes to recover from a compromise of the state of the server-side DRBG or a compromise of the state of the client-side DRBG. We refer to this as configurable prediction resistance for the app. We implement it by periodically reseeding the server-side DRBG with 128 bits obtained from /dev/random, with a configurable period. After that period has elapsed, an adversary who has compromised the state of either DRBG, but does not have the computational power to break a system with 128 bits of security strength, will no longer be able to predict the output of either DRBG. Server-side bits will not be predictable because the server-side DRBG will have been reseeded with 128 bits of entropy. Client-side bits will not be predictable because the client-side DRBG is seeded or reseeded before use (by one of the JavaScript POST redirection scripts) with 128 bits of entropy downloaded from the server (plus an unknown amount of entropy provided by crypto.getRandomValues).
The periodic reseeding of the server-side DRBG is implemented using the function reseedPeriodically, called by getDevRandomBits after initializing the DRBG as shown above in the second snippet. The definition of reseedPeriodically, shown in the following snippet, should be self-explanatory given the definition of getDevRandomBits discussed above, and knowing that setTimeout is a Node.js function with the same API as the function of the same name provided by browsers.
function reseedPeriodically(period) {
    setTimeout(getDevRandomBits, period, rbgSecurityStrength, function(bitArray) {
        pjclRBG128Reseed(rbgStateObject, bitArray);
        reseedPeriodically(period);
    });
}
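The period itself is just a configuration constant, passed to reseedPeriodically as reseedPeriod. We use a 10-minute period, which would be set as follows (the exact declaration is not shown in the snippets above):

const reseedPeriod = 10 * 60 * 1000; // reseed interval in milliseconds (10 minutes)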
You can eat your entropy and have it too
To conclude I would like to comment on the pros and cons of /dev/random and make a recommendation for improving it.
We saw above that, in a Linux server, /dev/random blocks if it has less entropy than the number of bits requested. This is essential functionality that allows us to reseed the server-side and client-side DRBGs with a known amount of entropy, and thereby provide meaningful security guarantees for cryptographic functionality.
/dev/random may block because it has not collected enough entropy since the machine was turned on. But, surprisingly, it may also block later because it thinks that it has lost entropy. This is not useful and may cause unnecessary delays when using /dev/random to reseed the server-side DRBG.
The peculiar idea of losing entropy is discussed in a theory of operation included as a comment in the Linux kernel file random.c:
When random bytes are desired, they are obtained by taking the SHA hash of the contents of the “entropy pool”. The SHA hash avoids exposing the internal state of the entropy pool. It is believed to be computationally infeasible to derive any useful information about the input of SHA from its output. Even if it is possible to analyze SHA in some clever way, as long as the amount of data returned from the generator is less than the inherent entropy in the pool, the output data is totally unpredictable. For this reason, the routine decreases its internal estimate of how many bits of “true randomness” are contained in the entropy pool as it outputs random numbers [by an amount equal to the number of output bits, according to this 2006 paper].
This reasoning does not make sense. No matter what the “inherent entropy” in the pool is, output data obtained by applying the same hash function to the contents of the pool will not be “totally unpredictable”: it will always be the same, if the contents of the pool do not change. The pool is presumably modified after bits are output (hopefully after, not before, as discussed in Section 3.1 of the same 2006 paper referenced above), and unpredictability hinges on how it is modified, not on how many bits are output.
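A trivial Node.js illustration of the point: hashing the same unchanged pool twice yields identical output, no matter how much “inherent entropy” the pool holds.

// Illustration only: if the pool contents do not change, neither does
// the hash of the pool, so the output is fully predictable.
const crypto = require('crypto');
const pool = Buffer.from('unchanged entropy pool contents');
const out1 = crypto.createHash('sha1').update(pool).digest('hex');
const out2 = crypto.createHash('sha1').update(pool).digest('hex');
console.log(out1 === out2); // true: same pool contents, same "random" output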
Thinking of entropy as something that you acquire and consume is the wrong metaphor. Entropy is a cake that you can eat and have too. The purpose of reseeding a DRBG is not to replenish lost entropy, but to provide prediction resistance. NIST also requires reseeding a DRBG for mitigation of cryptanalytic attacks, but only after it has responded to no fewer than 2^48 requests for random bits, as specified in Table 2 of SP 800-90A.
The implementation of /dev/random should be changed so that it no longer blocks after it has collected an amount of entropy equal to the desired security strength. This could simply be achieved by not decrementing the entropy counter.
A single DRBG with a state having a number of bits equal to the desired security strength could be used to implement both /dev/random and /dev/urandom. Entropic bits would be used to reseed the DRBG as they become available in reasonably large numbers. An entropy counter would be incremented by an estimate of the entropy held by those bits until it reaches a value equal to the security strength. Entropic bits would continue to be hashed into the state after the counter has reached its limit. The entropy counter would never be decremented. A request for bits would be blocked (i.e. delayed) by /dev/random (but not by /dev/urandom) until the counter reaches a value equal to the lesser of the security strength and the number of bits requested. N bits returned by /dev/random would have full entropy as long as N is not greater than the security strength.
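Here is a sketch of the proposed logic in JavaScript pseudo-implementation form; mixIntoState, generate and waitUntil stand for elided DRBG internals, and none of these names correspond to an actual kernel API:

// Illustrative sketch of the proposed design; not an actual kernel API.
const securityStrength = 128;   // bits
let entropyCounter = 0;         // incremented only, never decremented

function addEntropy(bits, entropyEstimate) {
    mixIntoState(bits); // entropic bits are always hashed into the DRBG state
    // the counter saturates at the security strength and never decreases
    entropyCounter = Math.min(entropyCounter + entropyEstimate, securityStrength);
}

// /dev/random semantics: delay the request until enough entropy has accumulated
function readRandom(n, callback) {
    const needed = Math.min(securityStrength, n);
    waitUntil(function() { return entropyCounter >= needed; },
              function() { callback(generate(n)); });
}

// /dev/urandom semantics: never block
function readURandom(n) {
    return generate(n);
}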
The correct use of /dev/[u]random is to ensure that you have a sufficient amount of entropy for your security strength by taking the first N bits from /dev/random, then switching to /dev/urandom. In this way, reads will only block at the beginning, when the system has not yet collected enough entropy, and thereafter you use the cryptographically secure PRF of /dev/[u]random.
1. By the way, both /dev/random and /dev/urandom use the same internal pool and the same PRF, and fresh entropy is also injected into /dev/urandom; the only difference is that /dev/urandom does not assume that entropy has been “lost” in some weird phenomenon (or does not care).
2. This is already done natively in OpenBSD and variants, to my knowledge.
3. It is even dangerous to use /dev/random exclusively, because an attacking process can flood the random pool with continuous requests and then control exactly which entropy is going in. A good starting point might lie here: https://eprint.iacr.org/2013/338
Using /dev/random to get the first N random bits (where N is the desired security strength), then switching to /dev/urandom, would be the way to do it if we were using the Linux kernel PRNG (i.e. /dev/[u]random) to output the random bits consumed by the application. But that’s not what we do. Complexity is the enemy of security, and the Linux PRNG is extravagantly complex. So, as explained in the post, we use instead a user-space DRBG implemented according to the specification of the Hash_DRBG mechanism in SP 800-90A to output random bits for the application, and only use /dev/random to provide entropy for seeding the DRBG, then reseeding it every 10 minutes. It is essential to use /dev/random rather than /dev/urandom for the initial seeding. For reseeding, /dev/random and /dev/urandom provide the same prediction resistance with respect to an attacker who compromises the state of the user-space DRBG but not the state of the Linux kernel PRNG. But /dev/random provides marginally better prediction resistance with respect to an attacker who breaches the state of the Linux kernel PRNG; and we can use it because the function getDevRandomBits shown in the second snippet uses an asynchronous API to the file system provided by Node.js to avoid being blocked by /dev/random.
As to the dangers of /dev/random: Using /dev/random to output bits consumed by the application could cause /dev/random to block, and that could be exploited for a denial-of-service attack. But as I said, we only use /dev/random to reseed the DRBG once every 10 minutes. Regarding an attacker who is able to control the entropy going in, starting at some time T: If the attacker knows the state of the Linux PRNG at time T, then he/she can predict its output after time T. If he/she does not know the state at time T, he/she cannot predict the output after time T. It makes no difference whether the output is produced by /dev/random or /dev/urandom. And it makes no difference whether the attacker has flooded the PRNG with continuous requests before time T. Flooding the PRNG with requests may reduce the woefully misnamed “entropy counter” (of the “input pool” of the PRNG), but will not reduce its entropy. As I say in the post, entropy is a cake that you can eat and have too.
What I am saying is that if you use /dev/random for reseeding your DRBG, you *are* exposed to an attacker who floods the Linux PRNG with continuous requests: the problem is not that “entropy is decreasing”, the problem is that Linux considers that entropy is decreasing, so in the end Linux will use only the entropy that is *directly controlled by the attacker*.
This won’t happen with /dev/urandom: in that case, Linux is not throwing “ancient entropy” away (the cake you can eat and have too), so unless the attacker can control the whole system as soon as it boots, you stay safe.
Note that in the first case, you can no longer claim “prediction resistance” for your DRBG, because this attacker can now exactly predict the output of /dev/random, so if he knows the state of the DRBG at time T, he still does at time T+1 after a reseed with /dev/random (and won’t with /dev/urandom, unless he controls the machine from the very beginning); that’s the point.
The real problem in this discussion is that you want a physical entropy source for reseeding the DRBG; unfortunately, the only simple way to access one is through the /dev/[u]random interface. Both use a PRNG (the “extravagantly complex” one, as you call it) before output, no matter what. /dev/urandom does not deplete entropy (this is the correct way to think of entropy, like you say); /dev/random does erase things and is therefore vulnerable to an active attacker, despite the common belief.
In a word: you should *really* (I mean, from a cryptographer’s point of view) use /dev/urandom as soon as you can rely on sufficient system entropy (by completing a sufficiently long request to /dev/random); this is your simplest way to simulate a physical entropy source in user space. Then you can use this “physical source” to seed and reseed your DRBG.
The entropy of the internal state of a PRNG (Linux terminology) or DRBG (NIST terminology) is a measure of the uncertainty that an attacker has about its value. Literally speaking, it makes no sense to speak about “throwing ancient entropy away”. But you may be speaking metaphorically.
In any case, you seem to believe that the entropy of the internal state that /dev/random uses to produce output decreases as the attacker causes purportedly random bits known to the attacker to be input to the Linux PRNG, because /dev/random “erases things”. This is not true.
The Linux PRNG uses three “entropy pools”, which are the internal states of three different PRNGs. Random bits are mixed into the “input pool”, which produces outputs that are mixed into the “non-blocking pool” used by /dev/urandom and the “blocking pool” used by /dev/random. The blocking and non-blocking pools use the same mixing function, and nothing is ever erased from any of the pools.
This paper has the following formal definition of a mixing function (Definition 1, page 5): a function f used to combine the state X of a PRNG with an input I to produce a new state f(X,I) is a mixing function according to the definition if H(f(X, I)) ≥ H(X) and H(f(X, I)) ≥ H(I), where H is Shannon’s entropy function. As stated in the paper, a mixing function that satisfies this definition guarantees that “if an attacker has no knowledge of the state X but has complete control of the input I, he will gain no additional information on the new state after the execution of the mixing function”. The paper has a formal proof (Lemma 1, page 9), which I don’t claim to have read, that the mixing functions of all three entropy pools satisfy the definition. Therefore an attacker who floods the Linux PRNG with input bits known to the attacker does not gain any additional ability to predict the output of /dev/random.