Lessons Learned in Implementing and Deploying Crypto Software

Lessons Learned in Implementing and Deploying Crypto Software – Gutmann 2002

The author of today’s paper, Peter Gutmann, is the developer of cryptlib, which gives him a unique perspective both on the development of crypto software and on how people actually use it (from supporting the cryptlib user base). The paper was written in 2002, so details specific to particular libraries and versions will have changed in the intervening 13 years, but the bigger lessons remain relevant.

…the basic tools for strong encryption have become fairly widespread, gradually displacing the snake oil products which they had shared the environment with until then. As a result, it’s now fairly easy to obtain software which contains well-established, strong algorithms such as triple DES and RSA instead of pseudo one-time-pads.

Unfortunately, it’s still easy to produce insecure systems using these building blocks:

A proprietary, patent-pending, military-strength, million-bit-key, one-time pad built from encrypted prime cycle wheels is a sure warning sign to stay well clear, but a file encryptor which uses Blowfish with a 128-bit key seems perfectly safe until further analysis reveals that the key is obtained from an MD5 hash of an uppercase-only 8-character ASCII password.

Gutmann terms such systems ‘2nd generation snake oil’ and, because they look like the real thing but aren’t, also uses the phrase ‘naugahyde crypto.’ I had no idea what naugahyde is/was, but it turns out it’s a form of imitation leather.
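To see why the file encryptor in that example is weak, it helps to do the arithmetic on the effective keyspace. The sketch below (plain Python, no crypto library needed) compares the nominal 128-bit Blowfish key against what an uppercase-only 8-character ASCII password can actually deliver; treating ‘uppercase-only’ as the 26 letters is my assumption.

```python
import math

# Nominal key size claimed by the product.
nominal_bits = 128

# What the key derivation actually draws from: 8 characters chosen from the
# 26 uppercase letters (assumption: letters only, no digits or punctuation).
charset = 26
length = 8
effective_bits = length * math.log2(charset)

print(f"nominal keyspace:   2^{nominal_bits}")
print(f"effective keyspace: 2^{effective_bits:.1f}")  # roughly 2^37.6
```

Hashing the password with MD5 adds no entropy: an attacker simply enumerates the roughly 2^38 possible passwords rather than the 2^128 possible keys.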

Most crypto software is written with the assumption that the user knows what they’re doing, and will choose the most appropriate algorithm and mode of operation, carefully manage key generation and secure key storage, employ the crypto in a suitably safe manner, and do a great many other things which require fairly detailed crypto knowledge. However, since most implementers are everyday programmers whose motivation for working with crypto is defined by ‘the boss said do it’, the inevitable result is the creation of products with genuine naugahyde crypto.

The main body of the paper is a catalogue of the ways in which the author had seen crypto misused. I’ve included some highlights below.

‘Private’ keys

One of the principal design features of cryptlib is that it never exposes private keys to outside access. The single most frequently-asked cryptlib question is therefore ‘How do I export private keys in plaintext form?’ …. The amount of sharing of private keys across applications and machines is truly frightening. Mostly this appears to occur because users don’t understand the value of the private key data, treating it as just another piece of information which can be copied across to wherever it’s convenient.

Once keys are this widespread, keeping them secure is virtually impossible. Thus Gutmann offers this advice:

If your product allows the export of private keys in plaintext form or some other widely-readable format, you should assume that your keys will end up in every other application on the system, and occasionally spread across other systems as well.

Accidental transmission of private keys is another common mistake – not helped by the fact that PKCS #12 bundles private keys with certificates. This leads to users sending their private key when they only meant to send their certificate…

“I’m sending you my certificate” is frequently accompanied by a PKCS #12 file. …. The author, being a known open-source crypto developer, is occasionally asked for help with certificate-management code, and has over the years accumulated a small collection of users’ private keys and certificates, ranging from disposable email certificates through to relatively expensive higher-assurance certificates (the users were notified and the keys deleted where requested).
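The fix for the ‘I meant to send my certificate’ problem is mechanical: extract and share only the certificate, never the PKCS #12 bundle. A minimal sketch using Python’s `cryptography` package (the file names and passphrase are hypothetical):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

# Hypothetical PKCS #12 bundle containing both the private key and certificate.
with open("me.p12", "rb") as f:
    p12_data = f.read()

key, cert, extra_certs = pkcs12.load_key_and_certificates(p12_data, b"passphrase")

# Hand out only the certificate; the private key never leaves this machine.
with open("me-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```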

Key management practices can also undermine use of public key cryptography:

Even when public-key encryption is being used, users often design their own key-management schemes to go with it. One (geographically distributed) organisation solved the key management problem by using the same private key on all of their systems. This allowed them to deploy public-key encryption throughout the organisation while at the same time eliminating any key management problems, since it was no longer necessary to track a confusing collection of individual keys.

Gutmann’s advice: “Straight Diffie-Hellman requires no key management. This is always better than other no-key-management alternatives which users will create.”
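As a rough illustration of what ‘straight Diffie-Hellman’ buys you, here is an ephemeral X25519 exchange sketched with Python’s `cryptography` package. This is the unauthenticated case Gutmann is referring to: each side generates a fresh key pair, so there are no long-term private keys to store, copy, or share, though without authentication it remains open to man-in-the-middle attacks.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair; nothing needs to be stored long-term.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side combines its private key with the other's public key.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a session key from the shared secret rather than using it directly.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"session"
).derive(alice_shared)
```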

Entropy again…

Crypto toolkits sometimes leave problems which the toolkit developers couldn’t solve themselves as an exercise for the user. For example the gathering of entropy data for key generation is often expected to be performed by user-supplied code outside the toolkit. Experience with users has shown that they will typically go to any lengths to avoid having to provide useful entropy to a random number generator which relies on this type of user seeding. The first widely-known case where this occurred was with the Netscape generator, whose functioning with inadequate input required the disabling of safety checks which were designed to prevent this problem from occurring.

Particularly illuminating is the flood of ‘helpful’ posts on forums showing how to circumvent the problem of not having enough entropy once OpenSSL 0.9.5 started requiring it.

It is likely that considerably more effort and ingenuity has been expended towards seeding the generator incorrectly than ever went into doing it right…. The practical experience provided by cases such as the ones given above shows how dangerous it is to rely on users to correctly initialise a generator – not only will they not perform it correctly, they’ll go out of their way to do it wrong.
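On a modern system the right answer is usually not to hand-roll seeding at all, but to take key material straight from the operating system’s CSPRNG. The sketch below (Python 3.9+, standard library only) contrasts the anti-pattern Gutmann describes with that approach:

```python
import os
import random
import time

# The anti-pattern: "seeding" with a low-entropy value such as the current time,
# then treating the output of a non-cryptographic PRNG as key material.
random.seed(time.time())          # predictable seed
weak_key = random.randbytes(16)   # Mersenne Twister output, not a CSPRNG

# The straightforward fix: let the OS CSPRNG do the work.
strong_key = os.urandom(16)       # /dev/urandom, getrandom(), or the platform equivalent
```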

Do you know what you’re doing?

The functionality provided by crypto libraries constitutes a powerful tool. However, like other tools, the potential for misuse in inexperienced hands is always present. Crypto protocol design is a subtle art, and most users who cobble their own implementations together from a collection of RSA and 3DES code will get it wrong. In this case ‘wrong’ doesn’t refer to (for example) missing a subtle flaw in Needham-Schroeder key exchange, but to errors such as using ECB mode (which doesn’t hide plaintext data patterns) instead of CBC (which does).

A cautionary tale is given of a vendor that implemented their VPN using triple DES in ECB mode. “Since ECB mode can only encrypt data in multiples of the cipher block size, they didn’t encrypt any leftover bytes at the end of the packet. The interaction of this processing mechanism with interactive user logins, which frequently transmit the user name and password one character at a time, can be imagined by the reader.”
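The pattern-leakage half of the problem is easy to demonstrate. The sketch below, assuming a recent version of Python’s `cryptography` package, encrypts two identical 16-byte blocks: under ECB the ciphertext blocks are identical too, while under CBC they differ.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
iv = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 2   # two identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb = encrypt(modes.ECB())
cbc = encrypt(modes.CBC(iv))

# ECB maps identical plaintext blocks to identical ciphertext blocks,
# leaking structure; CBC does not.
print(ecb[:16] == ecb[16:32])   # True
print(cbc[:16] == cbc[16:32])   # False
```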

Cryptographic library functions should be provided at the highest level of abstraction possible, to reduce the risk from ‘users who know just enough to be dangerous…’

The issue which needs to be addressed here is that the average user hasn’t read any crypto books, or has at best had some brief exposure to portions of a popular text such as Applied Cryptography, and simply isn’t able to operate complex (and potentially dangerous) crypto machinery without any real training. The solution to this problem is for developers of libraries to provide crypto functionality at the highest level possible, and to discourage the use of low-level routines by inexperienced users. The job of the crypto library should be to protect users from injuring themselves (and others) through the misuse of basic crypto routines.
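As a concrete example of ‘the highest level possible’, Python’s `cryptography` package ships a high-level recipe (Fernet) alongside its low-level hazmat layer. The sketch below shows the kind of interface Gutmann is arguing for: mode selection, IV handling, padding, and integrity checking are all decided by the library rather than the caller.

```python
from cryptography.fernet import Fernet

# Fernet bundles cipher mode, IV generation, padding, and authentication
# behind two calls; the caller never touches those details.
key = Fernet.generate_key()   # keeping this key safe is still the caller's job
f = Fernet(key)

token = f.encrypt(b"meter reading: 42")
assert f.decrypt(token) == b"meter reading: 42"
```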