
OpenSSL 1.0.1e CipherSuites and TLS1.2 more mixed signals than my xgf

Tags:

openssl

I am currently working on a high-security web server using Windows Server 2012 (I know; if it were up to me, it would be OpenBSD).

In looking at the choices of ciphersuites and getting up to speed on what is considered the strongest and what isn't, I had a few questions.

  1. It is my understanding that as of OpenSSL 1.0.1e (or current TLS 1.2) that block ciphers (specifically AES and Camellia) are no longer vulnerable to cache timing side-channel attacks. Is this correct?

  2. Knowing #1, it is now safe to say that block ciphers in CBC mode are once again safe, even though there are a few known weak attack vectors that simplify them slightly.

  3. SHA1 has known collisions, SHA2-256 is the new minimum known secure standard, correct?

  4. For all normal intents and purposes RC4 is completely broken. Don't use it. Is this a correct blanket statement?

  5. Ephemeral keys are the only way to achieve perfect forward secrecy using OpenSSL or TLS 1.2, correct?

And finally a question: Is there a mathematical or probability reason to consider GCM safer than CBC after the current round of OpenSSL updates?

Thanks in advance, guys; this is a lot of BS to shuffle through via Google and wikis, and I was unable to find a straight, up-to-date answer on this.

asked Jul 05 '13 by user2555174
1 Answer

Sorry about the format below. I'm going to try to group them by topic, so some questions get visited multiple times and out of order.


It is my understanding that as of OpenSSL 1.0.1e (or current TLS 1.2) that block ciphers (specifically AES and Camellia) are no longer vulnerable to cache timing side-channel attacks.

Well, OpenSSL performs branches based on secrets, so it's susceptible to cache timing attacks (both local and network). I recall reading a paper on it, but I don't have a reference at the moment; I think Bernstein talks about it in his presentation cited below. I know one group of cryptographers is really upset with OpenSSL because the project won't accept the patches to fix some of the sore spots.

For an approachable talk on the subject, see Daniel Bernstein's Cryptography's Worst Practices.

As far as I know, Bernstein's NaCl is the only library that tries to remove all the side channels. But NaCl is not a general purpose SSL/TLS library.


It is my understanding that as of OpenSSL 1.0.1e (or current TLS 1.2) that block ciphers (specifically AES and Camellia) are no longer vulnerable to cache timing side-channel attacks.

SSL/TLS uses an Authenticate-then-Encrypt scheme (AtE). The scheme is essentially:

T = Auth(m)
C = Enc(m || T)

Send {C} to peer

Since the authentication tag is encrypted, the protocol must decrypt the message before it can verify the authenticity of the message. That's where Serge Vaudenay's padding oracles have flourished, and that's what Duong and Rizzo's BEAST was exploiting: the cipher text is being used before it is authenticated.

It's OK to use Authenticate-then-Encrypt, but the details can be tricky to get right. If it's being used with a stream cipher that XORs the key stream, then it's usually OK. If it's being used with a block cipher, then you have to be careful. The official treatment can be found in Hugo Krawczyk's The Order of Encryption and Authentication for Protecting Communications.

OpenSSL and other libraries have patched the recent round of padding oracles on block ciphers.

Stream ciphers are not really a viable alternative because RC4 is not suitable for use in TLS. See the answer to Question 4.

That's why SSL/TLS sucked so badly around 2011: both stream ciphers and block ciphers were broken, and we had no good alternative. (Most people chose RC4 over block ciphers.)

You should expect more side channel attacks in the future because of the architectural defects inherent in the protocol and the implementation defects.


For completeness, here's what you would want to use in an ideal world. It's an Encrypt-then-Authenticate (EtA) scheme, used by IPsec. The IETF long refused to specify it for SSL/TLS even though it's provably secure under generic composition (see Krawczyk's paper):

C = Enc(m)
T = Auth(C)

Send {C || T} to peer

In the above scheme, the peer rejects any cipher text C that does not authenticate against the authentication tag T. The padding oracle does not reveal itself because the decryption is never performed.

There's now an IETF draft to use Encrypt-then-Authenticate in SSL/TLS. See Peter Gutmann's Encrypt-then-MAC for TLS and DTLS.

And for more completeness, here's what SSH does. It's an Authenticate-and-Encrypt (A&E) scheme, and it too uses the cipher text before authenticating it:

C = Enc(m)
T = Auth(m)

Send {C || T} to peer

Since the authentication tag is computed over the plain text message, the cipher text must be decrypted to recover the plain text. That means the cipher text is being used before it's authenticated.

A programmer-approachable treatment of authenticated encryption is available in the Code Project's Authenticated Encryption.


For all normal intents and purposes RC4 is completely broken. Don't use it. Is this a correct blanket statement?

It's not completely broken, but its biases are a real problem in TLS. From AlFardan, Bernstein, et al., On the Security of RC4 in TLS and WPA:

... While the RC4 algorithm is known to have a variety of cryptographic weaknesses (see [26] for an excellent survey), it has not been previously explored how these weaknesses can be exploited in the context of TLS. Here we show that new and recently discovered biases in the RC4 keystream do create serious vulnerabilities in TLS when using RC4 as its encryption algorithm.

Knowing #1, it is now safe to say that block ciphers in CBC mode are once again safe, even though there are a few known weak attack vectors that simplify them slightly.

Well, it's a matter of your risk posture, or how risk-averse you are. Given that RC4 is not suitable for use in TLS, that only leaves block ciphers. But we know OpenSSL still suffers from side channel attacks when using block ciphers because it branches on secret key material. So you get to pick your poison.


Knowing #1, it is now safe to say that block ciphers in CBC mode are once again safe, even though there are a few known weak attack vectors that simplify them slightly.

And the block cipher is not the only vector. An attacker could recover the AES (or Camellia) key during key transport. See Bleichenbacher's “Million Message Attack” on RSA, and The “Million Message Attack” in 15,000 Messages. That means a busy website might have to change its long-term signing/encryption key every 10 minutes or so.

You also have other side channel/oracle attacks, like Duong and Rizzo's attacks on compression. Their attacks target both the socket layer (CRIME) and the application layer (BREACH), and apply to HTTPS and related protocols like SPDY.


SHA1 has known collisions, SHA2-256 is the new minimum known secure standard, correct?

That depends upon how you are using it and who you ask. If you are using it as a pseudo random function (PRF) in a random number generator, then it's OK to use SHA1. If you are using it where collision resistance is required, such as a digital signature, then SHA1 is below its theoretical security level of 2^80 operations. In fact, we know it's closer to 2^60 thanks to Marc Stevens (see HashClash). That means it could be within reach of some attackers.

For SSL/TLS, SHA1 only needs to be collision resistant long enough to push the SSL/TLS record over the air or along the wire. 2^60 is probably good enough for that because the window of time is small: it's about the network's 2MSL. Put another way, an attacker is probably not going to be able to forge a network packet in under 2 minutes.

On the other hand, you would probably want X509 certificates with SHA256 because the time-to-live is almost unbounded, unlike a TLS record.

The nearly unbounded lifetime of a long-lived X509 certificate is why developing FLAME was worthwhile: the attackers had unlimited time to find the MD5 prefix collision on that Microsoft TS certificate. Once found, it could be used everywhere there was a Microsoft box running the service.


SHA1 has known collisions, SHA2-256 is the new minimum known secure standard, correct?

There are a few bodies that publish these sorts of minimum standards, and 112 bits of security appears to be the agreed-upon minimum at the moment. That means the algorithms are:

  • DH-2048
  • RSA-2048
  • SHA-224
  • 3-key TDEA (Triple DES)
  • AES128 (or above)

Those are common US algorithms. You can also use Camellia (an AES equivalent) and Whirlpool (a SHA equivalent) if desired. 2-key TDEA provides 80 bits of security and should not be used. (3-key TDEA uses 24-byte keys, and 2-key uses 16-byte keys; SSL/TLS specifies the 24-byte variety in RFC 2246.)

There are also sizes for elliptic curves, based on the size of the prime field or the binary field characteristic. For example, if you want an elliptic curve over a prime field with 112 bits of security, then I believe you would use P-224 or above (or binary fields of size 233 or above).

I think some good reading on the subject can be found at Crypto++'s Security Levels. It discusses security levels, and calls out the standard bodies like ECRYPT (Asia), ISO/IEC (Worldwide), NESSIE (Europe), and NIST (US).

Even the NSA has a minimum security level. It's 128 bits (rather than 112) for SECRET, and the algorithms are specified in its Suite B.


SHA1 has known collisions, SHA2-256 is the new minimum known secure standard, correct?

You have to be careful about removing SHA1. If you do, you might remove all the common TLSv1.0 algorithms, which could have an impact on usability. Personally, I'd like to bury TLSv1.0, but I think it's needed for interop because so few clients and servers fully implement TLSv1.1 and TLSv1.2.

Also, Ubuntu 12.04 LTS disables TLSv1.1 and TLSv1.2. So you will need the best hash that can be used for TLSv1.0, and I believe that's SHA1. See Ubuntu 12.04 LTS: OpenSSL downlevel version is 1.0.0, and does not support TLS 1.2.

In this case, you could probably still use SHA1, but push it down on the list of preferred ciphers.


Ephemeral keys are the only way to achieve perfect forward secrecy using OpenSSL or TLS 1.2, correct?

Ephemeral key exchanges provide Perfect Forward Secrecy (PFS). That means if the long term signing key is compromised, then past sessions are not in jeopardy from a loss of privacy. That is, an attacker cannot recover the plain text of past sessions.

Ephemeral key exchanges first came to light in SSL 3.0. Here are the algorithms of interest (I'm omitting RC4 and MD variants because I don't use them):

  • EDH-DSS-DES-CBC3-SHA
  • EDH-RSA-DES-CBC3-SHA

TLS 1.0 added the following:

  • DHE-DSS-AES256-SHA
  • DHE-RSA-AES256-SHA
  • DHE-DSS-AES128-SHA
  • DHE-RSA-AES128-SHA

TLS 1.1 added no algorithms.

TLS 1.2 added the following:

  • ECDHE-ECDSA-AES256-GCM-SHA384
  • ECDHE-RSA-AES256-GCM-SHA384
  • ECDHE-ECDSA-AES128-GCM-SHA256
  • ECDHE-RSA-AES128-GCM-SHA256
  • DHE-DSS-AES256-GCM-SHA384
  • DHE-RSA-AES256-GCM-SHA384
  • DHE-DSS-AES128-GCM-SHA256
  • DHE-RSA-AES128-GCM-SHA256

Ephemeral keys are the only way to achieve perfect forward secrecy using OpenSSL or TLS 1.2, correct?

There are ways to destroy PFS even with ephemeral key exchanges. For example, session resumption requires retention of the premaster secret. Retaining the premaster secret to perform next = Hash(current) kind of destroys that property.

In an Apache server farm, that also means the premaster secret is written to disk. (The last time I checked, Apache does not have a way to distribute it in memory to the servers in the farm.)


Is there a mathematical or probability reason to consider GCM safer than CBC after the current round of OpenSSL updates?

GCM is a streaming mode, so it should not suffer the padding oracle attacks. However, some claim it's trading the devil you know for the devil you don't. See, for example, the discussion of the Workshop on Real-World Cryptography on the cryptography mailing list.


Related: by controlling the cipher suite (e.g., ECDHE-RSA-AES256-GCM-SHA384), you can control the algorithm (e.g., ECDHE, RSA, AES256, SHA384) and control the protocol (e.g., TLSv1.2).

In OpenSSL, it's a three-step process to control these things (step 2 below is optional):

  1. Remove broken/wounded protocols. Use SSLv23_method, and then call SSL_CTX_set_options with SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3. That gets you TLSv1.0 and above.
  2. SSL_OP_NO_COMPRESSION might also be a good idea due to CRIME. You still have to watch for compression leaks at higher layers due to BREACH for protocols like SPDY and HTTP.
  3. Choose your cipher suites, and then set them using SSL_set_cipher_list. Don't use cipher suites with algorithms like RC4 or MD5. A sample string is below that removes RC4 and MD5 (and other lesser ciphers).
  4. On a server, don't allow the client to select a weak or wounded cipher (by default and per the RFCs, the server honors the client's choice). Tell the server to make the selection instead with SSL_CTX_set_options and SSL_OP_CIPHER_SERVER_PREFERENCE.

Here's what the code might look like when setting your cipher list to control cipher suites and protocols:

const char* const PREFERRED_CIPHERS = "kEECDH:kEDH:kRSA:AESGCM:AES256:AES128:3DES:"
                  "!MD5:!RC4:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM:!ADH:!AECDH";

int res = SSL_set_cipher_list(ssl, PREFERRED_CIPHERS);
if(1 != res) handleFailure();

Here's another way to set the cipher list:

const char* const PREFERRED_CIPHERS =

  /* TLS 1.2 only */
  "ECDHE-ECDSA-AES256-GCM-SHA384:"
  "ECDHE-RSA-AES256-GCM-SHA384:"
  "ECDHE-ECDSA-AES128-GCM-SHA256:"
  "ECDHE-RSA-AES128-GCM-SHA256:"

  /* TLS 1.2 only */
  "DHE-DSS-AES256-GCM-SHA384:"
  "DHE-RSA-AES256-GCM-SHA384:"
  "DHE-DSS-AES128-GCM-SHA256:"
  "DHE-RSA-AES128-GCM-SHA256:"

  /* TLS 1.0 and above */
  "DHE-DSS-AES256-SHA:"
  "DHE-RSA-AES256-SHA:"
  "DHE-DSS-AES128-SHA:"
  "DHE-RSA-AES128-SHA:"

  /* SSL 3.0 and TLS 1.0 */
  "EDH-DSS-DES-CBC3-SHA:"
  "EDH-RSA-DES-CBC3-SHA:"
  "DH-DSS-DES-CBC3-SHA:"
  "DH-RSA-DES-CBC3-SHA";

Related: according to Steffen below, F5 BIG-IP load balancers have a bug where they reject a ClientHello that's too large. That's another reason to limit the number of cipher suites advertised as supported: each cipher suite requires two bytes. Trimming from 80 cipher suites to about 15 reduces the space required from 160 bytes to about 30 bytes.

Below is the ClientHello from tracing openssl s_client -connect www.google.com:443 under Wireshark. Notice the 79 cipher suites.

[Wireshark trace of the ClientHello]


Related: Apple has a bug in its TLSv1.2 code where Safari fails to negotiate ECDHE-ECDSA ciphers as advertised. The bug is present in OS X 10.8 through 10.8.3, and was allegedly fixed in OS X 10.8.4. Apple did not provide a hotfix or apply the fix to the affected versions of its SecureTransport, so 10.8 through 10.8.3 will remain broken. And some versions of iOS will likely be broken.

Be sure to use OpenSSL's SSL_OP_SAFARI_ECDHE_ECDSA_BUG as a context option. See SSL_OP_SAFARI_ECDHE_ECDSA_BUG and Apple are, apparently, dicks... for details.


Related: there's another devil in the details with OpenSSL and elliptic curves. Currently, you cannot specify the fields when using OpenSSL 1.0.1e. So you might end up with a weak curve (say, P-160 with 80 bits of security) when negotiating an AES256 cipher (256 bits of security). Obviously, your attacker will go after the weak curve rather than the strong block cipher. That's why it's important to match security levels.

If you want to be mindful of the security levels, you will have to patch OpenSSL's t1_lib.c around line 1690 to ensure pref_list does not include weaker curves.

OpenSSL 1.0.2 will allow you to set elliptic curve sizes. See SSL_CTX_set1_curves.


Related: PSK is Pre-Shared Key and SRP is Secure Remote Password. I see a lot of folks remove PSK and SRP as if they were bad ciphers (e.g., a preferred-cipher list that includes "!PSK:!SRP"). In reality they are usually just unneeded because no one uses them, but they are not bad like RC4 or MD5. They actually have desirable security properties.

PSK and SRP are the most desirable choices for applications that use passwords or shared secrets because they provide mutual authentication of the client and server, and they don't suffer man-in-the-middle interception. That is, either both the client and the server know the password and channel setup succeeds, or one (or both) don't know the password or secret and channel setup fails. They don't put the username or password on the wire in plain text, so a rogue server or adversary gets nothing during an attack.

PSK and SRP have the property of "channel binding", which is an important security property. In classical RSA-based SSL/TLS, a secure channel is set up and then the user password is pushed down the wire in a disjoint step. If the client makes a mistake and sets up a channel with a rogue server, then the username and password are handed to the rogue server. In this case, the authentication mechanism is a disjoint step, "unbound" from the workings of the layer below. PSK and SRP don't suffer unbound channels because the binding is built into the protocol.

answered Sep 28 '22 by 19 revs