oss-sec mailing list archives

Re: Prime example of a can of worms


From: Brad Knowles <brad () shub-internet org>
Date: Mon, 19 Oct 2015 15:06:28 -0500

On Oct 18, 2015, at 11:06 PM, Kurt Seifried <kseifried () redhat com> wrote:

A small
number of fixed or standardized groups are used by millions
of servers; performing precomputation for a single 1024-bit
group would allow passive eavesdropping on 18% of popular
HTTPS sites, and a second group would allow decryption
of traffic to 66% of IPsec VPNs and 26% of SSH servers.

I think this may be a bit of a slippery slope here.

How many machines would have to be vulnerable for a given group to be considered big enough to be “weak” and therefore 
worthy of having a CVE issued?  Would that number be 1%?  5%?  10%?

At what point is it more dangerous to generate your own DH groups on systems that do not have sufficient uptime, versus 
re-using an existing DH group that might be considered “weak”?
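For reference, generating a site-local group is straightforward with OpenSSL (one stock tool for this; the output filename here is arbitrary) — the real cost being weighed above is the generation time and the entropy a freshly booted system may not yet have:

```shell
# Generate a fresh 2048-bit DH group; this can take a minute or more,
# which is exactly the cost being weighed against re-using a shared group.
openssl dhparam -out dhparams.pem 2048

# Inspect the generated prime and generator.
openssl dhparam -in dhparams.pem -text -noout
```

A precomputation attack against a shared group does nothing against a group only you use, which is the argument for paying that cost where you can.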


There was a time when 1024-bit DH groups were considered sufficiently safe, and 2048-bit was overkill.  At what point 
does 2048-bit become “weak” in the same way that 1024-bit is today?  How many years in advance are we going to build 
into the system, so that we can have people “safely” transitioned off 2048-bit DH groups and onto whatever the next new 
thing is?

I mean, NIST is having a hard enough time getting people to stop using MD5, much less SHA-1.  And if SHA-1 falls this 
year, how long before SHA-2 falls?

--
Brad Knowles <brad () shub-internet org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>
