
Re: SHA1 collisions proven possible


From: James DeVincentis via NANOG <nanog () nanog org>
Date: Wed, 1 Mar 2017 22:57:06 -0600

Let me add some context to the discussion.

I run threat and vulnerability management for a large financial institution, so this attack falls within our remit. We've 
had a plan in progress for several years to migrate away from SHA-1, and we've been carefully watching SHA-1 weaken as 
computing power has increased and access to large-scale computing has become standard over the last five years. This 
does nothing to change our timeline. 

The attack proves nothing we didn't already know: as computing power increases, we must change hashing mechanisms 
every few years. That is why this is no surprise to those of us in the security sphere. However, the presentation of 
this particular research follows a very troublesome trend we've been seeing: naming a vulnerability something silly but 
easily memorable by management types. 'HeartBleed', 'Shattered', 'CloudBleed', 'SomethingBleed'… This is a publicity 
stunt by Google to whip up hype, and it worked. Case in point: some of the posts in this thread completely dismiss fact 
in favor of assumption and embellishment. 

With specific regard to SSL certificates: "Are TLS/SSL certificates at risk? Any Certification Authority abiding by the 
CA/Browser Forum regulations is not allowed to issue SHA-1 certificates anymore. Furthermore, it is required that 
certificate authorities insert at least 64 bits of randomness inside the serial number field. If properly implemented 
this helps preventing a practical exploitation." (https://shattered.it/). It seems not all of the news outlets read the 
entire page before typing up a sensationalist post claiming all your data is suddenly at risk.
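
The serial-number requirement matters because this collision technique needs the attacker to predict the exact bytes 
the CA will sign. A minimal sketch of the idea (Python; my illustration, not actual CA software):

    import secrets

    # The CA/Browser Forum baseline requirements mandate at least 64 bits
    # of CSPRNG output in the certificate serial number. Because an
    # attacker cannot predict these bits, they cannot pre-compute a
    # colliding pair of to-be-signed certificate bodies.
    serial_number = secrets.randbits(64)
    print(hex(serial_number))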

Here’s why this is sensationalist. If anyone with *actual* hands-on work in the *security* sphere in *recent* years 
disagrees, I’ll be happy to discuss. 

- Hardened SHA1 exists to prevent this exact type of attack. 

- Every hash function will eventually have collisions. It is mathematically impossible to create a hash function that 
never collides: there are infinitely many possible inputs and only a finite number of outputs, so by the pigeonhole 
principle collisions must exist. What a good hash function provides is that collisions are computationally infeasible 
to find. That's it. No hash function can guarantee the absence of collisions.
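
As a toy demonstration of the pigeonhole/birthday math (Python; the 32-bit truncation is deliberately weak so the 
search finishes in seconds): finding a collision on an n-bit hash takes roughly 2^(n/2) tries, about 2^16 here, but 
about 2^80 for full 160-bit SHA-1.

    import hashlib

    def tiny_hash(data: bytes) -> bytes:
        """First 4 bytes (32 bits) of SHA-1 -- an intentionally weak hash."""
        return hashlib.sha1(data).digest()[:4]

    seen = {}  # truncated digest -> the input that produced it
    i = 0
    while True:
        msg = str(i).encode()
        d = tiny_hash(msg)
        if d in seen:  # two distinct inputs, same 32-bit digest
            print(f"collision: {seen[d]!r} and {msg!r} -> {d.hex()}")
            break
        seen[d] = msg
        i += 1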

- Google created a weak example. The difference in the documents they generated was a background color. They didn't 
even go a full RGBA difference; they went from red to blue, a change of two byte values (the R and B channels). It took 
them nine quintillion computations to generate the right spurious data to create a collision, and that was while they 
controlled both documents with only that two-byte visible difference. The spurious data inflated the PDF by at least a 
few hundred KB. Imagine the computations it would take for the more ambitious examples they give. Anyone know? No. They 
didn't dare attempt it, because they knew it isn't feasible.  
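
For scale, a quick back-of-the-envelope using Google's own published figures (my arithmetic): nine quintillion SHA-1 
computations is about 2^63, versus the roughly 2^80 a generic brute-force birthday attack on a 160-bit hash would need.

    import math

    shattered_work = 9.2e18           # ~nine quintillion SHA-1 computations
    print(math.log2(shattered_work))  # ~63.0; the paper reports 2^63.1

    generic_birthday = 2.0 ** 80      # generic collision search on 160 bits
    print(generic_birthday / shattered_work)  # ~130,000x more work

So the attack is a real cryptanalytic shortcut over generic brute force, but the absolute cost is still enormous.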

- This wasn't even an attack on a cryptographic protocol that utilizes SHA1. This was a unique-identifier / integrity 
attack. Comparing an SHA1 hash is not the correct way to verify the authenticity of a document. Comparing an SHA1 hash 
is how you verify the integrity of a document, i.e. check for corruption. Authenticity is derived from having the data 
signed by a trusted source, for example with PGP.
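
A minimal sketch of that distinction (Python; the key and data are made up for illustration). A bare hash only detects 
corruption, because anyone can recompute it; a keyed MAC, or in practice an asymmetric signature such as PGP, binds the 
data to a holder of a secret.

    import hashlib, hmac

    data = b"example document contents"

    # Integrity only: anyone can recompute this digest, so it proves
    # nothing about who produced the data -- it only detects corruption.
    digest = hashlib.sha256(data).hexdigest()

    # Authenticity (sketch): only holders of the secret key can produce
    # or verify this tag. Real document signing would use an asymmetric
    # signature (e.g. PGP) rather than a shared key.
    key = b"shared-secret-key"  # hypothetical key, for illustration only
    tag = hmac.new(key, data, hashlib.sha256).hexdigest()

    def verify(blob: bytes, tag: str) -> bool:
        expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    print(verify(data, tag))  # True only for untampered data from a key holder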

- And last but not least, the point that takes all of the bite out of the attack: Google also showed it is easily 
detectable. Is a weakness or attack on a hash function really viable if it's easily and readily detectable? No, it's 
not. (See IDS and WAFs: they filter and detect attacks against systems that may be vulnerable and prevent them by 
checking for the attack patterns.) So if I see a hash collision, I'll modify the algorithm… Wait, this sounds awfully 
familiar… Oh yeah: Hardened SHA1.
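
As a trivial illustration of how detectable this is (my sketch, not how Hardened SHA-1 actually works): both published 
SHAttered PDFs hash to one known SHA-1 digest, so even a naive blocklist flags them. Real counter-cryptanalysis, e.g. 
Marc Stevens' sha1collisiondetection library used by Git, instead recognizes the internal-state patterns of a collision 
attack while hashing.

    import hashlib

    # SHA-1 shared by both published SHAttered PDFs (see shattered.io).
    SHATTERED_SHA1 = "38762cf7f55934b34d179ae6a4c80cadccbb7f0a"

    def naive_collision_check(path: str) -> bool:
        """Flag files matching the known colliding digest.

        A blocklist only catches already-published collisions; hardened
        SHA-1 inspects the compression-function state for attack
        signatures instead, catching unpublished ones too.
        """
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest() == SHATTERED_SHA1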

With all of these reasons wrapped up, it's clear that the level of hype around this attack is the result of 
sensationalist articles and clickbait titles.

It also appears the majority of those who embrace fact abandoned this thread fairly early, once it began to devolve 
into sensationalism. I'm going to join them. 

*micdrop* *unsubscribe*


On Mar 1, 2017, at 9:49 PM, Matt Palmer <mpalmer () hezmatt org> wrote:

On Thu, Mar 02, 2017 at 03:42:12AM +0000, Nick Hilliard wrote:
James DeVincentis via NANOG wrote:
On top of that, the calculations they did were for a stupidly simple
document modification in a type of document where hiding extraneous
data is easy. This will get exponentially computationally more
expensive the more data you want to mask. It took nine quintillion
computations in order to mask a background color change in a PDF.

And again, the main counter-point is being missed. Both the good and
bad documents have to be brute forced which largely defeats the
purpose. Those numbers of computing hours are a brute force. It may
be a simplified brute force, but still a brute force.

The hype being generated is causing management at many places to cry
exactly what Google wanted, “Wolf! Wolf!”.

The Reaction state table described in
https://valerieaurora.org/hash.html appears to be entertainingly accurate.

With particular reference to the "slashdotter" column.

- Matt


