Bugtraq mailing list archives

Re: BugTraq: EFS Win 2000 flaw


From: Dan Kaminsky <dankamin () CISCO COM>
Date: Wed, 24 Jan 2001 21:11:27 -0800

Addendum to my thoughts on the apparent EFS design flaw, which is actually
less significant than originally announced.  Essentially, only files that
are converted FROM plaintext TO ciphertext are temped, meaning the bug only
affects files that were plaintext on the disk in the first place.  There's
still a problem--temp files aren't overwritten, not even once--but EFS
isn't at all the farce of a FS I thought it was.  (Of course, I
didn't know this as I wrote much of what's below, so if something reads
funny...that's why.)

Specific kudos to Scott Culp at Microsoft, whose response to Rickard's post
was well researched and nicely done.

1)  As quite a few people noticed, I pasted in the wrong URL for Peter
Gutmann's Secure Deletion paper.  Ironic that, for all that I tend to talk
about the dangers of intermittent failures (as opposed to the clear-cut loss
of service that tech support is generally built to verify and address),
Windows' occasional tendency to ignore a copy request would hit me.

The *correct* URL is as follows.

http://www.cs.auckland.ac.nz/~pgut001/secure_del.html

This is incredible reading, even now, five years after it was authored.
[Yes, Ben had to make a special post with the above URL, but I'm repeating
it as an exhortation to everyone:  Read It!]

2)  According to Russ Cooper (editor of NTBugTraq), Gutmann said, as of two
years ago, that increased disk densities weren't yet posing a problem for
disk data recovery.  This fits well with the presumption that, as densities
go up, redundancies and error correction codes also increase their
effectiveness.  A moderately interesting facet of memory design is that
apparently nearly every single DIMM has defects, but each chip on that DIMM
has extra blocks and integrated circuitry to detect bad memory and
transparently reroute into the extra storage areas.  This increases yields
by providing tolerance against minor imperfections, at the cost of a
slightly larger die.

More importantly to us, however, is the reassertion that logical reality has
no required relationship with physical reality.  Memory can be logically
sequential and physically random.  IP addresses can be logically grouped and
physically diverse.  Content on a hard drive can be logically erased but
physically immortal, due perhaps to a freak accident that causes a block to
be marked as bad and transparently duplicated elsewhere.  Remove from a hard
drive read head the constraints of size, mass production, writability,
non-destructivity, and even the requirement to survive shipping, and it
becomes very clear that simply because *one* physical apparatus (the
read/write head shipped with the hard drive) cannot recover a set of faded
impressions in the magnetic media, it doesn't follow that no other physical
apparatus can.
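The "logically erased but physically immortal" point can be made concrete with a toy sketch (this is a simplified model, not a real filesystem--all names here are invented for illustration):

```python
# Toy block device: a "logical" delete clears the allocation table but
# never touches the underlying media, so a raw read of the "platter"
# still recovers the plaintext.  Purely illustrative.

BLOCK_SIZE = 16

class ToyDisk:
    def __init__(self, n_blocks):
        self.media = bytearray(n_blocks * BLOCK_SIZE)  # physical layer
        self.table = {}  # filename -> block indices (logical layer)

    def write_file(self, name, data):
        blocks = []
        for i in range(0, len(data), BLOCK_SIZE):
            idx = self._free_block()
            chunk = data[i:i + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
            self.media[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE] = chunk
            blocks.append(idx)
        self.table[name] = blocks

    def delete_file(self, name):
        # "Logical" erase: forget the mapping, never overwrite the media.
        del self.table[name]

    def _free_block(self):
        used = {b for blocks in self.table.values() for b in blocks}
        return next(i for i in range(len(self.media) // BLOCK_SIZE)
                    if i not in used)

disk = ToyDisk(8)
disk.write_file("secret.txt", b"attack at dawn")
disk.delete_file("secret.txt")

# Gone from the logical view...
assert "secret.txt" not in disk.table
# ...but a raw read of the media still yields the plaintext.
assert b"attack at dawn" in bytes(disk.media)
```

Real filesystems behave the same way in spirit: unlink drops the metadata, and the data blocks sit there until something happens to reuse them.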

Interestingly enough, the variation in "physical agility" doesn't just apply
to readability; the ability to create physical impressions is also something
that varies absolutely unpredictably according to the flow of time, money,
and criticality.  Since the sanctity of physical impressions is exactly
what biometric systems attempt to authenticate against, one should realize
that a similar risk factor exists in biometric spoofing as exists in
multi-generation data recovery--more risk, since you probably don't need a
clean room and expensive sensors to spoof a $25 optical fingerprint scanner;
less risk, since hard drive data recovery doesn't require acquisition of the
secret (although it's likely that the last person to use that $25 fingerprint
scanner will have left their prints on the scanner!)

The advantage of cryptography is that, overall, the belief that there's no
efficient way to factor the product of two large primes is more "secure"
than the belief that there's no way to physically spoof a given set of
physical properties.  Done correctly, it allows us to *ignore* the
possibility that our physical data routes might get compromised.  No, we
don't lose *all* physical security constraints, but we're able to physically
isolate the value of our information from the bulk of our data.  The whole
idea of an EFS is to grant the theoretical attacker full physical access to
a hard drive and *still* maintain security-- all the attackers receive is
encrypted bulk data.

The problem, of course, is where to put the decryption key.  Having a
plaintext decryption key next to a whole pile of encrypted data is about as
smart as etching the combination to a half-ton safe right next to the wheel.
(Given the cash rush into crypto, that hasn't stopped anyone from deploying
such systems.  The system Crypto-Gram linked to at http://www.gianus.com/
comes to mind.)

So this is where crypto, for all its logical manipulations, is forced to
intersect with the physical world.  Many systems depend on human memory to
contain some secret, although there are arguments that nobody can remember
more entropy than a computer can brute force through.  Other systems use
hardware tokens to store their secrets.  As the crypto is isolated to its
own subsystem, the physical requirements of that system can be tuned to only
need to protect that limited amount of data and nothing more.

That doesn't mean they're foolproof.  Human memory is defeatable via
bribery, rubber-hose cryptanalysis, and the aforementioned size constraint.
Most tokens are defeatable via side channel attacks.  But the odds of a
device built for secure storage surviving assault are quite a bit better
than a system whose primary function is to, well, *function*.  A hard
drive's *job* is to store and allow retrieval of information in unnaturally
dense and arbitrary formations.  We should not be surprised when such a
system succeeds, even if we'd rather it fail.

What should surprise us is when a cryptosystem *built* to prevent the
relevance of an unauthorized successful read makes presumptions that the
underlying medium will fail to reveal deleted data.  That's unfortunately
what Microsoft's EFS implementation is doing--they're writing plaintext
information, trivially deleting it after encryption, and saying the file has
been protected behind a cryptographic key.
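What "not trivially deleting it" could look like is simple enough to sketch--overwrite the plaintext before unlinking (a hedged sketch, not a claim about what EFS does or should have shipped; and per Gutmann, even this only addresses the logical layer, since bad-block remapping and journaling can leave physical copies behind):

```python
# Sketch: scrub a plaintext temp file before deleting it, rather than
# relying on the filesystem's "delete" to destroy anything.
import os
import tempfile

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # one pass of random data over the plaintext
        f.flush()
        os.fsync(f.fileno())        # push the overwrite out to the device
    os.remove(path)

# Demo: the temp file's contents are overwritten before removal.
fd, path = tempfile.mkstemp()
os.write(fd, b"plaintext contents")
os.close(fd)
overwrite_and_delete(path)
assert not os.path.exists(path)
```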

3)  Timothy Miller mentioned something moderately important:  Placing a
system into hibernation has the effect of dumping all live memory to disk in
plaintext.  This obviously compromises whatever happens to be in memory,
including the decryption keys that *need* to be in memory in order for
everyday access to function.  A couple of quirks deserve mention:

First, causing a system to drop into hibernation mode is conceivably a poor
man's forensic toolkit.  Malware could, of course, detect the hibernation
process and deploy countermeasures to prevent its own discovery--code can be
written to defend against almost *anything*--but a capture mechanism
integrated with the kernel is moderately harder to subvert than userland
forensic apps that need to be loaded from remote sources.  We've already
seen at least one virus that prevents connection to antivirus websites, for
instance.

Second, simply encrypting the memory dump isn't necessarily going to help:
Where does the system put the key, presuming there's no hardware token to
query?  How does the system retrieve a password to decrypt that key without
loading up the rest of Windows?  The MS worldview isn't exactly moving
towards having a Non-Win32 window pop up asking for a password, but bringing
up a full Win32 environment arguably requires coming out of full
hibernation--meaning the system needs to be able to load up system RAM
without querying the user for a decryption code.

Finally, if arbitrary users can send a machine into temporary
hibernation (issue the command, then have a remote host send a Wake-On-LAN
magic packet), and have a pathway to do raw reads against the file
system (this doesn't necessarily require physical access!), the EFS won't
help--once the system comes back up, the attacker just needs to sort through
the hibernation data to retrieve key material.  Essentially, EFS doesn't
win you anything over straight NTFS file permissions, since the key to
decrypt the NTFS file is available in the same pile as the permissions
you're bypassing.

Mind you, a token doesn't really help in this circumstance, since most
tokens aren't used for bulk decryption.  Generally, the token will be used
to decrypt some key file into memory, and that key will be used to encrypt
and decrypt files.
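That two-tier pattern is worth spelling out, since it explains why the token doesn't save you here: the secret that matters for bulk decryption ends up in RAM anyway. A toy sketch (the XOR keystream below is a stand-in for a real cipher, and all the names are invented):

```python
# Sketch of the token pattern: the token only ever unwraps a small key
# file; the unwrapped key then lives in memory and does the bulk work.
import hashlib

def keystream_xor(key, data):
    # Toy stream cipher: XOR against a SHA-256 counter keystream.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

token_secret = b"secret held inside the hardware token"
bulk_key = b"per-user file encryption key"

# The key file on disk holds the bulk key wrapped under the token secret.
wrapped_key = keystream_xor(token_secret, bulk_key)

# At login, the token unwraps the bulk key into memory...
unwrapped = keystream_xor(token_secret, wrapped_key)
assert unwrapped == bulk_key

# ...and that in-memory key, not the token, encrypts and decrypts files.
ciphertext = keystream_xor(unwrapped, b"file contents")
assert keystream_xor(unwrapped, ciphertext) == b"file contents"
```

A hibernation dump captures `unwrapped`, so the token's physical protection is bypassed without ever touching the token.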

Actually, I'm moderately curious how EFS does key selection--on a per-file
basis?  Per block?  Is there salting?  File system crypto is moderately
difficult, due to issues like crash resistance, appending data to arbitrary
points within a file, etc.  This buglet happened due to an allowance made
for crash resistance--it'd be interesting to see whether anything else was
exposed due to specific allowances made for this functional domain.
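One common answer to the per-file question--offered as a sketch of the general technique, not a claim about EFS's actual scheme--is to derive a distinct key per file from a master key plus a random per-file salt, so that recovering one file's key reveals nothing about the others:

```python
# Sketch: per-file key derivation from a master key and a random salt.
# Function and variable names here are illustrative, not from any spec.
import hashlib
import os

def derive_file_key(master_key, salt, iterations=10_000):
    # PBKDF2-HMAC-SHA256 as a stand-in for whatever KDF a real FS uses.
    return hashlib.pbkdf2_hmac("sha256", master_key, salt, iterations)

master = os.urandom(32)
salt_a, salt_b = os.urandom(16), os.urandom(16)

key_a = derive_file_key(master, salt_a)
key_b = derive_file_key(master, salt_b)

assert key_a != key_b                            # different salt, different key
assert key_a == derive_file_key(master, salt_a)  # reproducible from the salt
```

The salt can be stored in the clear next to the file; it buys unlinkability between file keys, not secrecy.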

Yours Truly,

    Dan Kaminsky
    Cisco Systems, Inc.
    http://www.doxpara.com
