Bugtraq mailing list archives

Re: Shred 1.0 Bug Report


From: Dan Kaminsky <dankamin () CISCO COM>
Date: Wed, 11 Oct 2000 16:16:31 -0700

> I therefore advise discontinuation of the use of the "shred" package. I
> have no plans to bugfix or update it, since Tom Vier's "wipe" package
> accomplishes the same job, and in a more thorough fashion.

Wipe is impressive, though srm and overwrite are generally what I've been
using (srm's recursive functionality is clean and simple).

Oddly enough, I've been questioning the value of attempting to shred disk
contents past what dedicated hardware can read.  In my mind, it may be more
important to *by default* cause all files, or all files mounted on a given
"secure partition", to be overwritten with just a single pass, to prevent all
software-based attacks from succeeding.  Dedicated hardware implies
physical access to the hard drive--if your threat scenario grants that,
there's a lot more damage an attacker can do to your systems and your
infrastructure.  Contrast that with the threat of root being remotely
breached, and your disks being scanned for files you thought were deleted.
As distasteful as such a situation may be, it's just *far* more likely than
a physical break-in.

It's reasonably arguable how many passes of what kind of data (pseudorandom
vs. patterned) can actually protect against a physical analysis--Gutmann's
paper aside, the few companies and TLAs that do this sort of thing aren't
particularly open with their methods or capabilities.  However, there's no
question that simply overwriting the data once, as good operating systems
already do when formerly used system memory is remapped to a new process,
will protect against all software-based attacks--hard drives simply lack
the firmware commands to return deep-scan information about the physical
surface.
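A single-pass wipe along those lines is trivial to sketch.  The following is a hypothetical illustration--not shred, wipe, srm, or overwrite--and note the comment about in-place semantics, since that's exactly the assumption whose failure broke shred:

```python
import os

def single_pass_wipe(path, chunk=64 * 1024):
    """Overwrite a file's contents once with pseudorandom data, then unlink.

    One pass defeats any purely software-based recovery: the drive's
    firmware exposes no command to read residual traces of old data.

    Assumes the filesystem overwrites blocks in place when an existing
    file is written to -- verify this on your filesystem before trusting
    it, which is precisely the lesson of the shred bug.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # force the overwrite out to the platters
    os.remove(path)
```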

Often, such code hasn't been deployed because it slows down the system
considerably when used in a synchronous manner.  With synchronous
single-pass deletion, if you delete a one gigabyte file, you need to wait
for a gigabyte of pseudorandom data to be written to the hard drive before
you can reuse that space--a three to five minute delay tacked on to
whatever the file system already required to delete that file.  That's
easily enough to cause secure deletion to be disabled, especially because
of all the non-obvious cases: say a one gigabyte file is opened and
replaced with a one megabyte file.  Synchronous code has to insert a huge
delay somewhere in that process to clear out the unused space.  A user who
just wants to save something is almost guaranteed to assume the machine has
crashed and reboot/kill -9.
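The delay estimate is consistent with sustained sequential write rates plausible for a commodity drive of the era (the 3-6 MB/s figures below are assumptions, not measurements):

```python
GIG = 1024 ** 3

# Assumed sustained write rates for a circa-2000 IDE drive, in MB/s.
# One gigabyte of pseudorandom data at these rates lands squarely in
# the "several minutes" range quoted above.
for mb_per_s in (3, 4, 6):
    seconds = GIG / (mb_per_s * 1024 ** 2)
    print(f"{mb_per_s} MB/s -> {seconds / 60:.1f} minutes per gigabyte")
```

At 3 MB/s that's about 5.7 minutes; at 6 MB/s, about 2.8.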

This most likely leaves possibly security-critical garbage data sitting on
the hard drive--the synchronous wipe was almost certainly disabled before
it could finish.  Never let it be said that user interfaces and security
are not intrinsically linked to one another.

Since synchronous wiping of everything is infeasible, there are really two
solutions.  One: explicitly use userspace secure deletion utilities to wipe
files we know are security critical.  This is where apps like shred, wipe,
srm, and overwrite come into play.  The other is a "lazy overwriting"
asynchronous daemon, which sacrifices immediate security and code
simplicity for the ability to know that, within a certain amount of time
after space is de-allocated, the former contents of that space will be
overwritten and no longer recoverable.
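The shape of such a daemon is easy to sketch.  Everything below--the queue, the worker, the function names--is a hypothetical design, not a description of any existing tool:

```python
import os
import queue
import threading

wipe_queue: "queue.Queue[str]" = queue.Queue()

def lazy_wipe_worker():
    """Background thread: pull doomed files off the queue and overwrite
    them with a single pass, so the caller never blocks on the wipe."""
    while True:
        path = wipe_queue.get()
        if path is None:          # sentinel: shut down
            break
        try:
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                f.write(os.urandom(size))   # one pass, per the argument above
                f.flush()
                os.fsync(f.fileno())
            os.remove(path)
        except OSError:
            pass                  # file vanished or unreadable; skip it
        wipe_queue.task_done()

def secure_unlink(path):
    """Non-blocking 'delete': hand the file to the daemon and return."""
    wipe_queue.put(path)

threading.Thread(target=lazy_wipe_worker, daemon=True).start()
```

A real implementation would have to hook block de-allocation inside the filesystem to catch clobbers as well; a userspace queue like this only covers explicit deletes.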

By turning secure deletion into a non-blocking process, it might actually
be feasible to have it more widely deployed.  One of the problems with
userspace tools is that you can't use them for *every* transaction (even if
you replace the rm executable with srm, it doesn't help with simple
clobbers a la cat > file), and you have to know in advance which specific
files are security sensitive.  If anything changes on your system, or if
any apps do their own file management, you've gotta modify your system
configuration, sometimes to the point of changing source code.  That's in
addition to the speed penalty.  Asynchronous methods can run whenever the
system isn't heavily loaded; synchronous processes hold everything else up.

I'm not aware of any projects to create an asynchronous secure deletion
daemon, but believe me, I'd use one if I could find one.

> Jeff, I do have to question whether it was appropriate to notify
> Bugtraq, since "shred" was never, to my knowledge, a part of any Linux
> distribution.

This is *absolutely* irrelevant.  A security-critical package failed to
perform its single security-critical function.  Whether or not this was
because of a change in file handling semantics is beside the point: files
that people thought were wiped from their drives were almost certainly
*not*.

At minimum, it's a great example of the law of unintended consequences and
the need to verify the functionality of one's code.  "It looks and sounds
like it's overwriting the file, so it must be!" was how shred was judged
functional.  Before TCT (or grep /dev/hda, but I'm talking convenience
here), nobody had the tools to see whether the deletion actually worked.

It didn't, and the report went out.  If a security list didn't post that
security software was insecure, it wouldn't be much of a security list!

Yours Truly,

    Dan Kaminsky
    Cisco Systems, Advanced Network Services
    http://www.doxpara.com

