Bugtraq mailing list archives

Use of timestamps when checking for file versions


From: dleblanc () MINDSPRING COM (David LeBlanc)
Date: Mon, 15 Feb 1999 10:32:56 -0500


At 10:46 AM 2/11/99 -0800, Jim Trocki wrote:
>On Tue, 9 Feb 1999, David LeBlanc wrote:

>>We get that one right.  All the NT patch checks are based on file
>>timestamps, not service pack numbers.  We have separate checks for just
>>service pack numbers, since you need less access to get the SP level than
>>to get timestamps on system files.

>C'mon. Haven't you learned to use digital signatures (like MD5) instead
>of timestamps to identify files? A timestamp is a bunch of crap, and
>it has no relation at all to the contents of the file. You could easily
>build a database of MD5 hashes of the different DLLs which are included
>in each different service pack, and use that to identify SP levels.
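The hash-database approach suggested above can be sketched in a few lines. This is a minimal illustration, not a real tool: the `KNOWN_HASHES` table is hypothetical (a real database would be built from known-good installs of each service pack and each language version), and the empty-file entry is there only so the sketch is self-checking.

```python
import hashlib

# Hypothetical digest -> SP-level table; real entries would be
# collected from known-good DLLs in each service pack.
KNOWN_HASHES = {
    "d41d8cd98f00b204e9800998ecf8427e": "SP3",  # MD5 of an empty file
}

def md5_of_file(path, chunk_size=65536):
    """Return the hex MD5 digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def identify_sp(path):
    """Map a system file's content hash to a service pack level."""
    return KNOWN_HASHES.get(md5_of_file(path), "unknown")
```

Note that this identifies files by *content*, so it catches a replaced DLL as "unknown" where a timestamp check would not.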

A timestamp on a hotfix installed by NT (remember that NT is really a very
different animal than UNIX) will show what version of the file you have
accurately.  What it will not do is detect tampering.  When you're looking
to see what patches have been applied to an NT machine, tampering isn't
normally an issue.  Unlike UNIX systems, NT hasn't developed a large number
of altered system files which can be applied.  There isn't any inherent
reason this can't be done, and it will almost certainly occur in the
future, but right now, we just don't see them.

What we're a lot more concerned with is which of the several versions of
tcpip.sys might be installed, and what that means in terms of which DoS
attacks might work.  We're also typically concerned about more than
service pack level - there were a lot of hotfixes between SP3 and SP4, so
managing that can be difficult.

I'd agree that tampering could certainly occur, and could yield poor
results, but also consider that a very sophisticated attacker could hook
just about any OS function, filter just about any driver, and do nearly
anything they wanted.  A checksum could very easily be diverted to the
correct file - you call into the file system to open a certain file, a
file system filter hooks that request, diverts it to a different file, and
away we go.  So I don't see where the checksum is going to be completely
airtight, either.
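The timestamp-based version check described here amounts to mapping a known file modification date to a known build. A minimal sketch, with the caveat that the dates in `KNOWN_VERSIONS` are made up for illustration (real ones would come from the hotfix release data for each file, e.g. tcpip.sys):

```python
import datetime
import os

# Hypothetical mtime -> build table for one system file; real
# entries would come from the vendor's hotfix release dates.
KNOWN_VERSIONS = {
    datetime.date(1997, 5, 16): "SP3 original",
    datetime.date(1998, 1, 9): "post-SP3 hotfix",
}

def version_from_timestamp(path):
    """Identify a file's build by its modification date alone."""
    mtime = datetime.date.fromtimestamp(os.path.getmtime(path))
    return KNOWN_VERSIONS.get(mtime, "unknown build")
```

As the email points out, this answers "which version is installed?" cheaply, but says nothing about whether the file's contents have been altered.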

If you _are_ looking for tampering, then you most certainly should be
looking at checksums.  That's why people make tools that baseline the file
system, the registry, etc (another ISS product does that, so does tripwire,
and others). You usually want to do that locally, then burn the database
off onto a CD. In the places where the scanner is actually looking for
tampering (password filters are a good one), then we do look at a secure
checksum of the file.
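The baseline-then-verify pattern described above (as in tripwire and similar tools) can be sketched as follows. This is an illustration only: a real tool would use a stronger hash, cover the registry as well, and - as the email stresses - keep the database on read-only media and verify off-line.

```python
import hashlib

def _digest(path):
    """Hash one file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        h.update(f.read())
    return h.hexdigest()

def baseline(paths):
    """Record a checksum per file; burn this database to a CD."""
    return {p: _digest(p) for p in paths}

def verify(db):
    """Re-hash each baselined file and report any that changed."""
    return [p for p, digest in db.items() if _digest(p) != digest]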

The problem of building a database isn't as easy as you might think, given
that Microsoft ships a very large number of versions of any service pack -
one for every language they support.  That's a lot to worry about.  Bottom
line is that if what you're worried about is reminding the admin which
patches need to be applied, the file times do work well.  If you're worried
about tampering, then much stronger measures are called for.  Timestamps
are certainly not foolproof, and can indeed be tampered with (by an
admin-level user), but they do get the job done.  If you're worried about
the integrity of the system, then baseline the file system - and verify it
_off-line_.


David LeBlanc
dleblanc () mindspring com
