Security Incidents mailing list archives

Re: part deux, was -> RE: Digital forensics of the physical memory


From: Ben Hawkes <ben.hawkes () paradise net nz>
Date: Tue, 21 Jun 2005 00:22:52 +1200

On Sun, Jun 19, 2005 at 05:24:15AM -0700, Harlan Carvey wrote:
It is not possible to "image" a reality that is
constantly changing.  A "smear," on the other hand,
is a pejorative term
which assumes that a changing reality cannot
therefore be measured accurately.

My first thought was to rephrase the question of what
to call something that changes during the course of
the collection process.  What would we call something
like this?  As George has pointed out, it's not an
"image", and the term "smear" denotes something that
cannot be measured accurately.

Of course the next step that this leads to is
specificity of language for the community.  I won't
take this off-topic, but suffice to say that while
many professions (doctors, lawyers, plumbers,
mechanics, etc) have specific terms that mean very
specific things to them, the IT security community
(and in particular the IR/forensics community) seems
to lack this sort of thing.

There's a fine line between having specific terms for things and
creating a meaningless professional mystique. Mathematicians are
particularly guilty of the latter, conjuring up all sorts of seemingly
impenetrable words for what are often quite simple ideas.

However, in this particular case I tend to agree that the term "image"
does not accurately represent the result, and that the difference is
important. It needs to be stressed that physical memory is volatile
data, and any evidence gathered from it needs to be treated as such.
Using a specific term for this type of physical memory pseudo-image is
one way of stressing this. Open exchange of the technical issues
surrounding this particular branch of forensics is another.

As for what the actual term should be, I can only offer a suggestion.
When doing some preliminary research into a project which incidentally
involved the runtime shift of data in memory I called the affected
segment the "simulacrum". Although when I first used this term I was
observing a single process, the effect is very similar to what one
encounters when trying to mirror physical memory as a whole.

Perhaps it was the mathematician in me.

I asked the question in my previous email, that if
"smear" assumes a change that cannot be accurately
measured, how would one accurately measure the changes
that occur to kernel memory during the process of
collecting those memory contents with dd.exe?

I can't answer this question with specific technical details as they
pertain to Windows, as it is not my area of expertise. It would seem,
however, that in the general case there are three different ways by
which the process in charge of gathering the image alters the image
itself:

 1) The static areas of memory into which the executable is loaded.
 2) The dynamic areas of memory which are used by the process such as
    the stack, heap, POSIX shared memory objects, runtime allocated
    mmap chunks, etc.
 3) Kernel data structures.
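The first two categories can be illustrated with a minimal sketch. This is hypothetical, illustrative code (not from the original post), assuming a Linux-style /proc/<pid>/maps text format; `classify_mappings` and the sample entries are my own inventions. Kernel data structures (point 3) are not visible from user space at all, so they do not appear here.

```python
def classify_mappings(maps_text: str) -> dict[str, list[str]]:
    """Split a /proc/<pid>/maps-style listing into the static,
    file-backed mappings (the loaded executable and libraries) and
    the dynamic regions (heap, stack, anonymous mmap chunks)."""
    regions = {"static": [], "dynamic": []}
    for line in maps_text.splitlines():
        fields = line.split(None, 5)
        addr_range = fields[0]
        # The sixth field, when present, is the backing path.
        path = fields[5] if len(fields) == 6 else ""
        if path.startswith("/"):
            regions["static"].append(addr_range)   # file-backed mapping
        else:
            regions["dynamic"].append(addr_range)  # [heap], [stack], anonymous
    return regions

# Fabricated sample resembling the collector process's own mappings.
sample = """\
00400000-00452000 r-xp 00000000 08:02 173521 /usr/bin/dd
7f1de0000000-7f1de0021000 rw-p 00000000 00:00 0
7ffc04b80000-7ffc04ba1000 rw-p 00000000 00:00 0 [stack]
"""
print(classify_mappings(sample))
```

Every region the sketch reports is memory the collector itself has perturbed, which is precisely the footprint problem being described.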

Each of these points is implementation specific to some degree. To
accurately measure the total effect we need to have knowledge of the
specific implementation details for the host operating system concerned.
Short of knowing these implementation details the best we can do is to 
treat the "simulacrum" as a single object. An example of this would be 
running strings on the memory output. This is, I suspect, exactly what 
the analysts did in the case mentioned by George.
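Treating the dump as a single object, the strings pass mentioned above amounts to the following. This is an illustrative re-implementation of the idea behind the Unix strings(1) tool, not the analysts' actual tooling; the sample "dump" bytes are fabricated.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Pull runs of printable ASCII of at least min_len characters
    from a raw memory dump, treating the dump as one opaque blob."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fabricated "memory dump": binary noise around two readable fragments.
dump = b"\x00\x01cmd.exe /c whoami\xff\xfe\x00GET /index.html\x00\x00"
print(extract_strings(dump))
# → ['cmd.exe /c whoami', 'GET /index.html']
```

The point is that this treatment requires no knowledge of the host's memory layout, which is exactly why it is the fallback when implementation details are unavailable.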

Now, the difficulty of gathering these implementation details directly 
relates to the availability of the code and the completeness of the 
documentation. Mariusz chose to do an analysis of Linux, which from his
point of view has the distinct advantage of being open source, meaning
it is relatively easy to understand the technical details. I am not
familiar with the quality of Microsoft's documentation, so in the case
of Windows I can only assume he would have been forced to reverse engineer
the results. This would increase the difficulty of the task substantially,
but I still believe, as he said in the paper, that it would be possible.

I am aware of the mammoth amount of work such an analysis could entail,
but for all I know, someone may have already done this (to some extent or
another). If anyone can explain the current state of the research into
Windows kernel internals, I'm sure it would be appreciated.

The absence of free and open public reflection and debate on this
matter is a serious obstacle to computer forensic aspirations of
becoming a scientific discipline.

...
Many within the LEO community will not contribute, simply for that
matter...the concern seems to be that they cannot give up their 
'secrets' to the 'bad guys'.

Firstly, I agree with George's sentiments that an open forum will
ultimately benefit computer forensics. We do need to consider the point
that Harlan raised, though: such a forum would also benefit certain
attackers, and that concerns some people. I see some clear parallels
between this issue and the debates surrounding full disclosure of 
vulnerability details, but there are also some clear differences that
need to be noted.

Right now, we see two types of attackers. One type of attacker does not
consider using anti-forensic techniques as a result of either their
technical ineptitude or their self-inflated sense of anonymity. The
other type of attacker does consider the use of anti-forensics, and more
than likely attempts to hide their tracks to some degree.

I believe that the open exchange of forensic processes would not
significantly affect the second group of attackers. The basis of the
claim to withhold forensic ideas seems to be that attackers will not be
able to apply anti-forensics if they do not know forensics. I believe
this is a highly unrealistic assumption. The attackers who are concerned
about forensics do take the time to work through the simulations of
their attack to work out ways that they can minimize forensic evidence.
If they weren't doing this, we would not have the term "anti-forensics".

As for the first group, it is pointless to deny that a few would benefit
from learning the forensic processes. I think, however, that the
benefits of an open forensics community outweigh the negatives. Based
on the current technical proficiency of your average member of this
group compared to the increasing complexity of forensic techniques, I
believe the number of people caught out by better forensic analysts
would outweigh the number of people able to avoid detection as a result
of the open community. 

How does this issue differ from the full-disclosure of vulnerability 
details? Primarily it is the fact that when a bug is public, the system 
administrator can always win, should he choose to patch his system. But
we are disclosing forensic techniques, and in many cases in forensics
the attacker wins. This may sound severe, or may even sound like a reason
not to make forensic processes public, but unfortunately that's just how
forensics is, regardless of whether the techniques we use are public or
not.

The best thing we can do is to raise the bar. Raise the technical
competency of the forensic analyst so they are better equipped to find
the truth. This in turn forces the attackers to raise their level in
order to remain undetected. This is the best we can do, and the best way
we can do this is through an open forensic community.

My apologies for arguing the point that I think we all agreed upon. I 
realise that I haven't addressed exactly how to create an open
community, but I think that it's important that everybody knows the
reasons why open exchange is needed before just doing it. Perhaps this
is why attempts have failed in the past.

--
Ben Hawkes (fiver)
http://pie.sf.net/

