Security Incidents mailing list archives

Re: Incident investigation methodologies


From: Valdis.Kletnieks () vt edu
Date: Mon, 07 Jun 2004 17:41:06 -0400

On Mon, 07 Jun 2004 15:19:50 EDT, "Fiscus, Kevin" said:

If a web server, containing no critical information (also isolated from the
corporate network), gets defaced, it may make sense to simply restore from
backup. (How do you determine what happened to prevent it from happening again?)

If it's isolated from the corporate network, it's probably outward-facing, and
thus seen by customers, and suddenly it *is* critical information.  If you don't
believe me, stick a 'Yew bean 0wnz0rd' on your corporate web homepage and see if
your customers don't suddenly move you down a few pegs on the clue board...

And if it's not inward-facing, and it's not outward-facing, why are you
bothering to run it?

If it's important enough to be running, it's important enough to take the
time to at least snarf some sort of image to do forensics on later, offline,
after you do whatever else needs doing to get the box back up.  No, you may not
have the time to do an image that will hold up in court - but you can at least
do a quick search for important modified files and logs, and snap a copy of
them quickly.
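That "quick search for important modified files" and log snapshot can be a
couple of commands.  A minimal sketch, assuming a Unix box; the target tree and
evidence paths are illustrative only, and the demo fabricates its own files so
it can run anywhere:

```shell
#!/bin/sh
set -e
# TARGET stands in for the served tree the attacker likely touched;
# EVIDENCE should really be media off the compromised disk.
TARGET=$(mktemp -d)
EVIDENCE=$(mktemp -d)

# Fake a web root with one "defaced" file so the sketch is runnable anywhere.
echo '0wnz0rd' > "$TARGET/index.html"

# List anything modified in the last 2 days, then snapshot those files
# with permissions and timestamps intact so they're still useful offline.
find "$TARGET" -type f -mtime -2 > "$EVIDENCE/recently-modified.txt"
tar -cpf "$EVIDENCE/snapshot.tar" -T "$EVIDENCE/recently-modified.txt" 2>/dev/null || true

ls "$EVIDENCE"
```

Not courtroom-grade, as the post says, but enough to do real forensics offline
after the box is back up.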

There are three cases to worry about:

1) The server is mission-critical and can't afford downtime.  Well, then you
just kick the server down and do forensics at your leisure, since the hot
backup server will pick up and go.  And if you *didn't* have a hot backup
server, as you're doing the forensics, ask yourself what you'd be doing if
you'd just blown a motherboard or something like that. You would be standing
there with your collective thumbs stuck in the appropriate orifices while you
waited for a replacement part to get onsite and installed. No, saying "but
we have a full supply of replacement parts cached onsite" doesn't cut it -
you'll find THAT out the first time you have to replace 3 parts (at 20-30 mins
down per part) before you find the bad one, or you replace 4 memory modules
before you discover that you have an overvoltage on one of the power leads....

2) The server isn't mission-critical.  Then take at least a few minutes to
image the box (even if only partially).  And if it isn't worth taking the 5
minutes to do that to help ensure that it doesn't happen again, do your
organization a favor and just turn the damned thing off and LEAVE it off.

3) This user is the 423rd one to click on that "OOH! Shiny!!" attachment, and
you feel quite confident that a 'wipe and restore' will suffice (Note that for
the case of shiny attachments that leave backdoors, you need to investigate at
least enough to verify that the backdoor wasn't actually used for anything - if
it was, you have more work to do).  In that case, just make sure you re-image
the system from an image that doesn't let the user become number 497 as well...
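For case 2, the "few minutes to image the box" can be as crude as a dd of the
partition plus a hash, so you can show later that the copy wasn't altered.  A
sketch under stated assumptions: a real run would read something like /dev/sda1
onto external media, but this demo images a scratch file so it can actually run:

```shell
#!/bin/sh
set -e
# DISK stands in for the real partition (e.g. /dev/sda1), which we
# obviously can't read here; the dd + hash pattern is the same.
DISK=$(mktemp)
IMG=$(mktemp)
dd if=/dev/urandom of="$DISK" bs=1k count=64 2>/dev/null

# conv=noerror,sync keeps the copy going past read errors instead of
# aborting; the recorded hash lets you verify the image later, offline.
dd if="$DISK" of="$IMG" bs=4k conv=noerror,sync 2>/dev/null
sha256sum "$IMG" > "$IMG.sha256"
sha256sum -c "$IMG.sha256"
```

Even a partial image of the system partition beats having nothing to look at
once the box is rebuilt.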

In any case, I would hope that you're keeping enough of an audit trail so that
in the "restore and watch" case, you have at least a *clue* what happened the
first time around (inside login attack, outside login attack, abuse of a
vulnerable PHP script, etc).  Even saying "I'm not really sure who it was, but
it looked like somebody uploaded it using a developer's account from somewhere
in Zanzibar, so we're watching that more carefully for a while" is better than
just sitting there waiting to get nailed by the exact same thing again....
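That "somebody uploaded it using a developer's account from Zanzibar" clue often
falls straight out of the auth log.  A hedged sketch; the sshd lines below are
fabricated stand-ins for the restored box's real log:

```shell
#!/bin/sh
set -e
# Fabricated sshd entries standing in for the real auth log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jun  7 03:12:01 web1 sshd[411]: Accepted password for devuser from 203.0.113.77 port 4022 ssh2
Jun  7 03:14:55 web1 sshd[415]: Accepted password for devuser from 10.1.2.3 port 5110 ssh2
EOF

# Who logged in, and from where?  Anything outside your own netblocks
# is what you "watch more carefully for a while".
awk '/Accepted/ {print $9, "from", $11}' "$LOG" | sort -u
```

That one-liner won't name the attacker, but it gets you past "no clue at all"
to "watch that account and that netblock".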
