Firewall Wizards mailing list archives

Re: Recording slow scans


From: "Stephen P. Berry" <spb () incyte com>
Date: Wed, 07 Oct 1998 18:33:36 -0700


>> If you want to test the datastream against n filters, you've got
>> to run your data through tcpdump n times[2].

> Well, obviously, that's a design flaw with using a piece of
> filtering software that's designed to only evaluate a single
> condition efficiently! My experience is that tcpdump is great
> for limited stuff but then it hits a wall pretty fast, design-wise.

No arguments there.  The main advantage I see to building an IDS
around something like tcpdump(8) is that it's fairly ubiquitous.  So
the individual machines which are functioning as sensors, analysis
engines, u.s.w. can be pretty much anything, mod the crunch power
required.  On the sensors, this tends to be pretty close to nil.
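
For what it's worth, the n-passes problem can be worked around without
leaving the libpcap world: compile each expression once, then run every
packet past all of the compiled filters in a single pass.  A rough
sketch, assuming a reasonably recent libpcap (it leans on
pcap_offline_filter() and PCAP_NETMASK_UNKNOWN), and with placeholder
filter expressions:

    /* Sketch: test one capture file against several filters in a single
     * pass, instead of running tcpdump once per expression.  The three
     * expressions here are just placeholders.
     */
    #include <pcap.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        const char *expr[] = { "tcp[13] & 2 != 0",   /* SYNs */
                               "icmp",               /* ICMP */
                               "udp port 53" };      /* DNS  */
        struct bpf_program prog[3];
        long count[3] = { 0, 0, 0 };
        struct pcap_pkthdr *hdr;
        const u_char *pkt;
        pcap_t *p;
        int i;

        if (argc != 2) {
            fprintf(stderr, "usage: %s dumpfile\n", argv[0]);
            return 1;
        }
        if ((p = pcap_open_offline(argv[1], errbuf)) == NULL) {
            fprintf(stderr, "%s\n", errbuf);
            return 1;
        }
        for (i = 0; i < 3; i++)
            if (pcap_compile(p, &prog[i], expr[i], 1,
                             PCAP_NETMASK_UNKNOWN) < 0) {
                fprintf(stderr, "%s: %s\n", expr[i], pcap_geterr(p));
                return 1;
            }

        /* One pass over the data; every filter sees every packet. */
        while (pcap_next_ex(p, &hdr, &pkt) == 1)
            for (i = 0; i < 3; i++)
                if (pcap_offline_filter(&prog[i], hdr, pkt))
                    count[i]++;

        for (i = 0; i < 3; i++)
            printf("%-20s %ld\n", expr[i], count[i]);
        pcap_close(p);
        return 0;
    }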



>> What I want to do is twiddle libpcap [blah blah]

> By the time you've done that you'll have wound up writing your
> own NFR, or Bro, or argus or NNstat.

You say that as if it were a bad thing.  But at any rate I don't think
that's entirely accurate.  I'm more interested in having a set of
tools that can be incorporated into many different applications
than in building the applications themselves.  In general I'd rather have
a set of svelte routines for twiddling with packets, which could
be used in constructing an IDS, than have a ready-mix IDS itself.
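
To make that a bit more concrete, here's the flavour of routine I have
in mind: one small, reusable decode step that an IDS (or anything else)
could call, rather than a whole application.  Just a sketch; the struct
and function names are invented for the example, and it only knows
about IPv4/TCP over ethernet:

    /* Sketch of one such routine: pull the IPv4/TCP 4-tuple out of a raw
     * ethernet frame and hand it to whoever asked.
     */
    #include <string.h>
    #include <net/ethernet.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    struct flow4 {
        struct in_addr src, dst;
        unsigned short sport, dport;       /* host byte order */
    };

    /* Returns 1 and fills in *f if the frame is IPv4/TCP, 0 otherwise. */
    int extract_flow(const unsigned char *pkt, unsigned int caplen,
                     struct flow4 *f)
    {
        const struct ip *iph;
        unsigned int ihl, off;

        if (caplen < sizeof(struct ether_header) + sizeof(struct ip))
            return 0;
        if (ntohs(((const struct ether_header *)pkt)->ether_type)
            != ETHERTYPE_IP)
            return 0;

        iph = (const struct ip *)(pkt + sizeof(struct ether_header));
        if (iph->ip_p != IPPROTO_TCP)
            return 0;

        ihl = iph->ip_hl * 4;
        off = sizeof(struct ether_header) + ihl;
        if (ihl < sizeof(struct ip) || caplen < off + 4)
            return 0;

        f->src = iph->ip_src;
        f->dst = iph->ip_dst;
        memcpy(&f->sport, pkt + off, 2);        /* TCP source port      */
        memcpy(&f->dport, pkt + off + 2, 2);    /* TCP destination port */
        f->sport = ntohs(f->sport);
        f->dport = ntohs(f->dport);
        return 1;
    }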

That being said, I think I'd ideally like to see more of the smarts
for the first-order datamunging taking place on the sensor itself,
and below the application level.

In most of the IDSen I've seen, what makes a sensor a sensor is an
application.  It's sitting on top of an OS which may or may not be
doing things other than running the IDS-related application(s).  It collects
data and periodically schlepps it (or allows it to be schlepped)
to an analysis machine.  

I think I'd rather see a sensor which could be distributed as a set
of diffs against your favourite free UNIX source tree---that is,
have the sensor smarts in the kernel.  This would allow you to
grab the interesting bits of information from the datastream very
close to the raw device---instead of waiting for the entire datastream
to percolate up from the IP stack to luserland, to be sliced, diced
and julienned there.
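
Short of carrying actual kernel diffs around, the nearest off-the-shelf
approximation I can point at is a classic BPF socket filter, which at
least gets the first cut done in the kernel before anything crosses
into luserland.  A Linux-flavoured sketch (SO_ATTACH_FILTER, needs
root); the filter program is hand-rolled and deliberately dumb:

    /* Sketch: attach a classic BPF program to a packet socket so only
     * IPv4/TCP frames ever get copied up to userland.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>

    #ifndef SO_ATTACH_FILTER
    #define SO_ATTACH_FILTER 26        /* from asm-generic/socket.h */
    #endif

    int main(void)
    {
        /* ldh [12]; jne #0x800 -> drop; ldb [23]; jne #6 -> drop;
         * ret #0xffff (accept); ret #0 (drop) */
        struct sock_filter insns[] = {
            { 0x28, 0, 0, 12     },
            { 0x15, 0, 3, 0x0800 },
            { 0x30, 0, 0, 23     },
            { 0x15, 0, 1, 6      },
            { 0x06, 0, 0, 0xffff },
            { 0x06, 0, 0, 0      },
        };
        struct sock_fprog prog = { sizeof(insns) / sizeof(insns[0]), insns };
        unsigned char buf[2048];
        int fd;

        if ((fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) < 0) {
            perror("socket");
            return 1;
        }
        if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("SO_ATTACH_FILTER");
            return 1;
        }

        /* Everything read() hands back has already passed the in-kernel
         * filter; the rest never crossed the kernel/user boundary. */
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0)
                break;
            printf("got %zd bytes of filtered traffic\n", n);
        }
        close(fd);
        return 0;
    }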



>> As a result,
>> analysis generally covers everything from yesterday (or whenever
>> data was last imported into the database) and back as far as
>> storage will allow.  So it's slow.

> Right. What you really need to be able to do is preload the
> kind of analysis you want to do, so as much of it as possible
> is done in realtime as the data comes in.  I haven't worked with
> databases for a long time, but what it amounts to is pushing the
> first stage of query optimization into the data gathering
> loop -- then if the results come true, you run the rest of the
> query.

Yes and no.  Certainly having a heuristic for evaluating how
interesting a particular bit of traffic is while it is still hot off
the wire is a good thing.  But that's fundamentally just incident
response, and I'm also interested in long-term trend analysis and
suchlike---and I don't know of any way of getting away from looking at
all the data while doing that.
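
To borrow the example in the subject line: the realtime half of that
might be nothing fancier than a cheap per-source summary maintained in
the collection loop, squawking when a source has touched too many
distinct ports, while the full stream still gets written out for the
long-term analysis.  A toy sketch, with made-up thresholds and a
deliberately naive table:

    /* Toy sketch: a cheap per-source summary maintained in the collection
     * loop, flagging sources that touch a lot of distinct ports.  Table
     * size, threshold, and the recycle-on-collision policy are all
     * placeholders.
     */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define NSLOTS          1024    /* crude hash table of sources     */
    #define PORT_THRESHOLD  64      /* distinct ports before we squawk */

    struct src_summary {
        unsigned int  addr;            /* source address, network order */
        unsigned char ports[8192];     /* one bit per destination port  */
        int           nports;          /* distinct ports seen so far    */
    };

    static struct src_summary table[NSLOTS];

    /* Called once per packet from the capture loop. */
    void note_packet(unsigned int src, unsigned short dport)
    {
        struct src_summary *s = &table[src % NSLOTS];

        if (s->addr != src) {          /* empty slot or collision: recycle */
            memset(s, 0, sizeof(*s));
            s->addr = src;
        }
        if (!(s->ports[dport / 8] & (1 << (dport % 8)))) {
            s->ports[dport / 8] |= (unsigned char)(1 << (dport % 8));
            if (++s->nports == PORT_THRESHOLD) {
                struct in_addr a;
                a.s_addr = src;
                printf("possible slow scan from %s: %d distinct ports\n",
                       inet_ntoa(a), s->nports);
            }
        }
    }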


Speaking of trend analysis, I saw a snippet in ;login: a while back about
some research being done with inductive learning algorithms as a security 
tool.  The specific tactic discussed was using ripper to build heuristics 
for identifying whether a particular use of a service or application was 
legit or an exploit attempt, based on traces of system calls.  Has anyone
tried to apply similar techniques to network intrusion detection?
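
As I understand the approach, the trick is to chop the system call
trace into short fixed-length windows and let the learner decide which
windows look like legitimate use.  A toy illustration of just the
windowing step (the trace and window length are invented):

    /* Toy illustration of the windowing step: turn a syscall trace into
     * fixed-length records that a rule learner (ripper, in the article)
     * could label as legit or not.
     */
    #include <stdio.h>

    #define WINDOW 4

    int main(void)
    {
        /* A made-up trace of system call numbers (e.g. out of strace). */
        int trace[] = { 5, 3, 3, 4, 5, 3, 3, 4, 6, 105, 104, 106, 1 };
        int n = sizeof(trace) / sizeof(trace[0]);
        int i, j;

        /* Every sliding window becomes one training record. */
        for (i = 0; i + WINDOW <= n; i++) {
            for (j = 0; j < WINDOW; j++)
                printf("%d%c", trace[i + j],
                       j == WINDOW - 1 ? '\n' : ',');
        }
        return 0;
    }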









-Steve

