NANOG mailing list archives

Re: Yahoo! Lessons Learned


From: Vijay Gill <wrath () cs umbc edu>
Date: Thu, 10 Feb 2000 16:43:25 -0500 (EST)


On 10 Feb 2000, Sean Donelan wrote:

> On Thu, 10 February 2000, Vijay Gill wrote:
> > Of course, given that we can get netflow type packet histories, plotting
> > the src/dest pairs for a while and then if there is a _large_ change (some
> > n std dev) from the norm for some particular dst (nominally the one under
>
> I've wondered what type of statistical sampling could be used to find these
> attacks, but not require huge amounts of storage.  The theory is these are
> very large traffic flows which congest the pipe and push other traffic out
> of the way.  If you sample 1% of the traffic, and 99% of the sample is the
> same src/dest pair, something may be fishy.

How about looking at 1 in n packets, for some large value of n, perhaps set
as a percentage of line rate?  This leads into the entire issue of building
ASICs in the fast path that punt 1 in n packets out towards some collator
mechanism, with perhaps the first-order data reduction done in the router
itself before it is handed off.
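
To make that concrete, here is a rough sketch of the collator side (Python,
purely illustrative; the sample rate, interval length, threshold, and the
format of the punted packets are all assumptions, not anything a router
actually exports): count src/dest pairs per interval from the 1-in-n sample,
keep a running norm per destination, and flag a destination when its sampled
volume jumps some n standard deviations above that norm.

import collections
import math

SAMPLE_RATE = 1000       # 1 in n sampling; assumed value
INTERVAL_SECS = 60       # aggregation window; assumed value
STDDEV_THRESHOLD = 3.0   # flag at n standard deviations above the norm

# Running per-destination statistics across intervals.
history = collections.defaultdict(lambda: {"count": 0, "mean": 0.0, "m2": 0.0})

def update_norm(dst, sampled_pkts):
    """Welford's online mean/variance update for one destination."""
    h = history[dst]
    h["count"] += 1
    delta = sampled_pkts - h["mean"]
    h["mean"] += delta / h["count"]
    h["m2"] += delta * (sampled_pkts - h["mean"])

def stddev(dst):
    h = history[dst]
    return math.sqrt(h["m2"] / h["count"]) if h["count"] > 1 else 0.0

def end_of_interval(sampled_packets):
    """sampled_packets: iterable of (src, dst) pairs punted this interval."""
    per_pair = collections.Counter(sampled_packets)
    per_dst = collections.Counter(dst for _, dst in sampled_packets)
    for dst, pkts in per_dst.items():
        mu, sigma = history[dst]["mean"], stddev(dst)
        # Only alarm once we have some history and a real deviation.
        if history[dst]["count"] > 5 and sigma > 0 and pkts > mu + STDDEV_THRESHOLD * sigma:
            top_pair, top_pkts = max(
                ((p, c) for p, c in per_pair.items() if p[1] == dst),
                key=lambda x: x[1])
            print("possible attack: dst %s, ~%d pkts/interval (norm %.0f), "
                  "top src %s carries %d%% of the sample"
                  % (dst, pkts * SAMPLE_RATE, mu * SAMPLE_RATE,
                     top_pair[0], 100 * top_pkts // pkts))
        update_norm(dst, pkts)

The per-pair counting is exactly the sort of first-order data reduction that
could live in the router itself; only the per-destination aggregates would
need to leave the box.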

Once again, these things will cost money to build, take time to debug, and
the entire data collection system will be non-trivial to scale.

These are problems that can be solved given enough talent/time/money, but is
anyone willing to put forth the effort?

/vijay




