Snort mailing list archives

Re: What am I Protecting Against?


From: Nicholas Bachmann <nbachmann () davison k12 mi us>
Date: Mon, 02 Jun 2003 21:21:53 -0400

Roy S. Rapoport wrote:

> Sorry, couldn't come up with something wittier.

> Now that I've got ACID running, I'm attempting to make sure I understand
> what alerts I'm seeing and why I'm seeing them.  Obvious, ain't it?
That's the biggest challenge in IDS, so you're right, it's not always obvious.

> My goal is to get to the point that I log all things reasonably
> considered intrusions or recon, but to only alert on things that are
> actually threats -- in other words, I don't want to know at 2am that
> someone's trying to compromise my MS SQL Server, since it's running on
> UNIX and isn't MS SQL. Oh, and it's not available to the net :).
There are two schools of thought on this:
1) I don't care; my system isn't vulnerable. (What you're saying.)
2) My system isn't vulnerable, but why is somebody trying to exploit it?
You see, I'd rather know somebody is trying Code Red against my Apache server or the chunked-encoding exploit against my IIS server before they get it straight, you know what I mean? (Hopefully everything is patched, though! :-) Knowing somebody is casing the establishment is useful for putting you on heightened alert... if you know it's happening, you'll be more on the lookout for patterns you might otherwise dismiss as coincidences. You do, however, still have to separate the alerts that are truly false positives for your environment (i.e., nobody is even trying to probe your ports) from the ones that are "false" only in the sense that you're not vulnerable.

Both methods have advantages and disadvantages. #2 is more time-consuming and leads to information overload. However, if you're well trained and pragmatic, you'll stand a much better chance of catching the bad guy before he makes Swiss cheese of your network.
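If you go with school #1 for a given rule, you don't have to delete it outright. If your Snort is new enough (2.0, I believe), threshold.conf lets you suppress a rule for just the hosts that aren't vulnerable -- here's a sketch, with a made-up SID and address:

```
# threshold.conf (untested sketch): silence one SID for one destination
# host, while leaving the rule active for the rest of the network.
# gen_id 1 is the standard rule engine; the sig_id and ip below are
# placeholders -- substitute the rule and host you actually care about.
suppress gen_id 1, sig_id 1234, track by_dst, ip 192.168.1.10
```

You can always comment the line out later if you decide you want the full picture after all.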

> So I'm trying to figure out what some rules are actually trying to
> protect me against; sometimes, there are references to actual docs that
> make this obvious; sometimes, the rule documentation covers it.
> However, some rules are still undocumented.  So for example, I give you
> SID 1852:
> alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-MISC robots.txt access"; flow:to_server,established; uricontent:"/robots.txt"; nocase; reference:nessus,10302; classtype:web-application-activity; sid:1852; rev:3;)
> As I see it, this alerts you of any attempts by anyone to access
> /robots.txt on your HTTP server.

> So hey, maybe I'm an idiot, but why? Trying to get /robots.txt is a
> simple part of any search engine that spiders your site.  _I_ don't see
> it as a security issue at all.  Am I missing something?
These are alerts that just need to be pruned or used as part of a bigger strategy. Say you find a suspicious IP that seems to be scanning you and probing for unpatched vulnerabilities. You can use robots.txt access as corroboration: they're port scanning, and they're looking for the files you don't want indexed (like private pages, good hacker food). The idea of this rule is not to show you a major security event, but to give you a better picture of what's happening.
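And if you'd rather keep the middle ground you described -- log robots.txt fetches but never page on them -- one untested sketch is to copy the rule into your local rules file with the action changed from alert to log (the SID below is a placeholder in the >= 1,000,000 range conventionally reserved for local rules):

```
# local.rules (sketch): same detection as stock SID 1852, but the "log"
# action records the packet to your logging output without raising an
# alert. Disable the stock rule so you don't get both.
log tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-MISC robots.txt access"; flow:to_server,established; uricontent:"/robots.txt"; nocase; reference:nessus,10302; classtype:web-application-activity; sid:1000852; rev:1;)
```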

> And, more generally, is there a way to find out, essentially, what the
> rule writer was thinking when they came up with the rule?
Being cynical and paranoid.

--
        Regards,
        Nick

        Nicholas Bachmann, SSCP
        Technology Department
        Davison Community Schools

