Snort mailing list archives

Re: optimizing MySQL for Snort


From: Bill.Van.Devender () Cummins com
Date: Thu, 06 Dec 2001 10:58:40 -0600


Florin,

Do you have fsync turned off on the postgres setup?  It makes a big
difference on insert speed.
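
For reference, a minimal sketch of what that looks like in postgresql.conf (the setting name is from PostgreSQL 7.1; check your version's docs). Note the trade-off: with fsync off, PostgreSQL no longer forces WAL writes to disk on commit, so a crash can lose or corrupt recent data.

```
# postgresql.conf -- trade crash safety for insert speed (sketch).
# Without fsync, commits are not forced to disk, which can greatly
# speed up sustained INSERT load, at the cost of durability on crash.
fsync = false
```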






Florin Andrei <florin () sgi com>@lists.sourceforge.net on 12/05/2001 01:34:16 PM

Sent by:  snort-users-admin () lists sourceforge net


To:   snort-users <snort-users () lists sourceforge net>
cc:

Subject:  [Snort-users] optimizing MySQL for Snort


I run Snort as a sniffer, recording all IP packets in the database.
To retrieve some useful info about those packets, I'm running something
like this:

SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport
FROM event, iphdr, tcphdr
WHERE event.cid = iphdr.cid
  AND event.cid = tcphdr.cid
  AND tcphdr.tcp_flags = '2';

(get all relevant data for all TCP SYN packets)

The problem is, my tables are pretty big (approx. 2 GB on disk), and
MySQL-3.23.46 takes forever to return from the SELECT (I let it run
overnight; by morning I still had no results, so I killed the query).
With PostgreSQL-7.1.3, the same SELECT on the same data returns in about
5 minutes, which is awesome compared to the near-infinite amount of time
required by MySQL.
The problem with PostgreSQL is that it's too slow for INSERTs: while
sniffing the same amount of traffic, MySQL runs at 50% CPU and loses no
packets, whereas PostgreSQL runs at >90% CPU and loses more than half
the packets.
Both databases run with the default configuration parameters.
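
(One thing worth checking before anything else: whether the columns being joined and filtered on are indexed at all. In the stock Snort schema the primary keys are composite, so a join on cid alone may be forcing full table scans. A hedged sketch, assuming the standard event/iphdr/tcphdr table layout; adjust names if your schema differs:

```sql
-- Sketch: index the join/filter columns so the planner can avoid
-- scanning every row of every table on each join step.
CREATE INDEX event_cid    ON event  (cid);
CREATE INDEX iphdr_cid    ON iphdr  (cid);
CREATE INDEX tcphdr_cid   ON tcphdr (cid);
CREATE INDEX tcphdr_flags ON tcphdr (tcp_flags);
```

Running EXPLAIN on the SELECT afterwards should show whether MySQL actually uses the indexes.)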

Splitting the big tables into smaller ones will not help, because I
still want to be able to run the SELECT across _all_ the data.

So I wonder if there's any way to optimize MySQL so that I can actually
do something useful with the data collected by Snort.
Maybe tweak parameters like join_buffer_size? table_cache? Does anyone
have experience with these?
I'm concerned about MySQL's performance for SELECTs on such large
tables.
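
For example, something along these lines in my.cnf (a sketch only: the values are guesses, not recommendations, and would need tuning against the RAM actually available; MySQL 3.23 uses the set-variable form shown):

```
# [mysqld] section of my.cnf -- illustrative values only.
[mysqld]
set-variable = key_buffer=256M        # cache for index blocks
set-variable = join_buffer_size=8M    # per-join buffer for unindexed joins
set-variable = table_cache=256        # number of open tables kept cached
set-variable = sort_buffer=4M         # per-thread sort buffer
```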

Until now, I collected the data with MySQL, then exported the db into
PostgreSQL and ran the SELECT there. But that's not an option, at least
not long-term.

--
Florin Andrei

"Engineering does not require science." - Linus Torvalds


_______________________________________________
Snort-users mailing list
Snort-users () lists sourceforge net
Go to this URL to change user options or unsubscribe:
https://lists.sourceforge.net/lists/listinfo/snort-users
Snort-users list archive:
http://www.geocrawler.com/redir-sf.php3?list=snort-users



