Full Disclosure mailing list archives

Re: Nessus experience


From: Jay Jacobson <jay () edgeos com>
Date: Wed, 13 Oct 2004 10:28:40 -0700 (MST)

On Wed, 13 Oct 2004, Mr. Rufus Faloofus wrote:

> This strikes me as unreasonably slow, for bulk automated testing, so
> first, I'd like to ask if these performance metrics are in line with
> others' experiences.  I'd also solicit any hints people might have
> to offer on how they optimize performance, any rules of thumb anyone
> might care to share about estimating times for Nessus runs.


A 2 GHz processor is more than enough for the job you are doing. However, more RAM might be a good idea, as it will allow you to increase the parallelism of your Nessus testing. Running nmap separately and then feeding those results into Nessus (as you are doing) is also a good idea.
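For example, one reasonable way to do the split looks like this (the flags, the 10.0.0.0/24 range, and the file names here are just placeholders; substitute your own):

  # Port scan once with nmap, saving grepable output to a file
  nmap -sS -p 1-65535 -oG scanned-hosts.gnmap 10.0.0.0/24

  # Build a Nessus target list from the hosts nmap found alive
  awk '/Status: Up/ {print $2}' scanned-hosts.gnmap > targets.txt

You can then hand targets.txt to Nessus as its target list, so only live hosts get tested and each Nessus process isn't repeating the port scan.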

You can "tune" Nessus to be very fast or very slow. It all depends on how you have your .nessusrc file tweaked. Some questions come to mind:

 * What version of Nessus are you using?
 * Are you enabling all of the Nessus plugins or just a subset?
 * What are the max_hosts and max_checks values in your .nessusrc file?
 * What is the value for optimize_test in your .nessusrc file?

It is somewhat subjective, but given the little bit of information about your environment, you should be able to set max_checks to 5-10 and max_hosts to at least 20-30 (likely more; additional RAM would let you increase max_hosts further). That will have a huge impact on scanning speed.
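As a rough starting point, the relevant lines in your .nessusrc might look something like this (the exact numbers are a guess based on what you've described, so tune them against your available RAM and bandwidth):

  max_hosts = 30
  max_checks = 8
  optimize_test = yes

max_hosts controls how many hosts Nessus tests in parallel, max_checks controls how many plugins run against each host at once, and optimize_test = yes tells Nessus to skip plugins whose prerequisites (such as a particular service being detected) were never met, which saves a lot of time when the full plugin set is enabled.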

Of course, another good place for these questions would be the Nessus mailing list. You may also want to check out Edgeos' Nessus Knowledge Base, which documents every configuration option in Nessus <http://www.edgeos.com/nessuskb/>.

--
..
..  Jay Jacobson
..  Edgeos, Inc. - 480.961.5996 - http://www.edgeos.com
..
..  Network Security Auditing and
..  Vulnerability Assessment Managed Services
..

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html

