Penetration Testing mailing list archives
Re: Testing large networks
From: Matthew Caston <mattcaston () mchsi com>
Date: Mon, 07 Mar 2005 11:38:13 -0600
Dan,

I would start with a small subset of something like the SANS Top 20 - say a "Top 5." Performance and results should be far more manageable than a full Nessus sweep, and could serve as a starting point (baseline) for prioritizing further, more in-depth testing. Moreover, I'd probably look to sort departments/BUs by some high-level measure of business criticality and/or exposure to threats, including a review of web apps (internal and external) as necessary. Simply (or not!!) build some criteria into your testing methodology to help prioritize and schedule further testing - that way your client gets data they can digest and you get to lay out a strategy for addressing the bigger picture.
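A minimal sketch of what that kind of prioritization criteria might look like in practice - the weights, scales, and business-unit names here are purely illustrative assumptions, not anything from the thread:

```python
# Hypothetical scoring sketch: rank business units for assessment order.
# Criticality and exposure are assumed 1-5 ratings; weights are arbitrary.

def priority_score(criticality, exposure, w_crit=0.6, w_exp=0.4):
    """Combine business criticality and threat exposure into one score;
    higher means assess sooner."""
    return w_crit * criticality + w_exp * exposure

units = [
    {"name": "Finance",    "criticality": 5, "exposure": 3},
    {"name": "Marketing",  "criticality": 2, "exposure": 4},
    {"name": "E-commerce", "criticality": 4, "exposure": 5},
]

# Sort descending so the most critical/exposed units are tested first.
schedule = sorted(
    units,
    key=lambda u: priority_score(u["criticality"], u["exposure"]),
    reverse=True,
)
```

Even something this crude gives the client a defensible testing order to react to, which beats an unranked scan dump.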
Most companies who are successful with this type of testing recognize that point-in-time assessments are....how to say...."pointless." Rather, they create a methodology and execute a rolling assessment, with certain triggers based on continually updated sets of results. Even if those results don't cover every asset, they can be statistically relevant (and manageable), allowing an organization to prioritize often-limited resources/budget. To wit: rather than spending $100k and one month assessing everything at once, consider breaking the assessment into more manageable, phased chunks spread over time (six months, for example), using each successive assessment to build on the results of the prior one.
At the end of the day (and as you note), what's the point of conducting an assessment and collecting an inventory of 50/100/150 thousand vulns when you have little to no ability to prioritize and execute remediation? It really comes down to persistence and a good-faith attempt at continual improvement based on actionable data. If after six months you've been able to demonstrate a dramatic shift in the ratio of high- to low-risk vulns, then you've accomplished something. Conversely, showing that you've remediated 1% of 150k issues is less compelling...from a risk management perspective. My 2 cents!!
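That ratio-shift argument is easy to make concrete. A minimal sketch (my own illustration; the counts are invented, and this is not a formal risk metric):

```python
# Track the ratio of high- to low-severity findings across assessments.
# Falling ratio = the serious stuff is being knocked out first.

def severity_ratio(findings):
    """findings: dict of severity name -> count. Returns high/low ratio."""
    low = findings.get("low", 0)
    return findings.get("high", 0) / low if low else float("inf")

baseline = {"high": 1200, "medium": 8000, "low": 4000}  # month 1 (invented)
followup = {"high": 150,  "medium": 6500, "low": 5200}  # month 6 (invented)

# How many times smaller the high/low ratio became over six months.
improvement = severity_ratio(baseline) / severity_ratio(followup)
```

With these made-up numbers the ratio drops roughly tenfold - exactly the kind of trend line that sells "continual improvement" to management better than a raw remediation count does.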
-mgc

Dan Rogers wrote:
Hi list,

In the last few months I have been asked to assess a number of fairly large networks, which have been addressed very inefficiently. Usually this consists of one or two main networks with about 1000 devices, and ten or so remote sites connected by WAN links or VPNs. It's not uncommon for the HQ to have a class B (or worse) as their internal subnet, even though there are nowhere near that many hosts.

The problem I have is that a lot of the owners of these networks don't really know what they want in terms of testing, and ask very generic questions - things like "we want to know where we are weakest" or even "we want to know what's on our network." A lot of the motivation for this testing is usually passed down from senior management who just want to feel they are secure, so they tell their IT managers to get a pen test without knowing what it means. This means IT managers often can't tell me what they actually want tested. I'm effectively given a blank sheet, and free rein to approach the testing from any angle I choose. It is also not uncommon for there to be little or no useful documentation - so I rarely have a complete set of network diagrams from which to work. These engagements mostly range from seven to twenty working days.

Usually the approach goes something like this:

1. Ask the IT manager to identify critical network infrastructure (servers, routers, wireless access points, domain controllers) and choose a representative sample for review
2. Attempt to establish the general network architecture using a network-mapping tool
3. Perform internal scanning of the network using Nmap, Nessus, or GFI LANguard
4. Look for really obvious problems, e.g. public/private SNMP community strings, default passwords, missing patches, well-known open trojan ports
5. Create a report giving fairly high-level areas of concern, with remediation advice (e.g. a patch management solution/strategy, segregating servers from workstations with firewalls, updating default passwords and using a strong password strategy)

When I conduct the tests, time is usually very tight, and therefore scanning of internal networks is quite costly time-wise (especially if there is a class A/B to scan). Following a methodology which recommends scanning in several different ways and checking TCP responses just isn't practical. Using something like Nessus can yield hundreds and hundreds of pages of results, and wading through them looking for false positives is also not practical.

So how do you lot approach testing a large network? Also, how do you decide what to report to the client on?

Cheers
Dan
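One common way to cut the scan time Dan describes is a two-stage approach: a fast ping sweep of the whole range first (e.g. `nmap -sP -oG sweep.gnmap 10.0.0.0/16`), then a targeted port/vuln scan of only the hosts that answered. A minimal sketch of the glue step, parsing Nmap's grepable output - the sample output and addresses are invented for illustration:

```python
# Extract live hosts from nmap grepable (-oG) output so a follow-up
# targeted scan only touches responsive addresses, not the whole class B.
import re

def live_hosts(grepable_output):
    """Return the IPs of hosts nmap reported as 'Up' in -oG output."""
    hosts = []
    for line in grepable_output.splitlines():
        m = re.match(r"Host: (\S+) .*Status: Up", line)
        if m:
            hosts.append(m.group(1))
    return hosts

# Invented sample of what sweep.gnmap might contain:
sample = """# Nmap 3.81 scan initiated
Host: 10.0.1.5 (dc1.example.local)\tStatus: Up
Host: 10.0.1.9 ()\tStatus: Up
# Nmap done -- 65536 IP addresses (2 hosts up)"""

targets = live_hosts(sample)
# Write targets to a file and hand the short list back to nmap, e.g.:
#   nmap -sS -p 21-25,53,80,135-139,161,443,445 -iL targets.txt
```

Scanning 65,536 addresses for live hosts and then port-scanning only the responders is usually far cheaper than port-scanning the whole range, and it also trims the Nessus target list before the noisy part of the assessment begins.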
Current thread:
- Testing large networks Dan Rogers (Mar 07)
- Re: Testing large networks Matthew Caston (Mar 07)
- RE: Testing large networks Randy Golly (Mar 07)
- Re: Testing large networks Anders Thulin (Mar 08)
- <Possible follow-ups>
- Re: Testing large networks Davi Ottenheimer (Mar 07)
- RE: Testing large networks Piskovatskov, Alexey (Mar 07)
- Re: Testing large networks Dhruv Soi (Mar 08)