Nmap Development mailing list archives

Re: Ndiff core dumps on large scans


From: David Fifield <david () bamsoftware com>
Date: Thu, 15 Oct 2009 16:03:18 -0600

On Tue, Oct 13, 2009 at 03:44:17PM +0200, Clausen, Martin (DK - Copenhagen) wrote:
> Running Ndiff on large scans (8192 IP addresses) makes it core dump,
> probably because it cannot allocate more memory. My Linux box has 1 GB
> of RAM.

Martin sent me the files, and it turns out there was a problem in the
XML. There were two errors, in two different files:

xml.sax._exceptions.SAXParseException: scan-1.xml:35426:95: not well-formed (invalid token)
xml.sax._exceptions.SAXParseException: scan-2.xml:16735:18: not well-formed (invalid token)
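
Those messages come straight from Python's xml.sax parser, which is
where the exceptions above originate. A minimal sketch (not Ndiff
itself) that reports a well-formedness error in the same
file:line:column format:

import sys
import xml.sax

try:
    # A do-nothing ContentHandler; we only care about parse errors.
    xml.sax.parse(sys.argv[1], xml.sax.ContentHandler())
except xml.sax.SAXParseException as e:
    print("%s:%d:%d: %s" % (e.getSystemId(), e.getLineNumber(),
                            e.getColumnNumber(), e.getMessage()))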

The lines referenced by those exceptions are

<osmatch name="Linux 2.6.18 (ClarkConnect 4.3 Enterprise Edition)" accuracy="90" line="18415"/>^N<osmatch name="Linux 
2.6.18 - 2.6.27" accuracy="90" line="18689"/>
<times srtt="2354"$rttvar="822" to="100000" />

The error in the first line is an illegal character (^N stands for a
0x0E byte, which is not a legal character in XML). In the second line
it's a '$' where there should be a space. After correcting these two
errors I was able to diff the files as expected.
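
For anyone who hits the same thing, here is a hypothetical one-off
filter (not part of Nmap or Ndiff) that strips the control bytes XML
forbids, which would have handled the first error. The '$' is a legal
byte, so only an actual parse catches that kind of mistake.

import re
import sys

# XML 1.0 forbids all C0 control characters except tab, LF, and CR.
illegal = re.compile(b'[\x00-\x08\x0b\x0c\x0e-\x1f]')

with open(sys.argv[1], 'rb') as src, open(sys.argv[2], 'wb') as dst:
    dst.write(illegal.sub(b'', src.read()))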

I can't figure out what could cause Nmap to emit this slightly broken
XML. I have seen broken XML with normal output mixed into it before,
but I think that was the result of a failing filesystem, because the
file contained dates from a previous scan. I also remember getting a
Zenmap crash report that was caused by binary garbage (more than one
byte) in an XML file, but I can't find it now. Does anyone have any
ideas?

David Fifield

_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org

