Nmap Development mailing list archives

Re: NSE HTTP Pipeline implementation


From: Joao Correa <joao () livewire com br>
Date: Mon, 10 Aug 2009 01:37:06 -0300

Hi guys,

I've run a lot of local tests and everything seems to be okay with
the new versions of http.lua and sql-injection.nse, both supporting
pipelining. Still, since a lot of scripts depend on this feature, I
think it would be very good to do some more extensive testing.

I'm sending a patch with the new version of http.lua and the script
pipeline-sql-injection.nse, which does almost the same things as
sql-injection.nse, except for the pipeline support. I'm also sending
pipeline-http-enum.nse, which is a version of http-enum.nse with
pipeline support.

Note that the script is expected to show no timing gain in some
situations, since we can only use this feature when pipelining is
supported by the webserver. Still, testing not only the mentioned
scripts but all the others that use http.lua would be of great help.

Also important are the changes in http.lua, which was heavily
reworked. I've tried to give the library better modularization;
before, single functions were doing a lot of different tasks. With
the new modularization, a lot of code could be reused.
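
To give a rough idea of the kind of split I mean (the names below are
made up for illustration, not the actual functions in the patch):

-- Illustrative sketch only: building a request is separated from
-- sending it, so the same builder serves both single requests and
-- pipelined batches.
local function build_request(host, path)
  return "GET " .. path .. " HTTP/1.1\r\n"
      .. "Host: " .. host .. "\r\n"
      .. "Connection: keep-alive\r\n\r\n"
end

-- A pipelined batch is then just a concatenation of single requests:
local function build_pipeline(host, paths)
  local reqs = {}
  for _, path in ipairs(paths) do
    reqs[#reqs + 1] = build_request(host, path)
  end
  return table.concat(reqs)
end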

Thanks,
Joao

On Thu, Aug 6, 2009 at 2:12 AM, Joao Correa<joao () livewire com br> wrote:
On Wed, Aug 5, 2009 at 9:55 PM, <doug () hcsw org> wrote:
On Tue, Aug 04, 2009 at 03:50:35AM -0300 or thereabouts, Joao Correa wrote:
The total timing
decreased from 11804 secs to 2090 secs when pipelining 10 requests
into a single connection and to only 542 secs when pipelining 40 requests.

Very impressive, congratulations. This could be an excellent performance
optimization for certain NSE scripts and I am looking forward to seeing how
aggressively pipelined NSE scripts interact with different HTTP implementations.

Thanks!

On servers that don't support pipelined requests, the response to the
first request is given and the connection is closed by the server.

This is what implementations *should* do but not necessarily what they
*do* do. If this were guaranteed then web browsers could pipeline
requests with impunity and know they would always find out right away
whether they need to re-issue their requests.

Yes, I'm aware of that. Also, I've seen some servers whose response
headers carry no information like the Keep-Alive max requests
parameter. I'm trying to make the function as robust as possible, but
it will only be reliable enough after a lot of testing around the
internet.
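
For reference, when the header is present it usually looks like
"Keep-Alive: timeout=15, max=100", and the max parameter can be
pulled out with a simple Lua pattern (a sketch, not the patch code):

-- Sketch: extract the "max" parameter from a Keep-Alive header value
-- such as "timeout=15, max=100". Returns nil when the parameter is
-- absent, which is exactly the case where a default pipeline depth
-- has to be guessed.
local function keepalive_max(header_value)
  if not header_value then return nil end
  local max = header_value:match("[Mm]ax%s*=%s*(%d+)")
  return max and tonumber(max)
end

assert(keepalive_max("timeout=15, max=100") == 100)
assert(keepalive_max("timeout=15") == nil)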


The problem is that not all HTTP implementations properly buffer
keep-alive connections. Consider a client that pipelines 2 requests
by sending the following:

"GET /a HTTP/1.1\r\nHost: blah.com\r\n\r\nGET /b HTTP/1.1\r\nHost: blah.com\r\n\r\n"

A correct HTTP/1.1 implementation can do one of two things:

* Send the results of the first request with a Connection: close header,
 followed by closing the write direction of the socket [1].
* Send two HTTP responses, optionally pipelined.

But a broken HTTP server might read in the first string assuming the
string it reads in will have only 1 HTTP request, process that request,
and try to read from the socket again. In this case, the client that
pipelined the first request will "hang" waiting for the second response
that will never come. When the results of HTTP requests span read()
calls, such servers are even more broken.
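
To make that failure concrete, here is a standalone LuaSocket sketch
(not NSE code) that pipelines the two requests above and counts the
responses; without the timeout, the read loop would block forever
against such a broken server:

local socket = require("socket")

local sock = assert(socket.tcp())
sock:settimeout(5)  -- without this, a broken server hangs us forever
assert(sock:connect("blah.com", 80))
assert(sock:send("GET /a HTTP/1.1\r\nHost: blah.com\r\n\r\n"
              .. "GET /b HTTP/1.1\r\nHost: blah.com\r\n\r\n"))

-- Read until the server closes the connection or stops talking, then
-- count how many status lines actually came back.
local data = {}
while true do
  local chunk, err, partial = sock:receive(8192)
  data[#data + 1] = chunk or partial
  if err then break end  -- "closed" (server done) or "timeout" (hang averted)
end
sock:close()

-- Rough count of status lines; good enough for a demonstration.
local _, responses = table.concat(data):gsub("HTTP/1%.[01] %d%d%d", "")
print(("got %d response(s) to 2 pipelined requests"):format(responses))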

Actually I'm dealing with that situation by dynamically adjusting the
number of simultaneous requests based on the responses received. If we
know the maximum number of simultaneous keep-alive requests, then that
value is used. If we notice that the number of responses is less than
expected, the script decreases the limit of simultaneously pipelined
requests. Also, if the field is not provided, the first attempt is
made with 40 pipelined requests (an arbitrary value that can be
changed), and if those 40 responses are not received, the value is
decreased.
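
In rough terms the adjustment amounts to something like this (a sketch
of the idea, not the actual patch code; the names are made up):

-- DEFAULT_LIMIT stands for the arbitrary initial value of 40;
-- 'keepalive_max' comes from the server's Keep-Alive header (nil when
-- absent); 'sent' and 'received' are counts from the previous batch.
local DEFAULT_LIMIT = 40

local function next_pipeline_limit(keepalive_max, sent, received, current)
  current = current or keepalive_max or DEFAULT_LIMIT
  if sent and received and received < sent then
    -- The server answered fewer requests than we pipelined: back off
    -- to the number it actually handled (but never below 1).
    current = math.max(1, received)
  end
  return current
end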

I believe this implementation makes the code a little more robust,
and in my testing around the internet so far I haven't had much
trouble. Still, I agree that some exotic implementations will always
show up.

Most important is that if we send 40 requests and only one response is
received, the pipeline script handles it exactly as if all the
requests had been done serially.
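
Concretely, that worst case just degenerates into a retry loop: the
unanswered requests go back into the queue and are re-sent on a fresh
connection at the reduced depth. A sketch (the send_batch helper is
hypothetical; it would send up to 'batch' pipelined requests starting
at index 'i' and return whatever responses arrive before the
connection closes):

local function run_pipeline(requests, limit, send_batch)
  local responses, i = {}, 1
  while i <= #requests do
    local batch = math.min(limit, #requests - i + 1)
    local got = send_batch(requests, i, batch)
    if #got == 0 and batch == 1 then
      responses[i], i = false, i + 1  -- a lone request failed outright
    else
      for k, r in ipairs(got) do responses[i + k - 1] = r end
      i = i + #got
      -- Shrink the depth if the server fell short of the batch; with
      -- a depth of 1 this is ordinary serial HTTP.
      if #got < batch then limit = math.max(1, #got) end
    end
  end
  return responses
end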

Almost all HTTP servers properly maintain a buffer when handling
keep-alive connections, but apparently this still isn't enough to
justify enabling pipelining in general-purpose http clients. Of
course NSE scripts are anything but general-purpose http clients
so have fun.

Thanks Doug. I'm just testing the scripts a little more; I expect to
"officially" share them soon. In the meantime, the experimental
version is in my nmap-exp branch
(nmap-exp/joao/experimental/nselib/http.lua), in case anyone wants to
take a look =).

João

Doug

[1] http://httpd.apache.org/docs/1.3/misc/fin_wait_2.html#appendix




Attachment: http_pipeline_cookies.diff
Attachment: pipeline-http-enum.nse
Attachment: pipeline-sql-injection.nse


_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://SecLists.Org
