Wireshark mailing list archives

Re: Idea for faster dissection on second pass


From: Evan Huus <eapache () gmail com>
Date: Sat, 12 Oct 2013 12:29:37 -0400

On Sat, Oct 12, 2013 at 11:46 AM, Anders Broman <a.broman () bredband net> wrote:
> Just looking at performance in general, as I got reports that top of
> trunk was slower than 1.8. Thinking about it, fast filtering is more
> attractive as long as loading isn't too slow, I suppose. It's quite
> annoying to wait 2 minutes for a file to load and >= 2 minutes on
> every filter operation.

Yeah. It was quite surprising to me to find out how much data we're
generating and throwing away on each dissection pass. Now I'm
wondering how much of that could be alleviated by a more efficient
tree representation...

> I think we need to balance memory usage and speed to be able to handle
> large files, up to 500M/1G files as a rule of thumb?

It's always a tradeoff. Ideally we would be fast and low-memory, but
there's only so much we can do given how much data a large capture
file contains.
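
For what it's worth, wmem's scoped pools are one lever we already have
for that tradeoff: per-packet state can be freed in one bulk operation
instead of being tracked allocation by allocation. A rough stand-alone
sketch (pool_example() is just an illustrative wrapper, not code from
the tree; the wmem calls themselves are the real API):

    #include <epan/wmem/wmem.h>

    static void
    pool_example(void)
    {
        wmem_allocator_t *pool;
        char *scratch;

        wmem_init();

        /* A block allocator amortizes malloc cost; everything taken
         * from the pool can be released in a single bulk operation. */
        pool = wmem_allocator_new(WMEM_ALLOCATOR_BLOCK);

        scratch = (char *)wmem_alloc(pool, 256);
        (void)scratch;  /* ... would hold per-packet scratch data ... */

        wmem_free_all(pool);          /* bulk reset, e.g. between packets */
        wmem_destroy_allocator(pool); /* tear-down on shutdown */

        wmem_cleanup();
    }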