Wireshark mailing list archives

Re: The cost of memory allocation


From: Anders Broman <anders.broman () ericsson com>
Date: Wed, 21 Sep 2016 12:07:24 +0000

Hi,
Just briefly browsing the code…
Could proto_get_finfo_ptr_array() be used instead of proto_find_finfo()?
Perhaps these functions should be rewritten to use wmem arrays instead, or to use g_ptr_array_sized_new().
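Roughly what I mean, from memory (a sketch only; the exact signatures are in epan/proto.h, and hf_example and tree below are just placeholder names):

    #include <epan/proto.h>

    /* Sketch: compare the two lookups for some header field id "hf_example". */
    static void
    lookup_example_fields(proto_tree *tree, int hf_example)
    {
        /* proto_find_finfo() walks the whole tree and allocates a fresh
         * GPtrArray on every call; the caller has to free it afterwards. */
        GPtrArray *found = proto_find_finfo(tree, hf_example);
        if (found) {
            for (guint i = 0; i < found->len; i++) {
                field_info *fi = (field_info *)g_ptr_array_index(found, i);
                (void)fi; /* ... read the field_info here ... */
            }
            g_ptr_array_free(found, TRUE);
        }

        /* proto_get_finfo_ptr_array() just returns the array the tree already
         * keeps for fields marked as interesting, so there is no tree walk and
         * no per-call allocation; but the field must have been primed, and the
         * returned array must not be freed by the caller. */
        GPtrArray *cached = proto_get_finfo_ptr_array(tree, hf_example);
        if (cached) {
            for (guint i = 0; i < cached->len; i++) {
                field_info *fi = (field_info *)g_ptr_array_index(cached, i);
                (void)fi; /* ... read the field_info here ... */
            }
        }
    }

The cached array is only there for fields the tree has been told to track, so that would need checking for the fields transum looks up.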
Regards
Anders

From: wireshark-dev-bounces () wireshark org [mailto:wireshark-dev-bounces () wireshark org] On Behalf Of Graham Bloice
Sent: 21 September 2016 11:15
To: Developer support list for Wireshark <wireshark-dev () wireshark org>
Subject: Re: [Wireshark-dev] The cost of memory allocation



On 21 September 2016 at 10:12, Graham Bloice <graham.bloice () trihedral com> wrote:


On 21 September 2016 at 10:06, Paul Offord <Paul.Offord () advance7 com> wrote:
Good point – debug build.

Debug builds using the MS allocator are a lot slower due to all the extra memory checking; see this page: 
https://msdn.microsoft.com/en-us/library/974tc9t1.aspx

I'm not entirely certain, though, that a debug build of Wireshark will use a version of glib that then uses the debug 
calls into msvcrt.

However, the point still stands that using debug builds for performance testing might not be giving you the real 
picture.

And this post explains how even a release build is affected by debug memory allocations if it's run under a debugger: 
http://preshing.com/20110717/the-windows-heap-is-slow-when-launched-from-the-debugger/
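If I remember that article correctly, the workaround is either to attach the debugger after the process has started, 
or to set the _NO_DEBUG_HEAP environment variable before launching, e.g. from a command prompt:

    set _NO_DEBUG_HEAP=1

That only turns off the OS debug heap though; a debug build will still be slowed down by the CRT's own checking.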



From: wireshark-dev-bounces () wireshark org [mailto:wireshark-dev-bounces () wireshark org] On Behalf Of Graham 
Bloice
Sent: 21 September 2016 09:49
To: Developer support list for Wireshark <wireshark-dev () wireshark org>
Subject: Re: [Wireshark-dev] The cost of memory allocation



On 21 September 2016 at 09:29, Paul Offord <Paul.Offord () advance7 com> wrote:
I’m not happy with the performance of the transum dissector and so I’ve started some analysis.  I’ve never used VS 
performance profiling before, but I plan to try to investigate this problem using it.  In the meantime I’ve used a tool 
that I’m reasonably familiar with called PerfView.  It’s produced some interesting results which I thought I’d share.

The problem I’m having is that with transum enabled, the load time for a 50 MB file increases from 5 seconds to 10 
seconds, but then subsequent loads of the same file go out to about 40 or 50 seconds.

[Screenshot: PerfView call tree showing time spent in each function during the 44.8 second load]

Above (or attached depending on your email system) is a screen shot showing the time spent in various functions when a 
load of the file took 44.8 seconds.  At the top of the image is a transum function called decode_gtcp.  The image shows 
that 50.7% of the total load time was spent executing in this function.  Then we see all of the nested functions with 
the proportion of time spent in each of those.

What I notice is that a lot of time is being spent in glib functions, and in particular in allocating and freeing 
memory.

[Screenshot: PerfView whole-process view showing time spent in memory allocation functions]

Using a slightly different view we can see that, across the whole of the process, more than 66% of the time during a 
file load with transum enabled is spent messing around with memory.

I haven’t yet figured out why I get inconsistent load times, and I don’t know what I can do about any of the above, but 
I thought it might be of general interest.

Best regards…Paul


Release or debug build?

--
Graham Bloice


--
Graham Bloice



--
Graham Bloice

___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <wireshark-dev () wireshark org>
Archives:    https://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://www.wireshark.org/mailman/options/wireshark-dev
             mailto:wireshark-dev-request () wireshark org?subject=unsubscribe
