Re: Memory leaks



On Thu, 2011-02-10 at 10:50 +0200, Costin Chirvasuta wrote:
> > Because malloc() implementations generally kept a linear linked list of
> > free space, and traversed the list on a free() in case they found
> > adjacent memory areas to the one you were freeing, which they could join
> > together and make into a single larger area.

> I'm sorry, I now understand what you mean. If what you say is true
> (which I don't doubt) it's a really boneheaded mechanism in my
> opinion. Defragmenting memory in real time is a performance nightmare.
> But that's irrelevant. Your point is well taken.
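
To spell out the mechanism, here is a toy sketch of such a coalescing
free list. The header layout and names are invented for illustration,
not taken from any particular historical malloc:

#include <stdint.h>
#include <stddef.h>

/* Toy sketch only: an address-ordered free list that merges a block
 * with its neighbours on free(), in the style of early Unix mallocs. */
struct block {
    size_t        size;   /* usable bytes following this header */
    struct block *next;   /* next free block, in address order  */
};

static struct block *free_list;

/* True if block a is immediately followed in memory by block b. */
static int adjacent(const struct block *a, const struct block *b)
{
    return (const char *)a + sizeof *a + a->size == (const char *)b;
}

void toy_free(struct block *b)
{
    struct block *prev = NULL, *cur = free_list;

    /* This linear scan on every free() is the cost being discussed:
     * it finds where b belongs so its neighbours can be examined. */
    while (cur && (uintptr_t)cur < (uintptr_t)b) {
        prev = cur;
        cur  = cur->next;
    }

    /* Join with the following block if it is adjacent. */
    if (cur && adjacent(b, cur)) {
        b->size += sizeof *cur + cur->size;
        b->next  = cur->next;
    } else {
        b->next = cur;
    }

    /* Join with the preceding block, or just link b in. */
    if (prev && adjacent(prev, b)) {
        prev->size += sizeof *b + b->size;
        prev->next  = b->next;
    } else if (prev) {
        prev->next = b;
    } else {
        free_list = b;
    }
}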

It was necessary on smaller machines. GNU malloc used to take the
approach (may still) of using only powers of two for bucket sizes, which
is faster (and suffers less external fragmentation) but uses on average
about twice as much memory if requested sizes are random.
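
The trade-off is easy to see in a sketch of the rounding, assuming the
simplest power-of-two policy (this is not GNU malloc's actual code):

#include <stddef.h>

/* Each request is rounded up to the next power of two.  The lookup
 * is a quick shift loop, but a request of 2^k + 1 bytes occupies a
 * 2^(k+1)-byte bucket, wasting almost half of it -- hence the
 * "about twice as much memory" above for random request sizes. */
size_t bucket_size(size_t request)
{
    size_t size = 16;          /* assumed smallest bucket */
    while (size < request)
        size <<= 1;
    return size;               /* e.g. bucket_size(1025) == 2048 */
}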


> However, consider a modern GUI app. It's allocating and freeing
> several orders of magnitude more pointers than are left at the end of
> the program. So, when you finally get to the end, have searched
> through all those heaps of pointers to free them, and are stuck with
> only 200 (say), you just give up. It's like drowning near the shore.
> Plus, searching through a 200-pointer linked list should be an order
> of magnitude faster than ((200/n) * (what it takes to free the n
> pointers the program uses normally)), assuming n is quite a bit larger
> than 200 (which IMHO is really not far-fetched).

A few million isn't unlikely for a GUI-based program -- e.g. consider
allocating an event structure whenever the mouse pointer moves. Remember
that just because you freed something doesn't mean it's gone -- it's
still on the heap and available for reuse from that free list. Some
versions of malloc do try to return pages to the operating system under
some circumstances, although the performance cost of doing that is
large, so you don't want to do it often.
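
glibc, for instance, exposes this explicitly as malloc_trim(); a
minimal, glibc-specific sketch:

#include <malloc.h>   /* for malloc_trim(); glibc-specific */
#include <stdlib.h>

int main(void)
{
    enum { N = 1000000 };
    static void *p[N];
    int i;

    for (i = 0; i < N; i++)   /* millions of small objects... */
        p[i] = malloc(32);
    for (i = 0; i < N; i++)   /* ...freed, but still held by malloc */
        free(p[i]);

    /* Ask glibc to hand free memory back to the operating system
     * where it can; this pass over the heap is the expensive part. */
    malloc_trim(0);
    return 0;
}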

Also, Bill C wrote,


> Liam - To have a problem with freeing up memory prior to exiting
> suggests that either you have a memory leak, or a bad design (or
> both).  It might be your development environment.

Maybe it was bad -- System V R3 Unix, SunOS 4, IRIX, 4.3BSD, etc. The
situation was a program that allocated millions of small objects. The
point still stands, though -- don't assume it's fast (or slow) until
you've measured.  The package itself did not, I think, have a badly
designed memory architecture for its time, but maybe I just think that
because I wrote it :-)  There were no leaks -- I measured carefully and
did in fact account for every item of memory. Well, I wrote a
carefully-tested program to do it :-)
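
For the curious, such an accounting tool usually amounts to a pair of
wrappers. A sketch, with invented names -- not the program I used:

#include <stdio.h>
#include <stdlib.h>

/* Route the program's allocations through these wrappers (e.g.
 * behind a macro); anything still counted at exit is either a leak
 * or an object you have to account for by hand. */
static long live;

void *count_malloc(size_t n)
{
    void *p = malloc(n);
    if (p)
        live++;
    return p;
}

void count_free(void *p)
{
    if (p) {
        live--;
        free(p);
    }
}

void count_report(void)
{
    fprintf(stderr, "%ld allocations still live\n", live);
}

Registering count_report() with atexit() prints the figure as the
program ends.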

Best,

Liam

-- 
Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/
Pictures from old books: http://fromoldbooks.org/



