Re: optimal way to use Memory Chunks



Olivier Sessink wrote:

I was considering using the GMemChunk infrastructure for some of my
code, where often 50 to 5000 structs of 4 bytes each are allocated. I
will use the G_ALLOC_AND_FREE mode, because many items (but not all)
are no longer used after a while.

I am, however, wondering how much overhead GMemChunk has. Is there
much search time involved in looking for the next free element when
the number of items becomes large? And how large would that overhead
be? Conversely, if the number of elements is actually very small at
any given time, is there much overhead in using GMemChunk instead of
g_malloc()?

Summarizing: the question is when *not* to use GMemChunk, and when *to
use* GMemChunk.

A facility like GMemChunk only makes sense if the objects (atoms)
aren't too small (less than about 16 bytes, as a rule of thumb). Your
atoms are about the smallest possible. GMemChunks are especially
helpful for linked lists of any sort, but your atoms are apparently
not big enough to carry the link pointers such a structure needs.

In your case, the memory overhead of using GMemChunk would be slightly
over 100% on 32-bit machines and slightly over 200% on 64-bit
machines: for every 4-byte struct stored, you would keep a 4-byte
(32-bit) or 8-byte (64-bit) pointer to reference that atom, plus the
GMemChunk object itself.

So, by using GMemChunk you might end up with 5000 pointers (4 or 8
bytes each) to atoms of 4 bytes each. I suppose you would set up some
sort of static array to hold those up-to-5000 pointers. Then why not
simply create an array holding the 4-byte records directly, rather
than an array of pointers to separately allocated 4-byte chunks that
must be managed at a definite cost in memory and speed?

What sort of 4-byte information is to be stored, if I may ask? Is it
referenced mainly by index (1st, 2nd, 3rd, ... atom) or by contents,
i.e. by locating atoms that contain particular values? Possibly
GMemChunks are not merely inefficient for your app but entirely
unsuitable.


