While browsing Google I came across this thread: http://www.geocrawler.com/archives/3/488/2001/11/0/7198728/ I have the same sort of problem.

I'm trying to finish a traffic accounting tool that I first designed around hash tables. But as the traffic it was meant to measure kept climbing past 600 Mbit/s, I decided we really needed something that would compute faster, leaving more CPU cycles for the packet processing inside the kernel.

For starters, I wrote a small NetFlow aggregator. I used MD5 strings as keys for the hash table in which I stored the NetFlow structures, and that is what I intended for my tree as well. As you may have guessed, I implemented a cleanup routine for inactive flows, so in the hash-table case I used g_hash_table_foreach_remove(). The problem showed up at large flow counts (I had an average of ~200k flows): the cleanup routine seemed to eat up all the CPU. That's why I decided to move to a faster data structure.

Now, I have tried two approaches: indexing the unused structures in a separate hash table to clean up later, and destroying the tree nodes while iterating through the tree. Both times I got several errors about double frees. What puzzles me is that the warnings came up both when I manually free()-ed the keys and elements in the tree and when the tree-destroying routine did that itself. What am I missing? I can't recall free()-ing the data twice on purpose.

I'd like to see an example implementation from someone who has more experience with Glib than me. Thanks in advance for any help.

----
If it's there, and you can see it, it's real.
If it's not there, and you can see it, it's virtual.
If it's there, and you can't see it, it's transparent.
If it's not there, and you can't see it, you erased it.