Re: memory management with glibmm & giomm

nico wrote:

I tried to benchmark my program...
Results can be found on these charts:

The first graph represents the memory usage after each execution of the
scan_directory() function.
(the program ends with a memory usage of 25 MB)
The second graph represents the depth of the scanned directories (the max
depth is only 7, so at that point the program stores no more than 7
Gio::FileEnumerator objects).
The third chart represents the number of scanned files.

With these charts, it is quite clear that the memory usage depends on the
number of scanned files and doesn't depend on the depth.
So I guess my program doesn't correctly "unref" objects. Something goes
wrong somewhere.

I paste here the source code of the program:

#include <iostream>
#include <clocale>
#include <glibmm.h>
#include <giomm.h>

int depth;
int files;

bool scan_directory(std::string path) {
    try {
        Glib::RefPtr<Gio::File> directory = Gio::File::create_for_path(path);
        Glib::RefPtr<Gio::FileEnumerator> enumerator = directory->enumerate_children();

        ++depth; // counter updates were apparently lost from the posted listing
        Glib::RefPtr<Gio::FileInfo> file_info;
        while ((file_info = enumerator->next_file())) {
            if (file_info->get_file_type() == Gio::FILE_TYPE_DIRECTORY) {
                scan_directory(path + file_info->get_name() + G_DIR_SEPARATOR_S);
            } else {
                ++files;
                // do some stuff, for example:
                //std::cout << path + file_info->get_name() << "\t"
                //          << file_info->get_content_type() << std::endl;
            }
        }

        std::cout << "depth: " << depth << std::endl;
        std::cout << "files: " << files << std::endl;
        std::cout << "###" << std::endl;
        --depth;
        return true;
    }
    catch (const Glib::Exception& ex) {
        std::cerr << "Error: " << ex.what() << std::endl;
        return false;
    }
}

int main(int argc, char** argv) {
    Gio::init(); // required before using any giomm classes

    depth = 0;
    files = 0;

    setlocale(LC_ALL, "");

    scan_directory((argc == 2) ? argv[1] : "");

    return 0;
}

it depends a bit on the structure of your filesystem. you are doing a
depth-first descent through a directory structure. in some cases, this
would not use many resources; in others, you'd expect it to use a great
deal of memory.
every single directory encountered will be enumerated, then each
directory within it will be recursively entered. by the time it's done,
you've got enumerated data for just about every file below the starting
point. none of your data structures are cleaned up until control returns
to the top level.

If you think this is a bug, it would be helpful if you could open a bug in bugzilla so that we can remember to investigate:


