On 07/01/2012 19:52, Ibragimov Rinat wrote:
Hello. I'm using GThumb as my image viewer and it's a great app, but it becomes slow when the number of images gets very large. My typical workload is more than 3000 images in one folder. When I open such a folder, GThumb consumes 100% CPU for quite a long time. I dug a bit and found that it actually has quadratic complexity: the _gth_file_list_update_next_thumb function tries to create thumbnails for almost all images (the N_CREATEAHEAD macro evaluates to 50000) and is called for _every_ image. I understand that the patch below is a dirty hack, but on a 20000-image test case it reduced measured user CPU time from 10.5 minutes to 10 seconds.
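For context, the pattern being described looks roughly like this (a simplified sketch of my own, not the actual GThumb source; the helper names are hypothetical):

/* Simplified sketch of the quadratic pattern, illustrative only. */
#define N_CREATEAHEAD 50000

/* hypothetical stand-ins for the real thumbnail machinery */
extern int  thumbnail_missing (int pos);
extern void queue_thumbnail   (int pos);

static void
update_next_thumb (int first, int n_files)
{
	int last = first + N_CREATEAHEAD;
	int i;

	if (last > n_files)
		last = n_files;

	/* one linear scan over up to N_CREATEAHEAD entries... */
	for (i = first; i < last; i++)
		if (thumbnail_missing (i)) {
			queue_thumbnail (i);
			break;
		}
}

/* ...but the function is re-invoked after every single thumbnail, so
 * with 20000 files the scan repeats 20000 times: O(N * N_CREATEAHEAD). */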
Or you can set the N_CREATEAHEAD macro to a lower value.
diff --git a/gthumb/gth-file-list.c b/gthumb/gth-file-list.c
index 3717273..5b4d5c9 100644
--- a/gthumb/gth-file-list.c
+++ b/gthumb/gth-file-list.c
@@ -1525,7 +1525,6 @@ _gth_file_list_update_thumb (GthFileList *file_list,
 	}
 
 	if (job == NULL) {
-		file_list->priv->update_event = g_idle_add (restart_thumb_update_cb, file_list);
 		return;
 	}
 }

There is another place with the same problem, but it has a smaller performance effect (10 s -> 8 s):

diff --git a/gthumb/gth-file-store.c b/gthumb/gth-file-store.c
index ddb3075..29966c2 100644
--- a/gthumb/gth-file-store.c
+++ b/gthumb/gth-file-store.c
@@ -779,6 +779,7 @@ _gth_file_store_update_visibility (GthFileStore *file_store,
 	GthFileData *file;
 	int          j, k;
 	gboolean     row_deleted;
+	GHashTable  *file_index;
 
 #ifdef DEBUG_FILE_STORE
@@ -818,12 +819,14 @@ g_print ("UPDATE VISIBILITY\n");
 	/* reorder and filter */
 
 	_gth_file_store_sort (file_store, all_rows, all_rows_n);
+	file_index = g_hash_table_new (g_direct_hash, g_direct_equal);
 
 	files = NULL;
 	for (i = 0; i < all_rows_n; i++) {
 		GthFileRow *row = all_rows[i];
 		row->abs_pos = i;
+		g_hash_table_insert (file_index, row->file_data, GINT_TO_POINTER (i));
 		files = g_list_prepend (files, g_object_ref (row->file_data));
 	}
 	files = g_list_reverse (files);
@@ -831,21 +834,15 @@ g_print ("UPDATE VISIBILITY\n");
 	new_rows_n = 0;
 	gth_test_set_file_list (file_store->priv->filter, files);
 	while ((file = gth_test_get_next (file_store->priv->filter)) != NULL) {
-		GthFileRow *row = NULL;
-
-		for (i = 0; i < all_rows_n; i++) {
-			row = all_rows[i];
-			if (row->file_data == file)
-				break;
-		}
-
+		i = GPOINTER_TO_INT (g_hash_table_lookup (file_index, file));
 		g_assert (i < all_rows_n);
-		row->visible = TRUE;
+		all_rows[i]->visible = TRUE;
 		new_rows_n++;
 	}
 	_g_object_list_unref (files);
+	g_hash_table_unref (file_index);
 
 	/* create the new visible rows array */
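The core idea of the second hunk, seen in isolation: build a GHashTable mapping each pointer to its array position once, in O(N), so every inner linear scan becomes an O(1) lookup. Here is a minimal standalone GLib demo (my own illustration, not code from the patch):

#include <glib.h>

int
main (void)
{
	const char *items[] = { "a.jpg", "b.jpg", "c.jpg" };
	int         n = G_N_ELEMENTS (items);
	GHashTable *index;
	int         i;

	/* keys are compared by pointer identity (g_direct_hash /
	 * g_direct_equal), matching the row->file_data == file
	 * comparison that the original inner loop performed */
	index = g_hash_table_new (g_direct_hash, g_direct_equal);
	for (i = 0; i < n; i++)
		g_hash_table_insert (index, (gpointer) items[i], GINT_TO_POINTER (i));

	/* O(1) lookup instead of a for-loop over the whole array */
	i = GPOINTER_TO_INT (g_hash_table_lookup (index, items[2]));
	g_print ("position of %s: %d\n", items[2], i);

	g_hash_table_unref (index);
	return 0;
}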
This patch is very interesting. I've attached a new patch that stores position+1 in the hash table instead of the position itself; this way you can distinguish the case where the file is not found from the case where it is at position 0. The patch also uses the hash table every time a file needs to be searched for in the array, so the quadratic complexity should go away.
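For readers following along: the position+1 trick matters because g_hash_table_lookup() returns NULL for a missing key, and NULL is the same value as GINT_TO_POINTER (0), so a file at index 0 would otherwise be indistinguishable from an absent file. A small sketch of the idea (my illustration, not the attached patch):

#include <glib.h>

/* Values are assumed to have been inserted as GINT_TO_POINTER (i + 1),
 * so 0 unambiguously means "not found". */
static int
find_position (GHashTable *index, gconstpointer key)
{
	int pos = GPOINTER_TO_INT (g_hash_table_lookup (index, key));

	return pos - 1;   /* -1 means "not found", otherwise the real index */
}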
I'll test the patch for some time before committing it. Thank you. - Paolo
Attachment: file-store-changes.patch (text data)