Re: [Tracker] ANNOUNCE: tracker 0.6.91 released
- From: Jamie McCracken <jamie mccrack googlemail com>
- To: Martyn Russell <martyn imendio com>
- Cc: Tracker mailing list <tracker-list gnome org>
- Subject: Re: [Tracker] ANNOUNCE: tracker 0.6.91 released
- Date: Sun, 15 Mar 2009 10:27:34 -0400
On Sun, 2009-03-15 at 10:10 +0000, Martyn Russell wrote:
Jamie McCracken wrote:
On Sat, 2009-03-14 at 18:32 +0100, Jürg Billeter wrote:
On Sat, 2009-03-14 at 11:18 -0400, Jamie McCracken wrote:
Yeah, I don't know why they replaced the fast summarised stats with a real-time count, which is way too slow.
Martyn, can you revert that bit?
Doing a count with GROUP BY will involve a full table scan, which would be especially bad with the decomposed branch.
In the decomposed branch we can use COUNT(*) on each class table, this
should be fast and not require grouping.
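Jürg's point can be illustrated with a small sketch. This is not Tracker code; the table names and schema below are invented for illustration, and it only shows the shape of the idea: with one table per class, the stats become one cheap COUNT(*) per table instead of a GROUP BY scan over a single monolithic table.

```python
import sqlite3

# Hypothetical decomposed schema: one table per RDF class (names invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "nmm:MusicPiece" (id INTEGER PRIMARY KEY);
    CREATE TABLE "nfo:Document"   (id INTEGER PRIMARY KEY);
    INSERT INTO "nmm:MusicPiece" VALUES (1), (2), (3);
    INSERT INTO "nfo:Document"   VALUES (1);
""")

def class_counts(conn, classes):
    """One COUNT(*) per class table; no grouping over a shared table."""
    return {
        cls: conn.execute(f'SELECT COUNT(*) FROM "{cls}"').fetchone()[0]
        for cls in classes
    }

stats = class_counts(conn, ["nmm:MusicPiece", "nfo:Document"])
print(stats)  # {'nmm:MusicPiece': 3, 'nfo:Document': 1}
```

A plain COUNT(*) with no WHERE clause is something SQLite can answer without examining every row's payload, which is why this tends to stay fast even as the tables grow.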
But for stats we want it grouped. I mean, it's OK if you just want the number of music files, but not if you want summarised stats for everything (which is what the applet displays).
I don't think it is worth reverting this. We had many reports that the
stats were simply incorrect, mostly due to removable media. Also I think
manually incrementing and decrementing stats when we insert or remove
items in the database adds another unnecessary transaction for every
file we handle which itself has a speed disadvantage. I should also
mention that if you disable any items then the stats again are not a
real representation of the data.
Whilst it's not tremendously urgent, the optimal solution AFAICT is to
have an in-memory stat table.
Basically, calc stats once at startup and then increment/decrement as you
index.
You could recalc stats after removing a volume, or hold stats by volume,
which would allow you to avoid that.
Anyway, I think that's better than, say, calculating stats continuously, especially
as the applet requests stats every few seconds if you have the stat window
open.
jamie
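Jamie's proposal could be sketched roughly as below. This is an illustrative toy, not Tracker code: the class name and method names are invented, and it assumes the "hold stats by volume" variant, where each volume keeps its own counters so unmounting a volume just drops its counters instead of forcing a full recount.

```python
from collections import Counter, defaultdict

class StatCache:
    """In-memory stats: seed once at startup, then increment/decrement
    per indexed item. Counters are kept per volume so removing a volume
    avoids a full recalculation."""

    def __init__(self):
        # volume name -> Counter of service type -> item count
        self.by_volume = defaultdict(Counter)

    def add(self, volume, service_type, n=1):
        self.by_volume[volume][service_type] += n

    def remove(self, volume, service_type, n=1):
        self.by_volume[volume][service_type] -= n

    def remove_volume(self, volume):
        # No recount needed: discard that volume's counters wholesale.
        self.by_volume.pop(volume, None)

    def totals(self):
        """Cheap summarised stats, safe to poll every few seconds."""
        total = Counter()
        for counts in self.by_volume.values():
            total += counts
        return total

cache = StatCache()
cache.add("internal", "Music", 3)
cache.add("usb-stick", "Music", 2)
cache.add("internal", "Documents", 1)
cache.remove_volume("usb-stick")
print(dict(cache.totals()))  # {'Music': 3, 'Documents': 1}
```

Because `totals()` only sums small in-memory counters, the applet polling the stats window every few seconds never touches the database.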
The count is not slow enough in my opinion to be a problem. At least it
wasn't when I tested it. Michael, how slow was this for you, because for
me even with 60k items it wasn't slow enough to notice.
As for the actual crash, I will take a look into that. The daemon's
update signal was completely broken before and I refactored it so it
works now - perhaps that has something to do with it. I will investigate
on Monday.