Re: [Tracker] status of 0.6.90?
- From: Martyn Russell <martyn imendio com>
- To: Ivan Frade <ivan frade nokia com>
- Cc: Tracker-List <tracker-list gnome org>
- Subject: Re: [Tracker] status of 0.6.90?
- Date: Tue, 07 Oct 2008 08:55:26 +0100
Ivan Frade wrote:
Hi
On Mon, 06-10-2008 at 16:58 +0100, ext Martyn Russell wrote:
Jamie McCracken wrote:
There are also a load of other issues that need correcting:
1) enumerating and crawling directories needs to be done in the indexer
(and pass directories to watch back to the daemon). Daemon can then run
as nice 0 and normal ionice instead of nice 19 as only cpu/io heavy ops
will be searches and queries which need to be fast as possible
I really want to do this ASAP. This will reduce DBus traffic
significantly, not to mention it should be faster and reduce the amount
of memory duplication we have with strings existing in both the indexer
and the daemon. The daemon will be REALLY lightweight then and not need
to be nice()d as Jamie says, so I can't wait to do this.
If we want to reduce all the traffic we can merge everything again in
one process }:-)
:)
Right now, I send 150k files across DBus to the indexer on startup -
EVERY time. Not just once. If I can do that in the indexer and only send
back 5-6k directories which need monitoring instead, that is a good thing.
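To illustrate the difference: if the indexer crawled the tree itself and handed back only the directories that need monitors, the bulk of the paths never crosses DBus. A rough Python sketch of that split (names are hypothetical, not Tracker's actual API):

```python
import os

def crawl(root):
    """Walk a tree once, counting files indexed in-process and
    collecting only the directories the daemon should monitor."""
    files_indexed = 0
    directories_to_monitor = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Directories are the small list sent back to the daemon.
        directories_to_monitor.append(dirpath)
        for name in filenames:
            # Files are indexed locally and never cross DBus.
            files_indexed += 1
    return files_indexed, directories_to_monitor
```

With a ratio like 150k files to 5-6k directories, the payload returned to the daemon is a small fraction of what is sent today.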
Plus the indexer ALREADY has a crawler for standalone testing. We could
remove that completely. So to some extent there is already code
duplication which is unnecessary.
Seriously, I am not sure it would be a good idea to move the crawler into
the indexer. If you want to move the crawling code away from the daemon,
I would move it to an external process.
There is no point in making it a separate process unless it is something
we do regularly for more than one process.
Two points:
1) What happens if I want to write different crawlers/monitors? (for
instance, for online content).
Do you have something in mind? or is this purely hypothetical?
2) What happens if I want to use the (wonderful) tracker-indexer in
other programs, where I don't need a crawler at all?
This is a valid point. I presume we would need some way of only crawling
on startup.
If my program just
needs an index engine, why should I start/configure a crawler?
I think this is an interesting argument. Plus I think this is why we
designed it the way it is now.
For example: "lucene" is a full-text-index library, and it doesn't care
about the crawling.
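The lucene comparison can be made concrete: a library-style index only exposes a way to push documents in and query them, and has no opinion about where the documents come from. A minimal in-memory sketch (hypothetical API, not Tracker's or lucene's):

```python
class TextIndex:
    """A tiny in-memory full-text index, in the spirit of lucene:
    callers feed it documents; whether they come from a filesystem
    crawler, online content, or a test is not its concern."""

    def __init__(self):
        # Inverted index: term -> set of document ids containing it.
        self._postings = {}

    def add(self, doc_id, text):
        """Index one document's text under the given id."""
        for term in text.lower().split():
            self._postings.setdefault(term, set()).add(doc_id)

    def search(self, term):
        """Return the ids of all documents containing the term."""
        return sorted(self._postings.get(term.lower(), set()))
```

Any producer, be it a filesystem crawler or some online-content monitor, just calls add(); the index never starts or configures a crawler itself.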
--
Regards,
Martyn