Re: [Tracker] Queueing sparql requests



On Wed, 2011-03-30 at 18:13 +0200, Philip Van Hoof wrote:
On Wed, 2011-03-30 at 16:44 +0100, Lionel Landwerlin wrote:

Hi,

I'm using Tracker from an application to browse/search media files.
To support live updates when searching/browsing, I'm using some kind of
live model. Adding a lot of files in a short period of time leads to
lots of SPARQL requests to keep the models up to date, and that kind
of breaks the whole thing, because at some point:
        * when using the direct backend, I end up with SQLite errors
        telling me that the database is corrupted
        * when using the bus backend, I end up unable to open new
        file descriptors because the per-process limit has been
        reached
        
So, to work around first one issue and then the other, I'm about to
write a "private" (as in, not part of Tracker) queueing API on top of
libtracker-sparql.

I'm pretty sure other people would be interested in such a feature/API.
Is there any plan to add such a thing to Tracker?
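
For illustration, here is a minimal sketch of what such a client-side
queue might look like: it serializes queries so that only one is in
flight at a time. It assumes the libtracker-sparql async calls
(tracker_sparql_connection_query_async/_finish); the QueryQueue type
and the function names are made up for this example:

/* Hypothetical client-side queue: serializes SPARQL queries so that
 * only one request is in flight at a time. */
#include <libtracker-sparql/tracker-sparql.h>

typedef struct {
    TrackerSparqlConnection *connection;
    GQueue *pending;   /* queued SPARQL strings (gchar *) */
    gchar *current;    /* query currently in flight, or NULL */
} QueryQueue;

static void query_queue_dispatch (QueryQueue *queue);

static void
on_query_finished (GObject *source, GAsyncResult *res, gpointer user_data)
{
    QueryQueue *queue = user_data;
    GError *error = NULL;
    TrackerSparqlCursor *cursor;

    cursor = tracker_sparql_connection_query_finish (
        TRACKER_SPARQL_CONNECTION (source), res, &error);

    if (error != NULL) {
        g_warning ("Query failed: %s", error->message);
        g_error_free (error);
    } else {
        /* ... feed the cursor into the live model here ... */
        g_object_unref (cursor);
    }

    g_free (queue->current);
    queue->current = NULL;
    query_queue_dispatch (queue);   /* run the next queued query, if any */
}

static void
query_queue_dispatch (QueryQueue *queue)
{
    if (queue->current != NULL || g_queue_is_empty (queue->pending))
        return;

    queue->current = g_queue_pop_head (queue->pending);
    tracker_sparql_connection_query_async (queue->connection,
                                           queue->current,
                                           NULL,   /* cancellable */
                                           on_query_finished,
                                           queue);
}

void
query_queue_push (QueryQueue *queue, const gchar *sparql)
{
    g_queue_push_tail (queue->pending, g_strdup (sparql));
    query_queue_dispatch (queue);
}

With only one outstanding query, the bus backend would open at most one
extra FD at a time, at the cost of latency; allowing a small fixed
number of in-flight queries would be the obvious refinement.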

Ideally, the bus backend of libtracker-sparql would someday be
redesigned to reuse the FD instead of creating a new one per request.

This would then work the way typical pipelining works:

The client sends requests as they are needed, adding a tag to each
request. The service sends tagged replies back. The client reads the tag
of each reply it sees on the FD and fires the callback of the request
tagged that way.

Non-trivial, but in my opinion better than creating a new dup() and a
new pipe() for each request (which of course exhausts the maximum
number of open file descriptors after enough requests - 1024 on a
standard distribution that sets the ulimit per shell, I think).

Then no such client-side queue would be needed.
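
To make the tagging idea concrete, here is a rough sketch of the client
side of such a protocol. The wire format (a 32-bit tag and length
prefix on each request) and all names are invented for illustration;
this is not how the current bus backend works:

/* Illustrative client-side demultiplexer for a tagged request/reply
 * protocol over a single stream. The framing here is invented. */
#include <string.h>
#include <gio/gio.h>

typedef void (*ReplyCallback) (guint32 tag, const gchar *reply,
                               gpointer user_data);

typedef struct {
    ReplyCallback callback;
    gpointer user_data;
} PendingRequest;

static GHashTable *pending_requests;   /* tag -> PendingRequest */
static guint32 next_tag = 1;

/* Tag a request, remember its callback, and write it to the stream. */
guint32
send_tagged_request (GOutputStream *out, const gchar *sparql,
                     ReplyCallback callback, gpointer user_data)
{
    guint32 tag = next_tag++;
    guint32 len = strlen (sparql);
    PendingRequest *req = g_new0 (PendingRequest, 1);

    req->callback = callback;
    req->user_data = user_data;

    if (pending_requests == NULL)
        pending_requests = g_hash_table_new_full (g_direct_hash,
                                                  g_direct_equal,
                                                  NULL, g_free);
    g_hash_table_insert (pending_requests, GUINT_TO_POINTER (tag), req);

    /* Invented framing: 32-bit tag, 32-bit length, then the query. */
    g_output_stream_write_all (out, &tag, sizeof tag, NULL, NULL, NULL);
    g_output_stream_write_all (out, &len, sizeof len, NULL, NULL, NULL);
    g_output_stream_write_all (out, sparql, len, NULL, NULL, NULL);

    return tag;
}

/* Called for each reply read off the FD: look up the tag and fire the
 * callback of the request tagged that way. */
void
dispatch_tagged_reply (guint32 tag, const gchar *reply)
{
    PendingRequest *req;

    req = g_hash_table_lookup (pending_requests, GUINT_TO_POINTER (tag));
    if (req == NULL)
        return;   /* e.g. a reply for a request that was cancelled */

    req->callback (tag, reply, req->user_data);
    g_hash_table_remove (pending_requests, GUINT_TO_POINTER (tag));
}

With a scheme like this, all requests share one FD no matter how many
are outstanding, which removes the per-request dup()/pipe() cost.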

Ok, thanks.

Regards,

--
Lionel Landwerlin




