Re: Getting started with beagle
- From: Debajyoti Bera <dbera web gmail com>
- To: "Kevin Kubasik" <kevin kubasik net>
- Cc: Joe Shaw <joe joeshaw org>, dashboard-hackers gnome org
- Subject: Re: Getting started with beagle
- Date: Thu, 14 Feb 2008 00:03:39 -0500
> An architectural decision to be made: do we want to actually index the
> data off of every webservice, or just offer 'transparent' backends to
> query the existing query APIs for each service? I'm more for a local
A transparent "proxy" backend that queries via the webservice API (a
"QueryDriver" in Beagle lingo) is fine for some kinds of data, but ideally a
real backend that fetches the data and indexes it locally would be the best
option.
> copy (makes it fast, and solid even when disconnected, but just my
> $0.02). I love writing/overhauling new backends, so I might take a stab
> at some of these (I'm actually thinking of maybe an out-of-process
> script that does its Beagle interaction like the Mozilla extensions
> etc., so we aren't responsible for its scheduling.)
An out-of-process script will work, but it is really not that complicated to
do this in process. All you have to do is create an IndexableGenerator and
feed indexables as they are requested in GetNextIndexable. Depending on how
fast the data can be accessed from the webservice, either download some 30-40
"indexables" from the webservice in HasNextIndexable, or use a separate
thread to download them into a shared queue from which GetNextIndexable pulls
them.
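The threaded variant above can be sketched as follows. This is a hedged,
language-neutral illustration in Python of the producer-consumer pattern being
described, not Beagle's actual C# IndexableGenerator interface; the
fetch_batch callable and the method names are assumptions modeled on the
HasNextIndexable/GetNextIndexable pair mentioned above:

```python
import queue
import threading

class WebServiceGenerator:
    """Sketch: a background thread downloads items from the webservice
    into a bounded shared queue, and the generator hands them out one
    at a time, mirroring the HasNextIndexable/GetNextIndexable pair."""

    _DONE = object()  # sentinel marking the end of the feed

    def __init__(self, fetch_batch):
        # fetch_batch() is a hypothetical callable returning a list of
        # items per webservice call, or [] when the feed is exhausted.
        self._queue = queue.Queue(maxsize=40)  # bounds memory use
        self._next = None
        threading.Thread(target=self._download, args=(fetch_batch,),
                         daemon=True).start()

    def _download(self, fetch_batch):
        # Producer thread: keeps the queue topped up with fresh items.
        while True:
            batch = fetch_batch()
            if not batch:
                self._queue.put(self._DONE)
                return
            for item in batch:
                self._queue.put(item)  # blocks when the queue is full

    def has_next_indexable(self):
        # Peek at the next item, blocking until the downloader
        # delivers one (or signals the end of the feed).
        if self._next is None:
            self._next = self._queue.get()
        return self._next is not self._DONE

    def get_next_indexable(self):
        item, self._next = self._next, None
        return item
```

The bounded queue is what keeps the indexer and the downloader decoupled: the
producer blocks when the consumer falls behind, so no more than one queue's
worth of items sits in memory at a time.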
If you do it out of process, make sure you don't choke the network by
downloading all 10K emails in one go, i.e. you can't ignore scheduling
entirely.
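Even a crude throttle covers the point above. A minimal sketch, assuming a
hypothetical paged fetch_page(offset, limit) call; the batch size and delay
are arbitrary placeholders, not values from Beagle:

```python
import time

def fetch_all(fetch_page, batch_size=50, delay=2.0, sleep=time.sleep):
    """Download a large feed (e.g. 10K emails) in modest batches,
    pausing between webservice calls instead of grabbing everything
    in one go. fetch_page(offset, limit) is a hypothetical call
    returning up to `limit` items starting at `offset`."""
    items, offset = [], 0
    while True:
        page = fetch_page(offset, batch_size)
        if not page:
            return items
        items.extend(page)
        offset += len(page)
        sleep(delay)  # crude rate limit between requests
```

A real out-of-process script would likely also want retries and backoff, but
the pause between batches is the minimum scheduling courtesy being asked for.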
- dBera
--
-----------------------------------------------------
Debajyoti Bera @ http://dtecht.blogspot.com
beagle / KDE fan
Mandriva / Inspiron-1100 user