Re: [Tracker] From the client side



On Mon, 2008-11-24 at 09:48 -0500, Jamie McCracken wrote:
> On Mon, 2008-11-24 at 14:35 +0000, Martyn Russell wrote:
 

> this is true if apps use tracker as a conventional client/server system
> where the client is thin and only requests data on a need-to-know basis.
>
> In reality, apps like Rhythmbox are thick clients and tend to load all
> data into memory. They are agnostic about where the data comes from and
> are not designed/optimised for purely client/server access.

Banshee has its sorting performed by SQLite and has a model for its list
view that operates on the SQLite data in so-called cursor style (for
non-database people: that's iterator or enumerator style).

Note that our tracker_db_result_set_iter_next can't do this at the
moment, because we pull all data from the SQLite result into our own
memory in create_result_set_from_stmt. But as you probably know, you can
also use sqlite3_step() in cursor style. AFAIK Banshee uses it this way,
although I should verify that in its code (it's probably well hidden
behind ADO.NET's cursor/IEnumerator APIs).
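To make the distinction concrete, here's a minimal sketch of cursor-style retrieval using Python's sqlite3 module (the "tracks" table and its columns are made up for illustration; this is not Tracker or Banshee code). Each fetchone() pulls exactly one row on demand, analogous to calling sqlite3_step() once per row in C, instead of materializing the whole result-set in client memory:

```python
import sqlite3

# Hypothetical in-memory table standing in for a media library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO tracks (title) VALUES (?)",
                 [("track-%03d" % i,) for i in range(1000)])

# Cursor style: each fetchone() steps to the next row on demand,
# like sqlite3_step(); nothing beyond the current row is copied
# into the client's memory.
cur = conn.execute("SELECT title FROM tracks ORDER BY title")
visible = [cur.fetchone()[0] for _ in range(3)]  # only what the view shows
print(visible)
```

The list view only ever asks for the rows it is about to draw, so memory use stays bounded regardless of the result-set size.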

Compared to Rhythmbox, Banshee's technique is noticeably (to the user)
better ... which might illustrate how important iterator/cursor models
are for UI developers.

The net result is that only the items that were once visible have been
cursored top to bottom; when travelling back, you either have them
cached or you perform a new query using offset & max. Or you just always
fetch pages (using offset & max), which is a workaround if you can't
support a normal cursor-style API.
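The "offset & max" workaround can be sketched like this, again with Python's sqlite3 module; fetch_page and the tracks table are hypothetical names for illustration, not any real Tracker API:

```python
import sqlite3

# Hypothetical table standing in for a media library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO tracks (title) VALUES (?)",
                 [("track-%03d" % i,) for i in range(10)])

def fetch_page(conn, offset, max_rows):
    # One bounded page per query: LIMIT caps the page size and
    # OFFSET skips the rows belonging to earlier pages.
    cur = conn.execute(
        "SELECT title FROM tracks ORDER BY title LIMIT ? OFFSET ?",
        (max_rows, offset))
    return [row[0] for row in cur.fetchall()]

page1 = fetch_page(conn, 0, 4)   # first page
page2 = fetch_page(conn, 4, 4)   # next page, re-queried on demand
```

Scrolling back simply re-issues the query for an earlier offset (or hits a cache), so the client never holds more than one page at a time.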

A UI developer who doesn't build his models this way, yet hopes to
display vast amounts of data, is not going to fail because of our DBus
overhead. He'll fail because of his own stubbornness in refusing to
refactor his model-view-controller architecture to build his models this
way. Our APIs won't be his only problem; he'll sink in problems.

I don't think that bad client-side development is a good use-case.

For batch systems the DBus overhead can also be neglected, as nobody is
disturbed by the tiny overhead (compared to the query's own cost when
the result-set truly is large).

Also, I'd note that any query that yields a very large result-set will
likely take up to 100 times longer (or even ten times more than that)
than the DBus overhead, meaning that we'd be optimizing at the wrong
level in the first place.

And the DBus overhead per page request (for a paged API), or per Next or
NextPage, is probably less than what most GMainLoop iterations take.

Which means (in case just a page is requested, and not vast amounts)
that drawing the scene takes longer than the DBus overhead.

Although if there's really a solid use-case where the micro-overhead of
DBus matters, I'd agree that a direct access method should be developed.
It's just a guaranteed huge can of worms that you'd open.

> Question is: do we fix the apps, or make tracker flexible enough to
> work with both thin and thick clients?
>
> My feeling is that it's more work to fix the apps.


Fix the apps. It's perhaps more work, and while I'm not even sure it is,
at least the apps would then be clean and correctly implemented.



-- 
Philip Van Hoof, freelance software developer
home: me at pvanhoof dot be 
gnome: pvanhoof at gnome dot org 
http://pvanhoof.be/blog
http://codeminded.be



