Re: SuperDataDaemonThing - Let's Plan



First, sorry for the long e-mail; it totally looked shorter last night,
I swear. ;)

On Wed, Mar 5, 2008 at 3:23 AM, John Stowers <john stowers gmail com> wrote:
> ACK on the basic idea and kudos for building on the Conduit platform!
Hence the daemon's strong Conduit roots. Not having spent a ton of time
in the Conduit code, I should emphasize that this is largely a
design/philosophy argument, not a grit-n-numbers one.
1) Conduit is awesome and has connected to a plethora of services;
that codebase is beyond invaluable. However, (imho) Conduit doesn't
make sense as a desktop-wide daemon. While the daemon could still just
be part of the Conduit project (shipped as conduit-data-daemon or the
like), I feel that it makes the most sense to separate the data access
logic from the sync logic. What we have now is an awesome application
with such a big niche to fill that it's doing several roles from one
(non-threadable) process.

Again, this is just my opinion, but the other thing to worry about is
making the daemon somewhat standard. If it's never running, apps will
never use it; and to keep the service lean and mean, it should serve
only its core purpose. I would argue that it would be significantly
better for development if Conduit used the daemon as its sole backend.

My $0.02
>
> I think it is smart doing this on top of the conduit daemon because we have
> already done a lot of the peripheral stuff needed, including configuration
> getting/setting, and GUI configuration for those yucky dataproviders that
> need you to sign into a website. We also have some rather comprehensive unit
> tests to make sure that we continue to work with all these online
> dataproviders.
And I agree that this work is invaluable; the last thing I want us to
do is rewrite the same logic (avoiding that is the whole point of the
daemon). As mentioned above, it's more a concern of scope and purpose.
I feel strongly that the universal data access point for a desktop
should be headless and as lean as is feasible. It should also stick to
one task and perform it well. I really have (over the course of this
conversation) become quite fond of the idea of simply splitting
Conduit: we leave the UI/sync element with its current GTK and D-Bus
interfaces, move all the fetching/pushing into a separate process, and
communicate over D-Bus. We then expand the Conduit backend daemon to
fit the more generic role of desktop data provider while maintaining
solid ties to the Conduit system.
>
>
> >
> > 1) The interface for finding and being notified of DataProvider's
> > presence will be much akin to Conduit's as mentioned on the Wiki.
> > 2) Once the user has a DataProvider, it seems like a small expansion
> > of the basic CRUD is a good start:
> >   * get_all
> >   * get
> >   * put
> >   * delete
> >   * refresh
>
> I would also add finish() to give dataproviders the ability to do some
> transactional work if they wish, and the ability to clean up memory.
Certainly.
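To make that concrete, here's a rough sketch in Python with dbus-python
of what one exported dataprovider object could look like. The interface
name and the dict-backed storage are made up for illustration, not a
settled design:

import dbus
import dbus.service

IFACE = "org.gnome.SDDT.DataProvider"  # hypothetical interface name

class DataProvider(dbus.service.Object):
    # One exported object per dataprovider.

    def __init__(self, bus, path):
        dbus.service.Object.__init__(self, bus, path)
        self._items = {}  # LUID -> serialized data (stand-in backend)

    @dbus.service.method(IFACE, out_signature="as")
    def get_all(self):
        # return the locally unique IDs of everything we know about
        return list(self._items.keys())

    @dbus.service.method(IFACE, in_signature="s", out_signature="s")
    def get(self, luid):
        return self._items[luid]

    @dbus.service.method(IFACE, in_signature="ss", out_signature="s")
    def put(self, luid, data):
        self._items[luid] = data
        return luid

    @dbus.service.method(IFACE, in_signature="s")
    def delete(self, luid):
        del self._items[luid]

    @dbus.service.method(IFACE)
    def refresh(self):
        pass  # re-scan the backing service

    @dbus.service.method(IFACE)
    def finish(self):
        pass  # commit pending work, drop caches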
>
> >
> > (Anyone who's worked on Conduit's internals gets a tingle in their spine ;) )
> > 3) Jc2K had a great idea and turned me on to json-glib. While the
> > above interface is complete, I'm sure everyone noticed that in most
> > cases get is going to be returning one mammoth tuple, which isn't very
> > user-friendly. A much cleaner way is to make all of our DataTypes
> > GObjects and use json-glib to string them over the wire. Now this does
> > present two main concerns: 1) it's GNOME-specific, and I feel dirty
> > using D-Bus in a desktop-specific way, and 2) it's very different from
> > most D-Bus applications. And while it is easier to deserialize a string
> > than it is to try and manage an enormous struct or tuple, people have
> > much more familiarity with structs and tuples. For the sake of
> > shockingly beautiful client code, my vote is towards json-glib (note
> > that means we get lots of free translation into most popular
> > languages; it's pretty safe to say that if they have D-Bus bindings,
> > they have GObject bindings as well.)
>
> Note that I have already had this discussion with Jc2k, but I will repeat it
> here, as hopefully it makes more sense the second time around. Basically
> this strikes me as not a very good idea.
>
> The premise of my argument is that it's more efficient for the application
> that wants the data to deal with it in a standardised format, using
> the most suitable native libraries for the application, than it is to break
> all of the datatypes into tuple-like form and then send them over the bus.
> By efficient I also mean less work.
>
> Conduit uses standardised datatypes to represent data it downloads, or we
> provide the ability to convert arbitrary data to standardised types (vCard,
> iCal, etc). Within a dataprovider the only requirement is that data can be
> located via a locally unique ID. So get_all() returns LUIDs, and get(LUID)
> returns the data at that LUID. The same applies for delete.
>
> I suggest we (Conduit) expose a slightly modified Conduit D-Bus interface
> (see the ExporterConduit iface) over the bus that intercepts calls to
> get(LUID) and returns a local file URI that is the result of converting
> the native dataprovider type into a standardised datatype such as a vCard
> or iCal. We take care of the caching, and the application always gets a
> local file URI that represents a remote piece of data.
This is quite similar to my first design. My concern was (somewhat
specialized, I know, but I feel like it could be common enough to
cause concern) that some applications might still want to know the
original URL of an image, or want tagging info/other metadata. While
the D-Bus tuple is totally workable, the thought of standardized
GObjects with these properties (but _not_ the actual data) is as well.
To dig up some of my experimental code, I had a Photo class that
looked like so:
Photo
--remote_url
--LUID
--mtime
--cached_url
--tags
and we simply send that across the wire; the client still handles
native I/O, as I don't trust D-Bus with hundreds of megs of photos
yet. ;)
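For reference, here's roughly what that class might look like as a
GObject, which is the form json-glib would serialize. This is a sketch
against today's PyGObject API, and the property set is just the list
above, nothing official:

from gi.repository import GObject

class Photo(GObject.Object):
    # metadata only -- the actual image bytes never cross the bus
    remote_url = GObject.Property(type=str, default="")
    luid = GObject.Property(type=str, default="")
    mtime = GObject.Property(type=int, default=0)
    cached_url = GObject.Property(type=str, default="")
    tags = GObject.Property(type=object)  # e.g. a list of tag strings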
>
> Note that (excluding caching logic) the above would be not more than an
> hour's work from Conduit's perspective, and would definitely be worth
> investigating before spending too much time committing to the json-glib
> thing.
A solid agree from me here; I just don't want us to toss a system that
could make accessing the daemon significantly easier (most notably in
languages like C#, where a struct is very different from a class) by
offering up objects instead of tuples.
>
>  If we need to expose additional metadata to calling apps beyond what
> can be expressed in a file URI, then we can do either
> 1) add a get_metadata(LUID, key)
That works; I would probably support a get_all_metadata() as well, to
reduce the number of round trips. But that's just semantics.
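As a sketch of the client side (the bus name, object path, and
interface name here are hypothetical placeholders):

import dbus

bus = dbus.SessionBus()
dp = dbus.Interface(
    bus.get_object("org.gnome.SDDT", "/providers/FSpot"),
    "org.gnome.SDDT.DataProvider")

luid = dp.get_all()[0]
mtime = dp.get_metadata(luid, "mtime")  # one key, one round trip
meta = dp.get_all_metadata(luid)        # everything in a single call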
> 2) give the option of returning the LUID unmodified, i.e. before it is
> converted to a file. This may be a smarter solution where the LUID is
> standardised, and the app already has bindings to the dataprovider that
> conduit is providing a proxy for (e.g. Evolution, where caching doesn't
> make sense anyway).
I do agree that in lots of cases this is true. In the photo scenario,
we shouldn't copy the images out of F-Spot's directory; that's
redundant. However, the Evolution case is one where I might argue the
other direction. Ideally, someone using sddt would need to know very
little to nothing about the specifics of a dataprovider's
implementation. I would argue that the evo bindings tend to be buggy
and complicated to use; abstracting them behind a simple D-Bus
interface removes lots of implementation details. The other upside is
that this would expose a sort of 'first-class' object, so I could get
all my evo and Facebook contacts and not need separate parsing logic
for each.
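As a quick illustration of that, a hypothetical client (the provider
paths and the JSON-over-D-Bus scheme are assumptions on my part) could
walk both sources with the exact same code:

import json
import dbus

bus = dbus.SessionBus()
for path in ("/providers/Evolution", "/providers/Facebook"):
    dp = dbus.Interface(bus.get_object("org.gnome.SDDT", path),
                        "org.gnome.SDDT.DataProvider")
    for luid in dp.get_all():
        contact = json.loads(dp.get(luid))  # same shape from either source
        print(contact["full_name"])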

I'm not 100% sure how to proceed here; it's definitely a question of
scope. I see an ideal system as offering a little more abstraction.
And while it's not free, I would also argue that instantiating a JSON
representation of an event is going to cost at least comparably to
parsing an iCal file for one event, or one file per event.
>
> I still think this is a simpler solution than the json-glib one, which
> requires
> 1) thinking about how to deconstruct each datatype into a tuple/dictionary
> representation
That's all the JSON is essentially doing. It's a dictionary like
{attribute_name: value}, where a value can itself hold deeper nested
dictionaries. The two main differences are that json-glib is another
library, and that it instantiates GObjects, usable more or less the
same way in any system with GObject bindings available. In all honesty,
as a Python programmer it's no big deal; we have easy access to dicts,
tuples, or GObjects. I'm just thinking about usability in C, etc. Never
having used D-Bus in C, I cannot speak to it. If it's not an issue,
then we just do it the old-school way.
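As a concrete (made-up) example, the Photo above would travel as
something like the string below, and any client can turn it back into
a native dict or object with its stock JSON library:

import json

wire = """\
{"remote_url": "http://example.com/photos/456.jpg",
 "luid": "456@flickr",
 "mtime": 1204675380,
 "cached_url": "file:///home/kevin/.cache/sddt/456.jpg",
 "tags": ["gnome", "hackfest"]}"""

photo = json.loads(wire)
print(photo["tags"])  # ['gnome', 'hackfest']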
> 2) extra DBus traffic
Wouldn't this be nominal? JSON is an extremely minimal markup. I'll do
some tests for numbers.
> 3) A new library and binding for each app
Major downside, and the main reason I'm completely OK with conventional D-Bus.
>
> Anyway, I would appreciate your thoughts
>
> Regards
>
> John Stowers

Thanks for the input!



-- 
Kevin Kubasik
http://kubasik.net/blog

