Re: SuperDataDaemonThing - Let's Plan



ACK on the basic idea and kudos for building on the Conduit platform!

I think it is smart to do this on top of the Conduit daemon, because we have already done a lot of the peripheral work needed, including configuration getting/setting, and GUI configuration for those yucky dataproviders that need you to sign into a website. We also have some rather comprehensive unit tests to make sure that we continue to work with all these online dataproviders.


1) The interface for finding and being notified of DataProviders'
presence will be much akin to Conduit's, as mentioned on the Wiki.
2) Once the user has a DataProvider, it seems like a small expansion
of the basic CRUD operations is a good start:
  * get_all
  * get
  * put
  * delete
  * refresh

I would also add finish(), to give dataproviders the ability to complete any pending transaction if they wish, and the ability to clean up memory (see the sketch below).
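To make that concrete, here is a rough sketch in dbus-python of what the exported interface could look like, including finish(). The interface name, object wiring and the dataprovider object behind it are all made up for illustration, not a committed design:

    import dbus
    import dbus.service

    # Hypothetical interface name, for illustration only
    IFACE = "org.gnome.SuperDataDaemon.DataProvider"

    class DataProviderObject(dbus.service.Object):
        """Exposes one dataprovider instance over the bus."""

        def __init__(self, bus_name, path, dataprovider):
            dbus.service.Object.__init__(self, bus_name, path)
            self._dp = dataprovider

        @dbus.service.method(IFACE, out_signature="as")
        def get_all(self):
            # Return the locally unique IDs (LUIDs) of all items
            return self._dp.get_all()

        @dbus.service.method(IFACE, in_signature="s", out_signature="s")
        def get(self, luid):
            # Return the data identified by that LUID
            return self._dp.get(luid)

        @dbus.service.method(IFACE, in_signature="ss", out_signature="s")
        def put(self, luid, data):
            # Store data, returning its (possibly new) LUID
            return self._dp.put(luid, data)

        @dbus.service.method(IFACE, in_signature="s")
        def delete(self, luid):
            self._dp.delete(luid)

        @dbus.service.method(IFACE)
        def refresh(self):
            self._dp.refresh()

        @dbus.service.method(IFACE)
        def finish(self):
            # Complete any pending transaction and free cached data
            self._dp.finish()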
 

(Anyone who's worked on Conduit's internals gets a tingle in their spine ;) )
3) Jc2k had a great idea and turned me on to json-glib. While the
above interface is complete, I'm sure everyone noticed that in most
cases get() is going to be returning one mammoth tuple, which isn't
very user-friendly. A much cleaner way is to make all of our
DataTypes GObjects and use json-glib to serialize them over the wire.
Now this does present 2 main concerns: 1) it's GNOME-specific, and I
feel dirty using DBus in a desktop-specific way, and 2) it's very
different from most DBus applications. And while it is easier to
deserialize a string than it is to try and manage an enormous struct
or tuple, people have much more familiarity with the structs and
tuples. For the sake of shockingly beautiful client code, my vote is
towards json-glib. (Note that this means we get lots of free
translation into most popular languages; it's pretty safe to say that
if they have DBus bindings they have GObject bindings as well.)
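For concreteness, this is roughly what the json-glib approach looks like from Python, assuming GObject bindings for json-glib are available; the Contact type and its properties are invented for the example:

    import gi
    gi.require_version("Json", "1.0")
    from gi.repository import GObject, Json

    class Contact(GObject.Object):
        # An invented datatype: any GObject whose state lives in
        # properties can be (de)serialized by json-glib
        name = GObject.Property(type=str, default="")
        email = GObject.Property(type=str, default="")

    contact = Contact(name="Example Person", email="person@example.com")

    # Serialize the object's properties to a JSON string for the wire
    node = Json.gobject_serialize(contact)
    data = Json.to_string(node, True)

    # ...and reconstruct an equivalent object on the other side
    clone = Json.gobject_deserialize(Contact.__gtype__,
                                     Json.from_string(data))
    print(clone.props.name)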

Note that I have already had this discussion with Jc2k, but I will repeat it here, as hopefully it makes more sense the second time around. Basically this strikes me as not a very good idea.

The premise of my argument is that it is more efficient for the application that wants the data to deal with it in a standardised format, using the most suitable native libraries for the application, than it is to break all of the datatypes into tuple-like form and then send them over the bus. By efficient I also mean less work.

Conduit uses standardised datatypes to represent the data it downloads, or we provide the ability to convert arbitrary data to standardised types (vCard, iCal, etc). Within a dataprovider the only requirement is that data can be located via a locally unique ID (LUID). So get_all() returns LUIDs, and get(LUID) returns the data at that LUID. The same applies for delete.

I suggest we (Conduit) expose a slightly modified Conduit DBus interface (see the ExporterConduit iface) over the bus that intercepts calls to get(LUID) and returns a local file URI that is the result of converting the native dataprovider type into a standardised datatype such as a vCard or iCal. We take care of the caching, and the application always gets a local file URI that represents a remote piece of data.
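From the calling application's side the result is deliberately boring. A client would look something like this rough sketch, where the bus name, object path and dataprovider are all made up for illustration:

    import dbus

    bus = dbus.SessionBus()
    proxy = bus.get_object("org.conduit.Application",
                           "/org/conduit/dataproviders/FlickrSource")
    dp = dbus.Interface(proxy, "org.conduit.DataProvider")

    dp.refresh()
    for luid in dp.get_all():
        # get() returns a local file URI pointing at the converted,
        # standardised representation (e.g. a vCard or iCal file)
        uri = dp.get(luid)
        print(luid, "->", uri)

No new library, no new serialization format; just the DBus bindings the app already has, plus whatever native library it already uses to parse vCard/iCal files.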

Note that (excluding caching logic) the above would be no more than an hour's work from Conduit's perspective, and would definitely be worth investigating before spending too much time committing to the json-glib approach.

If we need to expose additional metadata to calling apps beyond what can be expressed in a file URI, then we can either:
1) add a get_metadata(LUID, key) method (sketched after this list), or
2) give the option of returning the LUID unmodified, i.e. before it is converted to a file. This may be the smarter solution where the LUID is standardised and the app already has bindings to the dataprovider that Conduit is proxying (e.g. Evolution, where caching doesn't make sense anyway).
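Option 1 would just be one more method slotted into the DataProviderObject sketch earlier in this mail; something like the following fragment, with the key names invented for the example:

    @dbus.service.method(IFACE, in_signature="ss", out_signature="s")
    def get_metadata(self, luid, key):
        # Look up a single metadata value (e.g. "mtime" or
        # "mimetype") for the item identified by the LUID
        return str(self._dp.get_metadata(luid, key))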

I still think this is a simpler solution than the json-glib one, which requires:
1) thinking about how to deconstruct each datatype into a tuple/dictionary representation,
2) extra DBus traffic, and
3) a new library and binding for each app.
 
Anyway, I would appreciate your thoughts.

Regards

John Stowers


