SuperDataDaemonThing - Let's Plan



1) Eventually we will pick a name

But seriously, ladies and forks ;) the SuperDataDaemonThing [1] (sddt
for the rest of this mail) is a wicked awesome idea, and an integral
part of the online desktop and the changing way we use computers. I
know I'm preaching to the choir a bit with this sentiment, but we all
know how frustrating it is to see 6 Flickr API wrappers all on the
desktop, each authenticating and downloading all the photos
separately, and, most frustrating of all, each fixing the same bug
separately over and over. There are tons of great arguments for doing
all this in C, or as a collection of libraries, or not at all, but
they pale in comparison to someone actually starting to do something
and writing code.

It is under this banner of 'let's start doing something instead of
just talking' that I propose the following:
1) Sddt will exist as its own process, and as a 'freestanding'
project. While we might save some time by just modding out a Conduit
interface to leverage its DataProviders, Conduit has a very different
purpose, and while that would work fine for Gimmie and Conduit and
maybe the next 3 programs that come along, this interface needs to be
generic, and so does its backend.
2) All that being said, starting from scratch is clearly a mistake,
as Conduit has already implemented a great deal of the logic for
talking to a variety of endpoints. My 'solution' is that we make sddt
a phased project that will eventually be more or less rewritten, but
will still be highly functional in its infancy.
* The first iteration of sddt will be little more than a thin DBus
wrapper, a kind of glue for everyone. We don't implement caching,
connection pooling, or even authentication storage within sddt
itself; we just use DBus wherever possible, and libraries where we
must, to offer an initial set of DataProviders. They will be ugly and
slow, and in no way better than the implementations that people are
currently using. However, this gives us a functional platform to get
API feedback on, and to provide a complete feature set when
applications decide to migrate.
* The second phase is the consolidation of backends and the
implementation of caching. The goal is that (especially for remote
services like Flickr) we should be able to respond to calls almost
immediately by using a local SQLite db or even just pickle. We then
queue a refresh of the remote source, and if changes are made, we
fire the appropriate events (see the sketch after this list). This is
a big step, and will require a significant amount of new code.
However, because we can implement it one DataProvider at a time, we
won't be pressed to somehow produce all that caching code in one
release cycle.
* Finally, we do something with authentication, make some prettier
config UIs, etc., and turn it into a real application.
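
To make the second phase concrete, here is a minimal sketch of the
cache-then-refresh pattern in Python. Everything in it is
hypothetical (the PhotoCache class, the table layout, and the
remote.list_photos() call are all made up for illustration); the
point is just that get_all can answer from SQLite almost immediately
while the real fetch happens in the background.

import sqlite3
import threading

class PhotoCache:
    """Hypothetical phase-two cache: answer from SQLite now, refresh later."""

    def __init__(self, remote, path="photos.db"):
        self.remote = remote   # imaginary backend exposing list_photos()
        self.path = path
        self.listeners = []    # callbacks to fire when the cache changes
        with sqlite3.connect(self.path) as db:
            db.execute("CREATE TABLE IF NOT EXISTS photos"
                       " (uid TEXT PRIMARY KEY, title TEXT)")

    def get_all(self):
        # Respond almost immediately from the local cache...
        with sqlite3.connect(self.path) as db:
            uids = [row[0] for row in db.execute("SELECT uid FROM photos")]
        # ...then queue a refresh of the remote source in the background.
        threading.Thread(target=self._refresh, daemon=True).start()
        return uids

    def _refresh(self):
        changed = False
        with sqlite3.connect(self.path) as db:  # own connection for this thread
            for uid, title in self.remote.list_photos():  # slow network call
                hit = db.execute("SELECT 1 FROM photos WHERE uid = ?",
                                 (uid,)).fetchone()
                if hit is None:
                    db.execute("INSERT INTO photos VALUES (?, ?)", (uid, title))
                    changed = True
        if changed:
            for callback in self.listeners:  # fire the appropriate events
                callback()

The real thing would also have to notice updates and deletions, but
the shape stays the same: cheap local read, queued remote refresh,
events only when something actually changed.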

I know it's not perfect, but I spent a few hours this week trying to
figure out the best launching point for this, and (3 failed attempts
later) I realized that I couldn't just start from scratch, and that I
would have to screw with far too much of Conduit's internals to make
it do what we want. (However, a copy of its DataProviders is a good
starting point for several of our more common backends.)


Ok, are we still alive? Does this sound even remotely sane? The idea
is that we design a rockin' public API, and we make that as
functional as humanly possible, as soon as possible. Then we slowly
repair the backend, bit by bit. This isn't coming from some personal
experience; it's just an idea, so feel free to abuse it as one.

Now, for implementation of the actual DBus API.

1) The interface for finding and being notified of DataProviders'
presence will be much akin to Conduit's, as mentioned on the Wiki.
2) Once the user has a DataProvider, it seems like a small expansion
of the basic CRUD operations is a good start (sketched after this list):
   * get_all
   * get
   * put
   * delete
   * refresh
(Anyone who's worked on Conduit's internals gets a tingle in their spine ;) )
3) Jc2K had a great idea and turned me on to json-glib. While the
above interface is complete, I'm sure everyone noticed that in most
cases get is going to return one mammoth tuple, which isn't very
user-friendly. A much cleaner way is to make all of our DataTypes
GObjects and use json-glib to string them over the wire. Now, this
does present 2 main concerns: 1) it's GNOME-specific, and I feel
dirty using DBus in a desktop-specific way; and 2) it's very
different from most DBus applications. And while it is easier to
deserialize a string than it is to try and manage an enormous struct
or tuple, people have much more familiarity with the structs and
tuples. For the sake of shockingly beautiful client code, my vote is
for json-glib. (Note that means we get lots of free translation into
most popular languages; it's pretty safe to say that if they have
DBus bindings, they have GObject bindings as well.)
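
To make points 2 and 3 concrete, here is a rough sketch of what one
such DataProvider might look like as a dbus-python service. The bus
name, object path, and interface name are all hypothetical, and I'm
using Python's json module as a stand-in for the eventual json-glib
serialization of GObjects; consider it a shape, not an implementation.

import json

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

IFACE = "org.gnome.Sddt.DataProvider"   # hypothetical interface name

class FlickrProvider(dbus.service.Object):
    """Sketch of one DataProvider exposing the expanded-CRUD interface."""

    def __init__(self, bus):
        super().__init__(bus, "/org/gnome/Sddt/Flickr")  # hypothetical path
        self.photos = {}  # uid -> photo dict; stands in for the real backend

    @dbus.service.method(IFACE, out_signature="as")
    def get_all(self):
        return list(self.photos)

    @dbus.service.method(IFACE, in_signature="s", out_signature="s")
    def get(self, uid):
        # One JSON string over the wire instead of a mammoth tuple.
        return json.dumps(self.photos[uid])

    @dbus.service.method(IFACE, in_signature="ss")
    def put(self, uid, data):
        self.photos[uid] = json.loads(data)

    @dbus.service.method(IFACE, in_signature="s")
    def delete(self, uid):
        del self.photos[uid]

    @dbus.service.method(IFACE)
    def refresh(self):
        pass  # phase two: queue a trip out to the remote source here

if __name__ == "__main__":
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    name = dbus.service.BusName("org.gnome.Sddt", bus)  # hypothetical name
    provider = FlickrProvider(bus)
    GLib.MainLoop().run()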

Of course, it seemed odd to me that there wasn't already a better way
of doing this over DBus. I'm still pretty new to it on the whole, so
I really can't say I've got the finer points down.

On this note, I have a patch for an initial set of Python bindings to
json-glib [2]. It's only about 70% of the API as of now, but it
should be pretty easy to finish; I'm just getting bogged down in C
and odd Makefile trickery, so if anyone is particularly literate in
one or both, 5 minutes and some eyeballs would be much appreciated.

Again, following Conduit's lead on this, I would think that events
are best handled in a tree of interfaces.
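
I haven't worked this part out in any detail, but as a rough sketch
of what I mean by a tree: each provider sits at its own object path
under a common root, carrying its own signal-emitting interface. The
paths, interface name, and signal names below are purely hypothetical:

import dbus.service

IFACE = "org.gnome.Sddt.DataProvider"  # same hypothetical interface as above

class ProviderSignals(dbus.service.Object):
    """Lives at a per-provider path like /org/gnome/Sddt/Flickr, so
    clients can subscribe to just the providers they care about."""

    @dbus.service.signal(IFACE, signature="s")
    def DataChanged(self, uid):
        pass  # emitted when a refresh finds that uid changed upstream

    @dbus.service.signal(IFACE, signature="s")
    def DataDeleted(self, uid):
        pass  # emitted when uid disappears from the remote source

A client would then listen with something like
bus.add_signal_receiver(cb, dbus_interface=IFACE,
path="/org/gnome/Sddt/Flickr") and only hear about the providers it
asked for.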

What do people think? I'd love to get in at the ground level on this
one, but I wanna make sure we have a consensus on what applications
could use/need and how to best provide that. Once we start to get a
better idea of what form this will take, it's easy to split up the
work ;)



[1] - http://live.gnome.org/SuperDataDaemonThing
[2] - http://bugzilla.openedhand.com/show_bug.cgi?id=830

-- 
Kevin Kubasik
http://kubasik.net/blog

