On Fri, 2008-01-11 at 19:43 -0600, Govind Salinas wrote:
> Hi Again,
>
> I have a build and a test environment going, now I want to start
> contributing. I figured I would start with a way to keep a to-do
> list. Then work on import, export from Tomboy (since that was
> mentioned in the Wiki). After that, maybe I could work on OpenID
> support, since that's related to the work I do at work.
>
> First things first. I have been reading the "A new data model for
> Mugshot" article and I think I get what's going on, at least well
> enough to start writing a plug-in (extension, stack, I'm not sure
> what term you use). But I didn't see any documentation on how to
> create a plug-in and install it. I will also need to add
> classes/resources, and I am not sure if a server change would be
> needed or not. I would expect you could do it by API. Can someone
> point me in the right direction here?

So, unfortunately, while the data model is in fact the coolest thing
since sliced bread, it's not very easy to extend it with new types of
objects stored on the server side.

Since you've read the data model article, you probably have some idea
of the benefits of the data model, but other people may be wondering
"a todo list sounds really simple, why all this complexity?" ... so
I'll describe some of the features you could have in a TODO list
implemented with the data model:

- Web display of your TODO list
- Instant updates of the client display when you add and remove items
  from the TODO list via the web
- Ability to publish selected items in your TODO list to your friends
- Instant updates when your friends add and remove published items
- Ability to link to bugzilla tasks from the TODO list, with the
  server going and retrieving the bug status and updating it when it
  changes

There are two basic reasons why server-side extension is hard: one is
simply that doing any hacking on the server side requires getting a
test instance of the server going, and that's quite a project.
The other is that the emphasis for the data model has been exporting
the existing server objects via the data model, so you need to know
how we implement "existing server objects" and add a new one. And
that's no simple thing, since it involves adding little bits of code
scattered throughout lots of different directories.

So, how do we fix this and make it easy for people to implement their
own applications with an online component, without becoming experts
in a complicated Java server?

One approach is to punt and say that the data model isn't involved
for applications. After all, we need to work with external non-GNOME
web applications, so people should just implement server components
as a normal web app. And the simplest version of this is to use an
existing application. I think the suggestion of basing a TODO list on
rememberthemilk.com is a good one.

Alternatively, we could create some sort of "simple data storage API"
that maybe wouldn't allow all the features described above, but would
work fine for a basic TODO list. In fact we already have this (!); if
you just store your TODO list items in GConf and add a line to:

  online-desktop/online-prefs-sync/online-prefs-sync.synclist

(or install a sync file for your application), then the TODO list
items will automatically be propagated between all of your desktops.
The downside is that conflict resolution is very simple to
non-existent, so it's not going to handle offline TODO list editing
on multiple machines well.

Something for the future would be to allow extension of the data
model via web servers.
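To make the GConf idea above concrete, here's a rough sketch of what
storing TODO items in GConf could look like from the command line.
The key name /apps/mytodo/items and the item values are made up for
illustration, and you'd still need to check the existing
online-prefs-sync.synclist file for the exact entry format to use for
your key:

```shell
# Store the TODO items as a GConf string list under a hypothetical
# application key (not a real schema).
gconftool-2 --type list --list-type string \
    --set /apps/mytodo/items '[Buy milk,Review patch]'

# Read the list back; once the key is covered by a synclist entry,
# this is the value that online-prefs-sync would propagate between
# your desktops.
gconftool-2 --get /apps/mytodo/items
```

In a real application you'd use the GConf client API from your
language of choice rather than shelling out, but the data flow is the
same: write the key locally, let the sync daemon do the rest.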
Right now when you ask the server for information via XMPP you say
something like:

  getResource(resourceId="http://online.gnome.org/o/user/61m76k3hGbRRFS",
              fetch="contacts +")

You could imagine that if you did:

  getResource(resourceId="http://todo.gnome.org/o/item/61m76k3hGbRRFS.43",
              fetch="contacts +")

the server would go out to todo.gnome.org via HTTP and fetch the data
(signing the request to indicate which online.gnome.org user is
requesting the data). We could then have a much simpler way of
writing services that export data into the data model without
modifying the online.gnome.org server.

The other thing to do, of course, is to work on making the server
easy to run and hack on, and there is a lot of stuff we could do
there:

- Get rid of the JBoss dependency and the use of EJB session beans
- Don't use UDP multicast and go to manual cluster configuration
- Fix the code to work with Java 6/7 and IcedTea
- Remove stale code for all sorts of mugshot.org features that we no
  longer care about
- Documentation, documentation

But there are months of work there, so it's not going to happen
quickly. For right now I think rememberthemilk or maybe the GConf
approach is the right one.

A note about backup: we do have nightly backups of the entire
online.gnome.org data set. So worst case, we lose 24 hours of data.
But I think what Colin was getting at is that if you are storing
terabytes of critical data for people, you have to think about the
whole problem of failure and recovery differently: you inevitably
will have all sorts of hardware failures, and you need to have data
stored redundantly and be able to route around failures without
blinking. We aren't set up to do that sort of thing, and therefore
(at least in the short term) we shouldn't be putting critical data on
online.gnome.org.

In terms of client-side encryption and storing things on the server
as a binary blob: it is clearly the right thing to do in certain
circumstances ... for example, if we wanted to sync your gnome
keyring saved passwords between machines. But there are two big
downsides to be careful of: first, if something is encrypted client
side, you can't create a web interface for it. Second, and more
important, if you encrypt something with no way to recover the
password, then you had better make sure that the user is more
concerned about the data falling into the wrong hands than about
permanently losing the data. This is frequently not the case.

- Owen