Thanks for your answer, it cleared up many of my questions.

On Saturday 11 November 2006 at 13:18 -0500, Havoc Pennington wrote:
> As a preface to responding to your mail, keep in mind that there are two
> interface points that need not use the same mechanism:
>
> user session on a host <-> some kind of global-to-network user state
> application <-> some kind of sessionwide state

Indeed. Until now, I didn't see gconfd as a proxy to the data, but as a
central place applications must talk to. Currently, it is both.

> Also, there are a couple of interesting cases. Traditionally, we've
> always thought about a LAN with user identities stored in NIS or
> something and user==UNIX-user. Each user has an NFS homedir.
>
> However, another interesting case is an individual consumer with their
> data stored in a "cloud" (e.g. latest Google desktop stores its prefs
> somewhere in the Google cloud). So there's the local UNIX user then some
> remote account on the Internet.

Indeed, but this case can only be addressed by an entirely different
backend, one that can only be enabled after the user (or the site
admin) has set up something to host it. One could argue it would also
be nice to have something like this set up automatically, but
interesting as that is, I'm afraid it goes far beyond the scope of this
discussion.

> Josselin Mouette wrote:
> > I think a good short-term change is to revert to global locking by
> > default.
>
> There is a way to turn on global locking again via environment variable,
> I believe. At least there used to be. So you could play around with it.

I know that. The point is about turning it on by default.

> GConf was this way for the first several years. It is not a good idea,
> essentially because fcntl() locking over NFS does not work reliably in
> real production environments. (Not to mention people who insist on
> using AFS with hosts sharing a homedir that aren't even behind the same
> firewall.)

AFAICS, NFS locking is something that is slowly getting fixed in most
systems, basically because GConf was far from the only piece of
software to suffer from this situation.

> As you note, there are several problems with the per-user-and-host
> locking, but gconfd mysteriously refusing to start up at all due to the
> global locks resulted in loads of support issues and angry admins.

Was it refusing to start up because ORBit wasn't listening on TCP, or
because of some locking bugs? BTW, this led to the opposite situation,
although probably not at the same level - I've received a few
complaints from users who didn't understand why their changes were not
applied across multiple hosts, or mysteriously disappeared.

> Regarding the three issues:
> - to fix notification of config changes to multiple hosts, you
> really must use a "central server" model (like IMAP).
> The only other thing you _might_ try is some firewall-hopping
> code like the P2P VOIP stuff - I think Google Talk's firewall-hopping
> library based on having an XMPP connection is open source.
> But just connecting from A to B with no fancy stuff is not
> going to work on many deployments, so just switching back to global
> locking won't help.

Well, you can generally expect that on a local network with a shared
filesystem, there is no paranoid firewall blocking connections on each
host. Which is why global locking helps in most cases.
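
To make the failure mode concrete: the global lock boils down to
something like the following (an untested sketch, not the actual gconfd
code, and the path is invented). The fcntl() call is exactly the part
that has to go through the NFS server's lock daemon, which is what
breaks in the deployments you describe.

    /* Minimal sketch of the advisory locking a global gconfd lock
     * relies on.  On an NFS homedir, F_SETLK is forwarded to the
     * server's lockd; on broken setups it can fail, hang, or
     * silently succeed for two hosts at once. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Invented path, for illustration only. */
        int fd = open("/home/user/.gconfd/global.lock",
                      O_RDWR | O_CREAT, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct flock lock = {
            .l_type   = F_WRLCK,   /* exclusive write lock */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0          /* 0 == lock the whole file */
        };

        /* Non-blocking: fails immediately if another host (or a
         * stale lock registration) already holds the lock. */
        if (fcntl(fd, F_SETLK, &lock) < 0) {
            perror("fcntl(F_SETLK)");
            close(fd);
            return 1;
        }

        /* ... the daemon would now own the configuration source ... */

        close(fd);   /* closing the descriptor drops the lock */
        return 0;
    }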
> - file corruption shouldn't happen; at least last I knew, the files are
> written atomically (all or nothing) by writing to a tmp file then
> rename(), so the possible corruption is that if you write
> both /foo/bar/baz and /foo/baz/bar in two places,
> you might get one value from one place and one from the other.
> You will not get mangled XML or something like that.
> This should cause problems almost never in practice (especially
> since users aren't usually actively configuring the same thing
> on two hosts at one time)

OK, so the worst case is some configuration disappearing. This is one
of the things that caused trouble when we tried to enable the
merged-tree setup for the home directory GConf source.

> - the ORBit DOS applies to pretty much the whole desktop - I think
> ORBit may have some kind of code to counter this.

It looks like ORBit does indeed have some code to counter this, as
hijacking /tmp/orbit-$user doesn't result in a DoS, only in a warning.
However, hijacking /tmp/gconfd-$user results in a big boom. Maybe
there's some magic in ORBit that we could re-use.

> It isn't
> huge in practice since you can look at who owns the offending
> file and apply real-world countermeasures such as disciplinary
> action, no?

Indeed, but it can lead to obnoxious "jokes", and it really makes the
software look amateurish.

> What you want here is a gconf backend, not a replacement for gconfd.
> You still want a per-session gconfd (which dbus is designed for) in
> order to cache the remote stuff and because it should default to local
> files to avoid mandatory configuration.

In the case of a shared filesystem, the "local" files are generally not
that local. But I see what you mean: the caching process is what makes
GConf so responsive, and it should remain.

> Then have a backend that uses either a central IMAP-like server, or adds
> some kind of P2P change notification to the file store.
>
> I would encourage fixing the "3 things" from
> http://www.gnome.org/projects/gconf/plans.html first, because right now
> gconf backends are very hard to write and brokenly have to contain all
> the schema information.

That would be a good occasion to learn dbus, indeed. (But really, don't
count on me unless days magically become 48 hours long.)

> > How about forgetting this communication thing? Configuration is stored
> > in files, we just need to read and write these files. We even have some
> > decent ways to monitor files now: local using inotify, remote using fam
> > with dnotify.
>
> It's harder than you might think because to do a gconf notification
> based on a file change, you need to "diff" the old and new config values
> in the file. Which means that either gconfd keeps all files loaded all
> the time, or that the backend puts only one value per file, or maybe
> another solution you can think of. One value per file will result in a
> _lot_ of files.

Currently we have one file per directory. This is a nice middle ground
and could stay that way.

> NFS fam, I think you'll find, is a disaster in practice that most admins
> will not get to work properly. Not sure it's even maintained, though I
> suppose it could be. Even if you get it sane on Linux you have a bunch
> of other platforms it won't work on, and AFS, etc.

Fam has become almost sane since the introduction of the dnotify patch.
It looks stable and doesn't poll. More importantly, it is designed to
work with NFS, and most shared home directories, whether we like it or
not, are still on NFS. Which means we have to deal with fam anyway, as
monitoring is already needed in many parts of the desktop.
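
On a local filesystem, at least, the watching half of this has become
easy. Roughly what the session daemon's loop could look like with
inotify (again an untested sketch with an invented path; fam would fill
the same role on NFS homedirs):

    /* Watch a GConf-style directory for changes made behind the
     * daemon's back.  Because files are saved with the tmp-file-
     * plus-rename() trick, an atomic save shows up as a single
     * IN_MOVED_TO event; we never see a half-written file. */
    #include <limits.h>
    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
        int fd = inotify_init();
        if (fd < 0) {
            perror("inotify_init");
            return 1;
        }

        /* Invented path; one watch per configuration directory. */
        if (inotify_add_watch(fd, "/home/user/.gconf/apps/example",
                              IN_MOVED_TO | IN_CLOSE_WRITE) < 0) {
            perror("inotify_add_watch");
            return 1;
        }

        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);
            if (len <= 0)
                break;
            /* Only the first event per read() is handled here, for
             * brevity.  This is where the daemon would reload the
             * file, diff it against its in-memory copy, and emit
             * notifications only for keys that actually changed. */
            struct inotify_event *ev = (struct inotify_event *) buf;
            if (ev->len > 0)
                printf("changed: %s\n", ev->name);
        }

        close(fd);
        return 0;
    }

With the current one-file-per-directory layout, the "diff" only has to
cover the keys of a single directory, which keeps the keep-everything-
loaded problem manageable.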
> If you meant to avoid a gconfd entirely and just have apps monitor the
> files, I would suggest that the per-session-daemon setup is semantically
> equivalent but a lot easier to code and much more efficient. You can
> share the file monitoring, local caching, and backend overhead among all
> the apps. Maybe most importantly you can share the overhead of reading
> in all the config when you log in among all the apps. It's also a lot
> easier to swap in a network backend using the per-session daemon.

I agree that this would be much more efficient. It would mean the
daemon could act mostly as a cache, while the data could also be
modified by another daemon, or by external tools, without any need to
signal the daemon. It also solves the network consistency issue without
any need for a P2P signaling system or the like.

> One of the "would be nice" items on
> http://www.gnome.org/projects/gconf/plans.html is to remove "--direct"
> mode which is when the apps use the config backend directly, because it
> results in a much larger "libgconf" than just talking to a daemon does.

Well, having the daemon monitor the files would also mean applications
could safely access the data directly. This could be provided by a
separate library, so as not to clutter the main one. That separate
library would also be used by the daemon itself to access the data.

> With a dbus session daemon, "libgconf" would just be a maybe 50K
> convenience wrapper, and apps with an aversion to dependencies could
> just manually make the dbus calls and skip the lib.

Or just manually read the files, by talking directly to the same
backend as the daemon.

> Per-session-daemon also makes it much easier to have settings stored in
> a "cloud" for individual users without an IT admin, because the session
> daemon can log in to the "cloud" service and keep a local cache of what
> it finds there. It would be a mess if every app did this separately.
>
> > If a migration
> > script is provided, complete source and binary compatibility could be
> > retained.
>
> Something to keep in mind is that there's a need for both forward and
> backward compat with gconf, i.e. people often share a homedir among an
> older and a newer version of GNOME, both in active use.
>
> You can accomplish this by completely renaming the storage (maybe move
> it to XDG_CONFIG_DIR) but that's the least-desirable approach since it
> means each GNOME version has to be separately configured. It would
> probably be worth it though if the config setup were really cleaned up
> nicely.

If it's done only once, and by moving to something really better that
we hope to keep for a long time, I think it's worth it.

Regards,
-- 
Josselin Mouette                /\./\

"Do you have any more insane proposals for me?"