Re: [Colm Smyth <Colm Smyth ireland sun com>] Re: GConf



Colm writes:
> I don't think it causes a problem with update notification because I
> assume that a schema change affecting the default value does not cause a 
> notification (it would require a mechanism for the backend to ask each
> gconfd process to check if the default value would have been used by
> a client; if no overriding value existed in that client's database list,
> then the default value is the one that the client would see).
> 

Right, at the moment changing a schema won't cause notification for
the keys it applies to. As you say, there are no hard guarantees that
notification will happen. I want to do it whenever possible, because
whenever notification doesn't happen the user experience is somewhat
diminished, but of course making GConf slow and overcomplicated is
going to be more damaging than a few missed notifications.

> >Right now multiple entries can of course have the same schema applied
> >to them (i.e. you can have multiple <applyto> entries for one schema),
> >but there isn't any way to do that automatically along the lines of
> >/gnome/mime/*/mime-type etc. 
> 
> The naming of a schema without a key so that it may be referred to 
> elsewhere with a specific (or wildcarded) key is an equally
> important feature of the changes I suggested. I'm assuming that 
> applications will need to come with their own schema files and these files
> will need to be able to re-use system schema information. 
> 
> >Are you proposing that the GConf API change here, or is it just
> >additional syntax in the .schemas file?
> 
> A small API change would be needed to allow a schema to be named and
> referred to for a key.
>

I think I'm not totally clear what you mean here. I'm going to go back
and read your first message and try to get it clear in my head.
 
> Locking is not a problem from a technology perspective, but it
> can affect the semantics of concurrent clients and in the case of
> an update transaction may lead to deadlock depending on how it
> is implemented by the specific database technology.
> 
> I've adopted the view that update notification is a 'nice to have' but
> that apps should not rely on it once I noticed that the backend
> interface provides no mechanism for databases to tell clients that an
> update has been detected - it is up to gconfd to do notification.
>

I was planning to add some mechanism to the backend interface for
this. I think it's worthwhile as soon as we have a backend that can
find out about changes independently of gconfd. It should be some
simple callback mechanism.

At the moment, I am not supporting third-party backends, that is, I'm
considering the backend interface an internal API that we can break
and assuming that all upgrades of GConf will include upgrading all
backends. Someday we might change this, but for now we have lots of
flexibility.
 
> >If we can solve this, then we don't need to require a single gconfd
> >per home directory, and can instead have a single gconfd per user
> >session. The advantage of that is that gconfd is always running
> >locally, and sysadmins don't have to fool with allowing ORBit to open
> >sockets across the network. Could also have performance advantages.
> 
> Correct me if I'm wrong, but I'm guessing that ORBit's name server
> is located by looking at properties on the Xserver as name services
> are reported to be tied to the user's X session. This appears to
> be the only security and the means to prevent a user's gconfd from
> being manipulated by another user, possibly remotely.
>

In GNOME 1.2 this is true, names are stored on the X server. However
GConf is using our next-generation name server called OAF (object
activation framework), and it can store names in other locations;
I'm not sure of all the details.

ORBit itself isn't communicating over the X server, though; it opens
its own sockets, either on the local machine or TCP/IP sockets
between machines. So this may require firewall or configuration
changes, and can be harder to support than X alone.

It strikes me as cleaner to have one gconfd per user per CPU, instead
of the current situation with one gconfd per home directory,
regardless of number of CPUs the user is logged in on. That way we
avoid nonlocal ORBit connections, which is simpler and should be more
responsive/faster.
 
> >For the XML backend it's going to involve some kind of polling
> >timestamps on files though, which is unpleasant (if nothing else, it
> >keeps laptops from spinning down their hard drive).
> 
> If you would like to make change notification a reliable feature, 
> you need some backend support. Rather than requiring a backend
> to do notification, it's possible to simply require backends to
> store a list of current clients. Then when a gconfd client does
> an update, it also notifies all clients (both those in the
> gconfd user's session and other clients). The only downside is
> that this only does notification if GConf is used to update
> the configuration values; it would not work for other APIs
> (e.g. regular LDAP).
>

You mean each gconfd would inform all other gconfds using the same
backend if a change was made. I think that's a decent
solution. I'm not sure whether it's easier to implement this or the
backend-based notification; it would require more thought.
 
> I feel that it's not a good idea to create artificial labels for
> database types; it's one of GConf's strengths that it applies a
> simple algorithm and sysadmins can use the database-path mechanism
> to create flexible configuration views.
> 
> I think it would be better to have a list of "root paths" associated
> with each database.
> 
> e.g. 
> 
> EnterpriseDB: /schema, /gnome/applications, /gnome/bookmarks
> Workgroup:    /schema, /gnome/applications, /mail/aliases
> Host:         /gnome/filesystems
> User:         /gnome/applications, /mail/aliases
> 
> When a value is read or written, the root paths form a high-level
> map of the database list. Specifically when writing a value,
> the workgroup database might be writable by regular users, but
> the User database is used in preference so long as the key
> is under one of the User root paths.
>

Good! Right now the backends have a sort of inferior version of this,
as you may have seen (each backend has readable/writable methods
that take a directory argument). But the toplevel map lets GConf avoid
the linear search of all backends, and is in general a much better
approach. It should even make it practical to include specific keys
in the map, rather than just directories.

I'd like to move to this method, excellent idea.
 
> This is almost a must-have even for the first release, but I agree
> that the design issues are more important. Including a search
> method in the backend interface would be vital.
>

The only reason I say a GUI can wait is that we have less than 2
months for this release, and then we have another release early next
year; GConf is not widely used in the upcoming release anyway. The
one next year should contain much more pervasive use of GConf.

> > - life cycle management for gconfd; right now gconfd lives forever, 
> >   which is sure to get annoying if you have a machine with a few
> >   hundred users. I think this is as simple as exiting after a period 
> >   of idleness, but it requires the above-mentioned robustness against
> >   gconfd exiting.
> 
> We could do reference counting; when the client count goes to zero, gconfd
> self-terminates after 5 minutes.
>

It doesn't even need to be explicit refcounting, just when there are
no listeners registered gconfd could exit. Of course the problem is
that crashed clients keep gconfd alive, and since we plan to have
potentially a few hundred clients, one of them crashing isn't too
unlikely.

A nice solution is perhaps to ping clients periodically, and remove
them from the list if they're dead.


Another TODO item I thought of is that threading is likely needed for
certain operations at some point. Across-the-network backend queries
and pinging clients could both potentially hang, so those should
probably run in threads. There are also issues with getting ORBit to
support threading though, so this may also be a 2.0 feature.

Havoc




