[Setup-tool-hackers] Location management, time travel support

Esteemed hackers,

I have imported the core code for location management and time travel
into the helix-setup-tools module in CVS. It's in the archiver/
directory. A simple command-line tool is available for storing data in
the archive, rolling data back by invoking backends, and changing and
managing locations. Clustering support is not present at this time.
There are a few notes on my future plans in the file TODO and I have
some scribblings about the design of the whole thing in the various
-spec files.

I have attached copies of those spec files for your reading enjoyment.
Please do read and enjoy. These are files I created several months ago,
and a few changes have taken place during implementation. I'll update
those files to reflect the changes shortly.

-- 
-Bradford Hovinen

"If the fool would persist in his folly he would become wise."

        - William Blake, "Proverbs of Hell," 1793

Rollback archiving internals
Copyright (C) 2000 Helix Code, Inc.
Written by Bradford Hovinen <hovinen@helixcode.com>

1. Directory format

Diagram:

	 + toplevel
	 |-+ Location 1
	 | |- CCYYMMDD-hhmm-<id>.xml
	 | |  .
	 | |  .
	 | |  .
	 | |- metadata.log:
	 | |  [<id> <date> <time> <backend>         ] ^
	 | |  [ .                                   ] | Time
	 | |  [ .                                   ] |
	 | |  [ .                                   ] |
	 | \- metadata.xml:
	 |    [...                                  ]
	 |    [<inherits>location</inherits>        ]
	 |    [<valid-config>backend</valid-config> ]
	 |    [...                                  ]
	 |-+ Location 2
	 |   ...

There is one toplevel directory for each archive. This directory
contains one or more location directories. Each location directory
must contain two files: an XML file describing the location and a log
of changes made in that location. Each change corresponds to an XML
file containing a snapshot of the configuration as modified. There is
one XML file per backend. Each change has an id number that is
incremented atomically by the archiving script when it stores
configuration changes. The id number, as well as the date and time of
storage, form a filename that uniquely identifies each configuration
change. The archiving script must also store in the log file a line
with the id number, date and time of storage, and backend used
whenever it stores XML data. New entries are stored at the head of the 
file, so that during rollback, the file may be forward-scanned to
find the appropriate identifier for the configuration file. The
per-location XML configuration file contains information on what the
location's parent is and what configurations that location defines.
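
The naming and logging conventions above can be sketched as follows. This is an illustrative Python fragment, not the actual archiving script; the exact field widths and separators are assumptions, since the document only fixes the CCYYMMDD-hhmm-&lt;id&gt; pattern and the log's head-of-file ordering.

```python
import datetime

def make_snapshot_name(change_id, when):
    """Build the CCYYMMDD-hhmm-<id>.xml filename for a change."""
    return "%s-%04d.xml" % (when.strftime("%Y%m%d-%H%M"), change_id)

def prepend_log_entry(log_lines, change_id, when, backend):
    """New entries go at the head of metadata.log, so a forward scan
    encounters the most recent change first."""
    entry = "%04d %s %s %s" % (
        change_id, when.strftime("%Y%m%d"), when.strftime("%H%M"), backend)
    return [entry] + log_lines

when = datetime.datetime(2000, 5, 17, 14, 30)
name = make_snapshot_name(12, when)
log = prepend_log_entry([], 12, when, "network-conf")
```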

For now, the backend shall be referred to by its executable name. When 
the backends gain CORBA interfaces, I suggest that the OAF id be used
instead. This reduces the problem of setting a backend's configuration 
to a simple object activation and method invocation. The OAF id may
also be used to resolve the backend's human-readable name.

2. Meta-configuration details

In order that this system be complete, there must be a way to
ascertain the current location and to roll back changes in location. I 
propose that there be a special archive in the configuration hierarchy 
that contains location history in the same format as other
locations. The archiver can then be a single script that accepts
command-line arguments describing the requested action: `archive this
data', `roll back this backend's configuration', and `switch to this
location'. It then handles all the details of interfacing with the
archive and applying the changes in the correct order. Conceptually,
the archiver becomes a backend in and of itself, where the frontend is 
located in the GUI of HCM. It would therefore be advisable to use the 
same standards for the archiver as for other backends and hence make
it a CORBA service, where the tool-specific interface is as described
above.
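
The three actions could map onto a command-line surface roughly like the following Python sketch. The sub-command and option names here are hypothetical; the document does not fix the archiver's actual argument syntax.

```python
import argparse

def build_parser():
    # Hypothetical command-line surface for the archiver script; the
    # real option names are not specified by this document.
    parser = argparse.ArgumentParser(prog="archiver")
    sub = parser.add_subparsers(dest="action")

    store = sub.add_parser("store")             # `archive this data'
    store.add_argument("--backend", required=True)
    store.add_argument("--location", default="default")

    roll = sub.add_parser("rollback")           # `roll back this backend'
    roll.add_argument("--backend", required=True)
    roll.add_argument("--date")

    switch = sub.add_parser("change-location")  # `switch to this location'
    switch.add_argument("location")
    return parser

args = build_parser().parse_args(["store", "--backend", "network-conf"])
```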

3. Future directions

The metafile log structure may run into scalability problems for
installations that have been in place for a long time. An alternative
structure that uses binary indexing might be in order. A command line
utility (with GUI interface) could be written to recover the file in
the case of corruption; such a utility could simply introspect each of 
the XML files in a directory. Provided that each XML file contains
enough information to create a file entry, which is trivial, recovery
is assured.
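
The recovery idea can be sketched in Python: since each snapshot filename already encodes id, date, and time, the log can be rebuilt from a directory listing plus one lookup inside each XML file for the backend name (modeled here as a callback, since the XML schema is not spelled out in this document).

```python
import re

SNAPSHOT_RE = re.compile(r"^(\d{8})-(\d{4})-(\d+)\.xml$")

def recover_log(filenames, backend_of):
    """Rebuild metadata.log entries from snapshot filenames alone.
    backend_of maps a filename to the backend recorded inside the XML."""
    entries = []
    for name in filenames:
        m = SNAPSHOT_RE.match(name)
        if not m:
            continue
        date, time, change_id = m.groups()
        entries.append((int(change_id), date, time, backend_of(name)))
    # Newest first, matching the head-of-file convention of the log.
    entries.sort(reverse=True)
    return ["%04d %s %s %s" % e for e in entries]

log = recover_log(
    ["20000517-1430-0012.xml", "20000516-0910-0011.xml", "junk.txt"],
    lambda name: "network-conf")
```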

Changes to the Helix Configuration Manager
Copyright (C) 2000 Helix Code, Inc.
Written by Bradford Hovinen <hovinen@helixcode.com>

As it stands, capplets and Helix Setup Tools are both run as separate
processes through the exec() facility. It is planned that the capplets 
shall become Bonobo controls in the future, once the OAF/gnorba
compatibility problems are worked out. This changes the design of the
configuration system considerably, and several things should be done
to take full advantage of these changes.

1. Capplets become Bonobo controls

It stands to reason that the front ends for Helix Setup Tools should
become Bonobo controls at the same time as capplets. They can each
implement the same interface (say, Bonobo::Capplet) with methods
getXML(), setXML(), ok(), cancel(), and init() and hence look the same
to the shell. This means that the front ends for the Helix Setup Tools
run as the same user as HCM and respond in the same way as capplets do
to requests to embed them in the HCM shell window. This is essential
for a consistent user interface that will not result in end-user
confusion [1]. HCM itself may then export an interface that includes
the method runBackend(), to which the frontend supplies a stream of
XML that HCM passes through the root manager to the backend via a
standard pipe [2]. The backend is then responsible for running the
program that archives the XML -- there is no other way to place that
XML in a central, system-wide repository. I suggest, therefore, that
we modify the design of the current system to make that change, so
that we do not have to undo existing work later.
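
The proposed Bonobo::Capplet interface can be modeled in Python as follows. Only the method names come from the text above; the example implementation and its state format are purely illustrative.

```python
from abc import ABC, abstractmethod

class Capplet(ABC):
    """Rough Python analogue of the proposed Bonobo::Capplet interface;
    the method names follow the text, everything else is illustrative."""

    @abstractmethod
    def init(self): ...

    @abstractmethod
    def getXML(self): ...

    @abstractmethod
    def setXML(self, xml): ...

    @abstractmethod
    def ok(self): ...

    @abstractmethod
    def cancel(self): ...

class BackgroundCapplet(Capplet):
    """Hypothetical capplet holding its state as an XML string."""
    def __init__(self):
        self.state = "<background/>"
    def init(self): pass
    def getXML(self): return self.state
    def setXML(self, xml): self.state = xml
    def ok(self): pass
    def cancel(self): pass

c = BackgroundCapplet()
c.setXML("<background color='blue'/>")
```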

2. Backends get CORBA interfaces through Perl/ORBit

At this point, there must be a way for the root manager to forward
CORBA sockets to the user securely. This could be done by modifying
ORBit so as to give the running program very precise control over the
nature of the socket. Access could be granted specifically to the user 
running the root manager by placing the socket in a directory owned by 
that user with permissions no more lax than 0700. When the CORBA
interfaces are created, applications will be able to make use of them to 
make system-wide changes as necessary (say, to add a new user during
the installation of a piece of software). This means that the
traditional rollback facilities must be extended to allow users to
roll back changes made by applications. In addition, the application
must treat the backend as a black box -- it should never be expected
to do anything unusual to support rollback, since buggy or
poorly-written applications would otherwise cause trouble for
unsuspecting users.
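
The socket-protection scheme amounts to a directory owned by the target user with mode 0700, as in this Python sketch (the directory naming is an assumption; note the explicit chmod, since the mode passed to mkdir is masked by the process umask):

```python
import os
import stat
import tempfile

def make_socket_dir(base):
    """Create a directory for the CORBA socket, owned by the current
    user, with permissions no more lax than 0700, so that no other
    user can reach the socket inside it."""
    path = os.path.join(base, "root-manager-%d" % os.getuid())
    os.mkdir(path, 0o700)
    os.chmod(path, 0o700)   # mkdir's mode is masked by umask; force it
    return path

base = tempfile.mkdtemp()
sock_dir = make_socket_dir(base)
mode = stat.S_IMODE(os.stat(sock_dir).st_mode)
```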

At this point I suggest that each backend export two interfaces: one
that is universal to all backends and one that is specific to that
particular tool. The former may include the methods getXML(),
setXML(), and commit(). When changes are made through the
tool-specific interface, the tool decides whether or not to apply
those changes immediately or to queue them up until a commit() is
invoked. If changes are made through the backend's CORBA interface and 
it is deactivated before a commit(), the backend must roll back those
changes under the assumption that they are not intended to be
permanent.
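
The commit semantics described above can be sketched in Python. The class below is a stand-in for a real backend: tool-specific changes queue up, commit() makes them permanent, and deactivation without commit() discards them.

```python
class Backend:
    """Sketch of the proposed commit semantics: changes made through
    the tool-specific interface queue up until commit(); deactivating
    the backend without commit() rolls them back."""

    def __init__(self, initial):
        self.applied = dict(initial)   # the live system configuration
        self.pending = {}              # queued, uncommitted changes

    def set_value(self, key, value):   # tool-specific interface
        self.pending[key] = value

    def commit(self):
        self.applied.update(self.pending)
        self.pending = {}

    def deactivate(self):
        # Uncommitted changes are assumed not intended to be permanent.
        self.pending = {}

b = Backend({"hostname": "old"})
b.set_value("hostname", "new")
b.deactivate()                         # cancel(): discard the change
unchanged = b.applied["hostname"]
b.set_value("hostname", "new")
b.commit()                             # ok(): flush and commit
committed = b.applied["hostname"]
```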

Of course, this makes implementation of the cancel() interface on the
frontends very easy -- simply deactivate the backend without
commit()ing. ok() can be implemented by flushing any remaining
changes, calling commit(), and then deactivating the backend. The
frontend can and should use the CORBA interface to invoke changes
whenever they are made, as long as it makes sense. It is then the
backend that sets the policy of whether or not the updates are live,
as described above. The frontend must still be able to read XML,
though, since that is how it will get an initial
description of the setup with which to fill in the dialog. In
addition, since the frontend may be invoked to make changes to an
inactive location, it should be able to write out an XML description
of the dialog's contents so that those changes may be archived rather
than applied immediately.

Notes

[1] A visual cue that signals to the user that he is running a
system-wide configuration tool rather than a personal one would be
advantageous. Such could take the form of an icon on the dialog, a
layout or formatting convention for the dialog proper, or some sort of
coloring convention of some of the controls. However, simply having
the tool run with root's Gtk+ theme and ignoring the embedding
preference, as would be the case if we do not Bonobize the HST
frontends, is inconsistent as many users will leave their themes as
the default and elect not to embed capplets -- eliminating all visual
cues. In addition, it is not particularly lucid and many users will
merely be confused by the inconsistent interface. One may imagine many
users filing bug reports in the belief that the behavior is
erroneous. Hence, that solution is insufficient.

[2] There must then be a method of multiplexing I/O through the root
manager, as there may be multiple backends running concurrently. A
simple protocol could be implemented to do this, or a named pipe could 
be created, if done very carefully so as to ensure a high degree of
security.

Configuration rollback, location management, and cluster support
Copyright (C) 2000 Helix Code, Inc.
Written by Bradford Hovinen <hovinen@helixcode.com>

I.   Basic architecture

A. Components

1. Helix Configuration Manager

The GUI shell, here referred to as the Helix Configuration Manager
(HCM), acts as launching point for the capplets and Helix Setup Tools,
and a control center for managing rollback, location management, and
clustering. It launches other components and knows how to find
archived configuration data for rollback. When rollback or a change of
location is required, it invokes the required backends or capplets
with the --set option and feeds the required XML to them.

2. Capplets

Capplets handle user configuration; they combine the front and back
ends into one process. They will eventually run as Bonobo controls
implementing the Bonobo::Capplet interface, but for now they are run
as regular processes. They all support the --get and --set command
line options (which will be methods in the Bonobo::Capplet
interface). --get returns an XML description of the capplet's state
for archival purposes and --set takes an XML description of the
capplet's state and applies those settings to the desktop.
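
The --get/--set contract can be modeled as in this Python sketch. Only the two option names come from the text; the state dictionary and stream plumbing are illustrative stand-ins for a real capplet's state and its stdin/stdout.

```python
import argparse
import io

def run_capplet(argv, state, stdin, stdout):
    """Minimal model of the --get/--set contract: --get dumps the
    capplet's XML state, --set reads XML and applies it."""
    parser = argparse.ArgumentParser(prog="capplet")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--get", action="store_true")
    group.add_argument("--set", action="store_true")
    args = parser.parse_args(argv)
    if args.get:
        stdout.write(state["xml"])     # return XML description of state
    else:
        state["xml"] = stdin.read()    # apply XML description to desktop

state = {"xml": "<desktop/>"}
out = io.StringIO()
run_capplet(["--get"], state, io.StringIO(), out)
run_capplet(["--set"], state,
            io.StringIO("<desktop theme='dark'/>"), io.StringIO())
```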

3. Helix Setup Tools (HSTs)

These programs are for system-wide configuration and must run as root
in order to apply changes. They may also run as a regular user in
`read-only' mode. They have separate front- and backends, the former
typically written in C and the latter normally written in Perl. This
facilitates in-place modification of existing configuration files
without the need for creating our own separate, incompatible way of
configuring the system. The backends support the --get and --set
arguments, analogous to the arguments in capplets mentioned above.

4. Root manager

The root manager process runs as root and is launched through
gnome-su. It accepts on stdin a set of programs to launch, one per
line, with command line arguments. HCM uses it to launch Helix Setup
Tools so that they run as root, without needing to ask the user for a
password each time a tool is run. The root manager is run exactly once 
through console-helper the first time a tool that must be run as root
is invoked. On subsequent occasions the command to start the tool is
passed to the root manager through stdin.
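
The root manager's input format (one program per line, with command-line arguments) can be parsed as in this Python sketch; the actual spawning of each command as root is omitted here and only the line-splitting is shown.

```python
import io
import shlex

def parse_commands(stream):
    """The root manager reads one program per line, with arguments,
    from stdin; split each non-empty line into an argv list,
    honoring shell-style quoting."""
    commands = []
    for line in stream:
        line = line.strip()
        if line:
            commands.append(shlex.split(line))
    return commands

cmds = parse_commands(io.StringIO(
    "network-conf --set\nusers-conf --get\n"))
```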

5. The script do-changes

do-changes is responsible for archiving changes made to the system's
configuration and passing them on to the backend, if appropriate. It
accepts a stream of XML on stdin and stores this XML in the
configuration archive directory. If a backend is specified on the
command line, it also spawns the backend process with the --set option 
and feeds the XML to it through stdin.
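
The two responsibilities of do-changes -- archive the XML, then optionally feed it to a backend's --set mode -- can be sketched in Python. The revision layout and the run_backend callback (standing in for spawning the real backend process) are illustrative assumptions.

```python
import os
import tempfile

def do_changes(xml, archive_dir, revision, backend=None, run_backend=None):
    """Sketch of do-changes: store the XML snapshot under the next
    revision directory, then (if a backend was named) feed the same
    XML to the backend with --set."""
    rev_dir = os.path.join(archive_dir, str(revision))
    os.makedirs(rev_dir)
    path = os.path.join(rev_dir, "snapshot.xml")
    with open(path, "w") as f:
        f.write(xml)
    if backend and run_backend:
        run_backend(backend, "--set", xml)
    return path

calls = []
archive = tempfile.mkdtemp()
path = do_changes("<net/>", archive, 1, backend="network-conf",
                  run_backend=lambda *a: calls.append(a))
```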

II.  Configuration process

When a user makes changes to either his own configuration or that of
the system, those changes must be archived so that the system may be
rolled back in the future. In the case of capplets, the capplet
currently dumps an XML snapshot of its current state to the script
do-changes when the user clicks `Ok'. do-changes then archives the
state in ~/.gnome/config/<location>/<revision> where <location> is the 
name of the active location (cf. section IV) and <revision> is
incremented after each change.

When the capplets are converted into Bonobo controls, the situation
will be slightly different. HCM will be the recipient of the `Ok'
signal, so it will invoke the OKClicked() method of the
Bonobo::Capplet interface on the appropriate capplet. It will also
invoke the GetXML() method of the same interface in order to retrieve
an XML snapshot of the state and store that snapshot with
do-changes. Hence, much of the action moves from the capplet to HCM.

In the case of Helix Setup Tools, the frontend passes the XML through
the do-changes script to the backend whenever the `Ok' button is
clicked. It passes to do-changes the argument --backend <backend name> 
so that do-changes will also invoke the indicated backend and pass the 
XML to it.

III. Rollback process

From within the HCM, the user may elect to roll back either his
personal configuration or that of the system to a particular
date. HCM looks for a revision directory in the current location
profile with the most recent modification date that is not more recent 
than the date specified by the user. HCM also has a list of what
capplets (or HSTs) constitute a complete snapshot of the system's
configuration. In order to perform a complete rollback, it backtracks
through the revision directories, picking up XML snapshots of capplets 
until it has a complete set and applies them through the --set
method. In the case of HSTs, the HCM knows how to invoke the backend
and does so as necessary.
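
The backtracking step can be sketched in Python: walk the revision history newest-first, keep the first snapshot seen for each capplet, and stop once the set of capplets that constitutes a complete configuration is covered.

```python
def collect_snapshot(revisions, needed):
    """Walk revisions newest-first, keeping the most recent snapshot
    seen for each capplet, until a complete set is gathered."""
    snapshot = {}
    for rev in revisions:                  # newest first
        for capplet, xml in rev.items():
            if capplet in needed and capplet not in snapshot:
                snapshot[capplet] = xml
        if len(snapshot) == len(needed):
            break                          # complete set; stop early
    return snapshot

# Illustrative revision history, newest first.
revisions = [
    {"mouse": "<mouse v='3'/>"},
    {"background": "<bg v='2'/>", "mouse": "<mouse v='2'/>"},
    {"background": "<bg v='1'/>"},
]
snap = collect_snapshot(revisions, {"mouse", "background"})
```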

IV.  Location management

The system may have one or more profiles, each giving different system 
configurations. For example, a user may own a laptop and wish to hook
that laptop up to different networks at different times. Each network
may be located in a different time zone, have different network
configuration parameters (e.g., DHCP vs. static IPs), and use different 
default printers. When the user hooks his laptop up in a particular
network, it would be advantageous to switch to that network's
configuration with a minimum of hassle.

As mentioned above, configuration data is stored in separate
directories corresponding to the name of a given location. HCM has the 
ability to apply a set of configuration files from a given location in 
a manner similar to the rollback procedure described above. When the
user selects an alternative configuration, it simply goes through the
revision history for that location, pulls a complete set of
configuration files, and applies them. The procedure is similar for
both capplets and HSTs.

In addition, locations may be expressed hierarchically. For example, a 
user might specify a location called `Boston' that describes language, 
time zone, currency, and other locale data, and another location called 
`Boston Office' that includes network parameters. `Boston Office'
inherits its locale data from `Boston', overriding the latter's
network configuration. 

To implement this, each location directory contains some metadata that
describes what configuration data is valid for it and what other
configuration it inherits from. There are one or more root
configurations that contain a complete set of data. When applying a
new location, HCM looks first at that location's directory, pulling a
complete set of all the configuration data defined by that location,
and then goes to the next level up in the location hierarchy and does
the same thing. It also keeps track of the common subtree root between
the old and new locations so that only the configuration items that
actually change are collected.
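
The inheritance walk can be sketched in Python, using the `Boston' / `Boston Office' example from above; the metadata layout here is an illustrative stand-in for the per-location metadata.xml.

```python
def resolve_config(location, locations):
    """Walk up the location hierarchy toward a root, letting each
    location override whatever its ancestors define."""
    config = {}
    while location is not None:
        data = locations[location]
        for key, value in data["config"].items():
            config.setdefault(key, value)   # child wins over parent
        location = data["inherits"]
    return config

locations = {
    "Boston": {"inherits": None,
               "config": {"timezone": "EST", "network": "dhcp"}},
    "Boston Office": {"inherits": "Boston",
                      "config": {"network": "static"}},
}
config = resolve_config("Boston Office", locations)
```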

From a user's perspective, the HCM will present a tree showing the
existing locations. Users may create a new location derived from an
existing one. When the user elects to configure a particular location, 
the HCM shell includes icons that are grayed out, indicating that
those configuration items are not set in this particular location. If
the user attempts to change them, they become specific to that
particular location and are recolored accordingly.

V.   Clustering

A single server may archive the configuration for a large number of
individual workstations, providing configuration data to each of the
clients on demand. An administrator can then push configuration
updates out to each machine with the press of a button, rather than
having to go to each machine and update it manually.

To enable this, each client machine will run a daemon that accepts
configuration data pushed out by the server. Some sort of public key
signing will be implemented to ensure that this is done securely. On
the server end, a series of host groups is maintained, each one
containing a set of hosts. These form the top two levels of a
configuration hierarchy not unlike what is described above. Each host
may override certain configuration values for the cluster as a
whole. The cluster may also have multiple `locations', e.g. for
configuring a computer lab for computer science during one class and
for math during another. Locations may be selected down to the
granularity of a single host, or for the entire cluster at
once. Cluster-wide configurations occur between the cluster and host
level in the configuration hierarchy.

VI.  Issues

1. We need a way to get an XML state without actually applying
changes, so that the user can configure a location without switching
to it.

2. Can we make the HST frontends Bonobo controls, and can we have them 
run as the regular user rather than as root? This would ensure that
certain user interface preferences, such as themes, are kept
consistent for a given user between capplets and HSTs. The way to
implement this is to have a method on the HCM interface called
RunBackend() which returns a BonoboObject referring to the backend
that implements the Bonobo::HSTBackend interface, which is similar to
the Bonobo::Capplet interface mentioned above. The interface defines
the GetXML and SetXML methods. The object should also implement
another, HST-specific interface to facilitate setting specific
configuration variables, so that live update may be implemented. The
root manager must then be extended to support some sort of secure
forwarding, allowing the user to access that particular CORBA object.

3. If we make the HSTs into Bonobo controls, can we give them the same 
Bonobo::Capplet interface that is given to the Capplets? This would
make everything a bit simpler from the HCM's perspective, since it
then does not need to know the difference between Capplets and
HSTs -- it then only needs to implement the RunBackend() method for
the benefit of the HSTs.

