Re: Glad to see interest in install/packaging



   One last spin before I break this up into different threads :)
This mail is really long so skim it until you see something you like.

   Also, the sections in parens should be read in the voice of
Kevin Nealon's (Saturday Night Live) Mr. Subliminal: very fast
and monotone, as though you're sneaking it in between the real words.

   Morey




On Mon, 30 Apr 2001, David Elliott wrote:

> Morey Hubin wrote:
> 
> > On Sun, 29 Apr 2001, David Elliott wrote:
> >
> >   Just to clarify, I was thinking ahead.  To setup an automated build on
> > a build farm you have to know a little about the product you're building.
> > Setting the ground rules about what you can guarantee on the user's
> > systems reduces the complexity of the automated build environment.
> >
> 
> To be honest, you seem to be on a totally different level.  That is a good thing.
> 

  6 years as a software build engineer makes you think ahead about the results.
  I run 3 Sun Sparc build farms that turn out Teradyne software 24 hours a day.
  I'm getting pretty comfortable with farm configuration, and these issues
  were some of the things I had to answer when we built the first farms.





> >
> >   I'm concerned with source tree setup, package creation and package
> > management.  Less about PC based hardware.  ...
> 
> Too bad there are no MP Athlons at this point.  I wonder if we could get some
> company with an interest in a better Gnome to donate the use of a dedicated
> server for this.  I would recommend using Sourceforge but the proposed level of
> use would be like a DoS attack.

  I have some heavy duty Sparc compute servers for Solaris 5.1 (8 yrs old)
through Sol 8 builds (we use them all in-house).  I can't offer CPU time to
others (corporate firewall) but I can turn over Sparc builds fairly regularly.

  I also have a spare dual-CPU 400MHz Slackware box at home with an 80GB IDE
drive.  I could probably run a few builds for 2 or 3 key versions of Linux,
at least until a respectable build farm comes on line.  It's a drop in the
bucket but it's a start.  Unfortunately I don't have public space or bandwidth
for publishing the Solaris or Linux packages.






> > > >       ) A standard install path that everyone can use universally.
> > > >         I setup machines so /usr/gnome is a symlink to the actual
> > > >         Gnome install.  That way I can burn in library paths using
> > > >         the linkers -R/usr/gnome/lib (or equivalent for your OS)
> > > >         and not have to worry about setting LD_LIBRARY_PATH at runtime.
> > >
> > > On Linux at least we ought to follow the filesystem standard. Stable
> > > installed packages should have prefix=/usr, ...
> > >
> > > However, development releases shouldn't screw with a stable gnome config, so
> > > the best place for those is probably in /opt/something.  Perhaps something
> > > like /opt/gnome-20010429 for CVS snapshot builds.  ...
> >
> I have noticed the general trend that most seasoned UNIX users tend to use /opt.
> Many Linux users err the other way and put everything in /usr.  To paraphrase
> Jeremy White (CodeWeavers) "I prefer to have everything in nice rm -rfable
> spaces." (check wine-devel winehq com a few months ago).
> 
> The whole point of using RPM is to allow you to glob everything into /usr without
> screwing up your system.  ...
> ...
> 

  OK, I'm used to Slackware and Solaris.  Neither of them uses RPMs natively.
You can crossbreed installer apps (RPMs on Solaris, ...) but I'm favorable
toward utilizing native installers on their respective platforms.  That's what
they are there for.  I use Slackware tgz's and Solaris pkg's in high volume.

  I don't like M$ (Microfluff) stuffing cheesy software on my machine and I
wouldn't presume to do that to other non-Linux users.  I'm not questioning
RPM's quality; I'm just saying that you can't interrogate installed Solaris
OS pkg's with an RPM manager.  RPMs weren't built with integration into
existing Solaris pkg's in mind.

  As part of the binary packaging it would be nice to have every devel machine
feel the same regardless of UNIX flavor.  We could achieve this through a
published standard using wisely placed and named symlinks.  Not too complicated,
just a few basic rules outlining what to do and which symlinks to manage
if you want things to be really easy.
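A rough sketch of what I mean, with all paths invented for illustration (a scratch directory stands in for / so this can run unprivileged):

```shell
# The agreed-upon reference name is just a symlink; the real install can
# live wherever the local admin likes.  A scratch root stands in for /.
root=$(mktemp -d)

# Actual install location -- varies per site:
mkdir -p "$root/opt/gnome-1.4/lib"

# The one rule everyone follows: the well-known name points at it.
ln -s "$root/opt/gnome-1.4" "$root/usr_gnome"

# Build scripts and tools can now rely on the stable path.
ls "$root/usr_gnome"
```

The same two or three rules, written down once, would make a Slackware box, a Red Hat box and a Solaris box all feel identical to the build environment.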

  Unfortunately I'm seeing that consensus will be difficult.  Many value
personal taste over group-wide utility.  My suggestion on standardized
install pathing has generated a significant amount of unwarranted negativity.

  Seems to be a very political issue and I hate politics.  End trans...





> >   Using /usr/bin is extremely dangerous on networked Solaris boxes.  Solaris
> > provides thier own versions of tar, sed, libintl.a and other GNU replacements
> > ...
> 
> Of course on Solaris none of that RPM stuff applies and I would DEFINITELY not
> use /usr/bin.  Incidently, isn't Sun planning on including Gnome with newer
> releases of Solaris? ...

  Solaris CDE installs as /usr/dt, and if you decide to install CDE elsewhere
on your machine, /usr/dt becomes a symlink to your CDE install.  This makes it
simple to find Motif libraries because /usr/dt/lib is guaranteed to be correct
at install time.  Since OpenWindows and CDE install this way, I naturally assume
that Sun will have Gnome install the same way, as /usr/gnome.




> >   I'm partial to the /opt/gnome-MMDDYY for snapshot builds.  Again we should
> > look into choosing a static path to serve as a reference point for the gnome
> > run-time.  Say /opt/gnome-devel -> /opt/gnome-20013322 so that we can burn in
> > libpaths to /opt/gnome-devel and still allow devel to change quickly.  This
> > has a large effect on the build environment so that is why I harp on it.
> > I can explain this in detail some other time if anyone cares to read it.
> >
> 
> I am more partial to the YYYYMMDD format, it sorts better when doing an ls (was
> that a typo on your part?).
> 
  Not picky, as long as the dir names are unique and meaningful.  I figure
  we'll digitally WWF out the name format a little later.




> The more I think about it, the more I like using rpath for snapshots.  However
> for release packages I would discourage it.
> 
  If you know where the released packages will be installed on the system,
  then -rpath can be a real blessing at run-time.  Releases should use it
  too, because that will serve the >80% of the population who install Gnome
  bins.  The rest can fake it with a few well placed symlinks.
  The key is to decide where these unbranded Gnome packages will be installed
  by default on all OS's.  Should the unbranded intermediate releases clash
  with older released libs and apps delivered with the Linux OS's?  If not,
  then /usr/bin is not so hot after all.
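For the record, here is the kind of link line I mean, echoed rather than executed so the flags are the point (app and library names are only examples; /usr/gnome is the proposed symlink, not necessarily the real install directory):

```shell
# Burn the run-path through the stable symlink so LD_LIBRARY_PATH is
# never needed at run-time.
GNOME_PREFIX=/usr/gnome

# GNU ld (Linux) spelling:
linux_link="gcc -o app app.o -L$GNOME_PREFIX/lib -Wl,-rpath,$GNOME_PREFIX/lib -lgtk"

# Solaris ld spelling:
solaris_link="cc -o app app.o -L$GNOME_PREFIX/lib -R$GNOME_PREFIX/lib -lgtk"

echo "$linux_link"
echo "$solaris_link"
```

Because the path baked into the binary is the symlink, the local admin can still relocate the real install and just re-point the link.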

  This came up in another mail I received and I need to get it in bit form.
     I don't know if Linux vendors will re-release these packages unchanged.
  I imagine that they will rebuild or repackage many of the apps to their own
  spec.  This means that our package standard has little effect on them.  This
  is a hunch only and someone, probably me, should really research this.  Do
  vendors do their own thing, or will they take our packages, uncut, and
  re-distribute them?




> In keeping with the suggestion below that we not build all packages with the same
> frequency it may be more prudent to ditch the date thing altogether and just use
> gnome-devel.  Really and truely why in the hell would you want more than one
> complete installed snapshot of gnome.  More than likely you would want to upgrade
> or downgrade only one or a handful of packages.
> 
  I picture people downloading only the apps they want to debug/evaluate,
  plus maybe a few dependencies.  This means that people could have subsets
  from 2 or more builds in separate directories.  Symlinks can make this work.
  I download a release of Balsa, find a bug or improvement, download the
  debuggable version of the same build, and start hacking and debugging.  The
  release version is still on my system, untouched.  You can jump from release
  to debug at will, and probably run them side by side if we want to.  It's a
  viable option that the pros use.  This still requires the gnome-date naming,
  so I wouldn't trash it just yet.  Again we will have to use a well
  placed/named symlink so coders can jump back and forth from download to
  download without ruining their config.
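In shell terms the jump looks something like this (directory names are made up; a scratch dir stands in for /opt):

```shell
# Release and debug builds of the same snapshot, side by side on disk.
root=$(mktemp -d)
mkdir -p "$root/gnome-20010430-release/bin" "$root/gnome-20010430-debug/bin"

# Point the well-known name at the release build first...
ln -s "$root/gnome-20010430-release" "$root/gnome-devel"

# ...then flip it to the debug build; neither install is touched.
ln -sfn "$root/gnome-20010430-debug" "$root/gnome-devel"
readlink "$root/gnome-devel"
```

Flipping the link back restores the release view instantly, which is the whole point: both builds stay on disk, untouched.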




> >
> >   Also keep in mind that ~/.gnome, ~/.gnome_private and ~/.enlightenment
> > keep absolute paths to themes, libs and other apps.  Changing from
> > /opt/gnome-1111 to /opt/gnome-2222 means that most of your user config
> > is gone and enlightenment stop working. ...
> 
> Good call.  Along these same lines, should we consider using different ~/
> directories for snapshot builds?  This will really screw people who want to
> switch between a working stable gnome in /usr and the snapshots in
> /opt/gnome-devel.  I am not so sure if I like this idea but I am throwing it out,
> maybe like ~/.gnome-devel, ~/.gnome_private-devel and so on.
> 
  Can't help here.  I took the approach of leaving the product the way it is
and massaging the machines to converge on a standard pathing.  Some people have
gone the other way and called this a major software defect.
 (Cue shoulder shrug ... now)




> > >
> > > >
> > > >       ) To prevent a chicken and egg argument I would like to create
> > > >         a Perl based Gnome install manager (GIM) that in no way relies
> > > >         on Gnome components to run.  I see a PerlTK GUI and an alternate
> > > >         silent install control language (really simple syntax) that
> > > >         manages the installation of kits.  It may not be as beautiful
> > > >         as other Gnome apps but it will be portable and easy to edit.
> > > >
> > >
> > > I don't see adding a dependency on TK as a good thing.  Why not use the
> > > latest stable GTK? ...
> > >
> >   PerlTK is a simple downloadable perl module.  Building GLIB and GTK is not
> > something non-Linux beta testers can do easily.  Documentation people either.
> > ...
> 
> I see your point here.  TK is indeed a very nice lightweight toolkit found on
> just about every UNIX system, and a UNIX system without Perl just isn't a UNIX
> system :-). ...
> 
> However, if you are targetting people without root access to their machines then
> why have a user-friendly installer at all.
> 
  Gnome is the operating environment (super-duper window manager).  Glib and Gtk
  are generalized wrapper libraries.  You can run user apps without running in a
  full Gnome session.  If you have Gtk and Glib installed you can run most any
  Gtk app under KDE, CDE, fvwm, AfterStep or (dare I show my age?) OpenWindows.
  You can download Glib, Gtk and a single app and beta-test the heck out of it.
  Many companies evaluate new software this way before committing to an upgrade.







> > > >         The key is to identify the common needs for most native install
> > > >         managers ( rpm's, tgz's, Solaris pkg's, dpkg's, ...) and devise
> > > >         a GIM Virtual Interface (VI).  Aka, a fancy name for a set of
> > > >         functions that the GIM needs in your Perl module.
> > > >         ...
> > >
> > > This does not really sound like packaging components, but rather
> > > distributing them.  I don't think this project is trying to compete with
> > > red-carpet or apt-get.
> > >
> >  +++++++++ Off topic so skip to next question if not interested +++++++++++
> >   No more than Slackware's rough ncurses install competes with Red Hat's.
> > You're right though, it is off topic.  My company's "beta testers" are generally
> > well organized and thorough people who are fairly helpless at a UNIX prompt.
> > A simple download&install manager may increase your beta tester population.
> >
> 
> Now that is not really all that off-topic.  I think it is imperative that gnome
> be beta tested on several flavors of UNIX.  Whatever can be done to help this is
> most certainly a good thing. ...
> 
> That still leaves other UNIX users in the dark though, including Slackware.  So
> at that point you have to make some sort of system that is lightweight and easy
> to use.  For this I would agree that perlTK is definitely the way to go.
> 

  I'll try to put a prototype together.  I planned on doing it for our Solaris
pkg's in-house anyway, so it won't be for naught.  It is primarily intended to
be a package browser/downloader/installer for OS's that don't have one already.
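To make the "virtual interface" idea concrete, here's a toy sketch in shell (the real thing would be Perl, and every name here is invented for illustration):

```shell
# One tiny wrapper per native packager, all answering the same verbs.
# This sketch echoes the command it would run instead of running it.
gim_install() {
  pkg=$1
  case "$(uname -s)" in
    SunOS) echo "pkgadd -d $pkg" ;;        # Solaris native pkg's
    Linux) echo "rpm -i $pkg" ;;           # or installpkg on Slackware
    *)     echo "no backend for $(uname -s)" >&2; return 1 ;;
  esac
}

gim_install gnome-core-1.4.pkg
```

The GIM front end only ever calls the common verbs; each OS contributes one small backend module that maps them onto its native installer.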





> > > >       ) A standard path to source that developers can use with GDB.
> > > >         GDB expects to find source in the same place it was built.
> > > >         If it doesn't then is asks the user to supply the correct path.
> > > >         This can get annoying for large projects.  If all debuggable
> > > >         builds take place under /usr/gnome_source then developers can
> > > >         setup a /usr/gnome_source symlink to their download area and
> > > >         GDB away.
> > > >
> > >
> > > Now this is a very valid point.  I would suggest $prefix/src/package-name.
> > > That is something like /usr/src/gnome-core-1.4.0 or for devel stuff
> > > /opt/gnome-20010429/src/gnome-core-1.5.0  (assuming 1.5.0 is the devel
> > > version and the prefix is /opt/gnome-20010429).
> > >
> >   To expand a little I have /usr/gnome_source as the root of the source
> > tree followed by, [ stable/ || unstable/ || gnu/ || contrib/ || scripts//]
> > then the packages below that.  Contrib/ contains jade, Berkeley DB, libjpeg
> > Perl5.6 (for glade) and other non-Gnome buildables.
> >
> 
> Interesting.  I do think it should be a subdir of the install path rather than
> over in /usr though.
> 
  That's cool, I'm not picky.  As long as there is some sort of consensus
  on a standard location for the source tree, I'm cool with that.
  This actually has a huge effect on the setup of the build farm machines,
  so this one should be set in stone as early as possible.






> > > >         This does mean that any build machines will have to mount build
> > > >         space under /usr/gnome_source.
> > > >         The debugger knows the absolute path to the source so there's
> > > >         no way to use symlinks to fake the path to the debug build.
> > > >
> > >
> > > Hmm, that sounds like a real fuck to me.  Is there any sane way around this?
> > >
> >   Not that bad really.  In my work environment compute servers are special
> > and we can do non-standard things to them.  The ends justify the means :)
> > As long as the configuration is reproducible and well documented we can
> > make changes to improve the environment for developers.
> 
> I am assuming that if the end user doesn't have enough space on that filesystem
> that symlinking /usr/gnome_source to some other directory will work.  Am I right
> here?

  Pretty much.  The other advantage is that you can jump from one snapshot
code tree to another by re-setting the symlink.  You can have multiple versions
of the source for the same app (or different apps) on your machine.  Most
developers who venture this deep can reset symbolic links comfortably.  The
ones who can't usually shy away from multiple source tree work in the first
place.

  Again, I am not heart-set on /usr/... as the location, so we can throw the
actual path spec around the horn.
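Concretely, the jump between snapshot trees would look like this (snapshot names invented; a scratch dir stands in for /usr/gnome_source):

```shell
# Two source snapshots on disk, one well-known root that gdb and the
# build scripts always use.
root=$(mktemp -d)
mkdir -p "$root/snap-20010429/balsa" "$root/snap-20010513/balsa"

# Debug against the April tree...
ln -s "$root/snap-20010429" "$root/gnome_source"

# ...then re-set the link to debug against the May tree instead.
ln -sfn "$root/snap-20010513" "$root/gnome_source"
ls "$root/gnome_source"
```

gdb only ever sees the stable root, so the absolute paths recorded at build time keep resolving no matter which checkout the link currently points at.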







> > > >       ) We sort the list of packages into classes like:
> > > >           Core Libs   (stuff heavily depended upon by user apps)
> > > >           Gnome Libs  (stuff needed to run the Gnome App Environment)
> > > >           Extra Libs  (stuff not so heavily depended on)
> > > >
> > > >           Gnome Run   (panel, menu applets ... )
> > > >           User Apps   (Useable even if you are not running gnome)
> > > >           ... User Apps can be subdivided by what they do ...
> > > >
> > >
> > > I believe that for the most part this is already done.  Really the more
> > > fine-grained control the better.  But then again, don't overdo it.
> >
> >   You can probably optimize build server time by rebuilding core libraries
> > less frequently than apps.  That's why I wanted to identify them.  The more
> > dependencies, the less often it should be rebuilt.  Changes to low level
> > libs tend to be more invasive, take longer to perfect and are generally
> > more dangerous inbetween stable builds.
> 
> Yeah, back in the day of Gnome 0.x the core libs changed daily so I have been
> thinking more along the lines of keep everything snapshotted from exactly the
> same time.  But really we only want to snapshot the cores when there are major
> changes.
> 

  There are tradeoffs in building large code sets.  You have to utilize your
CPU time and reduce the impact of broken builds on devel.  Some micro-manage
their builds by monitoring and re-compiling packages as they change in CVS.
I decided to stay out of the code-monitoring game and go a different route.

  I use the tough-love approach:

  If a single app does not build & run, then we, the build farmers, don't care.
  Only people who work on that app feel the pain, so the build meisters don't
  get involved.  Peer pressure gets these cleaned up really quickly without us.

  If a single core library does not build & perform, then call in the firing
  squad.  A core library break means that all dependent apps are dead by proxy.
  This is where the build meisters go out and start breaking knuckles.  We
  ensure that these get cleaned up real fast.  By the schedule, we only get
  involved once every core library build.  Maybe once every 2 weeks for Gnome.
  I can deal with that type of time commitment.

  Where I work, we build displays and user tools 1 to 2 times a day.
We only rebuild core libraries and daemons once a week.  In lines of code
written, this is probably equivalent to a user apps build every 2 days
and a core library rebuild every 2+ weeks in Gnome devel (rough guesstimate).
Apps don't use new core lib features until the feature is released, so it's
unusual to have an app break from unpublished lib features.  It's also helpful
to give app developers 2 to 3 stable weeks between core library updates.

  Keep in mind I build 23 versions of my software concurrently, 6 days a week.
We have myself and one other person managing the whole build process, and the
machines run like Swiss clocks.  I even have time to write long mails like
this one... :)

  For open source devel I imagine much of the development happens on the
weekends.  We might try building core libs every other Wednesday night and
publish them by Friday for weekend devel work.  Apps can be continually built
throughout the week without much oversight by the farm controllers.

  With build schedules I find that if you set/decree a regular schedule,
developers adjust their biorhythms to match very quickly.  Meaningful core
library changes rarely happen overnight, so we want to give lib hackers as
much notice as possible before a build.  A 2-week build schedule does exactly
that.  If it's not ready by Wednesday of week X, then don't check it in until
the next core build pass!
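As a cron-style sketch of that schedule (script names and paths are invented for illustration):

```shell
# Hypothetical farm crontab: apps nightly, core libs on alternating
# Wednesday nights (the core script itself would skip the off weeks,
# since plain cron can't express "every other week").
crontab_sketch='
# min hr dom mon dow  command
  0   2  *   *   *    /build/bin/build-apps    # apps: every night
  0   2  *   *   3    /build/bin/build-cores   # cores: Wednesdays only;
                                               # exits early on off weeks
'
echo "$crontab_sketch"
```

Publishing the core build by Friday then gives the weekend hackers a fresh, stable base to work against.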






> > > >       ) Release and Debuggable versions of packages wherever possible.
> > > >
> > >
> > > Release early, release often.  Preferably have a high-power compiling
> > > machine or farm that automatically builds and packages CVS every night.
> > >
> >   We build 23 versions of a 4+ million line software package 6 days a week.
> > I am a firm believer in automatic turnover and publishing as much as possible.
> 
> What the hell are you building?  Emacs? ;-)
> 

  It's a custom operating environment to drive Teradyne's multi-million dollar
silicon wafer test systems.  It's a 12-year mature app rigorously worked on by
110+ engineers continuously.  We have 23 custom versions, each being 4 to 5
million lines of pure C and C++.  27 million lines if you include memory test
patterns, data files, etc.  There are OS drivers, daemons, modeling displays,
digital analysis tools and lots of other techno-gunk.  Try to come up with a
new Electrical Engineering tool and we probably have a dozen of those already.

  Lots and lots of code needs lots and lots of builds.  We have 2 very sane
(not overworked) people running the whole build environment with time to spare.





