Re: [gamin] State of python unit tests?

On Sun, Jan 08, 2006 at 02:04:58AM -0500, William Lachance wrote:
> Hi,
> I've been wanting to commit a fix to Gamin for bug #323064, but it
> seems like the python unit tests (which get run when I type 'make
> check' from toplevel) aren't passing.
> The thing is, they don't pass in the version currently in CVS either.
> It seems as if they have both timing issues (i.e.: changing sleep(1)
> to sleep(2) can change the results dramatically), as well as possibly
> not having been maintained/updated for changes in the behaviour of the
> API (and/or new features, such as the inotify backend).
> So I guess my questions are:
> 1. Is there a plan to bring these guys up to date, make them more
> robust, and (most importantly) continue maintaining them in the
> future? Or should they just be disabled entirely?
> 2. Is it ok for me to commit my patch while they still fail? The (less
> comprehensive) tests in the tests subdir still pass.
> My opinion is that trying to keep things "not more broken than before"
> is kind of a losing strategy with regards to unit testing. It's just
> too frustrating interpreting test results, and the potential to make
> things even worse is great. Better to have tests that pass all the
> time (on all possible machines), or none at all.
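The timing fragility described above (where changing sleep(1) to sleep(2) changes the results) is typical of tests that wait for asynchronous event delivery. A common remedy is to replace fixed sleeps with a bounded poll, so the test waits only as long as needed up to a hard ceiling. A minimal sketch in Python — the helper name and usage are illustrative, not part of Gamin's test suite:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Replaces a fixed sleep(1)/sleep(2) so the test is less sensitive to
    machine speed: fast machines return early, slow ones get more time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()

# Hypothetical example: wait for delivered events to accumulate.
events = []
events.append("created")
assert wait_until(lambda: len(events) >= 1, timeout=2.0)
```

A pattern like this would let the tests tolerate variation across machines without hiding genuine delivery failures, since a real failure still surfaces when the timeout expires.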

  If you break the regression tests, by default, do not commit!
If you know why they break and how to fix them, then post to the list an
explanation together with fixes for both the tests and the code itself.
If you don't know why they break, then that means you don't know why you
are breaking the behaviour of the applications I was mimicking when
building those tests, i.e. you don't know the impact of your changes, and
in that case I don't want this to be committed as is.


Daniel Veillard      | Red Hat
veillard redhat com  | libxml GNOME XML XSLT toolkit | Rpmfind RPM search engine
