Re: [gamin] Test suite problems



On Mon, Aug 01, 2005 at 06:31:01PM -0400, Daniel Veillard wrote:
> On Mon, Aug 01, 2005 at 06:18:23PM -0400, John McCutchan wrote:
> > On Mon, 2005-08-01 at 17:14 -0400, Daniel Veillard wrote:
> > > On Mon, Aug 01, 2005 at 04:11:45PM -0400, John McCutchan wrote:
> > > > Yo,
> > > > 
> > > > dnotify4.py is broken. The test creates a file, then watches it as a
> > > > directory, and expects to get a DELETED event. This is screwy. What
> > > > should happen is that the monitor fails. 
> > > 
> > >  it is the behaviour the applications expect at this point.
> > 
> > What applications expect to be able to watch a directory as a file?
> 
>   An application which started monitoring a non-existent resource
> which was (re)created as a file, for example. If you look at the
> few examples from the SGI documentation, they show this
> kind of example.
>    http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=0650&db=bks&fname=/SGI_Developer/books/IIDsktp_IG/sgi_html/ch08.html
> 
>   "Example: A client monitors a directory containing some files. Another
>    program deletes the directory, then creates a new file with the same
>    name as the directory."
> 
>   and I think I remember debugging nautilus and seeing this happening.

In the actual test the directory does exist. Well, I think this is silly,
but if the FAM documentation says it must be this way, it must be :)
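
For the record, the scenario dnotify4.py exercises boils down to something
like this (just a sketch against the gamin python bindings; the path and
the one-second sleep are made up, the real test uses its own scratch area
and timing):

    import gamin, os, time

    events = []
    def callback(path, event):
        events.append((path, event))

    # Create a plain file, then ask gamin to monitor it as a directory.
    open("/tmp/resource", "w").close()
    mon = gamin.WatchMonitor()
    mon.watch_directory("/tmp/resource", callback)

    # Delete the file; the test expects a GAMDeleted event for it
    # rather than a monitor failure.
    os.unlink("/tmp/resource")
    time.sleep(1)
    while mon.event_pending() > 0:
        mon.handle_one_event()

    assert gamin.GAMDeleted in [e for (p, e) in events]
    mon.stop_watch("/tmp/resource")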
> 
> > >   They worked fine until now on the dnotify and inotify back-ends. They
> > > reflect potential application expectations. Maybe those are not
> > > reasonable, but if the tests must be changed that needs to be discussed
> > > beforehand, not after a commit breaks them.
> > 
> > They worked fine on _your_ machine under your load. Many of the tests
> 
>   Ahem, you told me that they passed in your inotify tests too!
> I'm sorry if you are disappointed, but denial is not the right way to
> proceed.

I wasn't trying to deny that the tests passed on my machine. I was trying
to make the point that on someone else's machine, the tests might fail.

> 
> > (especially the flood tests) are racy. When you are testing for flood,
> 
>   Very likely. Let's fix the tests so they pass again, but in a methodical
> fashion, after having seen what the problem is and verified that all changes
> make sense and that the result will pass on the multiple back-ends.
> 
> > you expect a certain number of write events within the test time. But if
> > load is high, the write thread might not get scheduled in the way your
> > test assumes, and the number of events will be off.
> 
>   Yes, but this is not the problem encountered in the failing regression
> test, so it is not a justification to ignore it.

I'm not suggesting that we ignore the regression tests. I'm merely
pointing out that they aren't deterministic. I want to see the tests
adjusted so that they are.
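
One way to do that (a rough sketch, assuming a gamin-style monitor with
event_pending()/handle_one_event(); the helper name and timeout are made
up) is to poll for the expected event count up to a deadline instead of
sleeping for a fixed interval and hoping the writer thread got scheduled:

    import time

    def wait_for_events(mon, events, expected, timeout=30.0):
        # Drain pending events until we have seen the expected count
        # or the deadline passes; high load only delays the test, it
        # no longer changes the result.
        deadline = time.time() + timeout
        while len(events) < expected and time.time() < deadline:
            while mon.event_pending() > 0:
                mon.handle_one_event()
            time.sleep(0.05)
        return len(events) >= expected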

> 
> > Also, looking at the dnotify kernel code and considering the DELETE
> > event: there is nothing that guarantees that by the time you receive the
> > DELETE event in gam_server the file will actually be removed from the
> > directory tree.
> 
>   So far that expectation has never been seen broken in normal testing.
> To me it's a kernel bug, like the inotify bug you fixed two weeks ago; it
> may just be a race instead of being systematic.

I agree that the kernel should only send the event after the file has
been removed from the filesystem. But currently the dnotify kernel code
makes no such guarantee; it just so happens that by the time gam_server
looks for the file, it's gone. The reason it succeeds is that dnotify
delivers the events by signal, which takes more time than delivering
them over an inotify FD.
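
For anyone not familiar with it, a dnotify watch looks roughly like this
(a minimal sketch using python's fcntl module on Linux; the path is made
up, and gam_server does the equivalent in C). The signal carries no
payload beyond "something changed in this directory", so the receiver has
to rescan, and nothing says the deleted name is already gone by the time
the handler runs:

    import fcntl, os, signal

    def handler(signum, frame):
        # Something changed in the watched directory; the receiver
        # must rescan to find out what, and the race described above
        # lives in that window.
        pass

    signal.signal(signal.SIGIO, handler)
    fd = os.open("/tmp/watched-dir", os.O_RDONLY)
    fcntl.fcntl(fd, fcntl.F_NOTIFY,
                fcntl.DN_DELETE | fcntl.DN_MULTISHOT)
    signal.pause()  # wait for notifications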

John


