Hello Alexander,

thanks for your answer!

Alexander Larsson [2012-09-19 11:19 +0200]:
> There are actually some tests, but they are not really integrated
> with gvfs nor run regularly, unfortunately. The live-g-file test in
> glib/gio/tests is meant to be able to run against non-local backends
> and tests a lot of the corner cases in the GIO API that GVfs backends
> have to support.

Ah, thanks for pointing this out! I'll see to integrating this into
our regular test runs somehow. It also does not overlap with the tests
that I wrote.

> > - http://people.canonical.com/~pitti/tmp/gvfs-testbed
> >
> >   Wrapper which builds a "sandbox" in the sense of using a
> >   temporary /tmp, /etc, and /home, and creating a temporary
> >   "testuser" user in the sandbox. It uses unshare(1) with tmpfs
> >   bind mounts, so it's both very quick and safe - all the bind
> >   mounts will be cleaned up automatically as soon as the program
> >   terminates, and cannot be accessed from outside, so they do not
> >   interfere with the system (well, some things like useradd do log
> >   to /var/log/auth.log, but that can eventually be covered by bind
> >   mounts as well).
>
> Another approach would be to use a real VM as testbed.

In production [1] we do run them in VMs, of course. There the tmpfs
magic is not necessary, and the wrapper script could be reduced to
just setting up a test user, etc. I just like having that sandbox
because it is orders of magnitude faster (it takes no noticeable time
at all) and slightly more convenient than the juggling with a VM
during test development. In both cases the actual tests (the first
script, gvfs-test) are the same, though. So if you don't particularly
like gvfs-testbed, I'm happy to keep it on the Debian/Ubuntu side for
now.

[1] https://jenkins.qa.ubuntu.com/view/Quantal/view/AutoPkg%20Test/job/quantal-adt-gvfs/

> We could even ship a pre-built one with whatever dependencies are
> needed.

Shipping a complete VM? That sounds a bit big. But I think this would
fit perfectly into Colin Walters' OSTree builds. I was talking to him
at GUADEC, and this just yearns for running a few tests during build
(I also have some tests for udisks, upower, etc.).

> That would be arch-specific and somewhat larger/cumbersome, but it
> would let us do isolated testing without depending on the services
> installed on the system (i.e. sshd, twistd, etc.). It also avoids
> having to run as root. What's your opinion on this?

Anything which makes tests run regularly has my full approval :-) I
just don't think many people are keen on downloading a 5 GB blob every
day for testing, and maintaining it is a bit cumbersome. But some
scripts to integrate it into jhbuild/ostree/whatever would be great.
There is no operational plan for this yet, it's just an obvious idea
for now.

> I also think we need to ship a bunch of testcases in the form of
> directories and files we want to explode onto the various backends
> so we can try to access them via gvfs.

We can do that, of course. Right now the tests just create some
dirs/files on the fly in a temporary directory, but for creating large
hierarchies to test corner cases, pre-built tarballs are more
convenient.
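Just to illustrate the two fixture styles, a rough sketch (the tarball
path below is made up; the current prototype only does the on-the-fly
variant):

    import os
    import tarfile
    import tempfile

    workdir = tempfile.mkdtemp(prefix='gvfs-test.')

    # on-the-fly fixture, as the current tests do
    os.makedirs(os.path.join(workdir, 'dir1', 'subdir'))
    with open(os.path.join(workdir, 'dir1', 'file.txt'), 'w') as f:
        f.write('hello\n')

    # pre-built fixture: explode a shipped tarball with the awkward
    # corner cases (long/non-ASCII names, symlinks, odd permissions);
    # 'testcases/corner-cases.tar.gz' is a hypothetical path
    with tarfile.open('testcases/corner-cases.tar.gz') as tar:
        tar.extractall(workdir)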
> > So, my questions:
> >
> > - First, are you interested in eventually adding integration tests
> >   to the upstream git at all? If not, I can put them into the
> >   Debian/Ubuntu packaging, where they will be run as part of
> >   https://jenkins.qa.ubuntu.com/view/Quantal/view/AutoPkg%20Test/,
> >   but that would make them a lot less useful for other developers.
>
> Yes, we're very interested in this. It's sadly lacking in the ...

Phone rang? Pizza was ready? :-)

> All of gvfs is grounded in the session dbus instance, so if you run
> a separate instance of dbus-daemon --session with the right config
> file

Right, I'm currently using dbus-launch for that. GTestDBus is also
very nice, but it has some quirks right now.

> it *should* be able to run from the build tree

LD_LIBRARY_PATH and friends are not a problem, but gvfs needs to find
and access all its .service files and the like; I tried a "make
install" into ./test-inst/ and some seddery in the .service files, but
I didn't get it to run. I shall try harder, though.

Do you want the current system integration tests upstream already
("make installcheck" seems to be a rather common name), or should we
wait until they have matured a bit? The start of the 3.7 cycle might
be a good time?

> That's up to whoever does the work. I think python makes a great
> deal of sense. Although in some particularly tricky tests we might
> need to write a helper binary in some lower-level language like C
> which we can call out to from the tests.

Yeah, that's possible of course, if it comes up. Right now I only do
black-box testing, i. e. I don't look at the guts of the API; when we
get tests that do, we might also get quite far with using the Gio API
over introspection (which would be a nice incentive to finally fix
that for good :-)).

> > - Do you prefer if the test suite uses the command line tools (the
> >   current prototype does that), or the API (we can use Gio through
> >   introspection from Python)? Presumably we should test them both?
>
> It should use the Gio API imho. The command line tools are not
> competent enough to test anything but the most rudimentary details.

OK. I'll keep some tests for the CLI tools just to make sure that they
work, but convert (or add) some tests that use the API.

> It would be good to test dav too

Ah, good point! Adding that to my list. (Added a work item to
https://blueprints.launchpad.net/ubuntu/+spec/desktop-q-desktop-quality)

> and various different types of ftp servers too, as these behave
> rather differently at times.

Ah, do you have a concrete example (bug report or so) which I should
add? Right now I only run the Python Twisted FTP server, as that's the
easiest to use.
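In case it's useful, this is roughly the kind of throwaway FTP fixture
I have in mind (a simplified sketch only; the fixed port and the sleep
are shortcuts, and it assumes twistd is in $PATH):

    import subprocess
    import tempfile
    import time

    ftp_root = tempfile.mkdtemp(prefix='gvfs-ftp-root.')

    # throwaway anonymous FTP server on an unprivileged port
    server = subprocess.Popen(['twistd', '--nodaemon', 'ftp',
                               '--port', '2121', '--root', ftp_root])
    try:
        time.sleep(1)  # crude; the real test should poll the port
        # ... mount ftp://localhost:2121/ via gvfs and run checks ...
    finally:
        server.terminate()
        server.wait()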
Thanks,

Martin
--
Martin Pitt                        | http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer (www.debian.org)