Re: Some shortcomings in gtestutils

On Thu, Feb 21, 2013 at 02:39:21PM +0800, Sam Spilsbury wrote:
> On Thu, Feb 21, 2013 at 1:17 PM, Peter Hutterer
> <peter hutterer who-t net> wrote:
> > Having worked with googletest and xorg-gtest [1] for X integration testing,
> > I can say the most annoying bit is getting the whole thing to compile. The
> > C++ ODR prevents us from building gtest and xorg-gtest as a library and then
> > compiling against that, and autotools is not happy with having external
> > source files. If you're planning to share a test framework across multiple
> > source repositories, that can be a major pain.


> I agree, this is the one drawback of google-test.
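Since the ODR rules out linking a prebuilt libgtest, the usual workaround is to compile gtest from its shipped sources inside each consuming project. A minimal sketch of such an Automake fragment, assuming a GTEST_SOURCE variable pointing at the unpacked gtest tree (the variable name and paths are assumptions, not from this thread):

```makefile
# Hypothetical Makefile.am fragment: build gtest from source in-tree
# rather than linking a system libgtest, so everything is compiled with
# the same flags and no ODR violations arise.
check_LIBRARIES = libgtest.a
libgtest_a_SOURCES = $(GTEST_SOURCE)/src/gtest-all.cc
libgtest_a_CPPFLAGS = -I$(GTEST_SOURCE) -I$(GTEST_SOURCE)/include
```

Test binaries then link libgtest.a and are built only for `make check`, which keeps the external sources out of the normal build.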


> > fwiw, one of the drawbacks I found with the multiple binary case is that it
> > reduces the chance of running all tests every time. there's a sweet spot
> > somewhere between too many and too few binaries, and I suspect it differs
> > for each project.

> A good way to handle that is to have a separate test-runner that runs
> the make check/test target. That can usually just be a makefile rule
> or something else (in compiz we use ctest).

yeah, I've hooked it up to make check, but that has other issues too. I
should really write a script for that.
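The ctest approach mentioned above amounts to registering each test binary with CTest so that a single runner invocation covers every suite. A hypothetical sketch (the binary names are made up for illustration):

```cmake
# Hypothetical CMakeLists.txt fragment: register each gtest binary with
# CTest; `ctest --output-on-failure` then runs all of them and fails if
# any binary exits non-zero. A `make check`-style rule can simply call ctest.
enable_testing()
add_test(NAME evdev-tests  COMMAND evdev-tests)
add_test(NAME server-tests COMMAND server-tests)
```

This keeps "run everything" a one-command operation even with many binaries.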

fwiw, we still have multiple binaries (evdev, synaptics, server, libXi,
etc.), but initially I had multiple binaries for the server to test various
server features (grab, multihead, touch, etc.). Since the features aren't
clear-cut (where do you put touch grab tests, for example?), I found that a
single server binary was better.

This is what I meant by there being a sweet spot for each project that
needs to be found.

> > for the googletest case for example separate binaries will give you a
> > separate junit xml output, which makes some regression comparisons harder.

> We ran into this problem as well.

> I think the solution was twofold:

>  1. First of all, we wanted to be able to introspect test binaries so
> that the test runner would be able to show each individual one. [1] is
> a massive hack, but it works.
>  2. Second of all, we had to patch google-test to shut up about
> skipped tests in the junit.xml so that Jenkins didn't think you had
> like 600,000 tests or something. I can provide this patch upon
> request, it's just somewhere in the midst of the Canonical Jenkins
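The introspection in point 1 can also be approximated without patching anything, since gtest binaries accept the standard --gtest_list_tests flag. A minimal sketch of a parser for that output (this helper is hypothetical and is not the hack referenced as [1]):

```python
# Sketch of gtest binary introspection: --gtest_list_tests prints suites
# as unindented lines ending in "." and test names as indented lines.
# This hypothetical helper turns that output into qualified test names.
def parse_gtest_list(output):
    cases = []
    suite = ""
    for line in output.splitlines():
        # parameterised tests append "# GetParam() = ..." comments; drop them
        line = line.split("#")[0].rstrip()
        if not line.strip():
            continue
        if not line.startswith(" "):
            suite = line.strip()   # e.g. "ServerTest." (trailing dot kept)
        else:
            cases.append(suite + line.strip())
    return cases
```

A runner would invoke each binary with that flag via subprocess and feed its stdout through this to build the per-test view.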

oh, right. that hasn't been a problem for me so far, the jenkins install
always runs all tests. the tricky bit for me was tracking which tests are
supposed to fail (e.g. on RHEL6, tests for newer features are
known-to-fail). so far I'm tracking this largely manually, but the
known-to-fail case shouldn't be much of a use-case for an upstream project.
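The manual known-to-fail tracking described above can be reduced to a small filter. A minimal sketch, assuming the failing test names have already been extracted from the run's results (the function and key names here are made up for illustration):

```python
# Minimal sketch of known-to-fail tracking: compare a run's failures
# against a per-platform expected-failure list so that only new
# failures (or unexpected passes) need human attention.
def triage(failed, known_to_fail):
    failed, known = set(failed), set(known_to_fail)
    return {
        "new_failures": sorted(failed - known),       # should break the build
        "expected_failures": sorted(failed & known),  # e.g. RHEL6 known gaps
        "unexpected_passes": sorted(known - failed),  # list needs updating
    }
```

Reporting "unexpected passes" separately keeps the known-to-fail list from rotting as features land on newer platforms.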



Sam Spilsbury
