[Evolution-hackers] Test suite for libecal



Here are some thoughts on the proposed test suite for libecal in the
evolution-data-server module. I would appreciate your comments and ideas
on it.

I'm hoping that this test suite evolves into a developer tool that also
doubles as a regression suite. It could eventually be integrated with the
build system in the form of a smoke test (build validation).

The goals are:
1) Running individual test cases, or a selected set of test cases, as
desired (a sketch of one way to do this follows this list).
2) A test program that is handy to experiment and play with, by
modifying the test data with minimum fuss.
3) Minimal human intervention to determine the success of the tests and
interpret the results.
4) Performing non-functional testing (performance/scalability/multiple
threads) using the same test cases.
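To give an idea of what (1) could look like, here is a rough sketch of a
name -> test function table in the test driver. The test names, the test
functions and the command-line handling are all hypothetical; this is
only meant to show the shape of it:

/* Hypothetical driver sketch: run all tests, or only the one named on
 * the command line, e.g. ./test-ecal create-object */
#include <stdio.h>
#include <string.h>

typedef int (*TestFunc) (void);          /* returns 0 on success */

static int test_create_object (void) { /* ... */ return 0; }
static int test_get_object    (void) { /* ... */ return 0; }

static const struct {
        const char *name;
        TestFunc func;
} tests[] = {
        { "create-object", test_create_object },
        { "get-object",    test_get_object },
};

int
main (int argc, char **argv)
{
        unsigned int i;
        int failed = 0;

        for (i = 0; i < sizeof (tests) / sizeof (tests[0]); i++) {
                /* no arguments: run everything; otherwise run only the named test */
                if (argc > 1 && strcmp (argv[1], tests[i].name) != 0)
                        continue;
                if (tests[i].func () == 0)
                        printf ("PASS: %s\n", tests[i].name);
                else {
                        printf ("FAIL: %s\n", tests[i].name);
                        failed++;
                }
        }
        return failed ? 1 : 0;
}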

The approach is to pre-load the e-cal-backend with test data prior to
the execution of the tests. Since the tests are performed against known
data, comparing the actual results with the expected ones can be
automated (much like the setup -> execution -> result compilation ->
tear down pattern in PyUnit/JUnit etc). The output of a test run would
be a report with statistics on the tests run, the tests that passed, and
diagnostic data to examine and rerun failed tests.

The tests would strive to be idempotent, so that they do not leave any
side effects on the loaded data and hence interfere with other tests.
This is necessary to allow individual test cases to be run in isolation.
Where this is not possible, such tests would be run after all the other
tests.
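For example, a test that has to create data would remove what it created
before returning, so the loaded data set looks the same before and after
the test. A rough sketch (the iCalendar data here is made up):

/* Sketch: create an event and remove it again, leaving no side effects
 * behind for the other tests. */
static int
test_create_and_remove_event (ECal *ecal)
{
        icalcomponent *icalcomp;
        char *uid = NULL;
        GError *error = NULL;
        int result = 1;

        icalcomp = icalcomponent_new_from_string (
                "BEGIN:VEVENT\r\n"
                "SUMMARY:Temporary test event\r\n"
                "DTSTART:20040101T100000Z\r\n"
                "DTEND:20040101T110000Z\r\n"
                "END:VEVENT\r\n");

        if (e_cal_create_object (ecal, icalcomp, &uid, &error))
                result = 0;

        /* undo the side effect so later tests see the original data */
        if (uid && !e_cal_remove_object (ecal, uid, &error))
                result = 1;

        g_free (uid);
        icalcomponent_free (icalcomp);
        if (error) {
                g_warning ("%s", error->message);
                g_error_free (error);
        }
        return result;
}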


Current implementation : A crude first draft of the tests is available
and has been posted for review. I am currently working on adding the
capability to run a single test based on its name. Today this works only
with the file backend.

Issues :
- We need to figure out an elegant way to pre-populate test data onto
other backends like GroupWise (one direction is sketched below).
- We need to quickly find out the predominant usage patterns of the test
suite and tailor it to meet those needs, rather than merely building in
features to resemble other frameworks.
- I've used bash scripts to load/clean up the test data and to execute
multiple tests. This makes it slightly easier to try different things
while executing tests, but we need to see whether it should be done
differently.
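On the first point, one direction worth exploring is to load the test
data through the client API itself instead of copying .ics files into
the file backend's directory; the same loader would then work unchanged
for GroupWise or any other backend. A rough sketch (the function name
and file handling are hypothetical):

#include <libecal/e-cal.h>

/* Sketch: read an .ics file and create each VEVENT on the target
 * backend through libecal, so the loader is backend-independent. */
static gboolean
load_test_data (ECal *ecal, const char *ics_path)
{
        char *contents = NULL;
        icalcomponent *vcal, *comp;
        GError *error = NULL;

        if (!g_file_get_contents (ics_path, &contents, NULL, &error)) {
                g_warning ("%s", error->message);
                g_error_free (error);
                return FALSE;
        }

        vcal = icalcomponent_new_from_string (contents);
        g_free (contents);
        if (!vcal)
                return FALSE;

        /* walk the VEVENTs inside the VCALENDAR and create them one by one */
        for (comp = icalcomponent_get_first_component (vcal, ICAL_VEVENT_COMPONENT);
             comp != NULL;
             comp = icalcomponent_get_next_component (vcal, ICAL_VEVENT_COMPONENT)) {
                char *uid = NULL;

                if (!e_cal_create_object (ecal, comp, &uid, &error)) {
                        g_warning ("%s", error->message);
                        g_clear_error (&error);
                }
                g_free (uid);
        }

        icalcomponent_free (vcal);
        return TRUE;
}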



Alternate Approaches : While there are impressive examples/frameworks in
many open source projects in other languages, C programs seem to use
project-specific test programs rather than generic frameworks.
CUnit (http://cunit.sourceforge.net) seems worth looking at; its ability
to generate reports in multiple formats is a good plus.
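For reference, a minimal CUnit program using its Basic interface looks
roughly like this (the suite and test names are placeholders):

#include <CUnit/Basic.h>

static void
test_sanity (void)
{
        CU_ASSERT (1 + 1 == 2);
}

int
main (void)
{
        CU_pSuite suite;

        if (CU_initialize_registry () != CUE_SUCCESS)
                return CU_get_error ();

        suite = CU_add_suite ("libecal", NULL, NULL);
        if (!suite || !CU_add_test (suite, "sanity check", test_sanity)) {
                CU_cleanup_registry ();
                return CU_get_error ();
        }

        CU_basic_set_mode (CU_BRM_VERBOSE);
        CU_basic_run_tests ();
        CU_cleanup_registry ();
        return CU_get_error ();
}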

Using CUnit would impose an external dependency on the project. Such
dependencies on external testing frameworks are common in many open
source projects, but I have not come across this in GNOME yet. (I have
not been around long enough, so I may be wrong here.) Does that sound
like a good investment for the future?
Is there anything else that needs to be considered that I have missed?

Harish 












