GNOME-QA IRC Meeting Notes
- From: Sriram Ramkrishna <sri ramkrishna me>
- To: gnome-qa-list gnome org
- Subject: GNOME-QA IRC Meeting Notes
- Date: Fri, 28 Feb 2014 17:59:16 -0800
QA inaugural meeting
Summary:
* dogtail for automated gui testing
* unittests for automated non-gui testing
* QA and developers both responsible for tests
* integration will happen into gnome-continuous
* Will use Fedora's GNOME integration testing as an initial set of tests
* Agreement on test framework -
* Most tests are either UI clicking or internal unittests
* Suggest to follow what Red Hat has been doing (see talk at FOSDEM)
- https://fosdem.org/2014/schedule/event/standalone_app_testing_automation/
* Could also look at 'openqa'
http://openqa.opensuse.org/results/openSUSE-Factory-NET-x86_64-Build0069-gnome
* Walters - I don't rerun through the installer for every change.
There is value to running through an installer, but also value in
not doing so.
* Vadim - openqa is distro oriented
* Walters - I don't draw a bright line between distro and GNOME
* Adam was concerned how useful UI testing would be if components
are going to be shifted around every release. (eg design and features
continue to evolve)
* Vadim believes that this can happen with a level of abstraction
* Adam thought this was okay, but could be onerous
* Evolution was an example of this kind of testing
* Adam believed this is an exception as it is not a very
GNOME 3 style app. Gedit is changing all the time, while
Evolution has been relatively static.
* Adam's main point is that if you are going to do automated GUI testing,
use the best framework available. The main concern is
that automated GUI testing tends to turn into a messy time sink.
* Vadim agrees, however a correct selection of tests will minimize the pain
* We can offer the framework for those who want to do it.
* Frederic - "QA as a service"
* Vadim thinks the idea is to support tests using a default
framework (recommended framework?) while leaving freedom for
projects to use their own solutions if they prefer. Expanding -
we want to support tests upstream instead of fedora/suse/rhel
tests in packages.
* Vadim will present a proof-of-concept for a small app
(gnome-weather is a nice candidate) running dogtail tests
on gnome-continuous
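* A minimal sketch of what such a dogtail test could look like (the
widget names and roles below are illustrative assumptions, not taken
from the actual gnome-weather test suite):

    # Assumes accessibility is enabled and the application registers
    # itself on the AT-SPI bus as 'gnome-weather'.
    from dogtail.utils import run
    from dogtail.tree import root

    run('gnome-weather')                      # launch the app under AT-SPI
    app = root.application('gnome-weather')   # find it on the a11y bus
    # child() searches the accessibility tree and raises SearchError if
    # nothing matches, so a missing widget fails the test with a
    # non-zero exit code.
    window = app.child(roleName='frame')
    window.child(name='Places', roleName='toggle button').click()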
* Walters adds "it's not about the framework so much as:
1) reliable automation
2) having the app ship it, not it be an external thing"
* Today we have the above and in addition
3) very high speed reliable automation
* "i believe it is one of the fastest and most efficient
public continuous deployment systems in the world - at
least if you find anyone out there that is integrating
over 200 git repositories 80+ times a day, please send
me a link =)"
4) Everything we do in GNOME should be usable by downstreams
and ideally jhbuild too
* Adam - this is more a question of what you're testing - openqa
is built around 'is SUSE working?'
* Vadim - I prefer to focus on apps testing, as every distro has its
own tests for installer - but not for apps
* Adam - this is about 'is GNOME working?' - it would be valuable for
distros to know "is GNOME working?"
* Frederic - openQA could be used for app testing, but there doesn't
seem to be much interest.
* Walters - whatever we do here could be re-used in openqa
* OpenQA v2 has improvements, including a graphical needle
editor. Hopefully it will be available soon once the ACL part is
done.
* Consensus seems to be that people are most interested in automated
testing - should we also discuss good old manual tests?
* Vhumpa: as for reliability of app tests, the key thing would be
to choose the way of compiling the tests together in an easily
maintainable fashion. The POC uses behave to abstract the actual
UI elements and framework code, which should make it relatively
painless - though it will never be completely painless.
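* A rough sketch of that abstraction, assuming behave step definitions
wrapping dogtail (the step names and widget details are made up for
illustration; the real PoC may differ):

    # features/steps/weather.py -- the matching Gherkin scenario would
    # read something like:
    #   Scenario: Search for a city
    #     Given gnome-weather is running
    #     When I search for "Brno"
    #     Then the main window is still shown
    from behave import given, when, then
    from dogtail.utils import run
    from dogtail.tree import root

    @given('gnome-weather is running')
    def step_app_running(context):
        run('gnome-weather')
        context.app = root.application('gnome-weather')

    @when('I search for "{city}"')
    def step_search(context, city):
        # Widget names and roles are confined to the step layer, so a UI
        # redesign means updating steps, not rewriting every scenario.
        context.app.child(roleName='text').typeText(city)

    @then('the main window is still shown')
    def step_window_shown(context):
        # child() raises if the element never appears, failing the step
        assert context.app.child(roleName='frame') is not None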
* Walters: I would like to see more non-gui tests - like having
apps refactor some of their internals as unit tests
* Example, hotssh: I would like a test that I am correctly
modifying a ~/.ssh/known_hosts file - like make check, except with
InstalledTests they are run all the time (see the sketch below)
* unittests are usually the developer's responsibility - it takes too
much time for QA to learn the internals.
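* A hedged sketch of such a non-GUI unit test; add_known_host below is
a hypothetical stand-in for whatever internal function the app would
expose after refactoring, not a real hotssh API:

    import os
    import tempfile
    import unittest

    def add_known_host(path, host, keytype, key):
        # hypothetical app-internal logic under test
        with open(path, 'a') as f:
            f.write('%s %s %s\n' % (host, keytype, key))

    class KnownHostsTest(unittest.TestCase):
        def test_entry_is_appended(self):
            with tempfile.TemporaryDirectory() as tmp:
                path = os.path.join(tmp, 'known_hosts')
                add_known_host(path, 'example.org', 'ssh-ed25519', 'AAAA...')
                with open(path) as f:
                    self.assertIn('example.org ssh-ed25519', f.read())

    if __name__ == '__main__':
        unittest.main()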
* Adam brings up --
https://fedoraproject.org/wiki/QA:Desktop_validation_results_template
that is used to test GNOME.
* Good place to start test cases for the whole DE.
* Reporting results -
* Do we need a system for this?
* Wiki?
* Wiki as a results management system sucks. Works fine for
holding the test cases and test plans themselves, but it's not good
for results.
* wiki = manual edit = will rot
* rot isn't really the biggest problem, the biggest
problem is it's very difficult to get any kind of
longitudinal view of the data, it doesn't handle
multiple results for the same test well, and it's
just kind of a pain in the ass to work with. no-one
wants to be writing mediawiki syntax to report a
test result.
* gnome-continuous handles results management
* Probably want to feed results to bugzilla
* don't want to turn your test system into an ad hoc issue tracker
- that never turns out well
* We need Ubuntu participation - 'balloons' on freenode (Nicholas
Skaggs) is a good contact; Martin Pitt is another.
* Walters: can't spend too much time on gnome-continuous - working
on transferring some of the ideas to the RPM/Fedora world.
http://rpm-ostree.cloud.fedoraproject.org/#/
* If we are doing GNOME integration at the distro level for fedora,
it will likely happen on taskotron -
https://fedoraproject.org/wiki/Taskotron - tflink is one of the main
developers.
* Interested in setting any kind of concrete quality standard for
releases? Is this more of a 'just try to increase quality' effort?
* Setting quality standards now is too early
* It would take time for developers to learn to use dogtail
* would be best if the tests were done not only by dedicated QA, but by
developers as well - otherwise they will bitrot.
09:17 < vrutkovs> 1) determine the components that we want to test
09:17 < vrutkovs> 2) determine what test cases we want to test for each of the
components
09:17 < vrutkovs> 3) resource people to create those test cases
09:17 < vrutkovs> 4) process for integrating the test cases for each of those
components
09:17 < vrutkovs> 5) how often do we run the test cases?
09:18 < vrutkovs> does anyone has comments / objections / more ideas about it?
09:19 < fcrozat> was there already an agreement / selection of the test
framework ?
09:19 < vrutkovs> fcrozat, no, but is a good topic to discuss
09:20 < vrutkovs> most of the tests for projects are either 'UI clicking' or
internal unittests
09:20 < fcrozat> yep, in that case, dogtail would fit the requirements
09:20 < fcrozat> the RH guys did a nice talk about that as FOSDEM
09:20 < fcrozat> I would suggest to follow their tracks
09:21 < vrutkovs> yes, I didn't see other tools (e.g. autotest or LDTP) to gain
a significant role
09:21 < fcrozat> and maybe add openqa if we want to go a step further (ostree)
09:22 < vrutkovs> fcrozat, I guess this will be handled via gnome-continuous
09:22 < vrutkovs> also I didn't have any experience with openqa
09:22 < fcrozat> but is it able to boot a deployed system ?
09:22 < fcrozat> (ok, let's focus, this can be discussed later ;)
09:23 < vrutkovs> fcrozat, yes, it does. Check
http://build.gnome.org/#/gnome-continuous and
https://wiki.gnome.org/Projects/GnomeContinuous
09:23 < adamw> i'm not sure how useful automated GUI testing is going to be to
GNOME at least as long as things are getting shuffled around
every release
09:23 < vrutkovs> adamw, this can be handled by a sufficient level of
abstraction
09:23 < adamw> vrutkovs: from what i've seen, that becomes very onerous
09:23 < vrutkovs> basically we'd like to propose a default system which can be
easily picked up by new projects
09:24 < vrutkovs> this uses behave + dogtail
09:24 < adamw> but if you want to take it on, sure - just be aware that afaik
it's going to be a lot of work. i'd tend towards using automated
testing for non-GUI stuff and manual testing for GUI validation
09:24 < vrutkovs> adamw, we experimented with evolution on this
09:24 < adamw> well, evolution is kind of an exception. it's not a very gnome-y
app and it gets few GUI changes. the rest of GNOME changes
rather faster. at least in my subjective / fallible memory.
09:24 < vrutkovs> say the diff between tests for 3.8 and 3.12 was 50 lines
09:24 < fcrozat> vrutkovs: ok, I didn't notice you were one of the fosdem guys
;)
09:25 < adamw> evolution's GUI has been 'make it look like Outlook' for about a
decade. :P
09:25 < vrutkovs> adamw, that was the only release they did redesign a dialog
(tasks)
09:26 < adamw> if you look at how GNOME itself, or say gedit or something, has
changed over the 3.0 series...
09:26 < vrutkovs> I might be too optimistic about it, but it seems much better
approach that pure scripts
09:26 < vrutkovs> it did visually, but in terms of a11y elements it was
insignificant
09:26 < adamw> vrutkovs: oh, sure, if you're going to *do* automated GUI
testing, absolutely use the best framework available, and
dogtail seems like a decent one. i'm just saying that from what
i've seen, even with the *best* framework automated GUI testing
tends to turn into a messy time sink.
09:26 < vrutkovs> however I don't think we have examples of long-term scripts
supports (unfortunately)
09:26 < fcrozat> openqa is using visual needle and it kind of work
09:27 < fcrozat> but there will be always a need for refreshing testcases
09:27 < vrutkovs> adamw, agreed. However a correct selection might minimize the
pain
09:27 < fcrozat> whatever the framework
09:27 < vrutkovs> maybe pitivi guys can share their experience
09:27 < adamw> vrutkovs: sure, if the idea is just to offer a framework for
sub-projects who decide they want the pain, that makes sense.
09:28 < fcrozat> "QA as a service"
09:28 < vrutkovs> the idea is to support tests using default frameworks (see
also gjs acceptance in gnome)
09:28 < vrutkovs> meanwhile leaving freedom for projects to use their own
solutions if they prefer
09:29 < vrutkovs> the whole idea it to support tests upstream instead of
fedora/suse/rhel tests in packages
09:29 < adamw> roger
09:29 < vrutkovs> so I'm gonna present a proof-of-concept for a small app
(gnome-weather is a nice candidate)
09:30 < vrutkovs> running dogtail tests on gnome-continuous
09:30 < walters> adamw, works well when the app ships the tests
09:30 < vrutkovs> I guess that would be a good example for our students working
on related gsoc project
09:30 < walters> it's not about the framework so much as 1) reliable automation
2) having the app ship it, not it be an external thing
09:31 < vrutkovs> yes, right, we're getting too specific while walters has
pointed to a generic goal
09:31 < walters> i have 1) and not only that, but very high speed reliable
automation
09:31 < walters> concretely vs openqa, I don't rerun through the installer for
every change
09:32 < walters> (there is value to running through an installer, there is also
value to not doing so)
09:32 < vrutkovs> openqa is more distro-oriented (as much as I've understood),
which doesn't really fits gnome
09:32 < walters> i don't draw such a bright line between distro and gnome
09:32 < fcrozat> I wasn't aware of what gnome-continuous could do
09:32 < walters> anything we do in gnome should be usable by downstreams
09:33 < walters> fcrozat, i believe it is one of the fastest and most efficient
public continuous deployment systems in the world - at least
if you find anyone out there that is integrating over 200 git
repositories 80+ times a day, please send me a link =)
09:33 < adamw> walters: it's more about the question of 'what you're testing' -
openqa is kind of built around 'is SUSE working'
09:33 < vrutkovs> yes, but I'd prefer to focus on apps testing, as every distro
has its own tests for installer - but not for apps
09:33 < adamw> whereas this is presumably more 'is GNOME working'
09:34 < adamw> of course it'll be valuable to distros to run 'is GNOME working'
tests
09:34 < fcrozat> let's not go into one vs another
09:34 < walters> exactly
09:34 < walters> the tests should be consumable by downstreams
09:34 < walters> (and ideally in jhbuild too)
09:34 < fcrozat> openQA could be used also to do app testing but it looks like
people aren't interested here, so I'll retract this suggestion
09:35 < walters> fcrozat, i'd definitely hope that whatever is done here could
be reused in openqa - although personally I think the
"checksums of screenshots/OpenCV" isn't really sustainable
results mechanism
09:36 < fcrozat> openqa v2 has improved stuff, including graphical needle
editor. Hopefully, it will be soon available, once the ACL
part is done
09:36 < fcrozat> a11y based is nice
09:36 < fcrozat> but don't fit all
09:36 < vrutkovs> I see all the people are more interested in automated tests.
Should we also discuss good old manual tests? Anybody wants
to discuss that?
09:37 < vhumpa> As to reliability of such app tests, key would be to choose the
way of compiling the tests together in an easily maintainable
fashion. The POC using behave to abstract the actual UI
elements and framework code can hopefully make it *relatively*
painless. But you guys are right it will never completely
maintain-less.
09:37 < vhumpa> Hey guys btw :)
09:37 < fcrozat> vrutkovs: for manual tests, you need ressources (ie people)
09:38 < fcrozat> and I don't see us as a community to have such ressources
09:38 < walters> i also would like for there to be more non-GUI tests - like
having apps refactor some of their internals as unit tests
09:38 < walters> for example: for hotssh, I would like a test that I am
correctly modifying a ~/.ssh/known_hosts file
09:38 < vrutkovs> fcrozat, yes, I'm kind of unsure if people really want to do
this regularly at this point
09:38 < walters> this is like traditional "make check" - except with
InstalledTests, we run it all of the time
09:38 < vrutkovs> walters, in our tests we combine dogtail and scripts for
instance
09:39 < walters> yes
09:39 < adamw> fcrozat: we do have a (fairly arbitrary in coverage terms) test
plan for GNOME for Fedora which we run as part of release
validation
09:39 < adamw> https://fedoraproject.org/wiki/QA:Desktop_validation_results_template
09:39 < vrutkovs> however, unittest are usually in dev's responsibility zone -
it usually takes too much time to learn internals
09:40 < adamw> somewhere on my todo list is a few other tests to add to that
(inc. bluetooth and wireless network configuration and printer
setup, IIRC)(
09:40 < vrutkovs> adamw, nice, looks like a place to start with testcases for
the whole DE
09:41 < adamw> just to confirm, after understanding the scope of the proposal,
i've no objection at all to providing the proposed dogtail setup
as a supported automated GUI testing platform, if you're going
to have such a platform, dogtail is a good choice
09:41 < vrutkovs> another things - do we need any system to report results?
Such as wiki in fedora project?
09:42 < adamw> wiki as a results management system kind of sucks, it's the part
of our ridiculous wiki-TCMS setup that i hate the most. i think
we were planning to replace it, but i need to check back in with
the people working on that. wiki works fine for holding the test
cases and test plans themselves, but it's a sucky results
manager.
09:42 < adamw> for automated tests gnome-continuous handles result management I
believe?
09:43 < vrutkovs> adamw, yes (kind of), though no tools for nice analysis yet
09:43 < vrutkovs> I'd love to have "failing since ..." at least
09:43 < walters> yes
09:43 < adamw> so my advice-based-on-experience for tracking results is, don't
use a wiki. :P
09:43 < vrutkovs> and frankly speaking I'm not really interested in any system
like that. At all
09:43 < vrutkovs> as 90% pass doesn't give you the overview of the whole
situation
09:44 < fcrozat> wiki = manual edit = will rot
09:44 < walters> what is powerful for continuous is that the tests can come
back and define what is released
09:44 < walters> the difference between
gnome-continuous/buildmaster/x86_64-devel-debug vs
gnome-continuous/smoketested/x86_64-devel-debug
09:45 < fcrozat> ( http://openqa.opensuse.org/results/ for something readable ;)
09:45 < vrutkovs> agreed
09:45 < adamw> fcrozat: rot isn't really the biggest problem, the biggest
problem is it's very difficult to get any kind of longitudinal
view of the data, it doesn't handle multiple results for the
same test well, and it's just kind of a pain in the ass to work
with. no-one wants to be writing mediawiki syntax to report a
test result.
09:45 < adamw> so yeah, you want something better and indeed you want a nice
way for test results to result in build/release process changes
without manual intervention
09:45 < vrutkovs> at least its not TCMS via WebUI ;)
09:45 < walters> fcrozat, yeah it looks good! I see the tests are also passing
again
09:46 < adamw> you're also going to want to be able to feed results into
bugzilla (again as openqa does)
09:46 < fcrozat> (
http://openqa.opensuse.org/results/openSUSE-Factory-NET-x86_64-Build0069-gnome
for a better view)
09:47 < adamw> you don't want your test system to turn into an ad hoc issue
tracker, that never turns out well
09:47 < fcrozat> in there, dogtail could be run too)
09:48 < vrutkovs> fcrozat, I didn't find any feature list for openQA, could you
point me to such list (if it exists)?
09:48 < walters> fcrozat, yeah but it just wouldn't scale to write a checksum
or a needle or whatever for each app for each test
09:48 < vrutkovs> I guess we could use several ideas, as it looks very
interesting
09:48 < walters> at least i don't believe so
09:48 < walters> what you have makes sense for small scale targeted "baseline"
testing
09:49 < fcrozat> walters: frankly, I don't know.. needle editor is really
helping but it is not visible yet
09:49 < fcrozat> mixing both approach could work, I think
09:49 < walters> you really need a simple test model where you execute some
code in a well known environment, it returns exit code 0 or
not-0
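A minimal sketch of the test model described above (the command and
environment variables are illustrative, not an existing harness):

    import os
    import subprocess
    import sys

    # run one test executable in a well-defined environment; the only
    # thing the harness records is whether it exited 0
    env = dict(os.environ, LANG='C.UTF-8', G_MESSAGES_DEBUG='all')
    rc = subprocess.call(['./tests/basic-test'], env=env)
    sys.exit(0 if rc == 0 else 1)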
09:49 < fcrozat> I'm a bit afraid of the complexity of writing testcase with
dogtail
09:49 < adamw> fwiw, ubuntu does have a system too, which may have some useful
ideas. they've clearly taken inspiration from both fedora qa and
openqa, but they have a fairly large team and they've run with it
09:49 < adamw> http://iso.qa.ubuntu.com/
09:50 < adamw> https://code.launchpad.net/ubuntu-qa-tools is their source repo
i believe
09:50 < fcrozat> vrutkovs: http://www.os-autoinst.org/
09:50 < walters> adamw, interesting, wonder if we could get them here
09:50 < vrutkovs> great, but all this closer to distribution testing
09:51 < adamw> vrutkovs: was just thinking more in terms of result management
and issue tracker interface
09:51 < adamw> walters: the ubuntu qa lead is 'balloons' on freenode, guy
called nicholas skaggs, he's one of the good ones
09:51 < vrutkovs> adamw, agreed, gnome-continuous UI could really be much
better (considering that it has a lot data in json)
09:52 < walters> it's also easy to write ui with Angular
09:52 < walters> however um before we talk about continuous too much more I
should say I can't spend as much time on it now, as I am
trying to transfer some of the ideas to the RPM/Fedora world
09:52 < walters> i haven't announced a new version of that, but you can see
the new website here:
http://rpm-ostree.cloud.fedoraproject.org/#/
09:53 < vrutkovs> I'm not sure if people are super-interested in this results
yet
09:53 < vrutkovs> as the amount of tests is too low
09:53 -!- satellit [~satellit bc105197 bendcable com] has joined #qa
09:53 < walters> for rpm-ostree? or continuous?
09:54 < vrutkovs> for continuous mostly
09:54 < walters> right
09:54 < vrutkovs> as in "tests for GNOME" not "%check / make check tests"
09:54 -!- tflink [~tflink c-75-70-208-209 hsd1 co comcast net] has joined #qa
09:54 < walters> mmm
09:54 < walters> gnome-software includes Dogtail tests which are run
09:54 < walters> right now
09:54 < vrutkovs> so do gnome-weather
09:54 < vrutkovs> and (broken) evince
09:55 < walters> they can be executed via "make check" - but even better as
InstalledTests
09:55 < vrutkovs> and I guess the list ends ;)
09:55 < walters> i think it is really really important that when gtk+ changes,
we rerun the gnome-software Dogtail tests
09:55 < walters> *really* *really* *really* important
09:55 < vrutkovs> right!
09:56 < adamw> tflink is one of the fedora qa guys working on automated
testing, he has much more of a clue than me.
09:56 < walters> as it lifts the mental model out of the "indivdual collection
of fiefdoms" and more towards "software constantly delivered
and constantly tested as a unit"
09:56 < vrutkovs> so having minimal highest importance cases for *each* app is
the most important goal IMHO
09:56 < adamw> (he's one of the lead devs for taskotron,
https://fedoraproject.org/wiki/Taskotron)
09:56 < vrutkovs> adamw, yes, I'm gonna invite him also
09:56 < adamw> if we're going to run GNOME integration tests at the distro
level, taskotron is likely where it'll happen for fedora.
09:57 <@shivani> Hey anyone here ?
09:57 < vrutkovs> shivani, hey
09:58 < vrutkovs> so, going on. Any other topics anyone wants to discuss?
09:58 < walters> i'd love to have regular meetings about this topic
09:58 < tflink> fwiw, I'm definititely interested in running gnome integration
tests in fedora's automation systems
09:58 < fcrozat> what is the menu for diner ? :)
09:58 < walters> and furthermore we should report on it
09:59 < adamw> are you interested in setting any kind of concrete quality
standards for releases, or is this more a 'just try to increase
test coverage as much as possible to improve overall ongoing
quality' effort?
09:59 < fcrozat> tflink: yes, I'd like to do the same in openSUSE package
using openQA
09:59 < vrutkovs> tflink, right, I'd say the current goal is to have a minimal
amount of those tests upstream
09:59 < fcrozat> adamw: setting quality standards now is too early, I think
09:59 < adamw> roger
10:00 < fcrozat> it will take time for developers to learn how to use dogtail
10:00 < fcrozat> it would be best if the tests were not only done by dedicated
folks
10:00 < fcrozat> but also by developers themselves
10:00 < walters> right
10:00 < fcrozat> otherwise, they'll bitrot
10:00 < tflink> vrutkovs: do you have any specific ones in mind?
10:01 < walters> and furthermore exported from the module in a way consumable
by automated frameworks
10:01 < vrutkovs> fcrozat, agreed, I think we could have a workshop during
guadec/fosdem/etc. on dogtail api etc
10:01 < vrutkovs> tflink, for now only gnome-weather and gnome-software have
those
10:01 <@shivani> sorry my connection is giving me issues, reading the logs now
10:01 <@shivani> wanted to attend the meeting :\
10:01 < vrutkovs> tflink, and we're having a gsoc to improve them and have
those for more packages
10:01 < fcrozat> are the current tests for gnome-weather / gnome-software in
the component git repo ?
10:01 < fcrozat> or in a separate repo ?
10:02 < fcrozat> s/tests/testcases/
10:02 < vrutkovs> fcrozat, yes. The best way to use them is compiling with
--enable-installed-tests and running via
gnome-desktop-tests-runner
10:02 < vrutkovs> fcrozat, see
https://live.gnome.org/Initiatives/GnomeGoals/InstalledTests
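A hedged sketch of how such installed tests are laid out and consumed
(paths and names are illustrative; the wiki page above is the
authoritative reference, and the real runner is more capable than
this loop):

    # Each test ships a small keyfile, roughly:
    #
    #   /usr/share/installed-tests/gnome-weather/basic.test:
    #     [Test]
    #     Type=session
    #     Exec=/usr/libexec/installed-tests/gnome-weather/basic.py
    #
    # A runner walks those files, executes each Exec line and treats
    # exit code 0 as a pass.
    import configparser
    import glob
    import subprocess

    failures = 0
    for path in glob.glob('/usr/share/installed-tests/*/*.test'):
        keyfile = configparser.ConfigParser(interpolation=None)
        keyfile.read(path)
        rc = subprocess.call(keyfile['Test']['Exec'], shell=True)
        print('%-60s %s' % (path, 'PASS' if rc == 0 else 'FAIL'))
        failures += (rc != 0)
    raise SystemExit(1 if failures else 0)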
10:02 < fcrozat> ok, so part of the tarball.
10:02 < fcrozat> excellent.
10:02 < tflink> hrm, that might make a good gsoc project for us as well
10:03 < vrutkovs> fcrozat, I'm not sure about the tarball actually. But they
are in the same git repo
10:03 < vrutkovs> tflink, that would be great, actually. Please do propose
this if there is still time
10:03 < fcrozat> having them in tarball could also help, I think
10:03 < walters> fcrozat, I would dearly love for more downstreams to consume
InstalledTests, and have been making some headway there. If
you run into any issues please ping me!
10:03 < fcrozat> to make sure downstream would consume them
10:04 < vrutkovs> fcrozat, I guess that depends on maintainer, but I'm not
sure if 3.11.90 tarballs do have those tests in tarball
10:04 < walters> ah, they should be, the same way the tests are in the tarball
for "make check"
10:04 < walters> the difficult thing for you rpm/dpkg downstreams is
provisioning machines
10:04 < walters> it won't work to run them in a mock chroot or whateer
10:04 < fcrozat> I wouldn't run them in make check
10:05 < walters> right, you can't
10:05 < fcrozat> but I'd create a subpackage containing the testcase and
command to be run
10:05 < walters> yep, for fedora we are starting to make "-tests" subpackages
10:05 < vrutkovs> walters, I think that problem should be solved by each
distro - we can't restrict them
10:05 < walters> there is a glib2-tests
10:05 < fcrozat> and then, you install the subpackage in the tooling test
framework, like openqa or another
10:05 < walters> vrutkovs, InstalledTests Type=session tests presently mandate
a logged in desktop
10:05 < walters> you can't synthesize a working desktop inside a mock chroot
10:05 < walters> or well you *can* but you shouldn't
10:06 < walters> the point is to test the system *as shipped to users*
10:06 < walters> which means VMs or physical hardware
10:06 < vrutkovs> walters, I have a right to shoot in my own leg and I'm proud
of it ;)
10:06 < walters> test *after* shipping
10:06 < vrutkovs> though we do should state this explicitly
10:06 < fcrozat> well, you ship to staging : test ; ship really:)
10:06 < walters> (now I do want to make Type=headless InstalledTests)
10:06 < fcrozat> but yes ;)
10:07 < walters> fcrozat, with continuous you can track "buildmaster" which is
the binaries straight from git with 0 tests
10:07 < walters> and that's super valuable
10:07 < vrutkovs> Type=headless sounds interesting btw, this is good enough
for eds tests at least
10:07 < walters> because how do you debug a test failure?
10:07 < walters> you really want to be able to run gdb on it locally..
10:08 < walters> if a test is failing on the main continuous server, you can
get a VM which has exactly the same content it has
10:08 < walters> down to the binary level
10:08 < vrutkovs> okay, guys, me and vhumpa got to go soon. Anything you
wanted to discuss then?
10:08 < vrutkovs> I'm gonna read the backlog anyway
10:10 < fcrozat> I'll go too :)
10:10 < fcrozat> time to go home
10:10 * shivani is reading the logs
10:10 -!- mode/#qa [+o vrutkovs] by shivani
10:10 -!- mode/#qa [-o shivani] by shivani
10:11 <@vrutkovs> okay, how about meeting here at the same time next Thursday?
10:11 <@vrutkovs> I guess we'll be more prepared and have an agenda at least ;)
10:11 < walters> cool
10:12 < walters> let's try to get the ubuntu people here too
10:12 < shivani> +1
10:12 < tflink> what email list are you using for scheduling?
10:12 < shivani> ^ I'll make sure my internet connection is right the next
time around!
10:12 <@vrutkovs> tflink, none yet, we're gonna create a mailing list by that
time, I hope qa is not taken yet
10:13 < fcrozat> not sure I'll be able to join, but we'll see
10:13 <@vrutkovs> thanks guys, got to go
10:13 <@vrutkovs> it was nice to see so many people interested actually
10:13 < shivani> bye Vadim :)