Re: Fwd: A question related to issue resolution practices in Gnome



Thank you, that's a lot of valuable information to think about and digest.
E.g., I completely missed the bugmaster's role, having somehow implicitly
confused it with a developer role. Two quick clarifications/answers below.

> For me, yes! Though thorough analysis needs to be done. Sometimes the
> data might indicate one thing, while practically something else happens.
Exactly. To add to your point, our goals are roughly threefold:
- At the technical level, to simplify queries that are not easily
  generated via Bugzilla reports: time trends, intervals between states,
  particular trajectories (e.g., *-resolved-unconfirmed-resolved-*).
- From a more practical perspective, to come up with summary measures
  that accurately reflect the concerns of the various teams (maintainers,
  bugsquad, users).
- From a theoretical perspective, to learn from (and, hopefully, to help
  with) the distributed decision-making process that takes place in this
  large and complex organization with its variety of roles/stakeholders.
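To make the trajectory queries concrete, here is a minimal sketch of the idea
(the history data, field names, and the regex-over-states encoding are all
invented for illustration; this is not how Pe2 actually stores things):

```python
import re
from datetime import datetime

# Hypothetical status history for one issue: (timestamp, status) pairs,
# as they might be scraped from Bugzilla's activity log.
history = [
    (datetime(2011, 1, 5), "UNCONFIRMED"),
    (datetime(2011, 1, 9), "RESOLVED"),
    (datetime(2011, 2, 1), "UNCONFIRMED"),
    (datetime(2011, 2, 20), "RESOLVED"),
]

# Intervals between states: (from, to, days) for each transition.
intervals = [
    (a_state, b_state, (b_time - a_time).days)
    for (a_time, a_state), (b_time, b_state) in zip(history, history[1:])
]

# Trajectory matching: encode the state sequence as a string so that
# "*-resolved-unconfirmed-resolved-*" becomes a simple regex search.
trajectory = "-".join(state.lower() for _, state in history)
reopened_then_refixed = re.search(r"resolved-unconfirmed-resolved", trajectory)

print(intervals)
print(bool(reopened_then_refixed))
```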

> Ideally it should be built into Pe2. How does Pe2 gather its data btw?
Depends on the project. For the prototype and for historic analysis,
scraping Bugzilla/Jira/Trac seems sufficient; for a live look at the current state
of the project that's not going to work, so we would need some update subscription mechanism.
For GNOME it is based on the existing extracts:
http://academic.patrick.wagstrom.net/research/gnome
http://msr.uwaterloo.ca/msr2009/challenge/msrchallengedata.html
http://passion-lab.org/download
Also: http://mail.gnome.org/archives/academia-list/2010-October/thread.html#00002
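A rough sketch of the kind of incremental polling we have in mind for the live
case (the fetch function below is a stub returning canned data; a real version
would query Bugzilla's web or XML-RPC interface for bugs changed since a
timestamp):

```python
from datetime import datetime

def fetch_changed_since(since):
    """Stub for a 'bugs changed after <since>' query. A real
    implementation would hit the Bugzilla interface; here we return
    canned data so the polling logic is self-contained."""
    all_bugs = [
        {"id": 1, "last_change": datetime(2011, 11, 1)},
        {"id": 2, "last_change": datetime(2011, 11, 20)},
    ]
    return [b for b in all_bugs if b["last_change"] > since]

def poll(state):
    """One polling round: fetch updates, advance the high-water mark."""
    updates = fetch_changed_since(state["last_seen"])
    if updates:
        state["last_seen"] = max(b["last_change"] for b in updates)
    return updates

state = {"last_seen": datetime(2011, 11, 10)}
first = poll(state)   # picks up bug 2 only
second = poll(state)  # nothing new once the high-water mark advanced
print([b["id"] for b in first], len(second))
```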

Audris

On 11/21/2011 05:08 AM, Olav Vitters wrote:
First impressions.

On Sun, Nov 20, 2011 at 06:37:46PM -0500, Audris Mockus wrote:
> To investigate efficiency and effectiveness of issue tracking we
> built a tool to visualize and quantify issue resolution practices
> based on what is recorded in Bugzilla. In particular, we hope that
> something along these lines might be of some use to Gnome.

Seems quite interesting. However, do note that there are multiple teams
involved with Bugzilla:
1) bugmasters
    very small group of administrators
2) bugsquad
    loosely connected group of people who triage bugs
3) maintainers/developers
4) users
I may have missed a group.
The entire triage process is decided by bugsquad themselves. I've cc'ed
gnome-bugsquad as I think they'll be interested in this email as well.
That is a public mailing list btw.

> As an example, we investigated changes to BugBuddy and how they
> affected issue reporting and resolution
> (http://mockus.org/papers/demo/index.html#BB)
> - A description is at http://mockus.org/papers/demo
>    (pdf at http://mockus.org/papers/demo.pdf)
> - A link to a video demonstration:
>    http://www.youtube.com/watch?v=y9O37OTecbE
> - A link to the tool: http://passion-lab.org/pee.html
>
> We would greatly appreciate any feedback you might have.
> Below are some specific questions of particular interest:

I will need to look into that a bit more, need more time for that.

> 1) Do you think that improving the quality of information
> when deciding upon a practice change would be helpful?

For me, yes! Though thorough analysis needs to be done. Sometimes the
data might indicate one thing, while practically something else happens.

> 2) If yes, do you think a tool would be of use, e.g., Pe2?

Ideally it should be built into Pe2. How does Pe2 gather its data btw?

> 3) If yes to both above, what kind of questions does the tool need
> to help you easily answer for you to actually use it?

I'd like to see:
- how often are bugs closed as incomplete
- buggyness of GNOME in general over time
- pareto chart of top crashing products
- all duplicate crashers should be detected automatically
- crashers should not be filed at bugzilla; instead they should be on
   some separate server whose only task is to handle the crashers
- separate server should forward to bugzilla
- pareto chart of products where no action seems to be taken (indicating
   need to ask maintainer, or lack of maintainer)
- anything that indicates sudden trend break, be it positive or negative
   e.g.: suddenly there are way more bugs fixed for a product than usual,
   or opposite
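As a hedged sketch of how a couple of the wishes above could be computed (all
product names and counts below are made up, and the trend-break rule is a
deliberately crude threshold, not a proper control-chart test):

```python
from collections import Counter

# Made-up monthly crash reports: (product, month) pairs.
reports = (
    [("totem", m) for m in range(1, 7)] * 5        # steady 5/month
    + [("evolution", m) for m in range(1, 7)] * 2  # steady 2/month
    + [("nautilus", m) for m in range(1, 6)]       # 1/month baseline
    + [("nautilus", 6)] * 12                       # sudden spike in month 6
)

# Pareto chart data: products ranked by total crash count.
pareto = Counter(p for p, _ in reports).most_common()

def trend_breaks(reports):
    """Crude trend-break flag: a month is 'anomalous' for a product if
    its count exceeds twice the product's average monthly count."""
    per_month = Counter(reports)
    breaks = []
    for p in {q for q, _ in reports}:
        counts = [c for (q, _), c in per_month.items() if q == p]
        avg = sum(counts) / len(counts)
        for (q, m), c in per_month.items():
            if q == p and c > 2 * avg:
                breaks.append((p, m))
    return breaks

print(pareto)
print(trend_breaks(reports))
```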

> 4) Are there factual errors in the BugBuddy account?

There are some problems with bug-buddy:
- the retrace server is broken, so the change made in version 2.19 is broken
- we should let the retracing be done by the distribution

There are plans to change the crash handling significantly. See:
https://live.gnome.org/Design/Apps/Oops
https://live.gnome.org/GnomeOS/Design/Whiteboards/ProblemReporting

The designers+maintainers want to provide a well integrated problem
reporting infrastructure. GNOME is becoming more and more integrated
with the OS. OS problems affect the perception people have of GNOME.
E.g. if suspend doesn't work, GNOME will be seen as bad. We (GNOME)
should work to ensure such lower level problems can be detected,
reported and fixed.

> 5) Are the two measures sensible summaries for service and
> issue quality?
>   a) quality of service metric: the time it takes to
> resolve an issue (e.g., 90% of issues resolved in X months).

This relies on two things:
- bugsquad to triage the bugs and
- maintainers to fix the bug

Would also be nice to see it in a control chart, though I think the data
is not stable. E.g. a new GNOME release means new crashers (I assume). Same for
when a new distribution is released.
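A minimal sketch of computing metric (a) as a nearest-rank percentile (the
resolution times below are invented; real input would be close-minus-open
timestamps extracted from Bugzilla):

```python
import math

# Invented resolution times in days for a batch of closed issues.
times = sorted([1, 2, 2, 3, 5, 8, 13, 30, 45, 120])

def percentile(sorted_values, q):
    """Nearest-rank percentile: smallest value such that at least a
    fraction q of the sample is <= it."""
    k = math.ceil(q * len(sorted_values))
    return sorted_values[k - 1]

# "90% of issues resolved within X days" -> X:
x = percentile(times, 0.90)
print("90% of issues resolved within", x, "days")
```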

>   b) issue quality metric: the fraction of issues fixed

Ideally I'd like to see the number of crashers per hour spent
using the software. But that is impossible to gather at the moment.

See for instance:
https://crash-stats.mozilla.com/products/Firefox

That shows crashers per 100 users, which is also quite interesting.
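And a sketch for metric (b), with made-up resolution counts; the same tally
also answers the earlier wish about how often bugs are closed as incomplete:

```python
from collections import Counter

# Made-up resolutions of closed issues, as Bugzilla records them.
resolutions = (["FIXED"] * 55 + ["DUPLICATE"] * 25
               + ["INCOMPLETE"] * 12 + ["WONTFIX"] * 8)

counts = Counter(resolutions)
fraction_fixed = counts["FIXED"] / len(resolutions)
fraction_incomplete = counts["INCOMPLETE"] / len(resolutions)

print(fraction_fixed, fraction_incomplete)
```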


