Re: NFSS Training
On Fri, Jul 4, 2014 at 9:36 AM, Alexander Sack <asac@xxxxxxxxxxxxx> wrote:
> Related to this topic (and I am not sure where else to seed this wish,
> so I am asking QA to keep an eye on this): whatever measurement-based
> tests we plan to implement, can QA also keep an eye on ALWAYS running
> them, so we get a good grasp of the variance of the results we measure?
>
Sure.
> Looking ahead, I am already plagued by how we can even use
> measurements like this to effectively gate and drive engineering,
> given how hard it is for us to do that effectively for simple
> black-and-white true/false tests.
>
> In other words: for measurement-based tests with variance, flakiness
> will be the theme everywhere, and only with very low and reproducible
> variance levels will I even be able to talk to engineering teams and
> managers about buying into the idea of gating and identifying
> promotion blockers from those results.
>
>
Well, not really.
The problem with flaky functional tests is that they fail
intermittently. The problem isn't really that there's variance in the
system; it's that the tests aren't written to cope with that variance. For
performance data, on the other hand, identifying a regression at a given
confidence level is a solved statistical problem (that's where the 'S'
in NFSS comes from). We can look at a series of data and say "with a
confidence level of N, this data point represents a regression over the
last X values".
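To make that concrete, here's a rough sketch of the sort of check I mean
(Python; the window of timings, the 99% confidence level, and the
assumption of roughly normal noise are all illustrative, not something
we've settled on):

    # Sketch: flag a new measurement as a regression over the last X
    # values at a one-sided confidence level, assuming higher is worse
    # (e.g. runtime) and roughly normal noise in the history.
    from statistics import NormalDist, mean, stdev

    def is_regression(history, new_value, confidence=0.99):
        mu = mean(history)
        sigma = stdev(history)
        if sigma == 0:
            return new_value > mu
        # Critical z value for the requested confidence (~2.33 at 99%).
        z_crit = NormalDist().inv_cdf(confidence)
        return (new_value - mu) / sigma > z_crit

    # Last 10 runs hovered around 100ms; a 130ms run trips the check,
    # a 103ms run stays within normal variance.
    timings = [101, 99, 100, 102, 98, 100, 101, 99, 100, 102]
    print(is_regression(timings, 130))  # True
    print(is_regression(timings, 103))  # False

The point is that the threshold falls out of the measured variance itself,
rather than a hand-picked pass/fail value, so a noisy metric doesn't have
to mean a flaky gate.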
I agree, though, that the results have to be clear before we start
blocking MPs.
Cheers,
--
Thomi Richards
thomi.richards@xxxxxxxxxxxxx