
canonical-ci-engineering team mailing list archive

Re: Mir Performance Testing


Thomi - thanks for writing this up; it captures what we've been discussing
over the last couple of days.
I've also filed a "bug"/feature request for the CI team to track here:
https://bugs.launchpad.net/ubuntu-ci-services-itself/+bug/1252857

If I've been incomplete, please ping me... happy to discuss or flesh this out
further.
br,kg


On Tue, Nov 19, 2013 at 2:28 PM, Thomi Richards <
thomi.richards@xxxxxxxxxxxxx> wrote:

> Hi CI team,
>
>
> The Mir team would like to run the glmark2 benchmarks on a native Mir
> server across desktop, N4, and N10 devices at the MP stage, and block merges
> on a significant drop in FPS.
>
>
> We already have a glmark2 version that runs against a native Mir server.
> The missing parts include:
>
>
>    1. Infrastructure to run that test (jenkins and whatnot).
>
>    2. Policy around exactly when to fail the test, when to warn about a
>    drop in performance, and when to pass the test.
>
>    3. A way to convert the data from glmark2 into something the CI team
>    can report on.
>
>    4. A place to show the graphs of performance over time.
>
>
> So, addressing these in reverse order (why not):
>
> 4: We already have performance graphs for Mir, so I suspect this part will
> be easy.
>
> 3: This needs a small amount of Python code; I suggest we emit a subunit
> result stream here, and we could use this as a testbed for subunit. I imagine
> that this script would be written by someone on the QA team? Is the CI team
> ready to read subunit result streams? I'm more than happy to walk someone
> through what needs to happen here (the code is trivially easy)...
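A rough sketch of the conversion step described above. The sample output, the regex patterns, and all names here are assumptions based on typical glmark2 runs, not code from this thread; the real script would read the output from the benchmark process and then feed these results into a subunit stream writer.

```python
import re

# Hypothetical glmark2 output, for illustration only; the CI harness
# would capture this from the actual glmark2 process.
SAMPLE_OUTPUT = """\
[build] use-vbo=false: FPS: 4212 FrameTime: 0.237 ms
[build] use-vbo=true: FPS: 4856 FrameTime: 0.206 ms
[texture] texture-filter=nearest: FPS: 3910 FrameTime: 0.256 ms
glmark2 Score: 4326
"""

# Per-scene result lines look like "[scene] options: FPS: N ..." (assumed).
SCENE_RE = re.compile(r'^\[(?P<scene>[^\]]+)\]\s+(?P<options>[^:]+):\s+FPS:\s+(?P<fps>\d+)')
SCORE_RE = re.compile(r'^glmark2 Score:\s+(?P<score>\d+)')

def parse_glmark2(output):
    """Turn raw glmark2 output into (test_id, fps) pairs plus the overall score."""
    results, score = [], None
    for line in output.splitlines():
        m = SCENE_RE.match(line)
        if m:
            # Build a dotted test id so each scene reports as a separate test.
            test_id = 'glmark2.%s.%s' % (m.group('scene'), m.group('options').strip())
            results.append((test_id, int(m.group('fps'))))
            continue
        m = SCORE_RE.match(line)
        if m:
            score = int(m.group('score'))
    return results, score
```

From here, each `(test_id, fps)` pair could be emitted as a test result with the FPS attached as detail content, which is the kind of trivially easy glue code the paragraph above refers to.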
>
> 2: I suggest we get the tests running for a few weeks, graph the results,
> and decide the policy based on that data. We're going to be blocking merge
> proposals on this, so we need to make sure we get it right. I expect that
> the policy would be decided by a mixture of the Mir and QA teams. The
> current idea is that a drop of less than 5% would result in a warning, and a
> drop of more than 5% would result in a fail. It would be nice if we could
> tweak this policy easily without having to go through the CI team... but
> that's a minor detail.
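The proposed gating policy could be sketched as a single function. The 5% figure and the warn/fail split come from the thread; the function name and string results are illustrative, and treating a drop of exactly 5% as a fail is an assumption the teams would still need to pin down. Keeping the threshold as a parameter is one way to make the policy tweakable without going through the CI team.

```python
def evaluate_fps(baseline_fps, current_fps, fail_threshold=0.05):
    """Compare a run against a baseline and return 'pass', 'warn', or 'fail'.

    Policy per the thread: any drop below the threshold warns, a drop at
    or beyond it fails (the >= at the boundary is an assumption here).
    """
    if current_fps >= baseline_fps:
        return 'pass'
    drop = (baseline_fps - current_fps) / baseline_fps
    return 'fail' if drop >= fail_threshold else 'warn'
```

For example, with a baseline of 100 FPS, a run at 97 FPS (a 3% drop) would warn, while 90 FPS (a 10% drop) would fail.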
>
> 1: Chris Gagnon is already working on a Jenkins job that runs the Mir
> demos and integration tests, but I think this should be a separate job, for
> a couple of reasons:
>
> First, we're likely to want to run these performance tests in several
> places (MP CI runs to begin with, perhaps on a daily basis on top of the
> distro images as well?).
>
> Second, I think it's worth keeping the concern of "have we broken the Mir
> demos or integration tests by altering the API without updating them"
> separate from the concern of "have we regressed performance in this
> release".
>
> So, I don't think there's anything too controversial here - can we get
> a plan together to figure out who's going to do what, so we can get the Mir
> team hooked up?
>
>
> Cheers,
>
> --
> Thomi Richards
> thomi.richards@xxxxxxxxxxxxx
>
