canonical-ci-engineering team mailing list archive
Message #00373
Re: Mir Performance Testing
On Wed, Nov 20, 2013 at 09:28:53AM +1300, Thomi Richards wrote:
> Hi CI team,
>
>
> The mir team would like to run the glmark2 benchmarks on a native mir
> server across Desktop, N4 and N10 devices at the MP stage, and block merges
> on a significant drop in FPS performance.
>
>
> We already have a glmark2 version that runs against a native mir server.
> The parts that are missing include:
>
>
> 1. Infrastructure to run that test (jenkins and whatnot).
>
> 2. Policy around exactly when to fail the test, when to warn about a
> drop in performance, and when to pass the test.
>
> 3. A way to convert the data from glmark2 into something the CI team can
> report on.
>
> 4. A place to show the graphs of performance over time.
>
>
> So, addressing these in reverse order (why not):
>
> 4: We already have performance graphs for mir, so I suspect this part will
> be easy.
Adding views to the dashboard has never proven to be *easy*. :) It
shouldn't be nearly as costly as adding an entirely new class of view,
but the work will not be trivial.
>
> 3: This needs a small amount of Python code; I suggest we emit a subunit
> result stream here - we could use this as a testbed for subunit. I imagine
> that this script would be done by someone in the QA team? Is the CI team
> ready to read subunit result streams? I'm more than happy to walk someone
> through what needs to happen here (the code is trivially easy)...
We are not currently ready to read subunit result streams. I will
likely be the one adding this support so feel free to email me the
details you have in mind.
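For reference, here is a minimal sketch of what the conversion script and
the drop policy (items 3 and 2) might look like. The per-scenario output
format is assumed from glmark2's usual stdout, and the 5%/10% warn/fail
thresholds are purely hypothetical - neither was settled in this thread:

```python
import re

# Hypothetical thresholds -- the actual warn/fail policy was still
# to be decided at the time of this thread.
WARN_DROP = 0.05   # warn on a >5% FPS drop vs. baseline
FAIL_DROP = 0.10   # fail on a >10% FPS drop vs. baseline

# glmark2 prints per-scenario lines like:
#   [build] use-vbo=false: FPS: 563 FrameTime: 1.776 ms
# (format assumed; verify against the mir-enabled glmark2 build)
_SCENARIO = re.compile(
    r'^\[(?P<bench>[^\]]+)\]\s+(?P<opts>\S+):\s+FPS:\s+(?P<fps>\d+)')


def parse_glmark2(output):
    """Return {test_id: fps} parsed from glmark2 stdout."""
    results = {}
    for line in output.splitlines():
        m = _SCENARIO.match(line.strip())
        if m:
            test_id = 'glmark2.%s.%s' % (m.group('bench'), m.group('opts'))
            results[test_id] = int(m.group('fps'))
    return results


def classify(fps, baseline_fps):
    """Map an FPS reading to a subunit-style test status."""
    drop = (baseline_fps - fps) / float(baseline_fps)
    if drop > FAIL_DROP:
        return 'fail'
    if drop > WARN_DROP:
        return 'xfail'  # stand-in for "warn": flagged, but non-blocking
    return 'success'
```

Each (test_id, status) pair could then be written out as a subunit v2
stream, e.g. via python-subunit's StreamResultToBytes.status(); 'xfail'
is used above as a stand-in for the warn level, since subunit has no
native warning status.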
Thanks,
Joe