canonical-ci-engineering team mailing list archive
Message #00606
Re: Jenkins job request - Autopilot Release job
>>>>> Chris Lee <chris.lee@xxxxxxxxxxxxx> writes:
> Hi All,
> We (the Tools and Trust team, CC'd in this email) have a request for a
> special Jenkins job that will assist in making Autopilot releases as
> painless and speedy as possible.
This sounds like a big task.
> Currently for a release of Autopilot to happen there is a lot of
> manual testing done which takes a long time (a day or so) for someone
> to do.
To achieve this big task, I'd say you should start by automating your
manual testing one job at a time.
Get it running locally and *then* define the jenkins job that should run
automatically (preferably with a single command defined by the project).
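As a sketch of what "a single command defined by the project" could look like (all suite commands here are placeholders, not Autopilot's real ones), one entry point runs every suite in order, so Jenkins invokes exactly what a developer would run locally:

```python
#!/usr/bin/env python3
"""Hypothetical single entry point for a project's test suites.

The suite commands are made up for illustration; the real project
would substitute its own test invocations.
"""
import subprocess
import sys

# One command per suite; Jenkins runs this same script a developer runs.
SUITES = [
    ["echo", "unit tests would run here"],
    ["echo", "acceptance tests would run here"],
]

def main():
    for cmd in SUITES:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail the whole run as soon as one suite fails.
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The Jenkins job definition then shrinks to "check out the branch, run the entry point", which keeps the CI setup trivially reproducible.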
> This testing is effectively a collection of the automated test jobs
> that are already exist.
You kind of lost me here: either they are automated or they are manual,
so what is the bit I'm missing?
> Our idea is to create one massive Jenkins job that contains all (or a
> subset?) of the existing automated acceptance tests across the board
> (including Click, Touch, Desktop etc.) that can be used to green-light
> an Autopilot release.
A massive job is likely to fail whenever any one of the existing jobs
is not perfectly isolated, and a failure buried inside it will be a
real pain to diagnose.
Splitting the work into separate jobs to create some isolation between
them may be imperfect, but it will at least provide useful results.
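To make the isolation point concrete, here is a minimal sketch (suite names and commands are invented) where each suite runs in its own process and reports its own result, so one failure never hides another:

```python
#!/usr/bin/env python3
"""Sketch: run suites as separate jobs, report each result independently.

Suite commands are hypothetical placeholders (`true`/`false` stand in
for a passing and a failing suite).
"""
import subprocess

SUITES = {
    "click": ["true"],
    "touch": ["false"],   # a deliberately failing suite, for illustration
    "desktop": ["true"],
}

def run_all(suites):
    # Each suite gets its own process; a crash in one cannot poison
    # the others, and every suite's verdict is recorded.
    results = {}
    for name, cmd in suites.items():
        results[name] = subprocess.run(cmd).returncode == 0
    return results

if __name__ == "__main__":
    for name, ok in run_all(SUITES).items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

With one massive job you would get a single red ball; with this shape you can see at a glance which test areas actually block the release.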
> This job won't be run very often (only when we intend to release)
"Release what you've tested instead of testing what you've released[1]"
is my motto.
Delaying feedback on new failures only makes them harder to fix.
I would aim for a daily run if those tests can't be run on every
commit; waiting for a new release is probably too long.
> so it shouldn't tie up resources and doesn't matter if it takes
> hours to complete.
> Autopilot touches so many different test suites, environments and form
> factors (desktop and device) and we need to be confident that a
> release won't break the build.
Like any library, autopilot should indeed have integration tests run
against it. Whether its users run them or (better) autopilot
anticipates and runs them itself only affects how quickly you get the
feedback.
> It is getting harder and taking longer to manually run the tests
> and can also be a moving target (a contrived example; the CI test
> runner script using an app-armor command that the manual tester
> isn't aware of, causing the tester grief)
I'm not sure I understand the issue here: is it that the project using
autopilot doesn't properly capture the constraints on the test
environment? Or is it that this environment is not under the project's
control? Something else?
> We're hoping to be able to harness the existing infrastructure and
> test suites to make this whole process a lot easier.
That, surely, is an important goal of the CI infrastructure.
> So the questions are: Can we have this please? and what are the steps
> needed to make it happen?
Three parties seem to be involved here: autopilot, the projects that use
autopilot and the CI infrastructure.
Since any job run in the CI lab should ultimately be perfectly
reproducible on any dev machine... I'm strongly tempted to say that
the CI effort should be to ensure that no setup bits exist in the CI
infrastructure that are not properly captured by the project itself.
To get there, I think you should start by making sure you can run all
the tests you care about locally and automatically, documenting any
issue blocking such a goal.
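One cheap way to capture such environment constraints in the project itself is a precondition check that runs before the tests. This is only a sketch (the tool list is a hypothetical example; a real project would list whatever its runner actually needs, such as the app-armor command from the earlier example), so a local run fails fast with a clear message instead of mid-suite:

```python
#!/usr/bin/env python3
"""Sketch: document the test environment's prerequisites in the project.

REQUIRED_TOOLS is a made-up example list; the point is that the check
lives in the project, so CI and local runs share one definition of
the environment.
"""
import shutil
import sys

REQUIRED_TOOLS = ["python3", "sh"]  # hypothetical prerequisites

def missing_tools(tools):
    # shutil.which returns None when a command is not on PATH.
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        sys.exit("missing prerequisites: " + ", ".join(missing))
    print("environment OK")
```

Any "blocking issue" you document while getting local runs working can then be turned into one more entry in that check, so the manual tester's grief from the app-armor example becomes an explicit, versioned requirement.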
Vincent
[1]: Or are about to release which is already better ;)