canonical-ci-engineering team mailing list archive
Message #00613
Re: Jenkins job request - Autopilot Release job
Vincent,
I think what Chris and Thomi are asking for is a job that aggregates
all of the current jobs we already have that make use of autopilot,
and allows them to run with a version of autopilot under test.
For example, we run autopilot tests for gallery-app, camera-app,
address-book-app, etc. The goal, as I understand it, is simply to
re-use these existing jobs to test each app against the new autopilot.
Running these app test suites is what Chris and company are currently
doing manually. So the application tests and jobs are already defined;
the request is to re-purpose them for testing autopilot.
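A re-purposed job of that kind could be little more than the existing job with one extra parameter. As a rough sketch only (the PPA name, package name, and "autopilot run" invocation below are illustrative assumptions, not our actual job definitions):

```python
# Sketch of a parameterized wrapper around an existing app test job.
# The PPA, package name and "autopilot run" command are illustrative
# assumptions, not taken from the real job configs.

def build_test_steps(test_suite, autopilot_ppa=None):
    """Return the shell steps a re-purposed job would run.

    If autopilot_ppa is given, install autopilot from that PPA (the
    version under test) first; otherwise use the archive version,
    which is what the existing jobs do today.
    """
    steps = []
    if autopilot_ppa:
        # Pull in the release-candidate autopilot before testing.
        steps.append(f"sudo add-apt-repository -y {autopilot_ppa}")
        steps.append("sudo apt-get update")
        steps.append("sudo apt-get install -y python-autopilot")
    # The app's own suite runs unchanged either way.
    steps.append(f"autopilot run {test_suite}")
    return steps
```

The app suite itself runs unchanged; the only difference between the normal run and the release-candidate run is where autopilot comes from.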
Autopilot already has significant unit and integration tests that run
on every MP. I think the goal here is to catch any surprises that can
occur when the apps are tested against a new autopilot.
> So the questions are: Can we have this please? and what are the steps
> needed to make it happen?
Regarding the actual implementation: if I understand this correctly,
we already have all of the pieces to do this lying around. We may be
able to cover desktop and touch testing for the Canonical apps via a
single parent job that aggregates all of our upstream-merger jobs. The
mega-job may also work here to provide 'smoke'-like testing of the
touch image. I think we can achieve 75%-90% of this fairly easily, but
I can't make any promises on how soon it could be done without more
info.
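The parent job's green-light decision itself is trivial once the child results are in hand. A minimal sketch, assuming the parent can read back each child job's build status (job names and status strings here are illustrative):

```python
# Sketch of the parent job's decision logic: green-light the autopilot
# release only if every child (per-app) job passed. In practice each
# entry would come from triggering an existing upstream-merger job with
# the autopilot-under-test parameter and reading back its build status.

def green_light(results):
    """results maps child job name -> Jenkins-style status string.

    Returns (ok, failed): ok is True only if every job reported
    SUCCESS; failed lists the jobs that need diagnosing otherwise.
    """
    failed = sorted(job for job, status in results.items()
                    if status != "SUCCESS")
    return (not failed, failed)
```

Sorting the failures keeps the parent job's report stable from run to run, which matters once dozens of app suites feed into it.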
Chris and Thomi, I'll try to catch you on IRC today to get more details.
Francis
On Tue, Jan 7, 2014 at 2:49 AM, Vincent Ladeuil <vila+ci@xxxxxxxxxxxxx> wrote:
>>>>>> Chris Lee <chris.lee@xxxxxxxxxxxxx> writes:
>
> > Hi All,
> > We (the Tools and Trust team, CC'd in this email) have a request for a
> > special Jenkins job that will assist in making Autopilot releases as
> > painless and speedy as possible.
>
> This sounds like a big task.
>
> > Currently for a release of Autopilot to happen there is a lot of
> > manual testing done which takes a long time (a day or so) for someone
> > to do.
>
> To achieve this big task, I'd say you should start by automating your
> manual testing one job at a time.
>
> Get it running locally and *then* define the jenkins job that should run
> automatically (preferably with a single command defined by the project).
>
> > This testing is effectively a collection of the automated test jobs
> > that already exist.
>
> You kind of lost me here: either they are automated or they are manual,
> so what is the bit I'm missing?
>
> > Our idea is to create one massive Jenkins job that contains all (or a
> > subset?) of the existing automated acceptance tests across the board
> > (including Click, Touch, Desktop etc.) that can be used to green-light
> > an Autopilot release.
>
> A massive job is likely to fail whenever any of the existing jobs is
> not perfectly isolated, and it will be a real pain to diagnose.
>
> Splitting jobs to create some isolation between them may be imperfect,
> but it at least provides useful results.
>
> > This job won't be run very often (only when we intend to release)
>
> "Release what you've tested instead of testing what you've released[1]"
> is my motto.
>
> Delaying feedback on new failures only makes them harder to fix.
>
> I would aim for a daily run if those tests can't be run on every commit,
> but waiting for a new release is probably too long.
>
> > so it shouldn't tie up resources and doesn't matter if it takes
> > hours to complete.
>
>
> > Autopilot touches so many different test suites, environments and form
> > factors (desktop and device) and we need to be confident that a
> > release won't break the build.
>
> Like any library, autopilot should indeed have its integration tests
> run. Whether its users run them or (better) autopilot anticipates and
> runs them itself only affects the length of the feedback loop.
>
> > It is getting harder and taking longer to manually run the tests
> > and can also be a moving target (a contrived example; the CI test
> > runner script using an app-armor command that the manual tester
> > isn't aware of, causing the tester grief)
>
> I'm not sure I understand the issue here: is it that the project using
> autopilot doesn't properly capture the constraints on the test
> environment? Or is it that this environment is not under the project's
> control? Something else?
>
> > We're hoping to be able to harness the existing infrastructure and
> > test suites to make this whole process a lot easier.
>
> That, surely, is an important goal of the CI infrastructure.
>
> > So the questions are: Can we have this please? and what are the steps
> > needed to make it happen?
>
> Three parties seem to be involved here: autopilot, the projects that use
> autopilot and the CI infrastructure.
>
> Since any job run in the CI lab should ultimately be perfectly
> reproducible on any dev machine... I'm strongly tempted to say that the
> CI effort should be to ensure that no specific setup bits (that are not
> properly captured by the project itself) exist in the CI infrastructure.
>
> To get there, I think you should start by making sure you can run all
> the tests you care about locally and automatically, documenting any
> issue blocking such a goal.
>
> Vincent
>
> [1]: Or are about to release, which is already better ;)
>
> --
> Mailing list: https://launchpad.net/~canonical-ci-engineering
> Post to : canonical-ci-engineering@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~canonical-ci-engineering
> More help : https://help.launchpad.net/ListHelp
--
Francis Ginther
Canonical - Ubuntu Engineering - Continuous Integration Team