canonical-ci-engineering team mailing list archive
Message #00952
Re: QA -> CI test handoff checklist
Hi,
On Sat, Nov 15, 2014 at 2:27 AM, Ursula Junque <ursula.junque@xxxxxxxxxxxxx>
wrote:
> Hey Thomi, I know this is kind of an old email but hope this helps anyway.
>
>
No worries, we're all busy, and this is an important discussion to have, I
think.
> On Thu, Oct 30, 2014 at 12:48 AM, Thomi Richards <
> thomi.richards@xxxxxxxxxxxxx> wrote:
>
>
>> The problem is that the CI team haven't had time to complete the 'test
>> handover checklist' we discussed in D.C. (as an aside, I'd love to know
>> from the scrum-experts on this list whether there's a standard way to
>> manage dependencies between multiple scrum teams). Additionally, the
>> 'handover' doesn't seem like something that fits within the scrum
>> framework. I'd like some direction from the CI team as to how you see this
>> happening.
>>
>
> In an ideal world, teams would be cross-functional, with members from all
> areas, and therefore wouldn't have stories that depend on other teams' work.
> But this is real life. :)
>
> One problem here is that one story might have a higher priority for one
> team than the related story has for the other, so it's really important
> that both teams understand the "business" goal of completing the two
> stories, in a way that lets both evaluate whether it is a priority for the
> product or the company as a whole (that's what we're all here for, right? :)).
>
I guess this is where it gets complicated. Who's to say whether task A or
B is more important? The concept of 'importance' is rather opaque to me,
especially when (for example) task 'A' involves a QA-related task, and task
'B' involves delivering a new feature. Some people would argue that
delivering new features is *obviously* more important, others would argue
that keeping what we have already delivered high quality is more important.
My understanding is that this is a decision for the product owner - that
stakeholders argue for their cards, and the product owner somehow makes a
decision out of that meeting. I'm not sure how that happens though. Anyway,
I'm not trying to be difficult, I'm just curious about how this part of the
process is supposed to work.
> For that we need to break down the stories in these terms, and also in a
> way that makes them as independent as possible, e.g. doing work you would
> have to do anyway while you wait for whatever piece the other team will
> provide.
> A good example of this for us is the "citrain spreadsheet replacement"
> story. When planning, we gathered all information from IS (the other team)
> on what is required to request a deployment, and created a story to prepare
> specs required as part of the "epic" that is the whole spreadsheet
> replacement itself (that includes production deployment for beta-testing).
> It was a way of delimiting stories that involve another team's validation
> or services. Hint: investigation stories are valid and important stories. :)
>
Hmm, OK, so we were almost correct - we identified that CI (the other team)
would need to take these tests on. We worked with Francis to find out what
the requirements were, but the problem was that the requirements didn't
exist yet. Instead of holding off on the implementation of the card, we
decided to have a 'first pass' at defining those requirements, and took it
from there. The difference is that in your example, IS already had those
requirements set out (I'm assuming?), whereas here we're breaking new
ground... Did we do the right thing? If not, what would have been a better
approach?
>
> Regarding the tests themselves, I'd love to get a rough idea of what your
>> requirements are. I have jotted down some thoughts that might help get us
>> started:
>>
>> - All test suites in dep8/autopkgtest format.
>> - Results to be stored in a subunit binary stream.
>> - ...which is an archived artifact.
>> - Tests to be in a launchpad branch (of a specific project? any
>> project?)
>> - Evan mentioned that he didn't want to keep running tests in
>> jenkins. The tests we'd like to hand over currently run in jenkins. What's
>> the proposed alternative? How can we set up those test jobs to prove that
>> these suites are stable?
>>
>>
> I see at least two stories here for the CI team, one to define the
> criteria, and the other one to put infrastructure in place where this can
> run. I'll draft a couple of stories (if not there already), and add there
> the results of the ongoing discussions. Then we can talk in terms of
> acceptance criteria without getting into too much technical detail, which
> always focuses the conversation on _how_ things are going to be done
> instead of _why_ we are doing them; this is really important, I hope you
> don't mind me repeating that over and over. :)
>
>
Not at all :D
I believe the cards already exist, although they might be expressed as a
single card, instead of two separate ones. I've CC'd Julien in, since I
guess the two of you will be having that conversation.
Now that our first sprint is over, and we've met these requirements, I'd
love to 'close the loop' and find out what (if anything) is missing so we
can get these tests running in the CI infrastructure.
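For anyone following along, here is roughly what the first requirement on
that list (dep8/autopkgtest format) amounts to on disk; the test name below
is illustrative, not our actual suite. A package declares its tests in
debian/tests/control, and each name listed under Tests: maps to an
executable of the same name in debian/tests/:

```
# debian/tests/control -- DEP-8 test declaration (test name is illustrative)
Tests: smoke
Depends: @
Restrictions: allow-stderr
```

The runner installs the package ("Depends: @" pulls in the binary packages
built from the source under test), executes debian/tests/smoke in the
testbed, and treats a non-zero exit status as a failure; allow-stderr stops
stray stderr output from failing the test. The subunit-stream requirement is
then a matter of what that executable emits; for example, a Python suite run
via python-subunit's `python3 -m subunit.run` writes the binary stream to
stdout, where the job can capture and archive it.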
>
>> Please accept my apologies if this seems like me trying to interrupt your
>> sprint. I guess that's exactly what I'm doing, but you should let me,
>> because... reasons :D
>>
>
>
> Nah, please, keep interrupting, that's important (as long as you interrupt
> me or Evan, that's fine. :)).
>
OK. Please let me know if this mailing list is still the appropriate place
for this discussion, or if you'd rather we emailed directly.
Cheers!
--
Thomi Richards
thomi.richards@xxxxxxxxxxxxx