canonical-ci-engineering team mailing list archive
Message #00807
Re: Noisy tests
>>>>> Celso Providelo <celso.providelo@xxxxxxxxxxxxx> writes:
> Vincent,
> You have raised this issue while reviewing one of my cleanup branches
> and I decided to move the discussion to the public ML, so the rest of
> the team can easily participate.
> On Fri, May 9, 2014 at 6:04 AM, Vincent Ladeuil <vila@xxxxxxxxxxxxx> wrote:
>> Review: Approve
>>
>> Lovely cleanup of code and namespace \o/
>>
>> 337 with LogCapture() as lc:
>> 338 logger = logging.getLogger()
>> 339 - self.cli.main(args, log=logger)
>> 340 + with mock.patch('sys.stdout'):
>> 341 + self.cli.main(args, log=logger)
>>
>> Care to have an out-of-review discussion about that? Some ideas: I'd love
>> to see a better story around noisy tests. If I get your intent right here,
>> you're silencing one by ignoring sys.stdout; maybe there is valuable
>> data there to check in the test. Also, couldn't we unify all output in
>> loggers? (Even if that means having one still going to stdout with the
>> same content, we would then have a single point to capture/redirect for
>> tests.)
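To make the review comment concrete, here is a minimal sketch of the alternative it suggests: instead of discarding sys.stdout, capture it so the test can assert on whatever the code printed. `greet` is a hypothetical stand-in for code that writes to stdout; it is not from the branch under review.

```python
import io
import unittest
from unittest import mock


def greet(name):
    # Hypothetical stand-in for production code that prints to stdout.
    print("Hello, %s" % name)


class TestGreet(unittest.TestCase):

    def test_output_is_checked(self):
        captured = io.StringIO()
        # Same mock.patch('sys.stdout') pattern as in the diff, but the
        # replacement stream is kept so its contents can be verified.
        with mock.patch('sys.stdout', new=captured):
            greet("world")
        # The output is available for assertions instead of being lost.
        self.assertEqual("Hello, world\n", captured.getvalue())
```

This keeps the test suite quiet while still checking that the data sent to stdout is what we expect.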
> Right, as part of this and other previous cleanups I have
> *quietened* lots of tests that leaked data through STDOUT, solely
> because it bothered me while reading the test suite results (I
> have broken the test suite a lot lately).
Cough, you're very welcome about that ;)
> So, first, we have to agree, as a team, that stdout noise in the
> test suite output is a real issue for us, because in practice it
> does not really affect the tests results.
+42
It doesn't affect the test failures but makes them harder to read because
the test traceback displayed at the end is not correlated with the
output, so you don't know which test output what, be it relevant or
useless.
Likewise, it is far easier to get useful data out of a test run that
outputs only the test name and the time spent in that test (shameless
plug for run-tests ;) as well as a summary at the end of the number of
tests run, failed, errored or skipped.
> I say that because, from what I saw, the relevant output
> information that is important for tests is already using logger
> and is captured and checked accordingly.
Not in every case, juju deployer for one.
> If that is the case, we should probably instrument our default
> test runner to do the STDOUT suppressing and register/list tests
> that are offending this rule, turning it into a failure once the
> existing tests are all fixed (so it will never happen again).
That's one way. I'm not a huge fan of it because it puts the
responsibility of output capture out of reach of the test writer.
I'd rather make it a fixture that any test (or test class) can setup so
that if interesting bits are there they could be checked easily in the
test and still allow specific implementations of that capture without
putting a requirement on the test runner or even a test base class.
Hence my proposal above to stop using stdout/stderr as much as possible
in our code and instead use Python's logging exclusively, where message
classification, selection and formatting are already well
defined. (I.e. we can still use stdout/stderr but inside the loggers.)
Andy and Ursula both encountered cases where modules had to configure
their sub-module loggers; tests are just a different case which also
requires setting up loggers for modules and their sub-modules.
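A sketch of what that unification could look like, under the assumptions above: production code logs instead of printing, one StreamHandler going to stdout preserves the normal console behaviour, and tests redirect that single point to a buffer. The `myproject` logger name and `do_work` are illustrative only.

```python
import io
import logging
import sys


def configure(stream=None):
    """Route all project output through one logger; its single handler
    is the one point tests need to capture or redirect."""
    logger = logging.getLogger('myproject')   # illustrative logger name
    logger.handlers = []                      # replace any previous handler
    handler = logging.StreamHandler(stream if stream is not None else sys.stdout)
    handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.propagate = False                  # keep output at this one point
    return logger


def do_work():
    # Production code logs instead of printing.
    logging.getLogger('myproject').info('deploying bundle')


# Test run: redirect the single capture point to a buffer and assert on it.
buf = io.StringIO()
configure(stream=buf)
do_work()
assert 'INFO: deploying bundle' in buf.getvalue()
```

Sub-module loggers (`myproject.foo`, `myproject.foo.bar`) inherit this configuration through the logging hierarchy, which also covers the case Andy and Ursula hit.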
> If we all agree, it would be a nice feature to be implemented
> right after Phase-0 release, as part of the project "Technical
> Debt" list that we could build together.
I think you mean uce-0 here, we used phase-0 for Mt Hood ;)
> I'm sure there are other testing-infrastructure issues that should
> be tackled across the entire project as soon as possible, ideally
> in the beginning of a new development cycle to avoid major
> disruption/distractions.
Full agreement here, especially since I've been saying exactly that at
the end of phase-0 ;-D
Vincent