
desire team mailing list archive

Re: Desire's own waterfall


From: Juan Jose Garcia-Ripoll <juanjose.garciaripoll@xxxxxxxxxxxxxx>
> I know that waterfall looks cool, but could names be arranged
> horizontally? You have so much software tested it does not fit on one
> screen? Hmmm, even vertical is problematic, maybe a multicolumn table
> would be better for such a big library set? Or perhaps a matrix with
> just the ok/failure colors and a floating tooltip with the name?

First of all, it looks like there will be at least twice as many
libraries.

About layout -- any suggestions are welcome; I'm not attached to any
particular way of doing it.

I plan to use cells with the following layout (scaled down, obviously):

+------------------------------------+
|status                              |
| [[output][link://to-phase-output]] |
| b: branch-name                     |
| v: last-major-version-number       |
| +rev: n-revisions-since-l-m-v-n    |
| status-change-since-last-build     |
+------------------------------------+

...plus the library name in the header, linking to link://to-more-status,
and the library's name as a floating tooltip (as is done now).
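As a throwaway sketch of assembling those fields into one cell's text
(hypothetical helper in Python, not desire code; the field names just
mirror the ASCII mock-up above):

```python
# Assemble the per-library cell text from the fields in the mock-up.
# All names here are made up for illustration.
def render_cell(status, branch, version, revs_since, change):
    lines = [
        status,                      # phase status, e.g. "ok"/"fail"
        "b: " + branch,              # branch-name
        "v: " + version,             # last-major-version-number
        "+rev: " + str(revs_since),  # n-revisions-since-l-m-v-n
        change,                      # status-change-since-last-build
    ]
    return "\n".join(lines)

cell = render_cell("ok", "master", "0.7", 12, "unchanged")
```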

It'll be a challenge to cram so much information into a small cell,
but I don't feel there is much that can be left out.

What do you think about all of this?

>> When I initiate the build process you can see it live-updating..
> 
> How does this work? Continuously pulling data? Assistance from the web server?

Basically, by sending the headers early and then continuously doing
FLUSH-OUTPUT on the client stream, server-side.  I do have some lock
contention issues during updates, though.  But I need some sleep and
work done first :-)
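The idea can be sketched like so (in Python for brevity; the real
wishmaster is Common Lisp, and the handler here is entirely made up):
write the headers immediately, then emit and flush each cell as the
corresponding phase completes, so the browser renders the waterfall
while the build is still running.

```python
import io

def stream_waterfall(client, updates):
    """Write headers up front, then one flushed line per completed phase."""
    client.write("HTTP/1.1 200 OK\r\n")
    client.write("Content-Type: text/html\r\n")
    # No Content-Length: the connection closes when the build finishes.
    client.write("Connection: close\r\n\r\n")
    client.flush()                      # headers reach the browser now
    for library, status in updates:
        client.write(f"<div class={status!r}>{library}</div>\n")
        client.flush()                  # ...and each cell as it completes

out = io.StringIO()
stream_waterfall(out, [("alexandria", "ok"), ("cffi", "fail")])
```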

>> It's got no test output, no periodicity, and no other phases (slave fetch,
>> slave load and slave test), but these shouldn't be far off.
> 
> Your waterfall shows a kind of "tests ran and finished ok" flag, but
> in my experience testing libraries change from one to one. Did you
> introduce the commands by hand and somehow analyzed the output?

I don't think I understand the question here.  I admit I was a bit
obscure above, so I'll clarify:

The "tests" I refer to above are not library tests -- or, more precisely,
not only library tests.  They are the individual actions that the
wishmaster+client chain needs to perform for a library to be available.
The last round (or "phase") of these "tests" consists of actual
library tests.

They are:

   - upstream fetch & conversion
       performed by the UPDATE function on the wishmaster/buildmaster
   - client fetch
       performed by the UPDATE function on the buildslave
   - client load
       performed by issuing ASDF:OOS 'ASDF:LOAD-OP on the buildslave
   - client test
       ???, not implemented at all
       
Currently only the upstream fetch & conversion phase is hooked up to
the buildbot, as it is the simplest to control, being performed locally
on the wishmaster generating the webpage.  The next two
phases are sort-of implemented, but not yet hooked up.  The last phase
isn't even sketched out.
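One way to model the chain above (a rough Python sketch, not actual
wishmaster code): each phase depends on the previous one, so a failure
stops the chain rather than failing every downstream phase.

```python
# The four phases, in dependency order, as listed above.
PHASES = ["upstream-fetch", "client-fetch", "client-load", "client-test"]

def run_phases(library, runners):
    """runners maps phase name -> callable returning True on success.

    Runs phases in order; the first failure stops the chain, so
    downstream phases are simply absent from the result rather than
    marked failed."""
    results = {}
    for phase in PHASES:
        ok = runners[phase](library)
        results[phase] = "ok" if ok else "fail"
        if not ok:
            break
    return results

# Example: a library whose load phase breaks.
runners = {p: (lambda lib: True) for p in PHASES}
runners["client-load"] = lambda lib: False
result = run_phases("cffi", runners)
```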


regards,
  Samium Gromoff
--
                                 _deepfire-at-feelingofgreen.ru
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
