
canonical-ci-engineering team mailing list archive

CI/CD proposed changes

 

Hi Team,

Francis, Celso, Evan and I had a discussion today to plan some further
improvements to the CI/CD system on wendigo. What follows is a rough
outline of some aspects we’d like to change:

Service Configuration:

Currently every service is configured by changing options in the crontab
file. Instead, each service will get its own config file at
/srv/mojo/LOCAL/<service-name>/config.ini, and every <service-name>
directory will be its own bazaar branch. Even though services share many
configuration options, we want a separate file for each service.
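
As a very rough sketch (section and option names below are entirely made
up, just to give the flavour), a per-service config.ini might look like:

    [service]
    mojo_spec = lp:<spec-branch-for-this-service>
    mojo_stage = <stage>

    [branches]
    payload = lp:<payload-branch>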


adt-continuous-deployer changes:

mojo.py and cd.py would drop the special per-service config arguments. The
wendigo crontab file would be hugely simplified - essentially just calling
cd.py for every service we deploy on a regular basis.
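
Something along these lines (the cd.py path, arguments and schedule are
illustrative only - the real invocation is still to be decided):

    # wendigo crontab: one entry per continuously-deployed service
    */15 * * * * /srv/mojo/scripts/cd.py <service-one>
    */15 * * * * /srv/mojo/scripts/cd.py <service-two>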

cd.py would be configured to look at the revision of the config branch in
/srv/mojo/LOCAL/<service-name>, alongside all the other bzr revisions it
already considers, as triggers for re-deploying a service. This means that
if you want to tweak the config for a service, you need to:


   1. Edit the config in /srv/mojo/LOCAL/<service-name>/config.ini
   2. Commit your changes to the bzr branch.


Committing config changes gives us a log of who changed what, allows us to
easily undo bad config changes, and triggers a deployment (since the revno
changes with every commit).
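
In practice, tweaking a service’s config would just be the usual bzr
workflow (commit messages and revision numbers here are only examples):

    cd /srv/mojo/LOCAL/<service-name>
    # edit config.ini, then:
    bzr commit -m "Increase worker count"
    bzr log --line     # the audit trail of who changed what
    bzr revert -r -2   # pull a bad change back out of the working tree...
    bzr commit -m "Back out previous config change"   # ...and commit the undo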

We’d drop the ci-cd-identifiers directory. Instead, metadata about the
deployment would be written onto each nova instance. This would be done in
the mojo spec with the nova client (‘nova meta <server> set key=value’),
or perhaps through the python API (see the sketch after the list below).
The recorded metadata will include (but is not necessarily limited to):


   - bzr revision for all branches deployed, as separate key/value pairs
   - A hash of all bzr revisions together (like we do now)

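If we go via the python API instead of the nova CLI, python-novaclient’s
servers.set_meta() does the job. A minimal sketch (credentials, server
names and key names are all placeholders):

    from novaclient import client

    # Credentials would come from the environment/novarc in the spec,
    # not be hard-coded like this.
    nova = client.Client('2', 'user', 'pass', 'tenant',
                         'http://keystone.example.com:5000/v2.0')

    server = nova.servers.find(name='<service-name>-unit-0')
    nova.servers.set_meta(server, {
        'bzr-revno-payload': '1234',   # one key per deployed branch
        'bzr-revno-config': '57',
        'deploy-hash': '<hash-of-all-revnos>',
    })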

list.py would then read this metadata to determine what’s deployed. cd.py
would either read the metadata directly, or read the output of list.py, to
determine whether anything needs to be re-deployed.
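
With that in place, the re-deploy check in cd.py could boil down to a
simple comparison, roughly like this (both helpers are hypothetical):

    def needs_redeploy(service_name):
        """Return True if any branch (including the config branch in
        /srv/mojo/LOCAL/<service-name>) has moved since the last deploy."""
        deployed = get_deployed_metadata(service_name)  # branch -> revno, from nova
        current = get_current_revnos(service_name)      # branch -> revno, from bzr
        return deployed != current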

Future Autoscaling Ideas:

When we need it, an “autoscaling-agent” would be deployed to the bootstrap
nodes of any service that needs scaling. It would read some external input
(maybe a queue size, maybe something else), and talk to juju to scale up or
down the number of instances of the service.
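
As a sketch only (queue_depth() and current_unit_count() are imagined,
and picking which units to remove needs real logic), the agent’s core
loop might look like:

    import subprocess
    import time

    def scale_to(service, desired, current):
        """Ask juju to add or remove units until 'service' has 'desired' units."""
        if desired > current:
            subprocess.check_call(
                ['juju', 'add-unit', service, '-n', str(desired - current)])
        elif desired < current:
            # juju remove-unit wants explicit unit names, e.g. <service>/3
            for i in range(desired, current):
                subprocess.check_call(
                    ['juju', 'remove-unit', '%s/%d' % (service, i)])

    while True:
        backlog = queue_depth()  # whatever the external input turns out to be
        wanted = min(10, max(1, backlog // 100))
        scale_to('<service-name>', wanted, current_unit_count('<service-name>'))
        time.sleep(60)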


We plan to do this in the next sprint. Please read through the above, and
reply with any thoughts you have. I think what's outlined above is a sane
"next step" towards the ultimate CD solution :D


Cheers,

-- 
Thomi Richards
thomi.richards@xxxxxxxxxxxxx
