CDO Cloud Sprint Trip Report from CI team
This is the CI team's trip report summary (from wgrant and fginther) for
the CDO Sprint in Cape Town, Feb 2-6. The raw notes are available here:
https://docs.google.com/a/canonical.com/document/d/1L-kl2ejlamwwy9KCtSXz4pdZVrstSfUD_n7YemGDdfU/edit#heading=h.9332ygcyx07
Juju
====
Leader election
* Should make it in for 1.23.
* Once leader election is in, 1.24 or later may give the leader API access
so it can coordinate cross-unit operations like upgrades.
Actions
* These allow charm authors to define parameterized commands to execute on
a service or unit in a controlled manner (a sketch of an action script
follows this list).
* Marco Ceppi is using Actions to execute benchmark tests on openstack
deployments as part of the CABS project.
* Actions will be in 1.23 and eventually obviate SSHing into units to do
things.
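A minimal sketch of what a charm action script might look like, assuming
Python and the action-get/action-set hook tools Juju exposes to action
scripts; the action name and its 'iterations' parameter are made up for
illustration (actions are declared in the charm's actions.yaml and the
script lives under actions/):

    #!/usr/bin/env python
    # Hypothetical actions/run-benchmark script; parameter names are
    # illustrative only.
    import subprocess

    def action_get(key):
        # action-get / action-set are hook tools available to action scripts.
        return subprocess.check_output(
            ['action-get', key], universal_newlines=True).strip()

    def action_set(key, value):
        subprocess.check_call(['action-set', '%s=%s' % (key, value)])

    if __name__ == '__main__':
        iterations = int(action_get('iterations') or 1)
        # ... run the parameterized work on this unit here ...
        action_set('result', 'completed %d iterations' % iterations)

Such a script would be invoked through the juju action CLI rather than by
SSHing into the unit.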
MESS - Multi-Environment State Server
* This will allow hosting multiple deployments on a single bootstrap node
but still maintain isolation.
Charm Helpers Framework
* Use of the charmhelpers services framework was highly encouraged. It is
proving to be a better way to write charms, as it more cleanly defines
dependencies and actions (a minimal sketch follows this list).
* Example
https://code.launchpad.net/~canonical-sysadmins/canonical-is-charms/wordpress-services
* Each charm still has to embed the charmhelpers codebase, as Juju has no
support for dependencies. They’re looking at possible solutions, but
nothing is imminent.
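For reference, a minimal sketch of the services-framework pattern, assuming
the ServiceManager API from charmhelpers.core.services; the service name,
relation, and callback below are illustrative and not taken from the
wordpress example above:

    # hooks/services.py (sketch) - each hook is typically a symlink to a
    # small script that calls manage(), so the same definition runs on
    # every event.
    from charmhelpers.core.services.base import ServiceManager
    from charmhelpers.core.services import helpers

    class DatabaseRelation(helpers.RelationContext):
        # Gates the service until the 'db' relation provides these keys.
        name = 'db'
        interface = 'mysql'
        required_keys = ['host', 'user', 'password', 'database']

    def write_config(service_name):
        # Plain data_ready callback: render configuration for the service.
        pass

    def manage():
        manager = ServiceManager([{
            'service': 'my-app',        # system service to (re)start
            'ports': [8080],            # opened once required_data is ready
            'required_data': [DatabaseRelation()],
            'data_ready': [write_config],
        }])
        manager.manage()

    if __name__ == '__main__':
        manage()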
Charm coding best practices
* Ben Saller and his team have been working on breaking up large charms
(using a 'role' pattern) into smaller charms that share code via
charmhelpers/contrib.
* Example https://code.launchpad.net/~bigdata-dev/
* Juju charm dependencies will make this much cleaner.
CABS - Canonical Automated Benchmarking Service
* This is a project to run standard benchmarks on cloud deployments. For
example, we would run this on the bootstack cloud itself.
Other Juju bits
* Storage is becoming a first-class citizen; storage-brokers will no longer
be needed. This will land after 1.23.
* Ben Saller is looking at environment upgrades, which will probably
require a redesign of the bundle format. It’s currently declarative, and
evolving between different declarative formats is difficult.
* Proper support for LXC containers within an environment’s machines is
coming. No more manual network configuration hacks.
* Cross-environment relations aren’t on any definite roadmap. But Ben
Saller is working on virtual services(?) to replace proxy charms. MESS
won’t directly help this, but MESS and virtual services provide a workable
solution and a path to a proper fix.
* Jujufying existing machines in-place is difficult, but being considered.
We’re only really interested in it for a few weird or large hosts, but
other customers don’t want to tear down and redeploy their entire
production deployment just to add Juju.
* Work on a ‘juju reconciler’ is in progress. The idea is to allow
upgrading of complex deployments that need to add and/or remove
relationships as part of the process. The reconciler will be given a goal
state for the deployment (via juju-deployer-style YAML) and determine what
relationship changes are needed to get there. Ben Saller’s team is
currently working on something external to juju to do this for the
cloud-foundry charms, but it is expected to eventually be in juju-core. See
~cf-charmers/charms/trusty/cloudfoundry/trunk
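To make the reconciler idea concrete, here is a toy illustration of the
relation-diffing step it implies: given the current relations and a goal
state, work out which relations to add and which to remove. This is purely
hypothetical and is not the cloud-foundry tooling or anything in juju-core:

    # Toy sketch: diff current relations against a goal state.
    def diff_relations(current, goal):
        # Each relation is a ('service:endpoint', 'service:endpoint') pair;
        # order within a pair does not matter.
        normalize = lambda rels: {tuple(sorted(pair)) for pair in rels}
        current, goal = normalize(current), normalize(goal)
        return {'add': sorted(goal - current),
                'remove': sorted(current - goal)}

    if __name__ == '__main__':
        current = [('wordpress:db', 'mysql:db')]
        goal = [('wordpress:db', 'postgresql:db'),
                ('wordpress:cache', 'memcached:cache')]
        # The reconciler would then add/remove each relation (e.g. via the
        # juju add-relation / remove-relation commands or the API).
        print(diff_relations(current, goal))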
Snappy
======
Planning / Roadmaps
* There were multiple sessions on deploying snappy with existing
technologies. These were primarily preliminary planning sessions to get
some investigative tasks on the CDO teams’ roadmaps.
Juju and Snappy
* How to deploy snappy images with juju? This would require something like
a juju framework.
* Juju and snappy could also be combined to provide orchestration within
the host. For example, a hypervisor framework is deployed to the snappy
system and juju is used to manage multiple containers running on that
system.
Landscape and Snappy
* Landscape provides server upgrade features. There was a discussion about
which of these would be useful to leverage for snappy, and whether Landscape
could scale to IoT.
MaaS and Snappy
* Most of the work needed to actually deploy a snappy image would be in
curtin.
* MaaS would continue to use its own image for bootstrapping, but then
use curtin to lay down the snappy image and partitioning.
* Support of deploying to IoT devices is really dependent upon the device
boot capabilities. Questions exist for how one would provision devices
without PXE boot.
Containers and Hypervisors
* A very useful pattern for snappy is to deploy a hypervisor framework
(Docker, LXD, KVM, etc.) and then deliver apps as containers.
* Support for an LXD/LXC framework is a priority
Snappy Apps on Server Images
* There is a strong desire to allow installation of snappy apps on regular
server images. This gives developers a single 'binary' to provide that
supports both snappy and server. It also allows for faster delivery through
the store than through the Ubuntu archive.
* This is expected to be supported via installation of the snappy packages.
Bootstack HA
============
The upgrade of the bootstack cloud is planned.
* The next version will be an HA OpenStack deployment with more CPU and
memory capacity than the current cloud.
* The new bootstack will be brought up in parallel to allow for verification
and migration of services.
--
Francis Ginther
Canonical - Ubuntu Engineering - Continuous Integration Team