TripleO and Juno

I know in my last post, I said I’d be focusing on CI/CD next. But, given that I’ve been a bit lax in blogging, I decided to post about what we’re working on for TripleO in the OpenStack Juno timeframe.

We had a TripleO midcycle meetup hosted at the Red Hat HQ in Raleigh, NC at the end of July. The meetup was very well attended, and as a community we continued some of the discussions from the Atlanta Summit as well as focused on what we want to finish up in the current cycle.

I’ll highlight some of those items here, in no particular order.



Ironic

TripleO has now switched to using Ironic by default instead of Nova Baremetal. Ironic’s REST API offers many more features specific to baremetal node management than were offered in the baremetal API extensions in Nova. The Nova Baremetal driver has also been deprecated in the Nova codebase. While it remains to be seen if Ironic will graduate in the Juno release, it’s definitely the path forward for TripleO and baremetal provisioning in OpenStack. So, it’s good that we’re getting more folks testing it out and using it exclusively.
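
As a rough illustration, registering a node with Ironic’s REST API boils down to POSTing a JSON document describing the node’s driver and power management credentials. This is just a sketch: the driver name and all of the values below are placeholders, and the exact fields you need depend on your driver and environment.

```python
import json

# Hypothetical payload for registering a baremetal node with Ironic's
# REST API (POST /v1/nodes). All values here are placeholders; the
# required driver_info keys vary by driver.
node = {
    "driver": "pxe_ipmitool",
    "driver_info": {
        "ipmi_address": "192.0.2.10",   # placeholder BMC address
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
    "properties": {
        "cpus": 8,
        "memory_mb": 16384,
        "local_gb": 500,
    },
}

# In practice you'd POST this with an authenticated client; here we
# just show the shape of the document.
print(json.dumps(node, indent=2))
```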

High Availability

Work has continued on making TripleO deploy HA clouds by default. A standard set of technologies is in use to accomplish this goal: HAProxy, keepalived, and Pacemaker. The stock MySQL and MariaDB packages have also been replaced with Percona XtraDB Cluster or MariaDB Galera Cluster. For folks wanting to test out TripleO deployments or do development with fewer nodes than HA requires, they’ll be able to deploy HA “clusters” of 1 node. This means all TripleO users will be using the same HA-based configurations and deployments, which is beneficial in that it gets everyone testing the HA configurations.
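
To picture what the HAProxy piece buys you, here’s a toy round-robin balancer in Python. The backend names are made up, and real HAProxy obviously does much more (health checks, connection handling, VIP failover via keepalived), but it shows why a 1-node “cluster” works with the exact same configuration: the balancer simply cycles over a single backend.

```python
from itertools import cycle

def make_balancer(backends):
    """Toy round-robin balancer: each call returns the next backend."""
    ring = cycle(backends)
    return lambda: next(ring)

# A 3-node HA control plane vs. a 1-node "cluster" for dev/test.
ha_pick = make_balancer(["ctl0:5000", "ctl1:5000", "ctl2:5000"])
single_pick = make_balancer(["ctl0:5000"])

print([ha_pick() for _ in range(4)])      # wraps around after ctl2
print([single_pick() for _ in range(2)])  # always the same backend
```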

Heat Templates

A lot of work has gone into the set of Heat templates that TripleO uses to deploy OpenStack. As part of the Icehouse release, the templates were updated to use the new SoftwareConfig and SoftwareDeployment resources that were added to Heat. As part of Juno, the templates are being further refactored to use Provider resources and Environments. TripleO is also working to drive features in Heat directly that were previously done out of band — things like scaling resources and merging multiple reusable templates into one stack.
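
To give a feel for what Provider resources and Environments look like, here’s a minimal sketch of a Heat environment expressed as a Python dict: the resource_registry maps a custom resource type to a template that implements it. The type names echo the TripleO naming style, but the file names and parameter are illustrative, not the actual tripleo-heat-templates contents.

```python
# Sketch of a Heat environment: resource_registry maps provider
# resource types to the templates that implement them, so a role's
# implementation can be swapped without touching the parent stack.
# File names and the parameter below are illustrative only.
environment = {
    "resource_registry": {
        "OS::TripleO::Controller": "controller.yaml",
        "OS::TripleO::Compute": "compute.yaml",
    },
    "parameters": {
        "ComputeCount": 3,
    },
}

print(environment["resource_registry"]["OS::TripleO::Compute"])
```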

Multi-hypervisor Support

One of the things we discussed at the Atlanta Summit was support for deploying OpenStack on hypervisors other than baremetal. The main driver behind this idea is using Docker to deploy containers running your TripleO Overcloud control services. However, since TripleO uses Nova, we can in theory support any hypervisor that Nova supports, such as Libvirt. So, we generalized the idea to Multi-hypervisor support. Docker, and containers in general, are attractive to many people due to their isolation, security, and potential upgrade/rollback patterns. Security is a particular driver: if an Overcloud control service were to be compromised, instead of having direct access to baremetal, you’re still contained within a …container.


Tuskar

Lots of work has gone into Tuskar to make it more of a cloud planning and deployment service and UI. The aforementioned Heat template refactoring will give Tuskar users greater ability to designate roles for inclusion in their cloud, and to scale and configure those roles individually. Roles are things like Control, Compute, Block Storage, and Object Storage. The Tuskar UI will also have tighter integration with Ceilometer in order to show usage and metric graphs for your cloud.
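
As a rough mental model, a Tuskar “plan” is a set of roles that can each be scaled and configured on their own. The sketch below is hypothetical data, not Tuskar’s actual API or field names; it just shows the independence of the roles.

```python
# Hypothetical model of a cloud plan: each role carries its own scale
# count and configuration. Field names are illustrative, not Tuskar's.
plan = {
    "Control": {"count": 1, "flavor": "control"},
    "Compute": {"count": 2, "flavor": "compute"},
    "BlockStorage": {"count": 0, "flavor": "storage"},
    "ObjectStorage": {"count": 0, "flavor": "storage"},
}

def scale_role(plan, role, count):
    """Adjust only the named role, leaving the others untouched."""
    plan[role]["count"] = count

scale_role(plan, "Compute", 5)
print(plan["Compute"])
```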


os-cloud-config

os-cloud-config is a newer project under the TripleO umbrella. It aids in the initial bootstrap of a deployed cloud: things like keystone initialization, registering API endpoints, registering baremetal nodes for use in a deployment, etc. Previously, these things were being done by a set of scripts using the OpenStack CLIs. Moving this functionality into its own project with proper releases and testing makes it more consumable.
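
To make the endpoint-registration step concrete, here’s a hedged sketch of the kind of data such a bootstrap works from, plus a loop that hands each entry to a registration callback. The function and field names are illustrative, not os-cloud-config’s actual interface.

```python
# Illustrative bootstrap data: the service endpoints to register in
# Keystone after a deploy. URLs and field names are placeholders.
endpoints = [
    {"service": "nova", "type": "compute", "url": "http://192.0.2.1:8774/v2"},
    {"service": "glance", "type": "image", "url": "http://192.0.2.1:9292"},
]

def register_endpoints(endpoints, register):
    """Call the supplied register() callback once per endpoint."""
    for ep in endpoints:
        register(ep["service"], ep["type"], ep["url"])

# In a real tool, register() would talk to Keystone; here we just
# collect the calls to show the flow.
registered = []
register_endpoints(endpoints, lambda *args: registered.append(args))
print(registered)
```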


os-net-config

Another new project, os-net-config, standardizes applying configuration of a network stack on a deployed node. It has a pluggable backend, so it will support different network configuration schemes such as iproute2, ifconfig, or /etc/network/interfaces (in the Ubuntu/Debian world). os-net-config will be driven by a common JSON representation of the desired network configuration, and then use the chosen backend to realize that configuration on the node.
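
The JSON below follows the general shape of that common representation (a list of typed network entries), and a tiny renderer shows how one pluggable backend might turn an entry into ifcfg-style lines. The renderer is a hypothetical sketch for illustration, not os-net-config code.

```python
import json

# A small network config in the general shape of os-net-config input:
# a "network_config" list of typed entries.
config = json.loads("""
{
  "network_config": [
    {"type": "interface", "name": "em1", "use_dhcp": true}
  ]
}
""")

def render_ifcfg(iface):
    """Hypothetical backend: render one interface entry as ifcfg lines."""
    lines = ["DEVICE=%s" % iface["name"]]
    if iface.get("use_dhcp"):
        lines.append("BOOTPROTO=dhcp")
    return "\n".join(lines)

print(render_ifcfg(config["network_config"][0]))
```

A different backend would consume the exact same JSON and emit, say, iproute2 commands instead, which is the point of the pluggable design.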


Continuous Integration

Finally, on the Continuous Integration front, we now have 2 racks running our TripleO CI jobs. This means that on every TripleO commit, multiple jobs doing end-to-end testing of deploying OpenStack using TripleO are run. We’re trying to run jobs that span the possible configuration permutations, while keeping in mind we’re limited by the amount of physical hardware we have. So, we’re varying things like Fedora vs. Ubuntu, Nova Baremetal vs. Ironic, and HA vs. non-HA.
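
For a sense of scale, the full matrix of those three axes is easy to enumerate; the actual job list is curated by hand to fit the hardware we have, so this is just an upper bound, and the job names are made up.

```python
from itertools import product

# Enumerate the full configuration matrix from the three axes the CI
# jobs vary. Names are illustrative, not real job names.
distros = ["fedora", "ubuntu"]
drivers = ["ironic", "nova-baremetal"]
ha_modes = ["ha", "non-ha"]

jobs = ["-".join(combo) for combo in product(distros, drivers, ha_modes)]
print(len(jobs))
print(jobs)
```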

There are lots of other things going on as well, but I think that’s enough to highlight for now. If you have further interest, keep an eye on our tripleo-specs repository where we’re reviewing what might be coming up next.


3 comments on “TripleO and Juno”

  1. Lennie says:

    Have you considered running the undercloud in containers?

  2. slagle says:


    Yes, we have. The general pattern in TripleO is to roll out new features to the Overcloud first, as these are typically more user facing. In this case, the users are deployers. Once we validate the set of changes to deploy to containers (image building, templates, configuration, etc), there’s no reason we can’t apply the same features to the Undercloud as well.
