Update on TripleO with already provisioned servers

In a previous post, I talked about using TripleO with already deployed and provisioned servers. Since that was published, TripleO has made a lot of progress in this area. I figured it was about time for an update on where the project is with this feature.

Throughout the Ocata cycle, I’ve had the chance to help make this feature more
mature and easier to consume for production deployments.

Perhaps most importantly, the servers are now configured to pull their deployment metadata from Heat using a Swift Temporary URL, instead of having to rely on a Keystone username and password.
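For context, a Swift Temporary URL works by signing the request method, an expiry timestamp, and the object path with a shared secret, so the servers can fetch their metadata without any Keystone credentials. A minimal sketch of how such a URL is signed (the account, container, object, and key names here are made up for illustration):

```python
# Sketch of how a Swift Temporary URL is signed, per the Swift TempURL
# middleware scheme: HMAC-SHA1 over "<method>\n<expires>\n<path>".
# The path and key below are illustrative, not real TripleO values.
import hmac
from hashlib import sha1
from time import time

def make_temp_url(path, key, method="GET", valid_for=3600):
    """Return the object path with temp URL query parameters appended."""
    expires = int(time() + valid_for)
    body = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), body, sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

url = make_temp_url("/v1/AUTH_demo/metadata/deployed-server", "secret-key")
```

Anyone holding the resulting URL can fetch that one object until the expiry passes, which is exactly the property needed for unattended metadata polling.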

Also, instead of having to bootstrap the servers with all of the packages and
initial configuration that TripleO typically expects on instances deployed from
its pre-built images, you can now start with a basic CentOS image that has only
the initial python-heat-agent packages installed and the agent running.
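Once the heat agent packages are installed, it's os-collect-config that polls for the deployment metadata. As a rough sketch, its configuration using the request collector might look something like the following (the metadata URL value is illustrative, standing in for the Swift Temporary URL):

```ini
# Sketch of /etc/os-collect-config.conf using the request collector.
# The metadata_url value is illustrative only.
[DEFAULT]
collectors = request
command = os-refresh-config

[request]
metadata_url = http://swift.example.com/v1/AUTH_demo/metadata/deployed-server?temp_url_sig=...&temp_url_expires=...
```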

There have also been other bug fixes and enhancements to make this work with
features such as network isolation and fixed, predictable IPs on all
networks.
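For the fixed, predictable IPs, tripleo-heat-templates has "ips from pool" style environments that pin each role's addresses per network. A minimal sketch of such an environment (the addresses, and limiting it to the Controller role, are illustrative assumptions):

```yaml
# Sketch of an ips-from-pool style environment file assigning fixed
# per-network IPs to the Controller role. Addresses are illustrative.
parameter_defaults:
  ControllerIPs:
    internal_api:
      - 172.16.2.10
    storage:
      - 172.16.1.10
    tenant:
      - 172.16.0.10
```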

I’ve started on some documentation that shows how to use this feature for
TripleO deployments: https://review.openstack.org/#/c/420369/
The documentation is still in progress, but I invite people to give it a try
and let me know how it works.

Using this feature, I’ve been able to deploy an Overcloud on 4 servers in a
remote lab from a virtualized Undercloud running in an entirely different lab.
There’s no L2 provisioning network connecting the 2 labs, and I don’t have
access to run a DHCP server on it anyway. The 4 Overcloud servers were
initially provisioned with the existing lab provisioning system
(cobbler/kickstart).
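For reference, a deployment like this is still driven by the usual overcloud deploy command, with the deployed-server environment included. A rough sketch, assuming the standard template location on the Undercloud (the network environment file is a placeholder for your own):

```shell
# Rough sketch of a deployed-server overcloud deployment; the
# network-environment.yaml file is an illustrative placeholder.
openstack overcloud deploy \
  --templates \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e network-environment.yaml
```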

This flexibility builds on the composable nature of the
tripleo-heat-templates framework that we've been developing in TripleO,
since it allows integration with already existing provisioning environments.

Additionally, we’ve been using this capability extensively in our
Continuous Integration tests. Since TripleO does not have to be responsible for
provisioning the initial operating system on instances, we’ve been able to make
use of virtual instances provided by the OpenStack Infra project and
their managed Nodepool instance.

Like all other OpenStack CI jobs running in the standard check and gate queues,
our jobs are spread across several redundant OpenStack clouds. That means we
have a lot more virtual compute capacity for running tests than we previously
had available.

We’ve further been able to define jobs using 2, 3, and 4 nodes in
the same test. These multinode tests, and the increased capacity, allow us to
test different deployment scenarios such as customized composable roles, and
recently, a job upgrading from the previous OpenStack release all the way to
master.

We’ve also scaled out our testing using scenario tests. Scenario tests allow us
to run a test with a specific configuration based on which files are actually
modified by the patch being tested. This allows the project to make
sure that patches affecting a given service are actually tested, since a
scenario test deploying that service will be triggered. This is important for
scaling our CI testing, because it's unrealistic to expect to deploy
every possible OpenStack service on every single TripleO patch and verify that
it can be initially deployed, is functional, and can be upgraded.

If this is something you try out and have any feedback, I’d love to hear it and
see how we could improve this feature and make it easier to use.

