TripleO with already deployed servers

Recently I’ve been prototyping how to use TripleO with already deployed
and provisioned servers. In such a scenario, Nova and Ironic would not be used
to do the initial operating system provisioning of the overcloud nodes.
Instead, the nodes would already be powered on, running an OS, and ready for
OpenStack configuration to begin.

There are a couple of reasons why I find this worth prototyping. It would allow
users to make use of other provisioning systems and technologies, such as
Foreman, Cobbler, kickstart, etc. It would also allow users or developers
to test on other virtual infrastructure, since it would be possible to
deploy to any virt instances, even ones that can’t be PXE provisioned.

It’s worth mentioning how this concept relates to other ongoing work in TripleO,
such as OpenStack Virtual Baremetal (OVB) and split-stack. OVB is an effort to
use OpenStack itself to create virt instances as needed for TripleO testing.
The prototype I’ve explored could use OpenStack itself (as I’ll show), but it
doesn’t have to, as it can make use of any running server, including actual
baremetal. OVB also still exercises Nova and Ironic to do the provisioning,
whereas the deployed server idea does not.

Split-stack is a concept of splitting the single overcloud stack in TripleO
into 2 or more stacks. The stacks would be split along primary
responsibilities, such as infrastructure provisioning, network configuration,
bootstrap configuration, and OpenStack configuration. Not all the stacks would
be required, so split-stack would also allow for using already deployed servers
that were provisioned with other tools. Split-stack is an architecture change
for TripleO, and is probably a little ways down the roadmap.

Instead, I wanted to prototype a solution that would fit relatively easily into
the existing architecture. To do so, the prototype makes the OS::Nova::Server
resource pluggable in tripleo-heat-templates, via an OS::TripleO::Server
resource. By default, OS::TripleO::Server is just mapped back to
OS::Nova::Server.

To use already deployed servers, I use Heat’s resource_registry to instead
map OS::TripleO::Server to a new nested stack called deployed-server.yaml.
This nested stack contains no OS::Nova::Server resources, so no Nova servers
will be created.
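
In environment-file form, the two mappings amount to something like this (the
exact template paths here are illustrative):

# Default: OS::TripleO::Server is an alias for OS::Nova::Server
resource_registry:
  OS::TripleO::Server: OS::Nova::Server

# deployed-server-environment.yaml: swap in the nested stack instead
resource_registry:
  OS::TripleO::Server: deployed-server/deployed-server.yaml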

It needs to have the same interface (properties/outputs) as OS::Nova::Server so
that it’s a pluggable replacement in the templates. To do so, it applies some
SoftwareDeployments to the deployed servers to query for their hostnames and IP
addresses, and sets those as outputs on the stack, since those values are
needed elsewhere in the templates.
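
As a rough sketch of the idea (using resource names that show up in the stack
listings later, but simplified), the hostname portion of deployed-server.yaml
could look something like this:

heat_template_version: 2014-10-16

resources:
  deployed-server:
    # Stand-in for the real server; os-collect-config on the actual
    # machine is pointed at this resource instead of a Nova server
    type: OS::TripleO::DeployedServerConfig

  HostsEntryConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        hostname

  HostsEntryDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: HostsEntryConfig}
      server: {get_resource: deployed-server}

outputs:
  name:
    # Mirrors the name attribute of OS::Nova::Server
    value: {get_attr: [HostsEntryDeployment, deploy_stdout]}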

In essence, how it works is that the SoftwareDeployments used to apply the
network configuration and puppet manifests to the overcloud nodes will be
associated with this nested stack instead of an instance of OS::Nova::Server.

The deployed servers will be configured out of band to query for available
SoftwareDeployments for their associated nested stack, and they’ll then run the
necessary hooks (puppet/script/os-apply-config) to apply the configuration to
create an overcloud.

There are a few other patches needed to make this all work. I won’t detail
them all here, but I used a single topic branch called “deployed-server” in
gerrit so they’re all grouped together.

Configuring the networking on the servers can be a bit of a challenge
depending on the infrastructure in use. For instance, if you can’t route
traffic for a private subnet due to firewall configuration outside your
control, things get a bit more difficult. In those cases, tunnels or
VPNs could be used. I plan to detail some of the networking configurations in
a later post.

To test it out initially though, I decided to use the Rackspace public cloud,
where I could create a private network with its own dedicated subnet that I
controlled. I hadn’t actually used the Rackspace public cloud directly in a
few months; overall I was really pleased with the web Control Panel and the
performance of the instances.

I created a new network and called it “ctlplane”, and gave it the default
192.0.2.0/24 subnet that TripleO uses for deployment:

[Screenshot: the “ctlplane” network with its 192.0.2.0/24 subnet in the Rackspace Control Panel]
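
For reference, the rough CLI equivalent (assuming the cloud exposes the
Neutron API) would be:

neutron net-create ctlplane
neutron subnet-create --name ctlplane-subnet ctlplane 192.0.2.0/24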

I then created 3 servers in the cloud, and made sure to attach each one to the
ctlplane network that I had created. I used the “7.5 GB Compute v1” flavor,
which has 4 vcpus and 7.5 GB ram. Of the 3 servers, one would be the
undercloud, and the other 2 would be for the overcloud nodes.

[Screenshot: the three servers attached to the ctlplane network in the Rackspace Control Panel]

On the undercloud server, I just installed a normal TripleO undercloud using
the standard process. For the local_interface configuration setting, I
specified eth2, since that was the interface connected to the ctlplane network
I had created in the cloud.
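
In other words, nothing special; just the usual undercloud.conf with the
interface pointed at the cloud network, followed by the standard install
command:

# undercloud.conf (excerpt)
[DEFAULT]
local_interface = eth2

openstack undercloud install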

For the 2 deployed servers, I launched the vanilla CentOS 7 image offered by
Rackspace in their cloud. Once the servers were up, I used a script to add the
needed packages and initial configuration to the servers. The goal of the
script is just to make each instance look the same as the initial
overcloud-full image, nothing more than that. The bulk of that work simply
uses instack to apply the same elements that are used in the diskimage-builder
build of overcloud-full.
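
The core of the script is really just an instack invocation; here’s a minimal
sketch, where the element list is illustrative rather than the real one from
the overcloud-full image definition:

#!/bin/bash
# Assumes the RDO/TripleO package repositories are already enabled
set -eux
yum -y install instack
export ELEMENTS_PATH=/usr/share/diskimage-builder/elements:/usr/share/tripleo-image-elements:/usr/share/tripleo-puppet-elements
# Apply the same elements diskimage-builder would bake into the
# overcloud-full image, running their install hooks on the live system
instack \
  -e centos7 enable-packages-install overcloud-full \
  -k extra-data pre-install install post-install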

At this point, I’m ready to start the overcloud deployment.

Here’s what my deployment command looks like:

openstack overcloud deploy \
 --control-scale 1 \
 --compute-scale 1 \
 --overcloud-ssh-user root \
 --ntp-server clock.redhat.com \
 --templates /home/stack/deployed-server/tripleo-heat-templates \
 -e /home/stack/deployed-server/tripleo-heat-templates/environments/puppet-pacemaker.yaml \
 -e /home/stack/deployed-server/tripleo-heat-templates/deployed-server/deployed-server-environment.yaml \
 -e /home/stack/deployed-server/deployed-server-hosts.yaml

And the contents of deployed-server-hosts.yaml:

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/deployed-server/tripleo-heat-templates/net-config-static-bridge.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/deployed-server/tripleo-heat-templates/net-config-static.yaml

parameter_defaults:
  NeutronPublicInterface: nic3
  HypervisorNeutronPublicInterface: nic3
  ControlPlaneDefaultRoute: "192.0.2.1"
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: "192.0.2.1"
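
For context on those parameters: net-config-static-bridge.yaml feeds them into
an os-net-config layout roughly like the following (paraphrased, not the
literal template). You can see the end result in the ip a output later, where
eth2 (nic3) ends up in br-ex with the 192.0.2.17/24 address:

network_config:
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.17/24      # server's ctlplane IP / ControlPlaneSubnetCidr
    routes:
      - ip_netmask: 169.254.169.254/32 # metadata route, via EC2MetadataIp
        next_hop: 192.0.2.1
      - default: true                  # ControlPlaneDefaultRoute
        next_hop: 192.0.2.1
    members:
      - type: interface
        name: nic3                     # NeutronPublicInterface
        primary: true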

One aspect here is that we still need to configure os-collect-config on each of
the overcloud nodes, but we can’t do that until we know the unique nested stack
IDs to query for SoftwareDeployment data. So, once those stacks are created, we
can look up their UUIDs and go configure os-collect-config on each already
deployed server. I wrote another script to do that automatically.
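
The end result on each deployed server is an /etc/os-collect-config.conf along
these lines, pointing the heat collector at that server’s nested stack (the
credential values here are placeholders):

[DEFAULT]
collectors=heat
command=os-refresh-config

[heat]
auth_url=http://192.0.2.1:5000/v2.0
user_id=<user-id>
password=<password>
project_id=<project-id>
stack_id=<nested-stack-uuid>
resource_name=deployed-server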

Once that is done, the servers start pulling configuration from Heat, and the
overcloud stack should run to CREATE_COMPLETE.

Now, it of course took me a few iterations to get this working, but once it did
and the overcloud deploy finished, here is what you’re left with:

# On the undercloud
[stack@undercloud ~]$ source stackrc

[stack@undercloud ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[stack@undercloud ~]$ ironic node-list
+------+------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+------+------+---------------+-------------+--------------------+-------------+
+------+------+---------------+-------------+--------------------+-------------+

[stack@undercloud ~]$ openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time       | Updated Time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 798e62b5-0e59-4cce-b3c7-b3f4c5ee7862 | overcloud  | CREATE_COMPLETE | 2016-04-17T13:46:06 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+

No nova servers deployed, but we have a CREATE_COMPLETE stack :).

Here’s how the deployed-server nested stack looks as a Heat resource:

[stack@undercloud ~]$ openstack stack resource list overcloud
+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+-----------------+---------------------+
| resource_name                             | physical_resource_id                          | resource_type                                     | resource_status | updated_time        |
+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+-----------------+---------------------+
--snip--
| Controller                                | 1ae04997-b1ec-4ebe-bef7-5831b9169638          | OS::Heat::ResourceGroup                           | CREATE_COMPLETE | 2016-04-17T13:46:07 |
| Compute                                   | 37a935df-2de0-4b88-8731-2f0b2039098c          | OS::Heat::ResourceGroup                           | CREATE_COMPLETE | 2016-04-17T13:46:07 |
+-------------------------------------------+-----------------------------------------------+---------------------------------------------------+-----------------+---------------------+

[stack@undercloud ~]$ openstack stack resource list 1ae04997-b1ec-4ebe-bef7-5831b9169638
+---------------+--------------------------------------+-------------------------+-----------------+---------------------+
| resource_name | physical_resource_id                 | resource_type           | resource_status | updated_time        |
+---------------+--------------------------------------+-------------------------+-----------------+---------------------+
| 0             | 0b52fbc9-154e-44fe-90c7-df9972de828b | OS::TripleO::Controller | CREATE_COMPLETE | 2016-04-17T13:46:24 |
+---------------+--------------------------------------+-------------------------+-----------------+---------------------+

[stack@undercloud ~]$ openstack stack resource list 0b52fbc9-154e-44fe-90c7-df9972de828b
+--------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+
| resource_name            | physical_resource_id                 | resource_type                                   | resource_status | updated_time        |
+--------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+
--snip--
| Controller               | 85a628ea-d606-482e-85c2-36fdde9028a6 | OS::TripleO::Server                             | CREATE_COMPLETE | 2016-04-17T13:46:25 |
+--------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+

[stack@undercloud ~]$ openstack stack resource list 85a628ea-d606-482e-85c2-36fdde9028a6
+----------------------+--------------------------------------+-----------------------------------+-----------------+---------------------+
| resource_name        | physical_resource_id                 | resource_type                     | resource_status | updated_time        |
+----------------------+--------------------------------------+-----------------------------------+-----------------+---------------------+
| deployed-server      | 6f33d5f9-ebed-4660-9962-21c98892b92e | OS::TripleO::DeployedServerConfig | CREATE_COMPLETE | 2016-04-17T13:46:29 |
| HostsEntryDeployment | 96fc194a-2278-4e2d-aa5d-bb8f548ab1c9 | OS::Heat::SoftwareDeployment      | CREATE_COMPLETE | 2016-04-17T13:46:29 |
| InstanceIdDeployment | 8b79d743-163d-4556-a905-b999c8899411 | OS::Heat::StructuredDeployment    | CREATE_COMPLETE | 2016-04-17T13:46:29 |
| InstanceIdConfig     | 31acdf25-96eb-4bf6-8c15-a3d4bd61ac9c | OS::Heat::StructuredConfig        | CREATE_COMPLETE | 2016-04-17T13:46:29 |
| ControlPlanePort     | b7535b97-a0c5-48b0-b6ad-d0a01bf833a0 | OS::Neutron::Port                 | CREATE_COMPLETE | 2016-04-17T13:46:29 |
| HostsEntryConfig     | 7390c1e3-1d42-4d13-8c5a-34738c5e7c17 | OS::Heat::SoftwareConfig          | CREATE_COMPLETE | 2016-04-17T13:46:29 |
+----------------------+--------------------------------------+-----------------------------------+-----------------+---------------------+

[stack@undercloud ~]$ openstack stack resource list 6f33d5f9-ebed-4660-9962-21c98892b92e
+------------------------+--------------------------------------+--------------------------+-----------------+---------------------+
| resource_name          | physical_resource_id                 | resource_type            | resource_status | updated_time        |
+------------------------+--------------------------------------+--------------------------+-----------------+---------------------+
| deployed-server-config | afa2a513-75db-4507-b8d5-922203d5db8c | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2016-04-17T13:46:30 |
+------------------------+--------------------------------------+--------------------------+-----------------+---------------------+

Let’s have a look at the neutron ports created:

[stack@undercloud ~]$ neutron port-list
+--------------------------------------+---------------------------------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name                            | mac_address       | fixed_ips                                                                         |
+--------------------------------------+---------------------------------+-------------------+-----------------------------------------------------------------------------------+
| 0068899e-13d1-4c3f-9b27-ee1f66f5a2a7 | redis_virtual_ip                | fa:16:3e:41:bc:8a | {"subnet_id": "2d01b5fa-450c-484d-8a86-6e023721f08e", "ip_address": "192.0.2.16"} |
| 6b50e4e0-defb-4740-a4f3-86c9a571651d | deployed-server-1-ctlplane-port | fa:16:3e:3a:91:17 | {"subnet_id": "2d01b5fa-450c-484d-8a86-6e023721f08e", "ip_address": "192.0.2.17"} |
| 708953bc-5fc5-4b6a-8713-07b52cff871b |                                 | fa:16:3e:91:51:9a | {"subnet_id": "2d01b5fa-450c-484d-8a86-6e023721f08e", "ip_address": "192.0.2.5"}  |
| bbb6c5e8-09d2-4ebf-918e-c9b22bfe50fd | control_virtual_ip              | fa:16:3e:65:19:e6 | {"subnet_id": "2d01b5fa-450c-484d-8a86-6e023721f08e", "ip_address": "192.0.2.15"} |
| d9144e0a-dd83-482f-a2a5-33860ea58864 | deployed-server-2-ctlplane-port | fa:16:3e:25:dd:d9 | {"subnet_id": "2d01b5fa-450c-484d-8a86-6e023721f08e", "ip_address": "192.0.2.18"} |
+--------------------------------------+---------------------------------+-------------------+-----------------------------------------------------------------------------------+

Now, let’s examine the deployed overcloud using the generated overcloudrc:

[stack@undercloud ~]$ source overcloudrc 

[stack@undercloud ~]$ nova service-list
+----+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | deployed-server-1             | internal | enabled | up    | 2016-04-17T14:10:48.000000 | -               |
| 7  | nova-consoleauth | deployed-server-1             | internal | enabled | up    | 2016-04-17T14:10:46.000000 | -               |
| 8  | nova-conductor   | deployed-server-1             | internal | enabled | up    | 2016-04-17T14:10:53.000000 | -               |
| 9  | nova-compute     | deployed-server-2.localdomain | nova     | enabled | up    | 2016-04-17T14:10:44.000000 | -               |
+----+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+

[stack@undercloud ~]$ nova hypervisor-list
+----+-------------------------------+-------+---------+
| ID | Hypervisor hostname           | State | Status  |
+----+-------------------------------+-------+---------+
| 1  | deployed-server-2.localdomain | up    | enabled |
+----+-------------------------------+-------+---------+

If we ssh into the controller, we can see the right IP addresses applied (including the VIPs):

[stack@undercloud ~]$ ssh root@192.0.2.17
Last login: Mon Apr 18 13:08:17 2016 from 192.0.2.1

[root@deployed-server-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host 
    valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether bc:76:4e:21:17:15 brd ff:ff:ff:ff:ff:ff
 inet 146.20.65.172/24 brd 146.20.65.255 scope global eth0
    valid_lft forever preferred_lft forever
 inet6 2001:4802:7806:102:be76:4eff:fe21:1715/64 scope global 
    valid_lft forever preferred_lft forever
 inet6 fe80::be76:4eff:fe21:1715/64 scope link 
    valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether bc:76:4e:21:28:db brd ff:ff:ff:ff:ff:ff
 inet 10.209.224.193/19 brd 10.209.255.255 scope global eth1
    valid_lft forever preferred_lft forever
 inet6 fe80::be76:4eff:fe21:28db/64 scope link 
    valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
 link/ether bc:76:4e:21:0d:b3 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::be76:4eff:fe21:db3/64 scope link 
    valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
 link/ether d6:fa:eb:d5:5f:8d brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
 link/ether bc:76:4e:21:0d:b3 brd ff:ff:ff:ff:ff:ff
 inet 192.0.2.17/24 brd 192.0.2.255 scope global br-ex
    valid_lft forever preferred_lft forever
 inet 192.0.2.16/32 brd 192.0.2.255 scope global br-ex
    valid_lft forever preferred_lft forever
 inet 192.0.2.15/32 brd 192.0.2.255 scope global br-ex
    valid_lft forever preferred_lft forever
 inet6 fe80::be76:4eff:fe21:db3/64 scope link 
    valid_lft forever preferred_lft forever

We have a successful overcloud stack with the proper networking applied and a
functioning OpenStack deployment. My next steps would be to test this prototype
further using a full 3-node HA cluster and network isolation with separate
VLANs.

Overall, I think this is a useful concept. I could see it being used in
additional ways as well, such as using Heat to configure the undercloud, or
being able to test TripleO on regular nodepool instances.


Comments on “TripleO with already deployed servers”

  1. Hey, great post. In the Newton cycle Ironic will probably have an adoption feature to bring existing servers under its wing: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/active-node-creation.html

  2. Bogdan Dobrelya says:

    Great post, James.

    There is also related work for deployed-server automation by Quickstart ansible playbooks and tripleo CLI: https://bugs.launchpad.net/tripleo/+bug/1691467

    The goal is to simplify development environments creation for tripleo contributors working on non-provisioning related bits.
