In this final section, we'll make the lab deployment from Chapter 2, Architecting the Cloud, highly available by adding a second controller node and configuring cluster software. This example is simple enough to implement easily in the lab while still allowing you to evaluate the technology.
In later chapters we'll look at more flexible deployment methodologies that allow fine-grained service placement and automated cluster configuration. In this chapter, we'll just extend the Packstack deployment from Chapter 2, Architecting the Cloud. Packstack isn't designed to deploy multiple controllers, so some manual configuration of services will be required.
To provision the second controller, install the operating system on a new machine in the same way you provisioned the first controller, then copy the Packstack answer file from the first controller. Edit the answer file and replace CONFIG_CONTROLLER_HOST, CONFIG_NETWORK_HOSTS, and CONFIG_STORAGE_HOST with the IP address of the new controller.
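These edits can be scripted. The following is a minimal sketch using stand-in values: answers.txt stands in for the copied Packstack answer file, and 192.168.0.11 is a hypothetical address for the new controller.

```shell
# answers.txt stands in for the copied Packstack answer file;
# the starting values below are hypothetical.
cat > answers.txt <<'EOF'
CONFIG_CONTROLLER_HOST=192.168.0.10
CONFIG_NETWORK_HOSTS=192.168.0.10
CONFIG_STORAGE_HOST=192.168.0.10
EOF
NEW_IP=192.168.0.11
# Point all three parameters at the new controller
sed -i -e "s/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=${NEW_IP}/" \
       -e "s/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=${NEW_IP}/" \
       -e "s/^CONFIG_STORAGE_HOST=.*/CONFIG_STORAGE_HOST=${NEW_IP}/" answers.txt
grep ^CONFIG answers.txt
```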
We won't be reconfiguring the compute nodes or the existing controller, so add the compute nodes and the controller that you provisioned in Chapter 2, Architecting the Cloud, to the EXCLUDE_SERVERS parameter in the answer file. We'll leave the MySQL, Redis, MongoDB, and RabbitMQ services on the first controller, so those values should not be modified.
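For example, if the first controller and the two compute nodes had the (hypothetical) addresses 192.168.0.10, 192.168.0.20, and 192.168.0.21, the parameter would look like this:

```
EXCLUDE_SERVERS=192.168.0.10,192.168.0.20,192.168.0.21
```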
Run Packstack on the new controller using the same command we used in Chapter 2, Architecting the Cloud:
packstack --answer-file <answer-file>
After Packstack completes, you should be able to log in to the dashboard by going to the IP address of the second host and using the same credentials that you used in Chapter 2, Architecting the Cloud.
Next, we'll install Pacemaker to manage the VIPs that we'll use with HAProxy to make the web services highly available. We assume that the cluster software is already available via yum to the controller nodes.
# yum install -y pcs fence-agents-all
# rpm -q pcs
pcs-0.9.137-13.el7_1.3.x86_64
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
# passwd hacluster
# systemctl start pcsd.service
# systemctl enable pcsd.service
# pcs cluster auth controller1 controller2
Username: hacluster
Password:
controller1: Authorized
controller2: Authorized
# pcs cluster setup --start --name openstack \
> controller1 controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller1: Succeeded
controller2: Succeeded
Starting cluster on nodes: controller1, controller2...
controller1: Starting Cluster...
controller2: Starting Cluster...
# pcs property set stonith-enabled=false
# pcs status
Cluster name: openstack
Last updated: Mon Aug 31 01:51:47 2015
Last change: Mon Aug 31 01:51:14 2015
Stack: corosync
Current DC: controller1 (1) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
0 Resources configured
Online: [ controller1 controller2 ]
For more information on setting up Pacemaker, see the excellent Clusters from Scratch documentation at http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/index.html.
We'll be using HAProxy to load-balance our control plane services in this lab deployment. Some deployments may also implement Keepalived and run HAProxy in an Active/Active configuration. For this deployment, we'll run HAProxy Active/Passive and manage it as a resource along with our VIP in Pacemaker.
To start, install HAProxy on both nodes using the following command:
# yum -y install haproxy
Verify installation with the following command:
# rpm -q haproxy
haproxy-1.5.4-4.el7_1.x86_64
Next, we will create a configuration file for HAProxy which load-balances the API services installed on the two controllers. Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that you'll be using to load-balance the API services.
The following example /etc/haproxy/haproxy.cfg will load-balance Horizon in our environment:
global
    daemon
    group haproxy
    maxconn 40000
    pidfile /var/run/haproxy.pid
    user haproxy

defaults
    log 127.0.0.1 local2 warning
    mode tcp
    option tcplog
    option redispatch
    retries 3
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout check 10s

listen horizon
    bind 192.168.0.30:80
    mode http
    cookie SERVERID insert indirect nocache
    option tcplog
    timeout client 180s
    server controller1 192.168.0.10:80 cookie controller1 check inter 1s
    server controller2 192.168.0.11:80 cookie controller2 check inter 1s
In this example, controller1 has an IP address of 192.168.0.10 and controller2 has an IP address of 192.168.0.11. The VIP that we've chosen to use is 192.168.0.30. Copy this file, replacing the IP addresses with the addresses in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.
In order for Horizon to respond to requests on the VIP, we'll need to add the VIP as a ServerAlias in the Apache virtual host configuration, which is found at /etc/httpd/conf.d/15-horizon_vhost.conf in our lab installation. Look for the following line:
ServerAlias 192.168.0.10
Add an additional ServerAlias line with the VIP on both controllers:
ServerAlias 192.168.0.30
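If you prefer to script this edit on each controller, the following sketch shows the idea on a stand-in copy of the virtual host file; vhost.conf stands in for /etc/httpd/conf.d/15-horizon_vhost.conf, and 192.168.0.30 is the example VIP from above.

```shell
# vhost.conf stands in for /etc/httpd/conf.d/15-horizon_vhost.conf
cat > vhost.conf <<'EOF'
ServerAlias 192.168.0.10
EOF
VIP=192.168.0.30
# Append a ServerAlias line for the VIP after the existing alias
sed -i "/^ServerAlias 192.168.0.10/a ServerAlias ${VIP}" vhost.conf
cat vhost.conf
```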
You'll also need to tell Apache not to listen on the VIP so that HAProxy can bind to the address. To do this, modify /etc/httpd/conf/ports.conf and specify the IP address of the controller in addition to the port numbers. The following is an example:
Listen 192.168.0.10:35357
Listen 192.168.0.10:5000
Listen 192.168.0.10:80
In this example, 192.168.0.10 is the address of the first controller. Substitute the appropriate IP address for each machine.
Restart Apache to pick up the new alias:
# systemctl restart httpd.service
Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources. These commands should only be run on the first node:
# pcs resource create VirtualIP IPaddr2 ip=192.168.0.30 cidr_netmask=24
# pcs resource create HAProxy systemd:haproxy
Co-locate the HAProxy service with the VirtualIP to ensure that the two run together:
# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
Verify that the resources have been started:
# pcs status
...
Full list of resources:
VirtualIP  (ocf::heartbeat:IPaddr2):  Started controller1
HAProxy  (systemd:haproxy):  Started controller1
...
At this point, you should be able to access Horizon using the VIP you specified. Traffic will flow from your client to HAProxy on the VIP to Apache on one of the two nodes.
Now that we have a working cluster and HAProxy configuration, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer. There are three steps in this process, which are as follows:
Add the service's endpoints to the HAProxy configuration
Update the endpoint for the service in the Keystone service catalog
Update the OpenStack services to use the new endpoint
In the following example, we will move the Keystone service behind the load balancer. This process can be followed for each of the API services.
First, add a section to the HAProxy configuration file for the authorization and admin endpoints of Keystone:
listen keystone-admin
    bind 192.168.0.30:35357
    mode tcp
    option tcplog
    server controller1 192.168.0.10:35357 check inter 1s
    server controller2 192.168.0.11:35357 check inter 1s

listen keystone-public
    bind 192.168.0.30:5000
    mode tcp
    option tcplog
    server controller1 192.168.0.10:5000 check inter 1s
    server controller2 192.168.0.11:5000 check inter 1s
Make sure to update the configuration on both of the controllers, then restart the haproxy service on the active node:
# systemctl restart haproxy.service
You can determine the active node from the output of pcs status. Check to make sure that HAProxy is now listening on ports 5000 and 35357 using the following commands:
# curl http://192.168.0.30:5000
# curl http://192.168.0.30:35357
Both should output some JSON describing the status of the Keystone service.
Next, update the endpoint for the identity service in the Keystone service catalog by creating a new endpoint and deleting the old one:
# . ./keystonerc_admin
# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+
| ID                               | Region    | Service Name | Service Type |
+----------------------------------+-----------+--------------+--------------+
| 14f32353dd7d497d9816bf0302279d23 | RegionOne | keystone     | identity     |
...
# openstack endpoint create --adminurl http://192.168.0.30:35357/v2.0 \
  --internalurl http://192.168.0.30:5000/v2.0 \
  --publicurl http://192.168.0.30:5000/v2.0 \
  --region RegionOne keystone
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://192.168.0.30:35357/v2.0   |
| id           | c590765ca1a847db8b79aa5f40cd2110 |
...
# openstack endpoint delete 14f32353dd7d497d9816bf0302279d23
Last, update the auth_uri and identity_uri parameters in each of the OpenStack services to point to the new IP address. The following configuration files will need to be edited:
/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf
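Since the same substitution is needed in every file, it can be scripted. The following sketch rewrites both parameters to the VIP in a stand-in file; sample.conf stands in for each file in the list above, and the addresses and ports shown are the example values used earlier in this section.

```shell
# sample.conf stands in for e.g. /etc/nova/nova.conf;
# the starting values are the pre-VIP example addresses.
cat > sample.conf <<'EOF'
[keystone_authtoken]
auth_uri = http://192.168.0.10:5000/v2.0
identity_uri = http://192.168.0.10:35357
EOF
VIP=192.168.0.30
# Point both parameters at the VIP, preserving the port and path
sed -i -e "s#^auth_uri = http://[0-9.]*:#auth_uri = http://${VIP}:#" \
       -e "s#^identity_uri = http://[0-9.]*:#identity_uri = http://${VIP}:#" sample.conf
grep _uri sample.conf
```

Note that the parameter may live in a different section depending on the service (for example, a paste deploy filter section in the api-paste.ini files), so check each file rather than assuming a single layout.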
After editing each of the files, restart the OpenStack services on all of the nodes in the lab deployment using the following command:
# openstack-service restart
The OpenStack services will now be using the Keystone API endpoint provided by the VIP and the service will be highly available. The architecture used in this cluster is relatively simple, but it provides an example of both Active/Active and Active/Passive service configurations.