High availability with IP fail-over

For critical services, high availability can be provided through node duplication with an IP fail-over mechanism. This can be implemented with a virtual IP address (VIP) and the keepalived daemon as described below. A bastion host, for example, is a function that is recommended to be run with high availability.

VRRP (Virtual Router Redundancy Protocol) allows a number of hosts to share a (virtual) IP address. One of the hosts is set to be the master node and, during normal conditions, is the only host that responds on the VIP. The other nodes are backup nodes and monitor the master node periodically to ensure that it is still running. If the master node goes down, one of the backup nodes takes over the role as master and starts replying on the VIP. This facilitates configurations without any single point of failure at node level.
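
Once keepalived is running (as configured below), the VRRP advertisements sent by the current master can be observed directly on a server, since VRRP is carried as IP protocol 112. A minimal sketch, assuming the interface name ens3 used later in this guide:

# Show VRRP advertisements (IP protocol 112) seen on the interface
sudo tcpdump -i ens3 -n 'ip proto 112'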

Initially, we have two servers with private IP addresses <server1-ip> and <server2-ip> and a public (floating) IP address <floating-ip> available.

Create VIP port

The first step is to create a port (here called vip-port) on the desired network and without any security group, and assign <floating-ip> to it. The port is created with

openstack port create --network <internal-network> --no-security-group vip-port

The private IP of this port is the virtual IP address (VIP) <virtual-ip>. This can be read in the output from

openstack port list
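
Alternatively, the fixed IP address of the new port can be read directly, for example by restricting the output to the fixed_ips column:

openstack port show -c fixed_ips vip-port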

Assign the available floating IP address to the port with

openstack floating ip set --port vip-port <floating-ip>

The chosen network can either be a new network created and dedicated to VRRP, or an existing internal network.
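
If a dedicated network is preferred, it can be created before the VIP port. A sketch with placeholder names and address range:

# Create a network and subnet dedicated to the VRRP setup
openstack network create vrrp-net
openstack subnet create --network vrrp-net --subnet-range 10.0.10.0/24 vrrp-subnet
# Attach the subnet to an existing router so that the floating IP is reachable
openstack router add subnet <router-name> vrrp-subnet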

Update server ports

Update the servers' network ports by setting the allowed-address attribute to the VIP <virtual-ip>. To find the port resource ID of a server, list all ports with

openstack port list

and note the IDs of the ports having the IP addresses <server1-ip> and <server2-ip>, respectively - here denoted <port1-id> and <port2-id>. Update the ports with

openstack port set --allowed-address ip-address=<virtual-ip> <port1-id>

openstack port set --allowed-address ip-address=<virtual-ip> <port2-id>

Allowing this address pairing with the VIP lets traffic for the VIP pass the servers' ports, so that the node selected by keepalived can respond on it.
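
The result can be verified by inspecting the allowed_address_pairs attribute of each port, for example:

openstack port show -c allowed_address_pairs <port1-id>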

Configure the keepalived daemon

The keepalived daemon is used for high-availability architectures. It performs process monitoring and contains a VRRP stack with low-level supporting functions. The daemon is installed on the remote servers with

sudo apt install keepalived

It needs a configuration file with details such as IP addresses and interface name. The interface names on the servers can be found by listing them with ip addr.
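
For example, a compact listing of the interfaces and their addresses is given by

ip -brief addr show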

In the present Ubuntu configuration, the interface is named ens3. Letting server 1 be the master node, log in to server 1 and create the file /etc/keepalived/keepalived.conf with

sudo nano /etc/keepalived/keepalived.conf

and copy in the content

global_defs {
        # Refresh gratuitous ARPs periodically while MASTER
        # (recovers from missed announcements)
        vrrp_garp_master_refresh 10
        # Version 3 supports IPv6
        vrrp_version 3
}
vrrp_instance VI01 {
        interface ens3
        # Number from 0 to 255, must be equal on both instances      
        virtual_router_id 123
        # Initial state {BACKUP, MASTER} - can be set to BACKUP on both        
        state BACKUP
        # The highest priority sets the MASTER node, must be different on the nodes 
        priority 110
        # The IP address of the interface keepalived listens on       
        unicast_src_ip <server1-ip>
        # The IP address of the peer server
        unicast_peer { 
                <server2-ip>        
        }
        virtual_ipaddress {
                <virtual-ip>
        }
}

The daemon is started and enabled with

sudo systemctl enable --now keepalived

Similarly, install keepalived on server 2 and configure it as the backup node with

global_defs {
        # Refresh gratuitous ARPs periodically while MASTER
        # (recovers from missed announcements)
        vrrp_garp_master_refresh 10
        # Version 3 supports IPv6
        vrrp_version 3
}
vrrp_instance VI02 {
        interface ens3
        # Number from 0 to 255, must be equal on both instances      
        virtual_router_id 123
        # Initial state {BACKUP, MASTER} - can be set to BACKUP on both        
        state BACKUP
        # The highest priority sets the MASTER node, must be different on the nodes 
        priority 100
        # The IP address of the interface keepalived listens on       
        unicast_src_ip <server2-ip>
        # The IP address of the peer server
        unicast_peer { 
                <server1-ip>        
        }
        virtual_ipaddress {
                <virtual-ip>
        }
}

changing the VRRP instance name, priority, the unicast source and the unicast peer IP addresses accordingly. To verify that it is running, do

systemctl status keepalived

and to see that it has been enabled, enter

systemctl list-unit-files | grep enabled

which lists all enabled unit files (Figure 1).

Figure 1. Listing of enabled unit files.

The command

ip addr show <interface-name>

now shows, on the master node, both the private and the virtual IP addresses (in Figure 2 the addresses 10.0.0.130 and 10.0.0.136, respectively).

Figure 2. Address pair on the instance port.

After making changes in the configuration file, the service needs to be reloaded with

sudo systemctl reload keepalived

Note that syslog also contains information related to the daemon activities.
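
For example, the daemon's log entries (state transitions between BACKUP and MASTER, configuration errors) can be followed with either of the commands below, assuming a systemd-based Ubuntu where rsyslog writes to /var/log/syslog:

# Follow keepalived messages in the systemd journal
sudo journalctl -u keepalived -f
# Or search the syslog file
grep -i keepalived /var/log/syslog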

Testing

Testing the IP fail-over amounts to verifying continued operation in a simulated server failure scenario. The simplest test is to ping the public IP address associated with the virtual IP address, and then disable the master server, say server 1, with

openstack server stop <server1-name>

Pinging again should still yield a response, now generated by the backup server. Restore operation with openstack server start <server1-name>.
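
It is also possible to check which node currently holds the VIP directly on the servers; the address appears on the interface of the current master only, for example with

ip addr show ens3 | grep <virtual-ip>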

To identify the responding back-end server, test web servers can be installed on servers 1 and 2 as described in https://pannet.atlassian.net/l/c/HQSvC5Hh, and tested with

curl <floating-ip>

before and after disabling the master server.
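
To watch the fail-over happen in real time, a simple polling loop can be run from a client while the master server is stopped and started again. A sketch, with an arbitrary one-second timeout and interval:

while true; do
        curl -s --max-time 1 <floating-ip> || echo "no response"
        sleep 1
done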