Configure LBaaS
LBaaS in Boron is based on Octavia and can be deployed with the OpenStack commands described in this chapter. Note that an unallocated floating IP address is required.
LBaaS in Boron
The Octavia load balancer is managed by the plugin python-octaviaclient, which is installed with the OpenStack client suite for versions later than 3.12.0. If not already present, the plugin can be installed by
pip install python-octaviaclient
The commands described in this chapter follow the syntax of Octavia Client version 2.2.0. Figure 1 shows the different resource types in the load balancer and indicates the creation workflow.
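To check whether the plugin is present and which version is installed, pip can be queried:

pip show python-octaviaclient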
Create load balancer
To create a load balancer, the components have to be defined with OpenStack commands. We assume that there are two back-end servers available and that a floating IP is available for the load balancer. During the creation process, it is convenient to use a simple naming convention, for example lb1 for the load balancer, lb1-listener for its listener, etc. Back-end servers controlled by the load balancer must be connected to the same subnet <internal-subnet>.
A load balancer has a listener that is associated with a specific port such as port 80. If the load balancer only handles HTTP traffic, one listener is sufficient. If it manages both HTTP and HTTPS, one listener per port (80 and 443) is needed.
openstack loadbalancer create --name <lb-name> --vip-subnet-id <internal-subnet>
openstack loadbalancer listener create --name <lb-listener-name> --protocol HTTP --protocol-port 80 <lb-name>
openstack loadbalancer pool create --name <lb-pool-name> --protocol HTTP --lb-algorithm ROUND_ROBIN --listener <lb-listener-name>
openstack loadbalancer healthmonitor create --name <lb-healthmonitor-name> --delay 4 --timeout 4 --max-retries 4 --type HTTP <lb-pool-name>
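For illustration, following the naming convention above and assuming an internal subnet named private-subnet (a hypothetical name), the full sequence could read:

openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
openstack loadbalancer listener create --name lb1-listener --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name lb1-pool --protocol HTTP --lb-algorithm ROUND_ROBIN --listener lb1-listener
openstack loadbalancer healthmonitor create --name lb1-healthmonitor --delay 4 --timeout 4 --max-retries 4 --type HTTP lb1-pool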
The parameter lb-algorithm takes the values

ROUND_ROBIN - sends requests to the pool members sequentially in a rotating manner
LEAST_CONNECTIONS - sends requests to the server with the least connections in use
SOURCE_IP_PORT - creates a hash from IP address and port, used to select a server
SOURCE_IP - creates a hash from IP address only, used to select a server
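The algorithm of an existing pool can also be changed afterwards. As a sketch, assuming the pool lb1-pool from the example above:

openstack loadbalancer pool set --lb-algorithm LEAST_CONNECTIONS lb1-pool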
To verify the creation of the components, use
openstack loadbalancer list
openstack loadbalancer listener list
openstack loadbalancer pool list
openstack loadbalancer healthmonitor list
to have the details of the respective component printed. Next, two (or more) back-end web servers are entered as members of the pool:
openstack loadbalancer member create --name <lb-member0-name> --address <server0-ip> --protocol-port 8000 <lb-pool-name>
openstack loadbalancer member create --name <lb-member1-name> --address <server1-ip> --protocol-port 8000 <lb-pool-name>
Note that in this example, requests to the back-end servers are made on port 8000. The assigned members are listed with
openstack loadbalancer member list <pool-name>
which also shows their operational status. A member can be removed from the pool with
openstack loadbalancer member delete <lb-pool-name> <lb-member_x-name>
Assign floating IP
In the publication step, a floating IP resource must be available to be assigned to the virtual IP (VIP) port. In the three steps below, the floating IP is acquired from the tenant quota, but an unassigned floating IP can also be used directly, replacing the variable FIP with this IP address.
A floating IP is acquired, stored in the variable FIP, and assigned to the VIP port with
FIP=`openstack floating ip create <external-network> -f value -c floating_ip_address`
PORT=`openstack loadbalancer show -f value -c vip_port_id <lb-name>`
openstack floating ip set --port $PORT $FIP
The external network name or ID is found from
openstack network list
It can be convenient to define a variable for the external network ID by
export EXT_NET=`openstack network list --name <external-network> -f value -c ID`

which is then used in the first command above by replacing <external-network> with $EXT_NET.
Add security group
For testing purposes, a general security group is defined and assigned to the back-end servers by

openstack security group create <lb-sg-name>
openstack security group rule create <lb-sg-name> --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 80
openstack server add security group <server0-name> <lb-sg-name>
openstack server add security group <server1-name> <lb-sg-name>
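If the back-end servers instead listen on port 8000, as in the member example above, a corresponding rule for that port has to be added, for example:

openstack security group rule create <lb-sg-name> --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 8000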
Testing
The load balancer is launched and its internal configuration is done automatically, so it should now be operational. If the back-end servers are listening on the configured port (in the examples in this chapter the default port 80, but this can be redefined to some other number, for example 8000) and can send back a response that identifies them, for example by some message in index.html, they should respond to requests on the floating IP address.
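For a quick test setup, a minimal web server returning an identifying message can be started on each back-end server. A sketch, assuming Python 3 is installed on the servers and port 8000 is used:

echo "response from server0" > index.html
python3 -m http.server 8000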
For testing purposes, curl is convenient to use directly from the client. By repeated requests
curl <floating-ip>
the back-end servers should reply in an alternating manner (according to the algorithm specified). The commands listing load balancers in an error or pending state are also useful for troubleshooting:
openstack loadbalancer list --provisioning-status ERROR
openstack loadbalancer list --provisioning-status PENDING_UPDATE
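To observe the alternating replies, the request can be repeated in a loop. A minimal sketch, assuming the floating IP is still held in the shell variable FIP from the assignment step:

for i in 1 2 3 4; do curl -s $FIP; done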
Application load balancing
By defining L7 policies in Octavia, exceptions to the default routing can be implemented. These are typically associated with HTTP and formulated as a redirection or rejection of a request based on a pattern in the URL, header or cookie, referred to as an L7 rule. This is referred to as application load balancing, or content-based routing.
L7 policies
The rules are most effective on fully qualified domain names (FQDN). Testing without the necessity to register a domain name and create a DNS record can easily be accomplished by defining the names locally in the file /etc/hosts as shown in the example below.
Furthermore, to define an alternative route for a policy, one or more secondary pools can be defined beside the default (primary) pool. Note that a pool can consist of a single server.
An L7 policy is created with
openstack loadbalancer l7policy create --action <action> [<redirection>] --name <l7policy-name> <listener>
The action argument can take the values

REJECT - the request is rejected
REDIRECT_TO_URL - the request is redirected to the URL defined in the redirect-url parameter
REDIRECT_TO_POOL - the request is redirected to the pool defined in the redirect-pool parameter
Rejection policies take precedence over other policies, and REDIRECT_TO_URL policies take precedence over REDIRECT_TO_POOL policies. If no policy matches a given request, then the request is routed to the listener's default pool (if it exists).
The priority of a policy can be determined by assigning a sequence number with the position argument. Policy position numbering starts with 1. If omitted, a policy is appended to the list after the last inserted policy. An automatic reordering is performed after each insertion or deletion in the list.
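For example, a rejection policy could be placed first in the list with (the policy name policy0 is hypothetical):

openstack loadbalancer l7policy create --action REJECT --position 1 --name policy0 <listener>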
L7 rules
The triggers of the policy are defined in L7 rules created with
openstack loadbalancer l7rule create --type <type> --compare-type <operator> --value <value> <l7policy>
where type takes one of the values

HOST_NAME - compare the HTTP/1.1 host name in the request against the value parameter (for example a subdomain)
PATH - compare the path portion of the HTTP URI against the value parameter
FILE_TYPE - compare the last portion of the URI against the value parameter (for example "txt", "jpg", etc.)
HEADER - compare a portion of the header against the value parameter
COOKIE - compare a portion of a cookie against the value parameter
The rule specifies a comparison test on the type by setting the argument compare-type to one of the following operations. Note that not all rule types support all comparison types.
REGEX - Perl-type regular expression matching
STARTS_WITH - a substring that the input string should start with
ENDS_WITH - a substring that the input string should end with
CONTAINS - a substring that the input string should contain
EQUAL_TO - a string identical to the input string
The value argument is the URL or string object that triggers the rule. The truth value of a rule can be inverted by setting the argument invert, which allows rules to be formulated more flexibly. In this way, for example, a "not equal to" rule can be created with comparison type EQUAL_TO and invert.
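For example, a rule matching any host name other than example.com (with <l7policy-name> as a placeholder for the policy) could be created with

openstack loadbalancer l7rule create --type HOST_NAME --compare-type EQUAL_TO --value example.com --invert <l7policy-name>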
A load balancer with redirection
To illustrate a load balancer with a redirection policy, consider three back-end servers called, say, server1, server2 and server3, on the same internal subnet denoted <internal-subnet>. Suppose server1 and server2 are members of the default pool used to handle normal requests, and server3 is used to deliver some special content (or represents some subdomain whose requests should be handled separately).
Create a load balancer with
export SUBNET=`openstack subnet list --name <internal-subnet> -f value -c ID`
openstack loadbalancer create --vip-subnet-id ${SUBNET} --name lb2
and a listener
openstack loadbalancer listener create --name lb2_listener --protocol HTTP --protocol-port 80 lb2
Next, create a default pool with server1 and server2 as members, and an additional pool with only server3, with
openstack loadbalancer pool create --name http_pool1 --listener lb2_listener --protocol HTTP --lb-algorithm ROUND_ROBIN
openstack loadbalancer pool create --name http_pool2 --loadbalancer lb2 --protocol HTTP --lb-algorithm ROUND_ROBIN
Note that in the creation of the default pool the argument listener assigns it as the default pool; for any additional pool the argument loadbalancer has to be used.
Let the FQDNs of the servers be server1.tld, server2.tld and server3.tld, and add these as members to the pools with their respective IP addresses like so
openstack loadbalancer member create --name server1.tld --subnet-id <internal-subnet> --address 192.168.0.100 --protocol-port 80 http_pool1
openstack loadbalancer member create --name server2.tld --subnet-id <internal-subnet> --address 192.168.0.200 --protocol-port 80 http_pool1
openstack loadbalancer member create --name server3.tld --subnet-id <internal-subnet> --address 192.168.0.30 --protocol-port 80 http_pool2
Assign a floating IP with
FIP=`openstack floating ip create <external-network> -f value -c floating_ip_address`
PORT=`openstack loadbalancer show -f value -c vip_port_id lb2`
openstack floating ip set --port $PORT $FIP
The floating IP is the public address through which all three servers are reachable. For the purpose of the example, say that FIP after assignment holds the IP address 188.125.16.119. Then, on the client, open the file /etc/hosts for editing and add the lines
188.125.16.119 server1.tld
188.125.16.119 server2.tld
188.125.16.119 server3.tld
Save the file. Here, tld (for top-level domain) has been added to the server names to give them proper format. A real domain name would have as TLD com, org, a country-code TLD, etc. With these lines, a request can be sent using the server name, which is resolved locally to the IP address given. A request to the load balancer is generated simply by issuing
curl server1.tld
Finally, a policy is created to route requests to server3 to the secondary pool http_pool2 with
openstack loadbalancer l7policy create --action REDIRECT_TO_POOL --redirect-pool http_pool2 --name policy1 lb2_listener
openstack loadbalancer l7rule create --type HOST_NAME --compare-type EQUAL_TO --value server3.tld policy1
The load balancer is now configured as set out, so that requests to server1 or server2 will be routed to http_pool1 and any request to server3 to http_pool2.
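The routing can be verified from the client with the /etc/hosts entries defined above:

curl server3.tld
curl server1.tld

The first request should be answered by server3 (via http_pool2), while repeated requests to server1.tld or server2.tld alternate between server1 and server2 in http_pool1.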
Delete LBaaS resources
A listener, pool, healthmonitor or l7policy is deleted (represented by <type>) with
openstack loadbalancer <type> delete <id>
taking the identifier <id> as the only argument.
Members can only be removed from the load balancer pool by deletion, that is
openstack loadbalancer member delete <pool> <member>
with both a pool and a member identifier. The same syntax (with the identities <l7policy> and <l7rule>) is used to delete an L7 rule.
The complete load balancer is deleted with all its dependent components by
openstack loadbalancer delete --cascade <lb-name>
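For example, the redirection example above can be removed completely with

openstack loadbalancer delete --cascade lb2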
LBaaS deployment with Heat
LBaaS can be easily deployed with Heat. The following simple Heat orchestration template creates the resources of the load balancer and two back-end servers with a security group applied, and assigns a floating IP address to the load balancer. The resource quotas on the tenant must allow for these resources to be created. A key pair needs to be generated and the corresponding <keypair-name> entered as a parameter to allow SSH access to the servers. The external and internal networks and the subnet need to be entered as parameters as well.
heat_template_version: 2018-08-31

description: Create a load balancer

parameters:
  app_port:
    type: number
    description: Port used by the servers
    default: 80
  lb_port:
    type: number
    default: 80
    description: Port used by the load balancer
  private_network:
    type: string
    description: Network used by the servers
    default: <internal-network>
  public_network:
    type: string
    description: Network used by the load balancer
    default: <external-network>
  subnet:
    type: string
    description: Subnet on which the load balancer will be located
    default: <internal-subnet>
  flavor:
    type: string
    description: Flavor used for servers
    default: g1.standard-1-1
  image:
    type: string
    description: Image used for servers
    default: ubuntu-20.04-x86_64
  key_pair:
    type: string
    description: Key pair used by servers
    default: <keypair-name>

resources:
  sec_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: { get_param: app_port }
          port_range_max: { get_param: app_port }
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: 22
          port_range_max: 22

  lb1:
    type: OS::Octavia::LoadBalancer
    properties:
      name: "lb1"
      vip_subnet: { get_param: subnet }

  lb1_listener:
    type: OS::Octavia::Listener
    depends_on:
      - lb1
    properties:
      protocol: HTTP
      protocol_port: { get_param: lb_port }
      loadbalancer: { get_resource: lb1 }

  lb1_pool:
    type: OS::Octavia::Pool
    depends_on:
      - lb1_listener
    properties:
      name: "lb1_pool"
      lb_algorithm: "ROUND_ROBIN"
      protocol: HTTP
      listener: { get_resource: lb1_listener }

  lb1_healthmonitor:
    type: OS::Octavia::HealthMonitor
    depends_on:
      - lb1_pool
    properties:
      delay: 3
      type: HTTP
      timeout: 3
      max_retries: 3
      pool: { get_resource: lb1_pool }

  server0:
    type: OS::Nova::Server
    properties:
      name: "server0"
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_pair }
      networks: [{ network: { get_param: private_network } }]
      security_groups: [{ get_resource: sec_group }]

  lb1_poolmember0:
    type: OS::Octavia::PoolMember
    depends_on: [ lb1_pool, server0 ]
    properties:
      address: { get_attr: [ server0, first_address ] }
      pool: { get_resource: lb1_pool }
      protocol_port: { get_param: app_port }
      subnet: { get_param: subnet }

  server1:
    type: OS::Nova::Server
    properties:
      name: "server1"
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_pair }
      networks: [{ network: { get_param: private_network } }]
      security_groups: [{ get_resource: sec_group }]

  lb1_poolmember1:
    type: OS::Octavia::PoolMember
    depends_on: [ lb1_pool, server1 ]
    properties:
      address: { get_attr: [ server1, first_address ] }
      pool: { get_resource: lb1_pool }
      protocol_port: { get_param: app_port }
      subnet: { get_param: subnet }

  floating_ip:
    type: OS::Neutron::FloatingIP
    depends_on:
      - lb1
    properties:
      floating_network: { get_param: public_network }
      port_id: { get_attr: [ lb1, vip_port_id ] }

outputs:
  lburl:
    value:
      str_replace:
        template: http://IP_ADDRESS:PORT
        params:
          IP_ADDRESS: { get_attr: [ floating_ip, floating_ip_address ] }
          PORT: { get_param: lb_port }
    description: >
      The public URL used to access the load balancer.
Saving the template as a file, say, load_balancer.yaml, the system is created with
openstack stack create --template load_balancer.yaml <stack-name>
The order of resource deployment is important, and is controlled by the depends_on attribute. The template also contains an outputs section, which is printed to the stack record, viewed with
openstack stack show <stack-name>
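The value of the lburl output defined in the template can also be read directly with

openstack stack output show <stack-name> lburl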
Please note that the security group is intended for testing purposes only and should be modified with stricter policies.
Load balancer with ECMP
In Boron, an LBaaS product supporting ECMP is also available. Its deployment requires BGPaaS for route management. For deployment of this load balancer, please refer to https://pannet.atlassian.net/l/c/5Z2vN5M1