Deploy web server

This tutorial describes how to create a virtual machine with OpenStack Heat and deploy an Apache web server on it using Ansible.

Prerequisites

The following resources are required for the web server and have to be available in the tenant:

  • One unassociated floating IP address

  • Compute resources for the chosen instance flavor and a new security group

  • Existing external network, internal network and subnet, see Quick Start

The internal network and subnet are called <internal_network> and <internal_subnet>. In addition, a key pair needs to be created (here called my_key) and its private key saved to a file as follows:

openstack keypair create my_key > my_key.pem
chmod 600 my_key.pem

Create VM with OpenStack Heat

The OpenStack client provides many commands to create and manage virtual machines and other objects. For larger projects, such repetitive operations become tedious and error-prone; orchestration with OpenStack Heat greatly simplifies infrastructure deployment.

Heat is a template-based orchestration utility: templates are text files that specify a deployment and are parsed and translated into the appropriate OpenStack API calls. Heat Orchestration Templates (HOT) are written in YAML or JSON format. Heat is used primarily to manage infrastructure, but the templates can also be integrated with software configuration tools, such as Ansible.

Install client

OpenStack Heat is part of the client suite, and is installed along with the other OpenStack modules. After loading the tenant environment variables, the command

openstack stack list

should print an empty list, which also confirms that the Heat client is installed. Otherwise, it can easily be installed with

pip install python-heatclient

Templates

In this chapter, the templates are written in YAML format, which is sensitive to indentation and other syntax issues. The supported keywords and syntax depend on the template version, which is specified in the parameter heat_template_version. The supported template versions can be listed with

openstack orchestration template version list

which produces a list (Figure 1) of versions that can be used.

Figure 1. List of supported Heat template versions.

The functions supported by a template version are listed with

openstack orchestration template function list <template-version>

where <template-version> can be for example heat_template_version.rocky.

The following simple template for VM deployment specifies resource type (OS::Nova::Server) and its properties that correspond to OpenStack CLI arguments.

heat_template_version: 2018-08-31

description: Simple template for server deployment

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: my_server
      key_name: my_key
      image: ubuntu-18.04-x86_64
      flavor: g1.standard-1-1
      availability_zone: az1
      networks: 
        - network: internal_network
      security_groups: [ssh_only]

After filling in the property parameters and saving the HOT as, say, create_server.yml, the stack is created with the command (run from the directory where the template was saved)

openstack stack create --template create_server.yml my_stack

Create a stack

The term stack is used for objects created by Heat, referring to the set of components that operate together to support an application. A stack is created by

openstack stack create --template <template> <stack-name>

from a template <template> and with some given name <stack-name>. The template must have a valid version after heat_template_version. The version can be a key date or the code name of the Heat release, which specifies both the template format and the supported features. If the format or a feature is unsupported by the given version, an error message is returned.

The template uses the resource type OS::Nova::Server to create an instance called my_server with the specified flavor, image and key, connected to the existing network internal_network. The description field is used for single-line comments (for multi-line comments, description: > is used).

The creation output looks similar to Figure 2, showing some key data and creation status.

Figure 2. Terminal output from Heat.

The templates can be made more general by accepting optional arguments overriding the default values in the template as shown in the code snippet.

heat_template_version: 2018-08-31

description: Template for deploying test server

parameters:
  server_name:
    type: string
    description: Name of server
    default: piglet

  app_port:
    type: number
    description: Port used by the servers
    default: 80

  key_pair:
    type: string
    description: Key pair used by server
    default: my_key

  internal_network:
    type: string
    description: Network used by server
    default: internal_network

  internal_subnet:
    type: string
    description: Subnet used by server
    default: internal_subnet

  external_network:
    type: string
    description: External internet provider
    default: external_internet_provider

  server_image:
    type: string
    description: Image server is based on
    default: ubuntu-20.04-x86_64

  server_flavor:
    type: string
    description: Flavor server is based on
    default: g1.standard-1-1

resources:
  sec_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: { get_param: app_port }
          port_range_max: { get_param: app_port }
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: 22
          port_range_max: 22
        - remote_ip_prefix: 0.0.0.0/0
          protocol: icmp

  server:
    type: OS::Nova::Server
    properties:
      name: {get_param: server_name}
      key_name: {get_param: key_pair}
      image: {get_param: server_image}
      flavor: {get_param: server_flavor}
      networks: 
        - network: {get_param: internal_network}
      security_groups: [{get_resource: sec_group}]

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network }

  server_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating_ip }
      port_id: {get_attr: [server, addresses, {get_param: internal_network}, 0, port]}

outputs:
  floating_ip:
    description: IP Address of the server instance
    value: { get_attr: [ floating_ip, floating_ip_address ]}

A parameter is defined with its name, a type (usually string), a default value and, optionally, a label and description.

The intrinsic function get_param copies a user-specified parameter value into the resource specification. In the example, the parameter values for image, flavor and key pair can now be set from the command line, for example by adding --parameter key_pair=<keypair-name> to use a key pair other than the default.

The arguments are placed in the parameters section, in which each parameter is declared with a name, type, default value and an optional description. In the body, parameter values, resource attributes and resource identities are accessed with the functions:

  • get_param - references an input parameter of a template and resolves to the value provided for this input parameter at runtime.

  • get_attr - references an attribute of a resource which is resolved at runtime from the resource instance created from the respective resource definition.

  • get_resource - references another resource within the same template. At runtime, it is resolved to reference the ID of the referenced resource, which is resource type specific.

Default parameter values specified in the template can be overwritten in the creation command with the argument --parameter, for example,

openstack stack create --parameter "server_name=server2" --template create_server.yml <stack-name>

The argument is given in quotation marks, and multiple parameters are separated with semicolons (alternatively, the --parameter argument can be repeated).
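As a sketch, a small helper can assemble the semicolon-separated string; the helper name and the parameter values below are illustrative and not part of the OpenStack client.

```shell
# Hypothetical helper: join key=value pairs with semicolons so several
# overrides fit in a single --parameter argument.
join_params() {
  local IFS=';'
  echo "$*"
}

# Usage sketch (requires a configured OpenStack client):
#   openstack stack create \
#     --parameter "$(join_params server_name=server2 app_port=8080)" \
#     --template create_server.yml my_stack
```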

The reader is encouraged to visit some of the extensive on-line documentation available for more details on techniques and Heat templates.

Verify deployment

At any time, the status of the deployment can be inspected with

openstack stack list

The creation process goes through the stage CREATE_IN_PROGRESS before reaching CREATE_COMPLETE, when the VM is ready to use. Should the creation process fail, this is indicated by CREATE_FAILED.
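Rather than re-running the list command by hand, the status can be polled in a small shell loop. This is a sketch only: it assumes a configured OpenStack client and uses -f value -c stack_status to obtain the bare status string.

```shell
# Poll a stack until it reaches a terminal CREATE status.
# The stack name is passed as the first argument.
wait_for_stack() {
  local stack="$1" status
  while true; do
    status=$(openstack stack show "$stack" -f value -c stack_status)
    case "$status" in
      CREATE_COMPLETE) echo "ready";  return 0 ;;
      CREATE_FAILED)   echo "failed"; return 1 ;;
      *) sleep 5 ;;   # still CREATE_IN_PROGRESS; poll again
    esac
  done
}
```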

The OpenStack CLI provides some details related to the cause of failure in the creation report, which is fetched with

openstack stack show <stack-name>

In case the stack creation failed, it should be deleted and the cause of error needs to be corrected before attempting re-deployment. Using the template above, the public IP address <ip-address> is presented in the output from the show command (Figure 3).

Figure 3. Assigned IP address shown in the output.

Now, the server should be fully operational and respond to ping requests.
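The template's outputs section makes the address easy to retrieve in scripts. The sketch below assumes the stack was created from the template above, which defines the output named floating_ip.

```shell
# Read the floating IP from the stack outputs (output name floating_ip
# matches the template's outputs section).
server_ip() {
  openstack stack output show "$1" floating_ip -f value -c output_value
}

# Usage sketch: ping the new server three times.
#   ping -c 3 "$(server_ip my_stack)"
```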

Another useful command for troubleshooting is

openstack stack event list <stack-name>

The output is a list of event records generated by the Heat engine as the stack resources are created.

Delete stack

Deleting a stack reverses the creation process and removes all dependent resources created in the stack. The command is

openstack stack delete <stack-name>

The entire stack should be deleted in this way rather than by removing the resources manually. The deletion process goes through the DELETE_IN_PROGRESS status, so it is advisable to wait until a previous version of a stack has been fully deleted and purged from the tenant before attempting to re-create the stack.

Establish SSH connection

In the template, the same key file is used for all servers by default, but it can be overridden by a command-line argument. To avoid having to refer to the key files explicitly, they can preferably be added to the SSH client with

ssh-add my_key.pem

for all key files used. Now, the servers should be accessible over SSH with

ssh ubuntu@<ip-address>

Install Apache with Ansible

Ansible is an open source orchestration tool, which will be used here to install and configure a basic Apache web server on a VM by performing a set of tasks, using variables and directives from a set of templates and configuration files. The top-level instructions are specified in a playbook file.

Install Ansible

Ansible can be installed with pip or the package manager of the distribution used on a Linux client. In the former case, the installation is performed with

python -m pip install --user ansible

To install through the Ubuntu package manager, the Personal Package Archive (PPA) repository first has to be added to the APT package management tool:

sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update -y
sudo apt-get install -y ansible

The installation can be verified with

ansible --version

which prints the Ansible version and dependencies (Figure 4)

Figure 4. Verification of Ansible installation.

Create role

An Ansible role can be seen as a project that uses a predefined directory structure containing eight standard directories. Roles are conveniently stored in a dedicated directory called roles/ relative to the playbook file.

First create the directory and move into it:

mkdir roles && cd roles

The role, here called apache, and its directory structure is created with

ansible-galaxy init apache

The command tree (which requires the tree utility to be installed) shows the created directory structure:

├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml 

The roles can be listed with

ansible-galaxy role list

By adding instructions in YAML format to the directory structure, the role is built to perform various tasks, such as

  • Installing apache2 for Ubuntu

  • Creating a document root folder for the Apache VirtualHost and setting up a test page

  • Enabling the Apache VirtualHost

Create inventory

The inventory file contains the basic server access details used by Ansible. Under the group header webservers, all web servers that will be affected are listed.

[webservers]
188.125.27.152   ansible_user=ubuntu
# 188.125.27.199  ansible_user=ubuntu

Note that there is no space around the equals sign. After saving the file as inventory.ini, a connectivity test can be performed with

ansible webservers -m ping -i inventory.ini

which produces an output similar to Figure 5.

Figure 5. Ansible ping test.

The default inventory file is /etc/ansible/hosts. To avoid specifying the inventory in every command, either the path to the inventory can be set in the configuration file, or the contents of inventory.ini copied to /etc/ansible/hosts. After updating the hosts file, there is no need to specify the inventory explicitly (Figure 6).

Figure 6. Ansible ping test using hosts file.
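A third option is a per-project configuration file: Ansible also reads ansible.cfg from the current working directory, so the inventory path can be pinned next to the playbook. The file names below are this tutorial's examples.

```shell
# Write a minimal project-local ansible.cfg pointing at the inventory,
# so -i inventory.ini no longer needs to be given on the command line.
cat > ansible.cfg <<'EOF'
[defaults]
inventory = ./inventory.ini
EOF
```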

Configuration and log files

The path to the Ansible configuration file is /etc/ansible/ansible.cfg. In the same directory is the file hosts which is the default inventory file. The configuration contains, for example, default file paths and privilege escalation settings (needed for sudo operations).

Open the file for editing with

sudo nano /etc/ansible/ansible.cfg

and uncomment the lines for privilege escalation (Figure 7)

Figure 7. Privilege settings in ansible.cfg

For larger projects, it is strongly recommended to enable logging. In ansible.cfg, uncomment the line

log_path = /var/log/ansible.log

to enable the logging function (Figure 8).

Figure 8. Enable logging in ansible.cfg.

Ansible may not have permission to create the log file, so create it manually with

sudo touch /var/log/ansible.log
sudo chmod 0777 /var/log/ansible.log 

Specify tasks

Just as in a manual installation, we begin by updating the APT repository cache with sudo apt update. To perform this action, the playbook contains the group name of the servers to be updated (webservers), the necessary privilege escalation through become, and the actual task calling the apt utility.

---
- hosts: webservers
  become: yes
  become_method: sudo
  tasks:
  - name: "Update Repository cache"
    apt:
      update_cache: yes
      cache_valid_time: 3600
      force_apt_get: yes

Saving the code as, say, apt_update.yml under the roles directory, all web servers can be updated by executing

ansible-playbook apt_update.yml -i inventory.ini

A successful run produces the terminal output shown in Figure 9.

Figure 9. Using Ansible to update server repository cache.
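Before a playbook touches any host, it can be validated locally. The sketch below wraps Ansible's built-in --syntax-check flag in a hypothetical helper function.

```shell
# Return success if the playbook parses cleanly, without contacting hosts.
syntax_ok() {
  ansible-playbook "$1" --syntax-check >/dev/null 2>&1
}

# Usage sketch:
#   syntax_ok apt_update.yml && ansible-playbook apt_update.yml -i inventory.ini
```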

Defaults

In the file defaults/main.yml, global variables referenced from other template files are defined. Note that http_port must correspond to the port set in the Heat template for the security group (app_port in the HOT). The src_dir contains site files that are copied to the remote hosts, in this example the file index.html.

The Boolean variable disable_default is used to control the disabling of the default server configuration (yes/no).

# defaults/main.yml

http_host_dir: "mytestsite.com"
http_conf_file: "myapache.conf"
http_port: 80
src_dir: "files/"
disable_default: yes

Files

Under the sub-directory files/ are files that are to be uploaded to the remote web server by Ansible. In this case only index.html with the content shown below.

<!DOCTYPE html>
<html lang="en">
  <title> </title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="https://unpkg.com/tachyons/css/tachyons.min.css">
  <body>
    <article class="vh-100 dt w-100 bg-dark-pink">
      <div class="dtc v-mid tc white ph3 ph4-l">
        <h1 class="f6 f2-m f-subheadline-l fw6 tc">Hello World!</h1>
      </div>
    </article>
  </body>
</html>

Handlers

The handlers define control commands that we wish to execute during the installation, in this case the Apache restart and reload operations.

# handlers/main.yml

- name: "restart apache2"
  service:
    name: apache2
    state: restarted

- name: "reload apache2"
  service:
    name: apache2
    state: reloaded

Tasks

Under the tasks directory, all tasks related to the Apache installation are specified in a file called apache_ubuntu.yml, shown below. It contains all steps: installing the package, creating the document root, copying artifacts and configuring the virtual host.

# tasks/apache_ubuntu.yml

- name: "Install Apache web server on Ubuntu"
  apt:
    name: "apache2"
    update_cache: yes
    state: latest

- name: "Create document root"
  file:
    path: "/var/www/{{ http_host_dir }}"
    state: directory
    mode: '0755'

- name: "Copy source code artifacts"
  copy:
    src: "{{src_dir}}"
    dest: "/var/www/{{ http_host_dir }}"

- name: "Set up Apache virtual host"
  template:
    src: "apache.conf.j2"
    dest: "/etc/apache2/sites-available/{{ http_conf_file }}"

- name: "Enable new web site"
  shell: /usr/sbin/a2ensite {{ http_conf_file }}
  notify: "reload apache2"

- name: "Disable default web site"
  shell: /usr/sbin/a2dissite 000-default.conf
  when: disable_default
  notify: "reload apache2"

- name: "Add UFW rule: allow HTTP on port {{ http_port }}"
  ufw:
    rule: allow
    port: "{{ http_port }}"
    proto: tcp

- name: "Enable httpd"
  service:
    name: "apache2"
    enabled: yes

The main task file tasks/main.yml only contains a reference to the detailed file described above:

# tasks/main.yml

- name: "Install Apache on Ubuntu"
  import_tasks: "apache_ubuntu.yml"

Templates

The file templates/apache.conf.j2 with the content shown below contains the virtual host modifications of the Apache server configuration. Using the Jinja2 template syntax, the variable values defined in defaults/main.yml are used.

<VirtualHost *:{{ http_port }}>
DocumentRoot "/var/www/{{ http_host_dir }}"
</VirtualHost>

Playbook

The playbook contains the instructions and file references used by Ansible. All such instructions have been specified in the role, so the playbook - called webservers.yml - only contains the server group header and a reference to the role apache.

---
- hosts: webservers
  roles:
    - role: 'apache'

Deploy web server

In the roles/ directory, the file structure looks like in Figure 10. Note that not all template files are used - only the ones described in the previous section.

Figure 10. File structure of roles directory.

To deploy the Apache web server as specified, simply run the Ansible playbook with

ansible-playbook webservers.yml

The output (Figure 11) shows all tasks and their results in a compact form.

Figure 11. Ansible run output.
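Before re-running the playbook against live servers, Ansible's check mode can predict the changes without applying them; --check and --diff are standard ansible-playbook flags, and the wrapper below is only an illustrative sketch.

```shell
# Dry-run the playbook: report what would change, apply nothing.
# --diff additionally shows the file edits that would be made.
dry_run() {
  ansible-playbook webservers.yml --check --diff
}
```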

Testing

Apache has a utility to verify the configuration. On the remote host, run

sudo apachectl configtest

which reveals any configuration errors. After the Apache deployment with Ansible, the test is likely to report a warning about the server's fully qualified domain name, as shown in Figure 12.

Figure 12. Apache configuration test.

This is not critical for the operation of the web server, but it can easily be fixed by adding a line to the Apache configuration file. Open the file for editing with

sudo nano /etc/apache2/apache2.conf

and add the line

ServerName <internal-ip-address>

as shown in Figure 13.

Figure 13. Server name in apache2.conf.

where <internal-ip-address> is the private IP address of the server. Save and close the file, and do

sudo systemctl reload apache2

to reload the server with the new configuration. The warning appears because Apache tries to determine the server's fully qualified domain name at startup; this name can be found from DNS, but when no such record exists, it has to be set manually with the ServerName directive.

Verify that the server is running with

systemctl status apache2

which should show active (running) in green font (Figure 14).

Figure 14. Output of web server status.

The server response can be tested with

curl 188.125.27.152

or by typing the IP address in a browser (Figure 15).

Figure 15. Custom landing page of the web server.
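The smoke test can also be made scriptable by checking the HTTP status code, using curl's -w format option; the IP address is this tutorial's example, and the helper name is illustrative.

```shell
# Return the HTTP status code for a host; 200 means the site is up.
http_status() {
  curl -s -o /dev/null -w '%{http_code}' "http://$1/"
}

# Usage sketch:
#   http_status 188.125.27.152   # a healthy server answers 200
```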

The Apache server can be restarted (when already running) with

sudo systemctl restart apache2

where restart can be replaced with stop or start as desired.