Deploy LAMP stack with Terraform

For architectures with multiple servers and complex software dependencies, an orchestration tool such as Terraform saves deployment time and ensures consistency. This is illustrated here by installing a LAMP stack on a single server, created entirely from raw resources in the tenant.


Terraform creates the necessary objects in the tenant, so sufficient quota must be available to create the following resources:

  • 1 Key pair

  • 1 Compute instance of flavor g1.standard-1-1

  • 1 Network and 1 subnet

  • 1 Non-allocated floating IP

  • 2 Security groups

Install Terraform

The Terraform client by HashiCorp can be installed on Ubuntu through the APT package manager. To do this, first download the HashiCorp GPG key used for package authentication, and then add the official HashiCorp repository to your local machine with

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

Which Terraform client package is needed depends on the architecture and operating system (Ubuntu distribution release) of your local machine. This information is used in the second command: the APT architecture (such as amd64) is found with the shell command

dpkg --print-architecture

and the code name of the local distribution release (for example, focal for Ubuntu 20.04) with

lsb_release -cs

Terraform also needs to load the correct provider module for OpenStack. Providers represent the upstream API and are responsible for understanding API interactions and exposing resources. The provider registry reference is part of the project file provider.tf.

The operation apt-add-repository usually runs apt update to refresh the package indices, but there is no harm in repeating the command before installing the Terraform client from the new repository:

sudo apt update
sudo apt install terraform

The installation can be tested with terraform --version or terraform --help.

Optional command auto-completion is enabled with

terraform -install-autocomplete

whereby typing the beginning of a command argument (with enough letters to avoid ambiguity) followed by <TAB> completes the whole word. For example, typing ter and <TAB> completes it to terraform. A new console window needs to be opened for this function to take effect.

In some projects, a particular version of Terraform is required to match the existing configuration. Currently available Terraform versions in the repository index are listed with apt policy terraform and a specific version is then installed with sudo apt install terraform=<version>.
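
For example, to pin the client to one of the versions reported by the repository index (replace <version> with an entry from the apt policy output):

apt policy terraform
sudo apt install terraform=<version>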

Build project

As with many other template-based orchestration systems, it is customary to create separate files for variables, login credentials, resource definitions, etc. For the simple project of creating a plain, unconfigured compute instance, we create the files

  • terraform.tfvars with OpenStack credentials

  • provider.tf specifying the OpenStack provider with reference to the OpenStack credentials

  • variables.tf with the definitions of project-related variables

  • deploy.tf with the template for resource creation

Root module and credentials

Terraform uses the OpenStack tenant credentials to connect and perform operations. Terraform can use the OpenStack environment variables if they are set; alternatively, the credentials can be specified in the variable definitions file terraform.tfvars:

openstack_user_name = "<username>"
openstack_tenant_name = "<tenant-name>"
openstack_password = "<password>"
openstack_auth_url = "https://keystone.ic-hrvart3.in.pan-net.eu:5000/v3"

String values are surrounded by double quotes. Comment lines begin with a hash sign (#).

The first part of the root module file provider.tf contains the settings needed to load the OpenStack provider module. The second part sets the required provider arguments, either by reference to the variable definitions or directly.

# Define required providers
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.35.0"
    }
  }
}
# Credentials from variables.tf
provider "openstack" {
  user_name           = var.openstack_user_name
  tenant_name         = var.openstack_tenant_name
  password            = var.openstack_password
  auth_url            = var.openstack_auth_url
  cacert_file         = "~/pannet/certs/rootcax1.crt"
  region              = "RegionOne"
  project_domain_name = "<project-domain-name>"
  user_domain_name    = "users_domain"
}

The minimum set of provider arguments used here is listed below; each falls back to an environment variable if omitted (a minimal sketch follows the list):

  • user_name - Login username; If omitted, the OS_USERNAME environment variable is used.

  • tenant_name - Name of the tenant (Identity v2) or project (Identity v3) to log in to; If omitted, the OS_TENANT_NAME or OS_PROJECT_NAME environment variable is used.

  • password - Password to login with; If omitted, the OS_PASSWORD environment variable is used.

  • auth_url - Identity authentication URL; If omitted, the OS_AUTH_URL environment variable is used.

  • cacert_file - Custom CA certificate for SSL communication; If omitted, the OS_CACERT environment variable is used.

  • region - Region of the OpenStack cloud. If omitted, the OS_REGION_NAME environment variable is used.

  • project_domain_name - Domain name of the project; If omitted, the OS_PROJECT_DOMAIN_NAME environment variable is used.

  • user_domain_name - User domain name to scope to (Identity v3). If omitted, the OS_DOMAIN_NAME environment variable is used.
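
As a minimal sketch, if all of the OS_* variables above are exported (for example by sourcing an OpenStack RC file before running Terraform), the second part of provider.tf can shrink to an almost empty declaration:

# Provider block relying entirely on the OS_* environment variables
provider "openstack" {}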

Resources

Variables are collected in the file variables.tf in the format shown below. Each input variable is declared with a variable name, followed by a variable block with a type and a default value within curly brackets. The type and default value are not strictly necessary: the variable value can be defined elsewhere, and the type can be deduced from the default or input value.

Note that the first four variables are declared in this file but receive their values from terraform.tfvars. The remaining variables relate to resources (with example values given).

variable "openstack_user_name" {}
variable "openstack_tenant_name" {}
variable "openstack_password" {}
variable "openstack_auth_url" {}
variable "image" {
  default = "ubuntu-20.04-x86_64"
}
variable "flavor" {
  default = "g1.standard-1-1"
}
variable "ssh_key_pair" {
  default = "my_key"
}
variable "ssh_user_name" {
  default = "ubuntu"
}
variable "availability_zone" {
	default = "az1"
}
variable "security_group" {
	default = "default"
}
variable "network" {
	default  = "internal_network"
}

The file deploy.tf contains the resource declarations used by Terraform - in this case for the compute instance to be created.

resource "openstack_compute_instance_v2" "server" {
  count = 1
  name = "test-vm-${count.index}"
  image_name = var.image
  availability_zone = var.availability_zone
  flavor_name = var.flavor
  key_pair = var.ssh_key_pair
  security_groups = [var.security_group]
  network {
    name = var.network
  }
}
# Output VM IP Address
output "serverip" {
 value = openstack_compute_instance_v2.server[*].access_ip_v4
}

The declaration is given in a resource block, specifying resource type (“openstack_compute_instance_v2”), a local name (“server”) and its configuration parameters within curly brackets. In addition, the declaration may contain meta arguments, such as count for creation of multiple identical instances. Unique instance names with a count value larger than 1 can be created by appending count.index. Other useful meta arguments in larger projects are for_each and depends_on.
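
As a minimal sketch outside this project, for_each can replace count when the instances are not identical; the map variable servers below and the second flavor name are hypothetical:

# Hypothetical for_each example: one instance per map entry
variable "servers" {
  default = {
    "web" = "g1.standard-1-1"
    "db"  = "g1.standard-2-4"
  }
}
resource "openstack_compute_instance_v2" "server_set" {
  for_each          = var.servers
  name              = each.key
  flavor_name       = each.value
  image_name        = var.image
  availability_zone = var.availability_zone
  key_pair          = var.ssh_key_pair
  security_groups   = [var.security_group]
  network {
    name = var.network
  }
}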

The argument security_groups takes a list of values, and the network block takes either name or uuid, corresponding to the respective OpenStack identifiers.

The output block takes a name in double quotes, which is the name of the output variable that is assigned the value evaluated from the expression in curly brackets. In the file deploy.tf, this value is the private IP address assigned to the compute instances (referenced with the wildcard notation server[*]) after their creation.

Deployment

The Terraform files should now be located in the same directory. Open a terminal window and move into this directory. The output of the tree command in this directory shows the template files before invoking Terraform (which later creates some additional files and folders):

├── deploy.tf
├── provider.tf
├── terraform.tfvars
└── variables.tf

Initiate

The first step is to perform

terraform init

which loads the necessary provider module(s). The output should contain the statement Terraform has been successfully initialized!, as shown in Figure 1.

Figure 1. Terraform init output showing successful initialization.

The next step is to create an execution plan, without deploying any actual changes, with

terraform plan

Terraform reads the current state of the existing infrastructure, compares it with the resource declarations, and reports any differences. It also proposes the changes - if any - needed to make the infrastructure correspond to the declarations. These changes are only implemented with the command terraform apply.

The command can be given the argument -out=<file-name> to save the plan for use with terraform apply. The created file is a zipped archive with information internal to Terraform and is not intended to be edited manually. The name passed to it can therefore be any proper file name without a file extension.
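
For example, with an arbitrary plan file name tfplan:

terraform plan -out=tfplan
terraform apply tfplan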

After a successful deployment, and if no changes are made to the templates, a re-execution of terraform plan will report that no changes are needed to the infrastructure configuration.

Apply

The infrastructure changes proposed by terraform plan are deployed with

terraform apply

If an output file was created in the planning step, it can be referenced by adding the file name after the command. The command prompts for approval before starting the deployment; to skip this prompt, the argument -auto-approve can be passed. Variable values can also be overridden on the command line in the format -var "<name>=<value>". Figure 2 shows the last part of the output from terraform apply.
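
For example, skipping the prompt and overriding the flavor variable (the flavor value shown is illustrative):

terraform apply -auto-approve -var "flavor=g1.standard-2-4"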

Figure 2. Terraform apply output returning the instance's private IP address.

The command terraform show prints a report of the deployment.

Destroy

All components that have been declared in the project are deleted with

terraform destroy

The command requires manual approval before changing the infrastructure, unless the argument -auto-approve is passed.

It is a good idea to verify the deletions with standard OpenStack list queries. A common situation is that templates are edited before the previous deployment has been destroyed. This can lead to some resources not being destroyed when the destroy command is run; these then need to be deleted manually so that they do not cause errors at the next deployment.
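
For example, the following queries should no longer list the resources created by the project:

openstack server list
openstack keypair list
openstack network list
openstack floating ip list
openstack security group list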

Troubleshooting

Verbose Terraform output is enabled with

TF_LOG=DEBUG OS_DEBUG=true terraform apply

Specific compute faults can be obtained with

openstack server show <server-id> -c fault -f value

where <server-id> is obtained from openstack server list. A common error is that some tenant resource quota is exceeded during the Terraform execution, which may not be obvious from the output.

In case terraform destroy does not delete all components, they need to be removed manually. Network resources in particular can be tricky, since instances, ports, networks and routers need to be removed in that order, and ports especially are identified by their ID strings rather than by a resource name.
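
As a sketch, a manual cleanup in that order can look like this, with the IDs and names taken from the corresponding openstack ... list commands:

openstack server delete <server-id>
openstack port delete <port-id>
openstack router remove subnet <router-name> <subnet-name>
openstack network delete <network-name>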

Provisioners

Terraform uses provisioners to perform actions on the local or remote hosts, including installation of software packages, configuration and transfer of files.

Using Terraform provisioners for software deployment and configuration is not recommended, because not all actions can be part of a Terraform plan, and provisioners require additional credentials for logging in to hosts, which Terraform does not otherwise use. Nevertheless, for simple node configurations, provisioners are a convenient way to incorporate such steps into the infrastructure creation.

The provisioners need a section for connection-related information. In this project, the SSH connection block looks like

  connection {
    type     = "ssh"
    user     = "ubuntu"
    private_key = file("~/.ssh/${var.ssh_access_key_name}")
    host     = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address
  }

The argument host contains the floating IP address, assigned as part of the instance creation process.

File provisioner

The file provisioner is used to upload files to the remote host. It takes as variables the local (source) and remote (destination) file paths:

  provisioner "file" {
    source = var.upload_file
    destination = "/home/ubuntu/info.txt"
  }

Note, however, that the file has to be specified as a variable, that is, be part of the configuration source code, for example in the variables file:

variable "upload_file" {
  default = "/home/client/terraform/project/info.txt"
}

Remote execution provisioner

The remote execution provisioner takes a list (named inline) of Linux commands, represented as strings, to be executed on the remote host.

  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install -y apache2",
      "sudo apt install -y php7.4 php7.4-mysql php-common php7.4-cli php7.4-json php7.4-common php7.4-opcache libapache2-mod-php7.4",
      "sudo systemctl restart apache2",
      "sudo apt install mariadb-server mariadb-client",
      "echo '<?php phpinfo(); ?>' | sudo tee -a /var/www/html/info.php > /dev/null"
    ]
  }

Deploying the LAMP stack

The project contains the following files in a dedicated directory, listed with the tree command within that directory:

├── data_sources.tf
├── deploy.tf
├── output_template.tmpl
├── provider.tf
├── terraform.tfvars
└── variables.tf

The project source code can be found in https://pannet.atlassian.net/wiki/spaces/DocEng/pages/524320863/Deploy+LAMP+stack+with+Terraform#Project-files, together with provider.tf and terraform.tfvars from https://pannet.atlassian.net/wiki/spaces/DocEng/pages/524320863/Deploy+LAMP+stack+with+Terraform#Root-module-and-credentials

The private IP range is set to 192.168.0.0/24 and can be changed in variables.tf if desired.

Key pair

After creating the project files, execute the following command lines from within the project directory:

ssh-keygen -q -N "" -f ~/.ssh/pan_net_cloud_id_rsa
echo -e "variable \"tenant-admin-public-key\" {\n    default = \"$(cat ~/.ssh/pan_net_cloud_id_rsa.pub)\"\n}" >> variables.tf

The second line appends the public key, wrapped in a variable declaration, to the variables file. When a new key is generated to replace an existing key, the old one must be manually deleted from that file.
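
The appended block in variables.tf then looks like this (the key material shown is illustrative):

variable "tenant-admin-public-key" {
    default = "ssh-rsa AAAAB3NzaC1yc2E... ubuntu@workstation"
}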

Data sources

A data source allows an existing resource or data (such as node status) to be included in the Terraform configuration. Whereas a resource defines a new infrastructure component to be created and configured by Terraform, a data source is a read-only reference to pre-existing data, or to some value computed within Terraform itself.

The data block has an identifier consisting of a type and a name, where the combination of type and name must be unique. A data block has one or more arguments in its body, surrounded by curly brackets.
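
For example, the project's data_sources.tf (listed in full under Project files) looks up an existing flavor by name; other blocks can then reference it as data.openstack_compute_flavor_v2.compute_instance_flavor.id:

data "openstack_compute_flavor_v2" "compute_instance_flavor" {
  name = var.compute_flavor
}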

Terraform verifies the availability of any specified data source before creating new resources. This can save time and help to catch errors already while executing terraform plan, thereby avoiding failures during the actual deployment.

Output section

In this project, the output section is defined in a separate template file, referenced from deploy.tf with the relevant variables passed to it.
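
In deploy.tf this is done with the templatefile function (shown here reformatted for readability; see Project files for the original):

output "template" {
  value = templatefile("./output_template.tmpl", {
    public-key = openstack_compute_keypair_v2.tenant_admin_keypair.name,
    ip_address = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address
  })
}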

Deployment

From the project directory, deploy the project with the following three commands. The terraform plan and apply commands produce a lot of output, so only the end parts of the expected output are shown in the figures below (Figures 3-5).

terraform init

Figure 3. Terraform init for the LAMP stack project.

terraform plan -out=lamp_stack

Figure 4. Terraform plan using output file.

terraform apply lamp_stack

Figure 5. Terraform apply of the LAMP stack project.

After successful deployment, the output shows instructions for testing. From a browser, the given IP address shows the PHP page (Figure 6).

Figure 6. Testing LAMP deployment in browser.

The command to log in to the instance with SSH is also shown in the output, for example

$ ssh -i ~/.ssh/pan_net_cloud_id_rsa ubuntu@188.125.27.5

Please note that the database is not fully configured, since automation of these steps is beyond the scope of this tutorial.
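
For example, the usual MariaDB hardening step (left commented out in deploy.tf) can be run manually after logging in over SSH:

sudo mysql_secure_installation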

Project files

data_sources.tf

data "openstack_compute_flavor_v2" "compute_instance_flavor" {
  name = var.compute_flavor
}
data "openstack_images_image_v2" "compute_instance_image" {
  name = var.openstack_image
}
data "openstack_networking_secgroup_v2" "default_group" {
  name = var.default_security_group
}
data "openstack_networking_router_v2" "router" {
  status = "ACTIVE"
}

variables.tf

variable "openstack_user_name" {}
variable "openstack_tenant_name" {}
variable "openstack_password" {}
variable "openstack_auth_url" {}
variable "lamp_internal_subnet_ipv4" {
    default = "192.168.0.0/24"
}
variable "floating-ip-pool" {
  default = "external_internet_provider"
}
variable "compute_flavor" {
  default = "g1.standard-1-1"
}
variable "openstack_image" {
  default = "ubuntu-20.04-x86_64"
}
variable "default_security_group" {
  default = "default"
}
variable "ssh_access_key_name" {
    default = "pan_net_cloud_id_rsa"
}

deploy.tf

## SSH key pair in the cloud
resource "openstack_compute_keypair_v2" "tenant_admin_keypair" {
  name = var.ssh_access_key_name
  public_key = var.tenant-admin-public-key
}
## Security groups and rules
resource "openstack_networking_secgroup_v2" "lamp_secgroup_ssh" {
  name        = "lamp_secgroup_ssh"
  description = "Allow inbound ssh traffic"
}
resource "openstack_networking_secgroup_v2" "lamp_secgroup_http" {
  name        = "lamp_secgroup_http"
  description = "Allow inbound http traffic"
}
resource "openstack_networking_secgroup_rule_v2" "lamp_secgroup_rule_http" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.lamp_secgroup_http.id
}
resource "openstack_networking_secgroup_rule_v2" "lamp_secgroup_rule_ssh" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.lamp_secgroup_ssh.id
}
## Network (internal)
resource "openstack_networking_network_v2" "lamp_internal" {
  name = "lamp-stack-internal"
  admin_state_up = "true"
}
## Subnet (pool of IP addresses with associated configuration state)
resource "openstack_networking_subnet_v2" "lamp_internal_subnet_ipv4" {
  name = "lamp-stack-internal-subnet-ipv4"
  network_id = openstack_networking_network_v2.lamp_internal.id
  cidr = var.lamp_internal_subnet_ipv4
  ip_version = 4
  dns_nameservers = [
    "8.8.8.8",
    "8.8.4.4"
  ]
}
## Interface between router and subnet
resource "openstack_networking_router_interface_v2" "lamp_internal_router_interface" {
  router_id = data.openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.lamp_internal_subnet_ipv4.id
}
## Connection points
resource "openstack_networking_port_v2" "lamp_port_1" {
  name           = "lamp_port_1"
  network_id     = openstack_networking_network_v2.lamp_internal.id
  admin_state_up = "true"
  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.lamp_internal_subnet_ipv4.id
  }  
    security_group_ids = [
    data.openstack_networking_secgroup_v2.default_group.id,
    openstack_networking_secgroup_v2.lamp_secgroup_ssh.id,
    openstack_networking_secgroup_v2.lamp_secgroup_http.id,    
  ]
  depends_on = [openstack_networking_network_v2.lamp_internal]
}
## Virtual machine
resource "openstack_compute_instance_v2" "lamp_stack_iaas_example" {
  name            = "lamp_stack_iaas_example"
  image_id        = data.openstack_images_image_v2.compute_instance_image.id
  flavor_id       = data.openstack_compute_flavor_v2.compute_instance_flavor.id
  key_pair        = openstack_compute_keypair_v2.tenant_admin_keypair.name
  network {
    port = openstack_networking_port_v2.lamp_port_1.id
  }
}
resource "null_resource" "lamp_config" {
  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install -y apache2",
      "sudo apt install -y php7.4 php7.4-mysql php-common php7.4-cli php7.4-json php7.4-common php7.4-opcache libapache2-mod-php7.4",
      "sudo systemctl restart apache2",
      "sudo apt install -y mariadb-server mariadb-client",
      # "sudo mysql_secure_installation",
      "echo '<?php phpinfo(); ?>' | sudo tee -a /var/www/html/phpinfo.php > /dev/null"
    ]
  }
  connection {
    type     = "ssh"
    user     = "ubuntu"
    private_key = file("~/.ssh/${var.ssh_access_key_name}")
    host     = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address
  }
}
## Floating IP address - allocate
resource "openstack_networking_floatingip_v2" "lamp_access_floatip_ip" {
  pool = var.floating-ip-pool
}
## Floating IP address - make association
resource "openstack_networking_floatingip_associate_v2" "lamp_access_floatip_ip_associate" {
  floating_ip = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address
  port_id = openstack_networking_port_v2.lamp_port_1.id
}
## Output blocks used for returning values created by Terraform
output "instance_ip_addr" {
  value = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address
  description = "The password for logging in to the database."
}
output "template" {
  value = templatefile("./output_template.tmpl", { public-key = openstack_compute_keypair_v2.tenant_admin_keypair.name, ip_address = openstack_networking_floatingip_v2.lamp_access_floatip_ip.address })
}

output_template.tmpl

To validate server configuration open the following link in web-browser:
   http://${ip_address}/phpinfo.php

To access host over ssh copy the following command to command line:
    ssh -i ~/.ssh/${public-key} ubuntu@${ip_address}

Additional Resources

HashiCorp documentation