Deploy LAMP stack with Terraform
For architectures with multiple servers and complex software dependencies, an orchestration tool like Terraform can be used to save deployment time and ensure consistency. This is illustrated by installing a LAMP stack on a single server, all created from raw resources in the tenant.
Terraform creates the necessary objects in the tenant, so the tenant quota must allow creating the following resources:
1 Key pair
1 Compute instance of a given flavor
1 Network and 1 subnet
1 Non-allocated floating IP
2 Security groups
The Terraform client by HashiCorp can be installed on Ubuntu through the APT package manager. To do this, first download the HashiCorp GPG key for package authentication, and then add the official HashiCorp repository to your local machine with
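The commands below follow HashiCorp's documented APT repository setup; the keyring path is the one used in HashiCorp's own instructions:

```shell
# Download HashiCorp's GPG signing key and store it in the system keyring
wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

# Add the official HashiCorp repository for the local architecture and release
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
```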
The required version of the Terraform client depends on the architecture and operating system (Ubuntu distribution release) of your local machine. This information is used in the second command: the APT architecture (such as
amd64) is found with one shell command, and the code name of the local distribution release (for example,
focal for Ubuntu 20.04) with another.
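The two values can be checked directly in a terminal, for example:

```shell
dpkg --print-architecture   # APT architecture, e.g. amd64
lsb_release -cs             # distribution code name, e.g. focal
```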
Terraform also needs to load the correct provider module for OpenStack. Providers represent the upstream API, responsible for understanding API interactions and exposing resources. The provider registry code is part of the project file provider.tf, described below.
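A minimal registry reference for the OpenStack provider, as published in the Terraform Registry, looks like this (the version constraint is an example):

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.53"
    }
  }
}
```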
apt-add-repository usually runs
apt update to refresh the package indices, but there is no harm in repeating the command before installing the Terraform client from the new repository:
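Refreshing the index and installing is a standard APT sequence:

```shell
sudo apt update
sudo apt install terraform
```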
The installation can be tested with
terraform --version.
The optional command auto-complete is enabled with
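The command that enables tab completion (documented by HashiCorp) is:

```shell
terraform -install-autocomplete
```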
whereby typing the beginning of a command argument (with the required number of letters to avoid ambiguity) followed by
<TAB> completes the whole word. A new console window needs to be opened for this function to take effect.
In some projects, a particular version of Terraform is required to match the existing configuration. Currently available Terraform versions in the repository index are listed with
apt policy terraform and a specific version is then installed with
sudo apt install terraform=<version>.
As with many other template-based orchestration systems, it is customary to create separate files for variables, login credentials, resource definitions, and so on. For the simple project of creating a plain, unconfigured compute instance, we create the files
terraform.tfvars with the OpenStack credentials
provider.tf specifying the OpenStack provider with reference to the OpenStack credentials
variables.tf with the definitions of project-related variables
deploy.tf with the template for resource creation
Root module and credentials
Terraform uses the OpenStack tenant credentials to connect and perform operations. Terraform can use the OpenStack environment variables if set, or credentials specified in the variable definitions file
terraform.tfvars.
String values are typically given surrounded by double quotes. Comment lines begin with a hash sign (#).
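A sketch of terraform.tfvars, with placeholder values that must be replaced by the actual tenant credentials:

```hcl
# terraform.tfvars - placeholder credentials, replace with real values
user_name   = "firstname.lastname"
password    = "secret"
tenant_name = "my-project"
auth_url    = "https://identity.example.com:5000/v3"
```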
The first part of the root module file
provider.tf contains the necessary settings to load the OpenStack provider module. The second part sets the needed variables by reference to the variables definitions file or specified directly.
The minimum set of variables used here are:
user_name - Login username; if omitted, the
OS_USERNAME environment variable is used.
tenant_name - Name of the tenant (Identity v2) or project (Identity v3) to log in to; if omitted, the
OS_PROJECT_NAME environment variable is used.
password - Password to log in with; if omitted, the
OS_PASSWORD environment variable is used.
auth_url - Identity authentication URL; if omitted, the
OS_AUTH_URL environment variable is used.
cacert_file - Custom CA certificate for SSL communication; if omitted, the
OS_CACERT environment variable is used.
region - Region of the OpenStack cloud; if omitted, the
OS_REGION_NAME environment variable is used.
project_domain_name - Domain name of the project; if omitted, the
OS_PROJECT_DOMAIN_NAME environment variable is used.
user_domain_name - User domain name to scope to (Identity v3); if omitted, the
OS_DOMAIN_NAME environment variable is used.
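The provider block then maps these variables onto the provider settings; a sketch assuming the variable names listed above:

```hcl
# provider.tf (second part) - passes credentials to the OpenStack provider
provider "openstack" {
  user_name   = var.user_name
  tenant_name = var.tenant_name
  password    = var.password
  auth_url    = var.auth_url
}
```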
Variables are collected in the file
variables.tf in the format shown below. Each input variable is typically declared with a variable name, followed by a
variable block with type and default value within curly brackets. The type and default values are not strictly necessary: the variable value can be defined elsewhere, and the type can be deduced from the default or input value.
Note that the first four variables are declared as variables in this file, but specified in
terraform.tfvars. The remainder of the variables are related to resources (with given example values).
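A sketch of variables.tf following this pattern; the resource-related defaults are example values:

```hcl
# Credentials: declared here, values supplied in terraform.tfvars
variable "user_name" {}
variable "password" {}
variable "tenant_name" {}
variable "auth_url" {}

# Resource-related variables with example defaults
variable "image_name" {
  type    = string
  default = "Ubuntu 20.04"
}
variable "flavor_name" {
  type    = string
  default = "m1.small"
}
```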
deploy.tf contains the resource declarations used by Terraform - in this case for the compute instance to be created.
The declaration is given in a resource block, specifying the resource type (“openstack_compute_instance_v2”), a local name (“server”) and its configuration parameters within curly brackets. In addition, the declaration may contain meta-arguments, such as
count for the creation of multiple identical instances. Unique instance names with a
count value larger than 1 can be created by appending
count.index. Other meta-arguments are useful in larger projects. The property
security_groups takes a list of values and the property
network takes as argument either
name or
uuid, corresponding to the respective OpenStack identifiers.
The output section takes a name in double quotes, which is the name of the output variable that is assigned the value evaluated from the expression in curly brackets. In the file
deploy.tf, the value is the private IP address assigned to the compute instances (using a wildcard vector notation) after their creation.
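A sketch of deploy.tf consistent with the description above; the variable names and values are assumptions:

```hcl
# deploy.tf - create one or more identical compute instances
resource "openstack_compute_instance_v2" "server" {
  count           = 1
  name            = "server-${count.index}"
  image_name      = var.image_name
  flavor_name     = var.flavor_name
  security_groups = ["default"]

  network {
    name = var.network_name
  }
}

# Output the private IP addresses of all created instances
output "server_ip" {
  value = openstack_compute_instance_v2.server[*].access_ip_v4
}
```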
The Terraform files should now be located in the same directory. Open a terminal window and move into this directory. The command output from
tree in this directory shows the template files before invoking Terraform (which creates some additional files and folders)
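For the four template files created above, the listing would look like:

```shell
$ tree
.
├── deploy.tf
├── provider.tf
├── terraform.tfvars
└── variables.tf

0 directories, 4 files
```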
The first step is to perform
terraform init
which loads the necessary provider module(s). The output should contain the statement Terraform has been successfully initialized!, as shown in Figure 1.
The next step is execution planning without deploying any actual changes, carried out with
terraform plan
Terraform reads the current state of the existing infrastructure, compares it with the resource declarations, and reports any differences. It also proposes the changes - if any - needed to make the infrastructure correspond to the declarations. These changes can only be implemented with the command
terraform apply.
The plan command can be appended with the argument
-out=<file-name> to save the configuration for use with
terraform apply. The created file is a zipped archive with information internal to Terraform and is not intended to be edited manually. The name passed to it can therefore be any proper file name, without a file extension.
After a successful deployment, and if no changes are made to the templates, a re-execution of
terraform plan will report that no changes are needed to the infrastructure configuration.
The infrastructure changes planned with
terraform plan are deployed with
terraform apply
If an output file has been created in the planning step, it can be referenced by adding the file name after the command. The command also prompts for approval before starting the deployment. To avoid typing the approval during the ongoing process, the argument
-auto-approve can be passed to the command. The command also accepts arguments to override variable values in the format
-var “<name>=<value>”. Figure 2 shows the last part of the output from
terraform apply, and
terraform show prints a report of the deployment.
All components that have been declared in the project are deleted with
terraform destroy
It requires manual approval before changing the infrastructure, or else the argument
-auto-approve must be passed to the command.
It is a good idea to verify the deletions with standard OpenStack
list queries. A common situation is that templates are edited before the previous deployment has been destroyed. This can result in not all resources being properly destroyed when the
destroy command is run; these then need to be deleted manually so as not to cause errors at the next deployment.
Verbose Terraform output is enabled with
TF_LOG=DEBUG OS_DEBUG=true terraform apply
Specific compute faults can be obtained with
openstack server show <server-id> -c fault -f value
<server-id> is obtained from
openstack server list. A common error is that some tenant resource quota is exceeded during the Terraform execution, which may not be obvious from the output.
If terraform destroy does not delete all components, they need to be removed manually. In particular, network resources can be a bit tricky, since instances, ports, networks and routers need to be removed in that order, and ports are identified by their ID strings rather than by a resource name.
Terraform uses provisioners to perform actions on the local or remote hosts, including installation of software packages, configuration and transfer of files.
Using Terraform provisioners for software deployment and configuration is not recommended, because not all actions can be part of a Terraform plan, and it requires additional credentials for logging in to the hosts, credentials that are not otherwise used by Terraform. Nevertheless, for simple node configurations, provisioners are a convenient way to incorporate such steps into the infrastructure creation.
The provisioners need a section for connection-related information. In this project, the SSH connection block looks like
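A sketch of such a connection block; the user name and key path are assumptions, and the floating IP reference depends on how the floating IP resource is named in the project:

```hcl
connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = file("~/.ssh/id_rsa")
  host        = openstack_networking_floatingip_v2.fip.address
}
```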
host contains the floating IP address, assigned as part of the instance creation process.
The file provisioner is used to upload files to the remote host. It takes as variables the local (
source) and remote (
destination) file paths:
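A sketch of a file provisioner; the variable name and destination path are examples:

```hcl
provisioner "file" {
  source      = var.setup_script   # local path, taken from the variables file
  destination = "/tmp/setup.sh"    # path on the remote host
}
```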
Note, however, that the file has to be specified as a variable, that is, be part of the configuration source code, for example in the variables file, like
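A hypothetical variable declaration for such a file:

```hcl
variable "setup_script" {
  type    = string
  default = "setup.sh"
}
```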
Remote execution provisioner
The remote execution provisioner takes a list (named
inline) of Linux commands, represented as strings, to be executed on the remote host.
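A sketch using typical Ubuntu LAMP packages (the exact package list is an assumption):

```hcl
provisioner "remote-exec" {
  inline = [
    "sudo apt update",
    "sudo apt -y install apache2 mysql-server php libapache2-mod-php php-mysql",
  ]
}
```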
Deploying the LAMP stack
The project contains the following files in a dedicated directory, listed with the
tree command within that directory:
The project source code can be found at https://pannet.atlassian.net/wiki/spaces/DocEng/pages/524320863/Deploy+LAMP+stack+with+Terraform#Project-files, together with
terraform.tfvars from https://pannet.atlassian.net/wiki/spaces/DocEng/pages/524320863/Deploy+LAMP+stack+with+Terraform#Root-module-and-credentials
The private IP range is set to 192.168.0.0 and can be changed if desired in
variables.tf
After creating the project files, execute from within the project directory the command lines
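A sketch of the two command lines; the key file name and the variables file targeted by the second line are assumptions:

```shell
# Generate an SSH key pair without a passphrase
ssh-keygen -q -N "" -f ~/.ssh/lamp_id_rsa

# Append the public key to the variables file
echo "public_key = \"$(cat ~/.ssh/lamp_id_rsa.pub)\"" >> terraform.tfvars
```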
The second line writes the public key to the variables file. When a new key is generated to replace an existing key, the old one must be manually deleted from that file.
A data source allows an existing resource or data (such as node status) to be included in the Terraform configuration. Whereas a resource defines a new infrastructure component to be created and configured by Terraform, a data source is a read-only reference to pre-existing data, or to some value computed within Terraform itself.
The data block has an identifier consisting of a type and a name, where the combination of type and name must be unique. A data instance has one or more attributes in its body, surrounded by curly brackets.
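A sketch of a data block that looks up an existing image by name (the data source type exists in the OpenStack provider; the name value is an example):

```hcl
data "openstack_images_image_v2" "ubuntu" {
  name        = "Ubuntu 20.04"
  most_recent = true
}
```

The instance declaration can then reference data.openstack_images_image_v2.ubuntu.id instead of a hard-coded image identifier.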
Terraform verifies the availability of any specified data resource before creating a new resource. This can save time and help find errors while executing
terraform plan, before running the deployment process, thereby avoiding failure during deployment.
In this project, the output section is defined in a separate template, referenced from the
deploy.tf template and with relevant variables passed to it.
From the project directory, deploy the project with the following three commands. The terraform plan and apply commands produce a lot of information, so only the end parts of the expected output are shown in the figures below (Figure 3-5).
terraform init
terraform plan -out=lamp_stack
terraform apply lamp_stack
After successful deployment, the output shows instructions for testing. From a browser, the given IP address shows the PHP page (Figure 6).
The command to login to the instance with SSH is also shown in the output, that is
$ ssh -i ~/.ssh/pan_net_cloud_id_rsa firstname.lastname@example.org
Please note that the database is not fully configured, since automation of these steps is beyond the scope of this tutorial.