Deploy blogging platform with Terraform and Ansible
This tutorial illustrates the deployment of a scalable web application with a back-end database cluster. All instances are accessed through a bastion host for administration.
Ghost is an open-source web publishing application built on Node.js. In this tutorial, Ghost is deployed with a Galera cluster (MariaDB) as its database, running in a Docker swarm architecture for robustness.
The project needs the following resources:
Two unallocated floating IP addresses
Four or more instances, depending on the node count settings
Two networks with subnets
One keypair for administration
A router connected to the internet, either provided by enabled SNATaaS or created manually, as described in https://pannet.atlassian.net/wiki/spaces/DocEng/pages/1428390705/Basic+networking#Create-router
On the client side, the deployment has been tested with
Terraform v1.1.0 with template v2.1.2 and OpenStack provider v1.35.0
Ansible core 2.11.1
The templates used in this tutorial are contained in
tf.zip for Terraform and
config.zip for Ansible.
The blogging platform Ghost is deployed in a project with a separate database cluster (Figure 1), all connected over a machine-to-machine network. The database and application servers are managed through a bastion host on a separate OAM network. The infrastructure is deployed with Terraform, and the software is installed and configured through Ansible.
The database cluster is based on the Galera solution, consisting of a cluster of MariaDB databases utilizing the Docker swarm mode for cluster management. The minimum full configuration consists of three nodes of which one is designated the manager and the other two are called workers.
The IP address of the manager must be assigned to a network interface available to the host operating system. All nodes in the swarm need to connect to the manager at this IP address, so a fixed IP address should be used.
The swarm uses the following protocols and ports, which must be allowed in security groups:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
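In the Terraform templates these ports are opened through security group rules. A hedged sketch of one such rule for the Terraform OpenStack provider (resource names and the remote prefix are assumptions; align them with the actual templates):

```
# Sketch: allow swarm cluster management traffic (TCP 2377) on the M2M subnet
resource "openstack_networking_secgroup_rule_v2" "swarm_cluster_mgmt" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 2377
  port_range_max    = 2377
  remote_ip_prefix  = "192.168.10.0/24"
  security_group_id = openstack_networking_secgroup_v2.swarm.id
}
# Analogous rules are needed for TCP/UDP 7946 and UDP 4789.
```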
Web server and proxy
The web server is deployed in a Docker container that connects to the external database cluster over the database virtual IP; the web server itself is accessed through a separate virtual IP.
The bastion host (jump host) is used to access all nodes over the created OAM network. Only basic functionality is installed without any particular security features. For further configuration options, see https://pannet.atlassian.net/l/c/SrbPryh1
Install Terraform and Ansible
The procedures for installing Terraform and Ansible are described in https://pannet.atlassian.net/wiki/spaces/DocEng/pages/524320863/Deploy+LAMP+stack+with+Terraform#Install-Terraform and https://pannet.atlassian.net/wiki/spaces/DocEng/pages/519504148/Deploy+web+server#Install-Ansible
The Terraform templates are contained in a zip file that should be extracted into a dedicated directory for this project, say ghost. After extracting the files and moving into the created subdirectory, its structure is as shown in Figure 2.
After extracting the contents of the Ansible zip file under the dedicated ghost directory and moving into the subdirectory created, it has the structure shown in Figure x (only the first level is shown).
Create key pair
Before starting deployment with Terraform, a key pair has to be generated using the
ssh-keygen command, and the public key copied to
variables.tf, which Terraform uses when deploying the virtual machines to make them accessible by Ansible.
By default, ssh-keygen creates an RSA key pair of 2048 bits, which is secure enough in most cases. The bit length is specified with
-b <bits>; to create a key of 4096 bits, use -b 4096.
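A sketch of the full command, using the path and file name conventions described below:

```shell
# Generate a 4096-bit RSA key pair for the deployment
# (the file name ghost_id_rsa matches the templates)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ghost_id_rsa
```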
This command shows the following output:
The path is the absolute path to the .ssh directory in the client user's home directory. The file name used in the templates is
ghost_id_rsa, so this name should be kept to avoid having to update the templates.
A passphrase is not needed; to leave it empty, press
<ENTER> twice when prompted. The public key is used by Terraform; copy it into the
variables.tf template.
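A hedged sketch of what the corresponding entry in variables.tf might look like (the variable name is an assumption; match it to the actual template):

```
# Paste the contents of ~/.ssh/ghost_id_rsa.pub as the default value
variable "ssh_public_key" {
  default = "ssh-rsa AAAA... user@client"
}
```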
Note that the key pair name is hardcoded as
ghost_id_rsa in the templates, so if another name is used, the templates need to be updated accordingly.
Deploy infrastructure resources with Terraform
Certain variables used in the templates need to be customized before applying them. In particular:
OpenStack project (tenant) name
OpenStack user name (the password is read from an environment variable, which needs to have been set)
Name of the external network
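A hedged sketch of how these might appear in variables.tf (variable names and values are illustrative; align them with the actual templates):

```
variable "tenant_name"      { default = "my-project" }
variable "user_name"        { default = "my-user" }   # password comes from the environment
variable "external_network" { default = "ext-net" }
```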
The infrastructure resources necessary for Ghost are deployed in the following steps (naming the plan “ghost”):
terraform plan -out "ghost"
terraform apply "ghost"
As a preparation for deletion of the deployed infrastructure, it is advisable to create a plan for deletion with
terraform plan -destroy -out "ghost-destroy"
The plans are stored as zip files in the Terraform directory. Note that after deleting infrastructure with a destruction plan, the original plan might become “stale” and unusable for redeployment (Figure 4). In that case, simply re-run
terraform plan -out <plan> before reapplying the new plan.
The Terraform output contains details, including IP addresses, that need to be copied into two configuration files before proceeding to the Ansible orchestration step. The first part looks like
It goes into the file
~/.ssh/config on the client machine. Open the file for editing, or create it if it does not exist, and copy in the first part of the Terraform output.
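The copied section typically looks like the following sketch (host names, user and addresses are illustrative):

```
Host bastion
    HostName <bastion-floating-ip>
    User ubuntu
    IdentityFile ~/.ssh/ghost_id_rsa

Host ghost-*
    User ubuntu
    IdentityFile ~/.ssh/ghost_id_rsa
    ProxyJump bastion
```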
The second part should be copied into the file
config/inventory/hosts in the ansible directory - see the example snippet below.
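An illustrative hosts layout (group and host names are assumptions; they must match those used by the playbooks):

```
[dbservers]
ghost-db01
ghost-db02
ghost-db03

[webservers]
ghost-web01
```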
Another piece of dynamic information needs to go into the Ghost playbook
config/playbook/ghost.yml as the value of the parameter
vip_mariadb after the Terraform deployment: the virtual IP of the Galera cluster. This is the internal IP of the Galera manager node (
ghost-db01 by default) on the M2M network on the subnet
192.168.10.xx and is easily found in the output from
openstack server list. The parameter is the first listed under
vars: as in the example snippet.
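A sketch of the relevant part of config/playbook/ghost.yml (the address 192.168.10.100 is a placeholder for the actual manager IP):

```
- hosts: webservers
  vars:
    vip_mariadb: 192.168.10.100
```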
The third output from Terraform is the public IP address (VIP for the Ghost web server) on which the application will be available.
Configure services with Ansible
The Ansible templates contain a number of roles and tasks. For the first deployment, executing the roles stepwise allows intermediate testing that can prevent time-consuming troubleshooting.
The installation process requires SSH connectivity (through the bastion host) to all nodes and internet connectivity from the nodes. This should have been set up by Terraform during infrastructure deployment. All servers have a security group allowing ICMP, so a ping test between any of the nodes should work.
After updating
~/.ssh/config and the inventory file
inventory/hosts, verify connectivity from Ansible to the created infrastructure with the command
ansible all -m ping -i inventory/hosts
The output should be in green text, similar to Figure 5.
It is also advisable to test external access with a ping test such as
ping -c 4 126.96.36.199 from the back-end servers and TCP connectivity with
sudo apt update.
The installation is performed by running
ansible-playbook site.yml -i inventory/hosts
site.yml contains five parts, which can be run in a single pass or step-by-step (by commenting out irrelevant sections), allowing for some testing between the steps. The first step is the installation of Docker.
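The five parts can be pictured with the following hedged outline of site.yml (role names and host groups are assumptions; the actual templates may differ):

```
- hosts: all
  become: true
  roles:
    - docker        # 1. install Docker Engine
- hosts: dbservers
  become: true
  roles:
    - swarm         # 2. set up the Docker swarm
    - galera        # 3. deploy the Galera stack
    - keepalived    # 4. deploy the keepalived daemon
- hosts: webservers
  become: true
  roles:
    - ghost         # 5. deploy the Ghost application
```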
The role installs and starts Docker Engine on the database instances and web servers. The installation can be verified with
sudo docker info
which produces a detailed output (Figure 6).
The next step sets up the Docker swarm.
The status of the swarm mode can be retrieved directly from the command
sudo docker info
In the output, some details of the cluster are presented in addition to the status (Figure 7), such as the role of the current node and the number of nodes in the cluster.
sudo docker network ls
shows the networks that Docker is using (Figure 8). When Docker is installed, it creates a default bridge network that all containers use unless configured otherwise. This default bridge uses the IP subnet 172.17.0.0/16. The networks created for the swarm are labeled overlay.
The next section installs the Galera stack.
After the Galera stack deployment, evidence of installation can be seen in the output (Figure 9) from the command
sudo docker images
In most Docker commands, the container ID needs to be supplied. This can be retrieved from the command
sudo docker container ls
which shows a list of running containers. A simple test of the Galera cluster is to query the database (Figure 10) with
sudo docker exec -ti <container-id> mysql -e 'show status like "wsrep_cluster_size"'
In these commands, the container name can be used in place of the container ID.
The next Ansible section installs the
keepalived daemon on the cluster. It monitors running services and switches nodes automatically whenever a fault occurs. After this deployment step, the
keepalived daemon container is visible (Figure 11) in the cluster nodes (for example ghost-db01).
The Ghost application requires the following resources:
A server with at least 1 GB of memory
Ubuntu 16.04, Ubuntu 18.04 or Ubuntu 20.04
A supported version of Node.js
A supported version of MySQL / MariaDB
Nginx (minimum of 1.9.5 for SSL)
A registered domain name
In this deployment, the domain-name requirement (and therefore the Nginx reverse proxy) is relaxed to illustrate the installation process and allow some basic testing. To take the system live, a DNS A record pointing to the server's IP address must be set up, together with a reverse proxy. To configure TLS during setup, this must be done in advance.
The Ghost installation is performed by the next Ansible section.
After this step, the list of running containers on
ghost-web01 should show Ghost (Figure 12) in the output from
sudo docker ps
or, equivalently,
sudo docker container ls
The application should be reachable from a browser by entering the public IP associated with the Ghost web server, that is, the address in the third output section from Terraform. The landing page is shown in Figure 13.
Testing of the Ghost application as such is beyond the scope of this tutorial. It is, however, instructive to use it and make database queries to verify database operations.
Simple tests include:
Verification of package versions and Ghost environment variables
Inspection of Ghost logs
Database status test
Database update test of application activity
The installed version of Ghost and Node.js can be verified with
sudo docker exec -it <container-id> ls versions
sudo docker exec <container-id> node --version
This information is also visible in the list of environment variables obtained with
sudo docker exec <container-id> env
with an example shown in Figure 14.
Note the parameters for the database connection, as specified in the Ghost configuration file.
Ghost log file
The docker container log files (Figure 15) are printed with
sudo docker logs -f <container-id>
Note that the Ghost image publishes by default on port 2368 on
localhost. This port is mapped by Docker to the publicly available VIP address associated with the web server.
Database status test
A Bash shell on the Galera container on one of the database nodes is opened with
sudo docker exec -it <container-id> bash
from which the MariaDB CLI is opened with the
mysql command. The printout from the query
show databases; shows the Ghost database (Figure 16).
Select the database with
use ghostdb. A detailed list of Galera cluster system parameters related to write-set replication (wsrep) is obtained with the query
SHOW STATUS LIKE 'wsrep_%';
Three important entries in the list are cluster size, connectivity and local state, marked in Figures 17 and 18.
Database update test
To perform a simple functional test, we can create a new post in Ghost and inspect the database for changes. Before creating a post, you need to create an account in Ghost. Begin by typing
<ip-address>/ghost in a browser window, where
<ip-address> is the public IP address of the Ghost installation.
This redirects to a welcome page with a button Create your account (Figure 19). Press the button to open the dashboard showing options to Create your first member and Publish a post (Figure 20).
Create an account and a post that can easily be identified through a database query, and publish it. An example is shown in Figure 21.
On any database node, records of the new account and post should now be visible. Log in to a database cluster node, open the MariaDB CLI, select the database, and query the users table (for example,
select name from users\G),
which prints a list of users (Figure 22).
Similarly, the query
select title from posts\G prints a list of posts (Figure 23).
According to the Ghost.org documentation, the application is intended to be used with a reverse proxy in front of it, either installed on the web server or on a dedicated server. The reverse proxy described here is based on Nginx deployed on a separate virtual machine with a suitable security group. Some general Nginx configuration instructions can be found in the How-to guide https://pannet.atlassian.net/l/c/X7J1h6qu
It is sufficient to add a new site configuration file for Ghost, here called
ghost.conf. To create this file, open an empty file with
sudo nano /etc/nginx/sites-available/ghost.conf
enter the following content and save it.
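A minimal sketch of such a site configuration (the header set and proxy target are assumptions; adapt them to the actual deployment):

```
server {
    listen 80;
    server_name <ip-address>;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://<ip-address>;
    }
}
```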
The IP address is the one where Ghost is published. To activate, unlink the default site and create a link to the new site, perform a test and restart the server with the commands
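The activation steps can be sketched as follows (standard Nginx commands on Ubuntu, run on the proxy server):

```shell
# Disable the default site and enable the Ghost site
sudo unlink /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/ghost.conf /etc/nginx/sites-enabled/ghost.conf
# Test the configuration, then restart Nginx
sudo nginx -t
sudo systemctl restart nginx
```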
A registered domain name is a prerequisite for the Ghost production solution, and it is strongly recommended to use a TLS certificate for traffic encryption. After domain name registration, this replaces
<ip-address> in the configuration file.
Appropriate security features should be applied to the reverse proxy. For further details, see https://pannet.atlassian.net/wiki/spaces/DocEng/pages/1428914372/Bastion+host#Security-best-practices
The Ghost application uses a mail service, which has not been configured during this deployment. It can therefore be necessary to modify the Ghost instance after deployment.
Docker containers can be managed from the Docker CLI. This section contains only a few hints. For details on Ghost configuration, please see the documentation.
It can be useful to copy files from the container to the host. This is done with
sudo docker cp <container-id>:<source-file-path> <destination-file-path>
Using a dot (.) as
<destination-file-path> copies the file to the current working directory. Figure 24 shows how this is used. Uploading a file to the container is done with the same command, swapping the source and destination paths.
Ghost has its own CLI which is installed on the Docker container (as can be seen in the list of environment variables). To use it, open a Bash login shell on the container with
sudo docker exec -it <container-id> bash. Configuration options are listed with
ghost config --help, which lists the arguments to configure the URL, mail service and database connectivity. This utility directly modifies the corresponding entry in the configuration file inside the container.
Unfortunately, port mappings cannot be changed on an existing container: the option to set ports is only available with
docker run and
docker create.
The simplest, albeit perhaps not the most elegant, solution is to launch a new container with environment variables that override the configuration file settings (see the Ansible templates).
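A hedged sketch of such a relaunch (the container name, ports and values are placeholders; the database__* variables follow Ghost's standard environment-variable naming for nested configuration keys):

```shell
# Launch a fresh Ghost container, overriding configuration file
# settings through environment variables (values are placeholders)
sudo docker run -d --name ghost-web \
  -p 80:2368 \
  -e url=http://<ip-address> \
  -e database__client=mysql \
  -e database__connection__host=<vip_mariadb> \
  -e database__connection__user=ghost \
  -e database__connection__password=<password> \
  -e database__connection__database=ghostdb \
  ghost
```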
Another approach is to build a custom Ghost image and launch a container based on it. Alternatively, the environment variables in the Ansible template can be modified and a new Ghost instance created by re-running the Ansible playbook.