Use cases

This chapter describes four use cases in VPC topology design, including availability configuration and networking options.


Single instance with networking options

The basic configuration for hosting a web application is a single instance. In a standard tenant, two floating IP addresses are available, which can be used for service access and management operations. Basic VM creation is described in Create and manage virtual machines.

The single-VM use case is shown in Figure 1, which illustrates how the VM can be connected to private networks, a VPN, or the Internet. It can be managed through a bastion (jump) host that acts as a control point toward the DMZ for traffic and user accounts.

Figure 1. Single instance with networking.

On a single VM, it is possible to configure more than one network interface controller (NIC). On a physical server, multiple IP addresses are often assigned to a single NIC, because the physical limitation is the number of NIC card slots in the server. For virtual NICs, no such limitation exists, so a NIC can be added for each IP address. The limitation in the cloud is instead the number of virtual PCI bus addresses. This limit can be reached when a server supports many domains with separate public IP addresses.

In multiple back-end server architectures, a service can be accessed through a load balancer with a public IP address. Basic and more advanced networking options are described in the how-to guides Configure network and Configure LBaaS.
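The idea of fronting several back ends with a single public address can be illustrated conceptually. The sketch below is not the LBaaS implementation — real traffic distribution is handled by the load balancer service — and the back-end IP addresses are made-up examples; it only shows how one public entry point fans requests out in round-robin fashion.

```python
# Conceptual round-robin distribution across back-end servers.
# Real traffic distribution is done by the LBaaS service; this merely
# illustrates one public entry point serving several private back ends.
from itertools import cycle

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # example private IPs
rr = cycle(backends)

def pick_backend():
    """Return the next back end for a request arriving on the public VIP."""
    return next(rr)

print([pick_backend() for _ in range(4)])
# ['10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.11']
```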

Multiple instances within a data center

In planning a VPC topology, an important step is to decide which servers should be located on the same hypervisor (compute) host to keep communication overhead to a minimum, and which servers should be on separate hosts for increased availability and migration flexibility.

Figure 2 shows multiple servers within the same data center. To deploy a high-availability (HA) solution, the user can use anti-affinity rules and specify availability zones, as described in the concepts section Availability.

By supporting an application with multiple servers within the same data center, an HA architecture can be engineered through VM redundancy. For a single RAZ, assigning at least two servers to different site availability zones (SAZs) or to an anti-affinity group results in an availability of over 99%; refer to Table 2 in DT Cloud Services cloud maintenance.
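The redundancy gain can be estimated with a simple independence calculation. This is only a sketch — the per-server availability figure used below is an assumed example, and the contractual numbers are those in Table 2 of the cloud maintenance document — but it shows why two servers in separate zones outperform one:

```python
# Illustrative availability estimate for n redundant, independent servers.
# The per-server availability value is an assumed example, not a
# contractual figure; see Table 2 in DT Cloud Services cloud maintenance.

def combined_availability(per_server: float, n: int) -> float:
    """Probability that at least one of n independent servers is up."""
    return 1 - (1 - per_server) ** n

single = 0.99  # assumed per-server availability (example only)
print(f"{combined_availability(single, 1):.4f}")  # 0.9900
print(f"{combined_availability(single, 2):.4f}")  # 0.9999
```

The estimate assumes failures are independent, which is exactly what placing the servers in different SAZs or an anti-affinity group is meant to approximate.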

Instances are assigned to a SAZ at creation time by passing the OpenStack availability-zone argument. Only compute instances can be assigned to different SAZs; other resources cannot. Alternatively, a server group can be defined as an anti-affinity group to which instances are assigned at creation time. Anti-affinity can be used in combination with SAZ diversity. This is described in Create (anti-)affinity group.
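A minimal sketch of how these placement options come together at creation time is shown below. The helper only assembles the parameters; the actual server-creation call (for example via openstacksdk) is environment-specific and shown commented out, and the zone name and group ID are placeholders, not values from this document.

```python
# Sketch: assembling placement arguments for instance creation.
# Zone and group identifiers are placeholders; the real values come from
# the tenant's zone list and a server group created with the
# anti-affinity policy (see "Create (anti-)affinity group").

def placement_args(name, zone=None, anti_affinity_group_id=None):
    """Build create-server kwargs for SAZ and/or anti-affinity placement."""
    args = {"name": name}
    if zone:
        # The SAZ must be chosen at creation time; it cannot be changed later.
        args["availability_zone"] = zone
    if anti_affinity_group_id:
        # Scheduler hint referencing the anti-affinity server group.
        args["scheduler_hints"] = {"group": anti_affinity_group_id}
    return args

args = placement_args("web-1", zone="zone-a", anti_affinity_group_id="GROUP_ID")
print(args)
# e.g. conn.compute.create_server(image_id=..., flavor_id=..., **args)
```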

Figure 2. Multiple instances in a single data center.

Multiple instances within a region

A service architecture taking advantage of maximum geo-redundancy has multiple instances distributed across a region, deployed in different data centers. These are the dual-RAZ architectures in Table 2 in DT Cloud Services cloud maintenance.

This scenario is illustrated in Figure 3. The internal DT Cloud Services transport network between data centers consists of redundant high-bandwidth (100 Gigabit Ethernet) links. Distributing resources to different data centers may still add some latency, which should be taken into account when selecting data centers.

Figure 3. Multiple instances within a region.

To deploy such an architecture, the user needs to decide, before creating a virtual machine, both the data center (RAZ) and the SAZ in which it is to be allocated.

Each data center has its own control layer (OpenStack deployment), so the data center selection cannot be made automatically with an OpenStack command. DT Cloud Services provides geo-redundant architecture deployment as a service. Peering solutions are further described in Packet backbone network.

Multiple tenants

If the user has multiple tenants, these can be connected through VPC peering (see Figure 4).

Figure 4. Multiple tenants connected through VPC peering.

Each VPC has its own independent networking and PaaS resources. By default, no traffic is allowed between VPCs, and traffic must be permitted separately in each direction. For this, the VPCs must be interconnected at a peering point. This is also the case when VPCs are deployed in different data centers. Peering solutions are described in Packet backbone network.

With peering enabled, a VPC can use another VPC for Internet or VPN connectivity. In all peering solutions, the user must ensure consistent IP addressing, security rules, and traffic filtering across the VPCs.
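One consistency check that can be automated is detecting overlapping subnets between peered VPCs, since overlapping CIDRs would make routing across the peering point ambiguous. The sketch below uses Python's standard ipaddress module; the CIDR ranges are made-up examples, not addressing recommendations.

```python
# Check two peered VPCs for overlapping subnet CIDRs. Overlapping ranges
# would make routing across the peering point ambiguous. The CIDRs below
# are illustrative examples only.
from ipaddress import ip_network

vpc_a = ["10.1.0.0/24", "10.1.1.0/24"]
vpc_b = ["10.1.1.128/25", "10.2.0.0/16"]

def overlapping_pairs(cidrs_a, cidrs_b):
    """Return all (a, b) subnet pairs that overlap between the two VPCs."""
    return [(a, b)
            for a in map(ip_network, cidrs_a)
            for b in map(ip_network, cidrs_b)
            if a.overlaps(b)]

for a, b in overlapping_pairs(vpc_a, vpc_b):
    print(f"overlap: {a} and {b}")
# overlap: 10.1.1.0/24 and 10.1.1.128/25
```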