Adopting a multicloud strategy using Terraform

Introduction

In this post, let's talk about a multicloud strategy between AWS and GCP that distributes cloud assets, software, applications, and more across several cloud environments. Multicloud is the use of multiple cloud computing and storage services in a single network architecture; that is, using two or more cloud computing resources from any number of different cloud vendors. In this case, we will use AWS and GCP as the cloud vendors and provision the resources with Terraform.

Why use Multicloud Strategy?

A multicloud strategy allows companies to select different cloud services from different providers because some are better suited to certain tasks than others. For example, some cloud platforms specialize in large data transfers or have integrated machine learning capabilities. You can use multiple clouds to test the strengths of each provider and combine them to increase agility and eliminate vendor lock-in. You can also take advantage of the unique capabilities of several cloud providers when you need them, without having to fully migrate to another cloud, which is a complex and time-consuming process. Choosing a multicloud environment can also enhance disaster recovery: if you run mission-critical workloads on only one provider and that provider experiences an outage, your workloads go down with it, while deploying to additional providers makes applications resilient. Deploying applications to multiple providers can also improve network security by reducing the likelihood that a single attack affects the entire infrastructure at once.

Organizations implement a multicloud environment for the following reasons:

  • Choice: The additional choice of multiple cloud environments gives you flexibility and the ability to avoid vendor lock-in.

  • Disaster Avoidance: Outages happen; sometimes it is due to a disaster; other times it is due to human error. Having multiple cloud environments ensures that you always have compute resources and data storage available so you can avoid downtime.

  • Compliance: A multicloud environment can help enterprises achieve their goals for governance, risk management, and regulatory compliance.

A multicloud platform combines the best services that each platform offers, which allows companies to customize an infrastructure that is specific to their business goals. A multicloud architecture also lowers risk: if one web service host fails, a business can continue to operate with the other platforms instead of having everything in one place.

Main Challenges adopting Multicloud Strategies

There are several drawbacks to adopting a multicloud strategy; here are a few common ones:

  • Complexity

  • Cost Control

  • Security

  • Administration

  • Monitoring

The Project

In this project, we are going to provision two servers, each one in a separate cloud vendor: AWS and GCP. Each instance runs a startup script that installs a web server (Apache) to serve a simple web page. The initial idea was to provision a third resource in yet another cloud vendor to balance traffic between the two web servers, but since I already had a domain hosted and managed by Route 53 in AWS, I kept that configuration and balanced the traffic through this resource.

Implementation

If you are not comfortable with Terraform, you can read my post about Terraform Essentials on AWS before reading this one. Here I am going to focus only on the options that matter for provisioning multicloud, not on Terraform essentials.

The first file I created was app-variables.tf, which holds the app's variable declarations, with the following content:

variable "app_name" {
  type        = string
  description = "Application MultiCloud"
  default     = "multicloud"
}
variable "app_environment" {
  type        = string
  description = "Application Environment"
  default     = "prod"
}

These variables will be used in the main file of each cloud provider to identify the resources.

The next two files are variable files that I separated for organizational reasons, one per cloud vendor. See the content:

cat aws-variables.tf

variable "aws_access_key" {
  type        = string
  description = "AWS Access Key"
}
variable "aws_secret_key" {
  type        = string
  description = "AWS Secret Key"
}
variable "aws_region" {
  type        = string
  description = "AWS Region for the VPC"
  default     = "us-east-1"
}
variable "aws_az" {
  type        = string
  description = "AWS AZ"
  default     = "us-east-1a"
}
variable "aws_vpc_cidr" {
  type        = string
  description = "CIDR for the VPC"
  default     = "10.8.0.0/16"
}
variable "aws_subnet_cidr" {
  type        = string
  description = "CIDR for the subnet"
  default     = "10.8.1.0/24"
}

and …

cat gcp-variables.tf

variable "gcp_project" {
  type        = string
  description = "multicloud-project-310818"
}
variable "gcp_auth_file" {
  type        = string
  description = "GCP authentication file"
}
variable "gcp_region" {
  type        = string
  description = "GCP region"
  default     = "us-central1"
}
variable "gcp_zone" {
  type        = string
  description = "GCP zone"
  default     = "us-central1-c"
}
variable "gcp_subnet_cidr" {
  type        = string
  description = "Subnet CIDR"
  default     = "10.10.8.0/24"
}

The next file is one of the variable definitions files that Terraform loads automatically if they are present. It must be named terraform.tfvars. Generally this file is used to load credentials; in this case, I set up both the AWS and GCP credentials. Look:

cat terraform.tfvars

#AWS authentication variables
aws_access_key = "XXXXXXXXXXXXXXXX"
aws_secret_key = "XXXXXXXXXXXXXXXX"
#GCP authentication variables
gcp_project   = "multicloud-project"
gcp_auth_file = "./multicloud-project.json"

Note that the variables declared earlier use these values. The multicloud-project.json file referenced by the gcp_auth_file variable is the service account key file used to authenticate with GCP; you can download it from your GCP project.
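A side note: terraform.tfvars holds plain-text credentials, so it should not be committed to version control. As an alternative, Terraform also reads any environment variable prefixed with TF_VAR_ as a variable value, so the same credentials could be supplied like this (placeholder values, not real keys):

```shell
# Terraform maps TF_VAR_<name> environment variables to the variable <name>,
# so these replace the aws_access_key / aws_secret_key entries in terraform.tfvars
export TF_VAR_aws_access_key="AKIAXXXXXXXXXXXXXXXX"
export TF_VAR_aws_secret_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

This keeps secrets out of the repository entirely, at the cost of having to export them in each shell session (or via your CI system's secret store).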

Next, I separated the data sources used in the main file of each cloud:

cat aws-data.tf

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"

    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"

    values = [
      "amazon",
    ]
  }
  owners = ["amazon"]
}
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

and …

cat gcp-data.tf

data "template_file" "metadata_startup_script" {
  template = file("gcp-user-data.sh")
}

In the AWS case, the main file can then choose which of the two images declared above (Amazon Linux or Ubuntu) will be used.
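For example, switching the AWS instance from Ubuntu to Amazon Linux would be a one-line change to the ami argument in aws-main.tf; this is a hypothetical variation, not part of the project as built:

```hcl
# aws-main.tf currently points the instance at the Ubuntu data source:
#   ami = data.aws_ami.ubuntu.id
# Using the Amazon Linux image instead only requires changing the reference:
ami = data.aws_ami.amazon_linux.id
```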

The next two files contain the scripts that will be executed as soon as each instance is created. I created one file per cloud, so that each instance identifies itself when it is accessed.

cat aws-user-data.sh

#! /bin/bash
sudo apt-get update
sudo apt-get install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2
echo "<h1>MultiCloud - AWS</h1>" | sudo tee /var/www/html/index.html

and …

cat gcp-user-data.sh

#! /bin/bash
sudo apt-get update
sudo apt-get install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2
echo "<h1>MultiCloud - GCP</h1>" | sudo tee /var/www/html/index.html

Now let's write the main file for each cloud. In these files we specify each cloud's provider, the VPC, subnets, internet gateway and route table (AWS), security groups or firewall rules, public IPs and their associations, and the instances.

cat aws-main.tf

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}
resource "aws_vpc" "aws-vpc" {
  cidr_block           = var.aws_vpc_cidr
  enable_dns_hostnames = true
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-vpc"
    Environment = var.app_environment
  }
}
resource "aws_subnet" "aws-subnet" {
  vpc_id            = aws_vpc.aws-vpc.id
  cidr_block        = var.aws_subnet_cidr
  availability_zone = var.aws_az
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-subnet"
    Environment = var.app_environment
  }
}
resource "aws_internet_gateway" "aws-internet-gateway" {
  vpc_id = aws_vpc.aws-vpc.id
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-igw"
    Environment = var.app_environment
  }
}
resource "aws_route_table" "aws-route-table" {
  vpc_id = aws_vpc.aws-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.aws-internet-gateway.id
  }
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-route-table"
    Environment = var.app_environment
  }
}
resource "aws_route_table_association" "aws-route-table-association" {
  subnet_id      = aws_subnet.aws-subnet.id
  route_table_id = aws_route_table.aws-route-table.id
}
resource "aws_security_group" "aws-security-group" {
  name        = "${var.app_name}-${var.app_environment}-web-sg"
  description = "Allow incoming HTTP connections"
  vpc_id      = aws_vpc.aws-vpc.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-security-group"
    Environment = var.app_environment
  }
}
resource "aws_eip" "aws-eip" {
  vpc = true
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-elastic-ip"
    Environment = var.app_environment
  }
}
resource "aws_instance" "aws-web-server" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.aws-subnet.id
  vpc_security_group_ids      = [aws_security_group.aws-security-group.id]
  associate_public_ip_address = true
  source_dest_check           = false
  user_data                   = file("aws-user-data.sh")
  tags = {
    Name        = "${var.app_name}-${var.app_environment}-web-server"
    Environment = var.app_environment
  }
}
resource "aws_eip_association" "aws-eip-association" {
  instance_id   = aws_instance.aws-web-server.id
  allocation_id = aws_eip.aws-eip.id
}

and…

cat gcp-main.tf

provider "google" {
  project     = var.gcp_project
  credentials = file(var.gcp_auth_file)
  region      = var.gcp_region
  zone        = var.gcp_zone
}
resource "google_compute_network" "gcp-vpc" {
  name                    = "${var.app_name}-${var.app_environment}-vpc"
  auto_create_subnetworks = "false"
  routing_mode            = "GLOBAL"
}
resource "google_compute_subnetwork" "gcp-subnet" {
  name          = "${var.app_name}-${var.app_environment}-subnet"
  ip_cidr_range = var.gcp_subnet_cidr
  network       = google_compute_network.gcp-vpc.name
  region        = var.gcp_region
}
resource "google_compute_firewall" "gcp-allow-http" {
  name    = "${var.app_name}-${var.app_environment}-fw-allow-http"
  network = google_compute_network.gcp-vpc.name
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  target_tags = ["http"]
}
resource "google_compute_address" "gcp-web-ip" {
  name    = "${var.app_name}-${var.app_environment}-web-ip"
  project = var.gcp_project
  region  = var.gcp_region
}
resource "google_compute_instance" "gcp-web-server" {
  name         = "${var.app_name}-${var.app_environment}-web-server"
  machine_type = "f1-micro"
  zone         = var.gcp_zone
  tags         = ["http"]
  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }
  metadata_startup_script = data.template_file.metadata_startup_script.rendered
  network_interface {
    network    = google_compute_network.gcp-vpc.name
    subnetwork = google_compute_subnetwork.gcp-subnet.name
    access_config {
      nat_ip = google_compute_address.gcp-web-ip.address
    }
  }
}
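Although not part of the original project, a small outputs.tf would make it easy to see the public IP of each instance after terraform apply; this hypothetical addition only references the resources already defined above:

```hcl
# outputs.tf (optional addition): print both public IPs after apply
output "aws_web_server_ip" {
  description = "Public IP (Elastic IP) of the AWS web server"
  value       = aws_eip.aws-eip.public_ip
}

output "gcp_web_server_ip" {
  description = "Public (static) IP of the GCP web server"
  value       = google_compute_address.gcp-web-ip.address
}
```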

Once the main files were created, it was time to create the resource responsible for balancing traffic between the two instances, one provisioned in AWS and the other in GCP. As I mentioned earlier, ideally this resource would live in yet another cloud vendor to provide more resilience and availability, but the intention here is to show the basic idea of provisioning resources in different clouds with an automation tool such as Terraform. This makes it easier to adapt the setup to real-world implementations in the future.

As you can see, I created two records in my hosted domain in Route 53, using a weighted routing policy. This policy associates multiple resources with a single domain name or subdomain name (multicloud.filipemotta.me) and lets you choose how much traffic is routed to each resource, which is useful for a variety of purposes, including load balancing and testing new versions of software. In this case, we will use it to balance the traffic equally, setting a weight of 1 for each record and setting the TTL to 1 second so that clients re-resolve the name frequently and requests alternate between the two records.

cat aws-route53-data.tf

data "aws_route53_zone" "multicloud" {
  name = "filipemotta.me"
}

and …

cat aws-route53-main.tf

resource "aws_route53_record" "multicloud_aws" {
  zone_id = data.aws_route53_zone.multicloud.zone_id
  name    = "multicloud"
  type    = "A"
  ttl     = "1"

  weighted_routing_policy {
    weight = 1
  }

  set_identifier = "AWS"
  records        = [aws_eip.aws-eip.public_ip]
}

resource "aws_route53_record" "multicloud_gcp" {
  zone_id = data.aws_route53_zone.multicloud.zone_id
  name    = "multicloud"
  type    = "A"
  ttl     = "1"

  weighted_routing_policy {
    weight = 1
  }

  set_identifier = "GCP"
  records        = [google_compute_address.gcp-web-ip.address]
}
Note that Terraform refers to the public address differently in AWS and GCP in the records option of each record.

Now it's time to create the resources with a few Terraform commands. Let's go:

terraform fmt 
terraform init 
terraform validate
terraform apply

If everything goes well, one web server is created in AWS, another in GCP, and two records are registered in Route 53 called multicloud.filipemotta.me (in my case). Now you can access the URL http://multicloud.filipemotta.me and the traffic will be balanced between the two clouds.

Resources Added

Multicloud AWS

Multicloud GCP

Obviously, you can build your multicloud setup for different purposes, such as failover, among many others; as I mentioned before, the intention here is to show the basics of the implementation.

I hope this post was useful.