
Terraform Essentials for AWS

Introduction

The main goal of this post is to show how to build infrastructure on AWS with Terraform, using Infrastructure as Code. I’ll walk through the main features needed to provision some services on AWS with basic Terraform.

Pre Requisites

You must have an AWS account

You must be able to configure your AWS account with AWS credentials

You must have Terraform installed on your OS

It is a good idea to have the AWS CLI installed

Credentials

There are several ways to supply AWS credentials to Terraform, so I’ll explain some of them.

Credentials
“First, hard-coded credentials are not recommended in any Terraform configuration and risks secret leakage should this file ever be committed to a public version control system.”

So the first recommended approach is to configure your AWS credentials so that they are stored in your home directory, using the aws configure command.

Usage:

aws configure

The command will prompt you to enter your AWS Access Key ID and Secret Access Key, which you can retrieve from the AWS console under your IAM user’s security credentials. The file generated by the CLI for a default profile configured with aws configure looks similar to the following (on Unix-based systems):

~/.aws/credentials

[default]
aws_access_key_id=XXXXXXXXXX
aws_secret_access_key=XXXXXX

Another way to provide credentials is through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, representing your AWS access key and AWS secret key, respectively.

Usage:

$ export AWS_ACCESS_KEY_ID="XXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="XXXXXXXX"

You can optionally specify a different location in the Terraform configuration by providing the shared_credentials_file argument or using the AWS_SHARED_CREDENTIALS_FILE environment variable. This method also supports a profile configuration and matching AWS_PROFILE environment variable:

Usage:

provider "aws" {
 region                  = "us-west-2"
 shared_credentials_file = "/Users/tf_user/.aws/creds"
 profile                 = "customprofile"
}

There are many other ways to provide credentials, but keep in mind that the most secure option is an IAM instance profile backed by an IAM role. This is the preferred approach when Terraform runs on EC2, as you avoid hard-coding credentials; instead, they are leased on the fly by Terraform, which reduces the chance of leakage.
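
As a rough sketch of what this looks like (assuming Terraform itself runs on an EC2 instance that already has an instance profile attached), the provider block then needs nothing more than a region, because temporary credentials are picked up automatically:

provider "aws" {
  # No access_key / secret_key here: when running on EC2 with an attached
  # IAM instance profile, temporary credentials are obtained automatically
  # from the instance metadata service.
  region = "us-west-2"
}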

Terraform configuration files

Let’s start with the set of configuration files used to describe infrastructure in Terraform. The idea is to build our infrastructure little by little, so let’s begin by setting up our provider and a resource to launch our first EC2 instance.

The first thing to do is to create a directory to store the configuration files. In my case I created the following:

mkdir my-first-terraform-deployed ; cd my-first-terraform-deployed 

So let’s create the file aws-main.tf in this directory with the following content and save it. Terraform loads all files in the working directory that end in .tf.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region = var.aws_region
}
resource "aws_instance" "First_EC2" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t2.micro"

  tags = {
    Name = var.app_environment
  }
}
data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"

    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"

    values = [
      "amazon",
    ]
  }
  owners = ["amazon"]
}

Note that I have referenced variables in the main file (var.*). Input variables are another way to pass data without hard-coding credentials in your code, which improves security, and they help you organize your code into modules, with resource data kept in separate files. Input variables serve as parameters for a Terraform module, allowing aspects of the module to be customized without altering the module’s own source code, and allowing modules to be shared between different configurations. In my case I created the file aws-variable.tf to declare the variables that will be used to provision the resources above.

#AWS authentication variables
variable "aws_access_key" {
  type = string
  description = "AWS Access Key"
}
variable "aws_secret_key" {
  type = string
  description = "AWS Secret Key"
}
#AWS Region
variable "aws_region" {
  type = string
  description = "AWS Region"
  default = "eu-west-1"
}
#Define application environment
variable "app_environment" {
  type = string
  description = "Application Environment"
  default = "prod"
}

In another file you should specify the values of the variables used in aws-main.tf. I created the file terraform.tfvars to set their values in a variable definitions file (with a filename ending in either .tfvars or .tfvars.json). Terraform also automatically loads a number of variable definitions files if they are present. Please be careful to keep sensitive files like this out of your version control (a small .gitignore sketch follows the example below).

terraform.tfvars

#AWS authentication variables
aws_access_key = "XXXXXXXXXXXX"
aws_secret_key = "XXXXXXXXXXXX"

Note that I have only specified the two credential variables; this is because the other variables have default values in their declarations.
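
Since terraform.tfvars now holds real credentials, a minimal way to keep it (and the local Terraform working directory) out of version control is a .gitignore along these lines; treat it as a sketch, not an exhaustive list:

.gitignore

# variable definitions files that contain credentials
terraform.tfvars
*.auto.tfvars

# local working directory with downloaded provider plugins
.terraform/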

Terraform loads variables in the following order, with later sources taking precedence over earlier ones:

  • Environment variables

  • The terraform.tfvars file, if present.

  • The terraform.tfvars.json file, if present.

  • Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.

  • Any -var and -var-file options on the command line, in the order they are provided. (This includes variables set by a Terraform Cloud workspace.) A short example follows below.
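
For example (just to illustrate the last item, with an assumed file name prod.tfvars), you can override a single variable or point Terraform at an extra definitions file on the command line:

terraform plan -var="aws_region=us-east-1"
terraform plan -var-file="prod.tfvars"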

Finally, let’s talk about the main file (aws-main.tf).

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

The terraform {} block is required so Terraform knows which provider to download from the Terraform Registry. In the configuration above, the aws provider’s source is defined as hashicorp/aws which is shorthand for registry.terraform.io/hashicorp/aws.

You can also assign a version to each provider defined in the required_providers block. The version argument is optional, but recommended. It is used to constrain the provider to a specific version or a range of versions in order to prevent downloading a new provider that may possibly contain breaking changes. If not specified, Terraform will automatically download the most recent provider during initialization.
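
As an illustration of the constraint syntax (not part of the files in this post), the version argument also accepts explicit ranges; the range below is roughly what "~> 3.27" expands to:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # allow any 3.x release from 3.27.0 onwards, but never 4.0 or later
      version = ">= 3.27.0, < 4.0.0"
    }
  }
}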

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region = var.aws_region
}

The provider block configures the named provider, in our case aws, which is responsible for creating and managing resources. A provider is responsible for understanding API interactions and exposing resources.

AWS Credentials
If you leave out your AWS credentials, Terraform will automatically search for saved API credentials (for example, in ~/.aws/credentials) or IAM instance profile credentials. This is cleaner when .tf files are checked into source control or when there is more than one admin user.

In my case, I specified the values of these variables in the terraform.tfvars file in the step above.

The following block is the resource that defines a piece of infrastructure. A resource might be a physical component such as an EC2 instance, a VPC, a security group, or something else.

resource "aws_instance" "First_EC2" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t2.micro"

  tags = {
    Name = var.app_environment
  }
}
data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"

    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"

    values = [
      "amazon",
    ]
  }
  owners = ["amazon"]
}

The resource block has two strings before the block: the resource type and the resource name. In the example, the resource type is aws_instance and the name is First_EC2. The prefix of the type maps to the provider. In our case “aws_instance” automatically tells Terraform that it is managed by the “aws” provider. The arguments for the resource are within the resource block. The arguments could be things like machine sizes, disk images, Security Groups, VPC, Elastic IP and so on.

Note the special kind of value used in the ami argument: a data source. A data source is accessed via a special kind of resource known as a data resource, declared using a data block. For didactic purposes, I placed the data block in the same file as the resources, but you can put it in a separate .tf file, which will also be loaded from the directory. A data block requests that Terraform read from a given data source (“aws_ami”) and export the result under the given local name (“amazon_linux”). The data source and name together serve as an identifier for a given resource and so must be unique within a module, as you can see in data.aws_ami.amazon_linux.id. Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available.
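
As a small aside (these output blocks are not part of the files above, they just illustrate the same reference syntax), you could expose the resolved AMI ID and the instance’s public IP as outputs, which Terraform prints after an apply:

# example outputs referencing the data source and the resource above
output "resolved_ami_id" {
  description = "AMI ID selected by the aws_ami data source"
  value       = data.aws_ami.amazon_linux.id
}

output "instance_public_ip" {
  description = "Public IP of the First_EC2 instance"
  value       = aws_instance.First_EC2.public_ip
}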

Initialize the directory

Now it’s time to initialize the directory. The terraform init command is used when you create a new configuration. Terraform uses a plugin-based architecture to support hundreds of infrastructure and service providers. Initializing a configuration directory downloads and installs the providers used in the configuration, in this case the aws provider, as shown in the image below:

terraform init

Terraform downloads the aws provider and installs it in a hidden subdirectory of the current working directory.
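
A quick way to check this yourself (assuming Terraform 0.13 or later, which the hashicorp/aws source syntax above already requires) is to list the hidden directory after initialization:

terraform init

# the downloaded provider plugin is cached under the hidden .terraform/ directory
ls .terraform/providers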

Format and validate the configuration files

It is a good idea to format the files to the standard Terraform style. The terraform fmt command automatically updates configurations in the current directory for readability and consistency.

Usage:

terraform fmt

Terraform will return the names of any files it reformats. If your configuration files were already formatted correctly, it won’t return any file names.

It is also useful to check consistency and report errors within modules, attribute names, and value types using the terraform validate command.

Usage:

terraform validate
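
A few optional flags can be handy here (standard Terraform CLI flags, shown only as a sketch): -check reports unformatted files without rewriting them, and -recursive also formats subdirectories.

# report files that need formatting without changing them (useful in CI)
terraform fmt -check -diff

# format this directory and any subdirectories
terraform fmt -recursive

# check syntax and internal consistency of the configuration
terraform validate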

Plan before you apply

Now that your configuration files are defined, you need to see what the plan is for this configuration change. The terraform plan command determines what actions are necessary to achieve the desired state specified in the configuration files. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations, without making any changes to real resources or to the state. By running terraform plan, you can verify what the output will actually look like. Output is truncated for readability.

terraform plan

The output format is similar to the diff format generated by tools such as Git. The output has a + next to aws_instance.First_EC2, meaning that Terraform will create this resource. Beneath that, it shows the attributes that will be set. When the value displayed is (known after apply), it means that the value won’t be known until the resource is created, because it was not set in your configuration file.
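
One optional refinement (using standard plan and apply arguments, sketched here): you can save the plan to a file and later apply exactly that plan, which guarantees that what you reviewed is what gets applied.

# write the execution plan to a local file
terraform plan -out=tfplan

# apply exactly the saved plan (no further approval prompt)
terraform apply tfplan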

Applying Your Configuration

If everything looks good, go ahead and actually apply your configuration changes and provision your EC2 instance by using the command terraform apply. As you can see from the following output, Terraform shows what actions are taking place and any errors that occur. Output is truncated for readability.

terraform apply

Inspect state

When you applied your configuration, Terraform wrote data into a file called terraform.tfstate. This file now contains the IDs and properties of the resources Terraform created so that it can manage or destroy those resources going forward.

You can inspect the current state via this command:

usage:

terraform show

This command reads terraform.tfstate. You can share this state file with trusted team members so they can collaborate on and manage your infrastructure.
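
Beyond terraform show, a couple of related state commands (standard Terraform CLI, listed here as a sketch) are useful for inspection:

# list the resources tracked in the state
terraform state list

# show the attributes of a single resource from the state
terraform state show aws_instance.First_EC2

# render the whole state in human-readable form
terraform show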

Change your Infrastructure

We know that infrastructure is continuously evolving and Terraform was built to help manage and enact that change. As you change Terraform configurations, Terraform builds an execution plan that only modifies what is necessary to reach your desired state.

It is a good strategy to version-control not only your configurations but also your state, so you can see how the infrastructure evolved over time.

aws-main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}
resource "aws_instance" "Second_EC2" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.aws-web-sg.id]

  tags = {
    Name = var.app_environment
  }
}
#Define the security group for Second EC2 Instance
resource "aws_security_group" "aws-web-sg" {
  name = "${var.app_environment}-web-sg"
  description = "Allow incoming HTTP connections"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "${var.app_environment}-web-sg"
    Environment = var.app_environment
  }
}

Note that, for didactic purposes, I have removed from this file the data block I created previously. I created a separate file for it called datas.tf with the following content:

datas.tf

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"

    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"

    values = [
      "amazon",
    ]
  }
  owners = ["amazon"]
}
data "aws_ami" "ubuntu" {
    most_recent = true

    filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
    }

    filter {
        name   = "virtualization-type"
        values = ["hvm"]
    }

    owners = ["099720109477"] # Canonical
}

So I modified aws-main.tf in order to create a new EC2 instance with an Ubuntu image and a security group that allows incoming HTTP connections only. Note that I changed the resource’s name, so, as you will see, Terraform will destroy the Amazon Linux EC2 instance and create a new one. If I had changed only an in-place attribute such as the tags, for example, the existing instance would simply have been modified.

terraform plan

terraform plan

The execution plan shows a summary of what will be created, destroyed, and updated for you. Terraform handles these details, and the execution plan makes it clear what it will do. The prefix -/+ means that Terraform will destroy and recreate the resource rather than updating it in place, while attributes that can be updated in place are shown with the ~ prefix.

As we have seen, all that is left is to apply these changes.

terraform apply

Destroy your Infrastructure

The simple but powerful terraform destroy command terminates the resources defined in your Terraform configuration. This command is the reverse of terraform apply in that it terminates all the resources specified by the configuration.

terraform destroy

terraform destroy
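
Two optional refinements worth knowing (standard CLI flags, sketched here): you can preview the destruction first, and you can limit it to a single resource with -target, for example the Second_EC2 instance from this post.

# preview what would be destroyed without touching anything
terraform plan -destroy

# destroy everything defined in the configuration (asks for confirmation)
terraform destroy

# destroy only one resource
terraform destroy -target=aws_instance.Second_EC2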

In this post you saw best practices for storing your credentials when using AWS, how to initialize your infrastructure with Terraform, and how to create, change, and destroy it using input variables, along with some good practices for working with them.

There are many, many features in each resource that can be explored according to your needs, but it is impossible to show all of them here. Terraform applies default values for arguments that are not set.

I really hope this article will be useful for you.