Introduction
Have you ever had to share an AWS environment with team members or other teams? I know I have, and the frustration of having to coordinate deployments, being blocked, or worse, having your infrastructure deleted mid-way through testing by a rogue colleague is something I spent a while trying to avoid. After experimenting with setting up local-first development environments, I think I have found an elegant way of running the AWS cloud locally that I would like to share.
Prerequisites
The following tooling is required:
- LocalStack - local emulator for AWS services.
- Docker & Docker Compose - a runtime environment for LocalStack (the simplest way to run it).
- Terraform or OpenTofu - a tool to provision our infrastructure (other IaC tools such as Pulumi or CloudFormation should also work but may require adjustments).
- tflocal - a CLI wrapper around Terraform that points it at LocalStack (other IaC tooling has its own equivalents).
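tflocal is distributed as part of the terraform-local Python package, so one common way to install it (assuming a working Python and pip installation) is:

# Install the tflocal wrapper (assumes Python and pip are available)
pip install terraform-local

# Verify the installation (arguments are forwarded to Terraform)
tflocal --version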
Implementation
Objectives
To provide the best developer experience possible, here are the requirements for this local development environment:
- The developer should be able to simulate the AWS infrastructure necessary for the project locally.
- The developer should be able to deploy the same infrastructure to AWS directly without any conflicts with the local environment.
Repository Structure
File Structure
To achieve the objective of independent deployments both locally and to the cloud, the following repository structure has been set up:
project-repository/
├── server/
├── terraform/
│   ├── local/
│   │   └── main.tf
│   ├── cloud/
│   │   └── main.tf
│   └── modules/
│       └── infrastructure/
│           ├── dynamodb.tf
│           └── s3.tf
└── docker-compose.yml
- terraform - stores all of the infrastructure code.
- server - an example directory for any application that leverages the infrastructure.
- docker-compose.yml - a file that defines the LocalStack container to be run (using docker-compose instead of docker enables the developer to extend the local development environment with more containers for other services, such as local databases or proxies). A minimal example is shown below.
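A minimal sketch of what the docker-compose.yml could look like is shown below. The service name, image and SERVICES list are assumptions based on the standard LocalStack container setup and should be adapted to your project:

# docker-compose.yml (minimal sketch)
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"            # single edge port used by all emulated AWS services
    environment:
      - SERVICES=s3,dynamodb   # only start the services this project needs
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"  # lets LocalStack manage helper containers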
Double Provider Setup (Terraform specific)
The terraform directory has three sub-directories - local, cloud and modules/infrastructure - with the following responsibilities:
local - holds the terraform configuration for LocalStack, i.e. terraform state is stored on disk using a local backend.
# local/main.tf
terraform {
  backend "local" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.56.1"
    }
  }
}

provider "aws" {
  region     = "eu-west-2"
  access_key = "test"
  secret_key = "test"

  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3 = "http://s3.localhost.localstack.cloud:4566"
  }
}

module "infrastructure" {
  source = "../modules/infrastructure"
  # ...module input variables go here
}
cloud - holds the terraform configuration for AWS, i.e. terraform state is stored in an S3 bucket using a remote backend.
# cloud/main.tf
terraform {
  backend "s3" {
    bucket = "remote-bucket-name"
    key    = "key/for/statefile"
    region = "eu-west-2"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.56.1"
    }
  }
}

provider "aws" {
  region = "eu-west-2"
}

module "infrastructure" {
  source = "../modules/infrastructure"
  # ...module input variables go here
}
modules/infrastructure - holds the terraform configuration for the infrastructure used by the application. The modules directory is the conventional location for terraform modules, and in this case the infrastructure module is shared by both the local and cloud setups.
# modules/infrastructure/s3.tf
resource "aws_s3_bucket" "app_content_bucket" {
  bucket = "bucket-name"
}

# modules/infrastructure/dynamodb.tf
resource "aws_dynamodb_table" "example" {
  name         = "table-name"
  hash_key     = "hashKey"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "hashKey"
    type = "S"
  }
}
Abstracting the desired infrastructure into a module enables independent deployments based on which provider configuration is used. Terraform workspaces can also be used for more complex, multi-environment setups.
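As a small illustration of how the two roots can diverge without duplicating resources, the module could expose input variables (the variable name below is hypothetical) that local and cloud set to different values:

# modules/infrastructure/variables.tf (hypothetical input)
variable "bucket_name" {
  description = "Name of the application content bucket"
  type        = string
  default     = "bucket-name"
}

# modules/infrastructure/s3.tf would then reference var.bucket_name,
# and each root can override it when calling the module:
# module "infrastructure" {
#   source      = "../modules/infrastructure"
#   bucket_name = "bucket-name-local"
# }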
Below is a diagram visualising how this project setup works:
Steps to Run
Deploy to LocalStack
- Start LocalStack by running docker-compose up -d (while at the root of the repository).
- Navigate to the terraform/local directory.
- (On the initial run) Initialise terraform using the tflocal init command.
- Check the infrastructure to be provisioned using tflocal plan.
- Provision the infrastructure to LocalStack using tflocal apply (use the -auto-approve flag to skip the manual approval step).
- Point all of the AWS service clients in your application to LocalStack. An example, written in TypeScript, is shown below.
import { S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: "http://localstack:4566", // LocalStack edge endpoint (container hostname on the compose network)
  region: "eu-west-2",
  credentials: {
    accessKeyId: "test",
    secretAccessKey: "test",
  },
  forcePathStyle: true, // Required for LocalStack when using S3
});
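To confirm the wiring end to end, a quick smoke test against the emulated bucket (the bucket name comes from the module above) might look like this:

import { PutObjectCommand } from "@aws-sdk/client-s3";

// Write a test object to the bucket provisioned in modules/infrastructure/s3.tf
async function smokeTest() {
  await client.send(
    new PutObjectCommand({
      Bucket: "bucket-name",
      Key: "hello.txt",
      Body: "hello from LocalStack",
    })
  );
  console.log("Object written to LocalStack S3");
}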
IMPORTANT: Environment variables can be used to conditionally point the application to either LocalStack or the real AWS services; a full setup for this is not covered by this article.
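Purely as an illustration of the idea, a minimal sketch could look like the following (the AWS_ENDPOINT_URL variable name is an assumption, not something this setup mandates):

import { S3Client } from "@aws-sdk/client-s3";

// Assumed convention: set AWS_ENDPOINT_URL=http://localstack:4566 locally
// and leave it unset when running against real AWS.
const endpoint = process.env.AWS_ENDPOINT_URL;

const s3 = new S3Client({
  region: "eu-west-2",
  ...(endpoint
    ? {
        endpoint,
        forcePathStyle: true,
        credentials: { accessKeyId: "test", secretAccessKey: "test" },
      }
    : {}),
});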
Deploy to AWS
- Make sure AWS credentials are set up on the machine or an appropriate IAM role is assumed.
- Navigate to the terraform/cloud directory.
- (On the initial run) Initialise terraform using the terraform init command.
- Check the infrastructure to be provisioned using terraform plan.
- Provision the infrastructure to AWS using terraform apply (use the -auto-approve flag to skip the manual approval step).
Wrap-up
Summary
In this article we have covered a simple, developer-friendly way of emulating AWS infrastructure locally using Terraform and LocalStack.
Next Steps
I invite you to use and extend this setup in your own projects; a starter repository is available here: local-environment-terraform-localstack - GitHub.
I hope this will save you countless hours of development time and ease some of the frustration when developing cloud-native AWS applications. Thank you for reading and best of luck!