blog.maisumvictor.dev
Building Reusable Terraform Modules for Kubernetes Infrastructure
When managing multiple Kubernetes clusters across environments, copy-pasting Terraform code quickly becomes a nightmare. Let’s explore how to build reusable modules that scale.
The Problem with Monolithic Configs
Most teams start with a single main.tf file that grows into an unmaintainable mess. Here’s what NOT to do:
# DON'T DO THIS
resource "aws_eks_cluster" "main" {
  name     = "my-cluster"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = ["subnet-1", "subnet-2", "subnet-3"]
  }

  # ... 500 more lines
}
Module Structure
A well-structured module separates concerns:
terraform-eks-module/
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
├── modules/
│   ├── vpc/
│   ├── eks/
│   └── node-groups/
└── examples/
    ├── basic/
    └── complete/
The VPC Module
# modules/vpc/main.tf
locals {
  azs = slice(data.aws_availability_zones.available.names, 0, 3)
}

data "aws_availability_zones" "available" {
  state = "available"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.cluster_name}-vpc"
  cidr = var.vpc_cidr

  azs             = local.azs
  private_subnets = [for i, az in local.azs : cidrsubnet(var.vpc_cidr, 4, i)]
  public_subnets  = [for i, az in local.azs : cidrsubnet(var.vpc_cidr, 4, i + 3)]

  enable_nat_gateway   = true
  single_nat_gateway   = var.environment == "dev"
  enable_dns_hostnames = true
  enable_dns_support   = true

  # Tags required for EKS subnet discovery
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb"           = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb"                    = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  tags = var.tags
}
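The sub-module above references several input variables. A minimal variables.tf to back it might look like this (the names match the references above; the descriptions and the default CIDR are illustrative assumptions):

```hcl
# modules/vpc/variables.tf
variable "cluster_name" {
  description = "Name of the EKS cluster; used to derive the VPC name and subnet tags"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC (default is an illustrative assumption)"
  type        = string
  default     = "10.0.0.0/16"
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
}

variable "tags" {
  description = "Tags applied to all resources"
  type        = map(string)
  default     = {}
}
```

Note the subnet math: with a /16 VPC CIDR, cidrsubnet(var.vpc_cidr, 4, i) carves out /20 blocks, so the three private subnets get indices 0-2 and the three public subnets get indices 3-5, and the ranges never overlap.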
The EKS Module
# modules/eks/main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = var.cluster_name
  cluster_version = var.kubernetes_version

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  # EKS Managed Node Groups
  eks_managed_node_groups = {
    general = {
      desired_size = var.node_desired_size
      min_size     = var.node_min_size
      max_size     = var.node_max_size

      instance_types = var.node_instance_types
      capacity_type  = var.node_capacity_type

      labels = {
        workload = "general"
      }

      taints = var.node_taints

      update_config = {
        max_unavailable_percentage = 25
      }
    }
  }

  # Fargate profiles for serverless workloads
  fargate_profiles = var.enable_fargate ? {
    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  } : {}

  # Cluster addons
  cluster_addons = {
    coredns = {
      most_recent = true
      # Only pin CoreDNS to Fargate when the kube-system profile exists,
      # otherwise CoreDNS would have nowhere to schedule
      configuration_values = var.enable_fargate ? jsonencode({
        computeType = "Fargate"
      }) : null
    }
    kube-proxy = { most_recent = true }
    vpc-cni    = { most_recent = true }
  }

  tags = var.tags
}
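To wire the sub-modules together and let callers configure kubectl or IRSA, the EKS sub-module should re-export a few outputs. A sketch (the output names follow terraform-aws-modules/eks v19 conventions, but verify them against the exact version you pin):

```hcl
# modules/eks/outputs.tf
output "cluster_name" {
  value = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "API server endpoint, used to configure kubectl and providers"
  value       = module.eks.cluster_endpoint
}

output "cluster_certificate_authority_data" {
  description = "Base64-encoded CA certificate for the cluster"
  value       = module.eks.cluster_certificate_authority_data
}

output "oidc_provider_arn" {
  description = "IAM OIDC provider ARN, needed for IRSA"
  value       = module.eks.oidc_provider_arn
}
```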
Using the Module
# examples/complete/main.tf
module "eks_cluster" {
  source = "../.."

  cluster_name       = "production-eks"
  environment        = "prod"
  vpc_cidr           = "10.0.0.0/16"
  kubernetes_version = "1.29"

  node_desired_size   = 3
  node_min_size       = 2
  node_max_size       = 10
  node_instance_types = ["m6i.large", "m6i.xlarge"]
  node_capacity_type  = "SPOT"

  enable_fargate = true

  tags = {
    Environment = "production"
    Team        = "platform"
    ManagedBy   = "terraform"
  }
}
Key Takeaways
- Separate concerns: VPC, EKS, and node groups as distinct modules
- Use variables liberally: Make everything configurable
- Provide sensible defaults: Don’t force users to specify everything
- Document with examples: Show, don’t just tell
- Version your modules: Use Git tags for releases
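Once the module lives in its own repository with tagged releases, consumers can pin an exact version through a Git source instead of a relative path (the repository URL and tag below are placeholders):

```hcl
module "eks_cluster" {
  # ?ref pins the module to a specific Git tag
  source = "git::https://github.com/your-org/terraform-eks-module.git?ref=v1.2.0"

  cluster_name = "production-eks"
  # ... remaining inputs as in the example above
}
```

Bumping the tag in `ref` is then an explicit, reviewable change, so an upstream edit to the module can never silently alter a consumer's plan.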
The complete module is available in my GitHub repository.