Terraform EKS Module
Terraform EKS Module Example
I've included the two links above because they're useful references for working with the modules.
.
├── _variables_
│   └── dev
│       ├── common_info.yaml
│       ├── common_tags.yaml
│       ├── eks_cluster_info.yaml
│       └── vpc_info.yaml
├── environments
│   └── dev
│       ├── locals.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── provider.tf
│       └── variables.tf
└── modules
    ├── eks
    │   ├── cluster-role.tf
    │   ├── cluster-sg.tf
    │   ├── cluster.tf
    │   ├── nodegroup-role.tf
    │   ├── nodegroup.tf
    │   └── variables.tf
    └── vpc
        ├── igw.tf
        ├── nat.tf
        ├── outputs.tf
        ├── route.tf
        ├── subnet.tf
        ├── variables.tf
        └── vpc.tf
The source above is uploaded to the EKS Module GitHub repository so it can be cloned and used right away; refer to it if needed.
The architecture created by this code is shown above. Availability zones a, b, and c are used, and each public subnet contains one NAT gateway.
This directory (_variables_) collects the files that define the variables needed to create resources. For now, the dev folder holds the variable values for the development environment; if more environments are added later, just create a folder for each new environment under the _variables_ directory.
# common_info.yaml
env: dev
service_name: test
# common_tags.yaml
Owner: Terraform
Environment: Develop
# eks_cluster_info.yaml
cluster_name: test-eks
cluster_service_ipv4_cidr: 172.16.0.0/16
cluster_version: 1.29
cluster_endpoint_private_access: true
cluster_endpoint_public_access: true
cluster_endpoint_public_access_cidrs: 0.0.0.0/0
cluster_enabled_cluster_log_types: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
nodegroup_name: test-nodegroup
nodegroup_ami_type: AL2_x86_64
nodegroup_capacity_type: ON_DEMAND
nodegroup_disk_size: 20
nodegroup_instance_types: [t3.medium]
nodegroup_labels: node-group
remote_access_key: eks-terraform-key
nodegroup_desired_size: 3
nodegroup_min_size: 3
nodegroup_max_size: 4
# vpc_info.yaml
cidr_block_vpc: 172.21.0.0/16
vpc_name: test-vpc
cidr_blocks_public:
  public_a:
    subnet_name: test-public-a
    cidr_block: 172.21.0.0/22
    availability_zone: ap-northeast-2a
  public_b:
    subnet_name: test-public-b
    cidr_block: 172.21.4.0/22
    availability_zone: ap-northeast-2b
  public_c:
    subnet_name: test-public-c
    cidr_block: 172.21.8.0/22
    availability_zone: ap-northeast-2c
cidr_blocks_private:
  private_a:
    subnet_name: test-private-a
    cidr_block: 172.21.12.0/22
    availability_zone: ap-northeast-2a
  private_b:
    subnet_name: test-private-b
    cidr_block: 172.21.16.0/22
    availability_zone: ap-northeast-2b
  private_c:
    subnet_name: test-private-c
    cidr_block: 172.21.20.0/22
    availability_zone: ap-northeast-2c
cidr_blocks_private_db:
  private_db_a:
    subnet_name: test-private-db-a
    cidr_block: 172.21.24.0/22
    availability_zone: ap-northeast-2a
  private_db_b:
    subnet_name: test-private-db-b
    cidr_block: 172.21.28.0/22
    availability_zone: ap-northeast-2b
  private_db_c:
    subnet_name: test-private-db-c
    cidr_block: 172.21.32.0/22
    availability_zone: ap-northeast-2c
#private_to_public_map:
#  private_a: public_a
#  private_b: public_b
#  private_c: public_c
This directory (environments) is where the actual terraform commands are run. Inside the dev folder you configure things like the provider version and region, and likewise, if additional environments are needed, create them as new folders under this directory.
locals {
  common_info      = yamldecode(file("../../_variables_/dev/common_info.yaml"))
  common_tags      = yamldecode(file("../../_variables_/dev/common_tags.yaml"))
  vpc_info         = yamldecode(file("../../_variables_/dev/vpc_info.yaml"))
  eks_cluster_info = yamldecode(file("../../_variables_/dev/eks_cluster_info.yaml"))
}
This file loads the values defined in the YAML files into the designated local variables.
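Once decoded, each YAML file is just an ordinary Terraform map/object, so nested values can be indexed directly. A minimal illustration (this output name is hypothetical, not part of the repository):

```hcl
# Hypothetical output, for illustration only: reading a nested value
# out of the decoded vpc_info.yaml structure.
output "example_public_a_cidr" {
  value = local.vpc_info.cidr_blocks_public["public_a"].cidr_block
  # => "172.21.0.0/22"
}
```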
module "vpc" {
source = "../../modules/vpc"
common_info = local.common_info
common_tags = local.common_tags
vpc_info = local.vpc_info
eks_cluster_info = local.eks_cluster_info
}
module "eks" {
source = "../../modules/eks"
common_info = local.common_info
common_tags = local.common_tags
vpc_info = local.vpc_info
eks_cluster_info = local.eks_cluster_info
vpc_id = module.vpc.vpc_id
subnets_private_ids = module.vpc.subnets_private_ids
}
The main file that calls each module to create the resources it needs.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  required_version = ">= 0.12"
}

provider "aws" {
  region = "ap-northeast-2"
}
This file configures the provider and version constraints needed to run Terraform.
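Nothing here configures remote state, so state is stored locally by default. For a shared environment, a backend block could sit alongside this file; a minimal sketch, assuming a pre-created S3 bucket and DynamoDB lock table (both names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # hypothetical bucket
    key            = "dev/eks/terraform.tfstate"
    region         = "ap-northeast-2"
    dynamodb_table = "terraform-locks"          # hypothetical lock table
    encrypt        = true
  }
}
```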
variable "common_info" {
description = "common_info"
type = any
default = null
}
variable "common_tags" {
description = "common_tags"
type = any
default = null
}
variable "vpc_info" {
description = "vpc_info"
type = any
default = null
}
variable "eks_cluster_info" {
description = "eks_cluster_info"
type = any
default = null
}
This file declares the variables that main.tf uses to pass the values loaded in locals.tf down into each module directory.
The resources for each AWS service are defined under modules. Only vpc and eks exist for now, but additional modules can be created or modified as needed.
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_info.cidr_block_vpc
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = "${var.common_info.env}-${var.vpc_info.vpc_name}"
}
}
resource "aws_internet_gateway" "internet_gateway" {
vpc_id = aws_vpc.vpc.id
tags = {
"Name" = "${var.common_info.service_name}-igw"
}
}
resource "aws_eip" "eip" {
for_each = var.vpc_info.cidr_blocks_public
tags = {
"Name" = "${var.common_info.service_name}-${each.key}-eip"
}
}
resource "aws_nat_gateway" "nat_gateway" {
for_each = var.vpc_info.cidr_blocks_public
allocation_id = aws_eip.eip[each.key].id
subnet_id = aws_subnet.subnets_public[each.key].id
tags = {
Name = "${var.common_info.service_name}-${each.key}-nat"
}
depends_on = [aws_internet_gateway.internet_gateway]
}
resource "aws_subnet" "subnets_public" {
for_each = var.vpc_info.cidr_blocks_public
vpc_id = aws_vpc.vpc.id
cidr_block = each.value.cidr_block
availability_zone = each.value.availability_zone
tags = merge(
{
Name = "${var.common_info.env}-${each.value.subnet_name}"
}
)
}
resource "aws_subnet" "subnets_private" {
for_each = var.vpc_info.cidr_blocks_private
vpc_id = aws_vpc.vpc.id
cidr_block = each.value.cidr_block
availability_zone = each.value.availability_zone
tags = merge(
{
Name = "${var.common_info.env}-${each.value.subnet_name}"
}
)
}
resource "aws_subnet" "subnets_private_db" {
for_each = var.vpc_info.cidr_blocks_private_db
vpc_id = aws_vpc.vpc.id
cidr_block = each.value.cidr_block
availability_zone = each.value.availability_zone
tags = merge(
{
Name = "${var.common_info.env}-${each.value.subnet_name}"
}
)
}
resource "aws_route_table" "route_table_public" {
vpc_id = aws_vpc.vpc.id
tags = merge(
{
Name = "${var.common_info.env}-${var.common_info.service_name}-public"
},
var.common_tags
)
}
resource "aws_route_table" "route_table_private" {
vpc_id = aws_vpc.vpc.id
for_each = var.vpc_info.cidr_blocks_private
tags = merge(
{
Name = "${var.common_info.env}-${each.value.subnet_name}"
},
var.common_tags
)
}
resource "aws_route_table" "route_table_private_db" {
vpc_id = aws_vpc.vpc.id
tags = merge(
{
Name = "${var.common_info.env}-${var.common_info.service_name}-private-db"
},
var.common_tags
)
}
resource "aws_route" "routes_public" {
route_table_id = aws_route_table.route_table_public.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.internet_gateway.id
}
resource "aws_route" "routes_private" {
count = length(var.vpc_info.cidr_blocks_private)
route_table_id = aws_route_table.route_table_private[keys(var.vpc_info.cidr_blocks_private)[count.index]].id
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat_gateway[keys(var.vpc_info.cidr_blocks_public)[count.index]].id
}
# resource "aws_route" "routes_private" {
# for_each = var.vpc_info.cidr_blocks_private
# route_table_id = aws_route_table.route_table_private[each.key].id
# destination_cidr_block = "0.0.0.0/0"
# nat_gateway_id = lookup(var.vpc_info.private_to_public_map, each.key, null) != null ? aws_nat_gateway.nat_gateway[lookup(var.vpc_info.private_to_public_map, each.key)].id : null
# }
resource "aws_route_table_association" "route_table_association_public" {
for_each = var.vpc_info.cidr_blocks_public
subnet_id = aws_subnet.subnets_public[each.key].id
route_table_id = aws_route_table.route_table_public.id
}
resource "aws_route_table_association" "route_table_association_private" {
for_each = var.vpc_info.cidr_blocks_private
subnet_id = aws_subnet.subnets_private[each.key].id
route_table_id = aws_route_table.route_table_private[each.key].id
}
resource "aws_route_table_association" "route_table_association_private_db" {
for_each = var.vpc_info.cidr_blocks_private_db
subnet_id = aws_subnet.subnets_private_db[each.key].id
route_table_id = aws_route_table.route_table_private_db.id
}
Looking at the aws_route.routes_private resource block above, one unusual thing stands out. It is defined with count = length(var.vpc_info.cidr_blocks_private), so the index takes the values 0 (private_a), 1 (private_b), and 2 (private_c). The NAT gateways, on the other hand, were already created keyed by the public subnets: aws_nat_gateway.nat_gateway["public_a"], ["public_b"], and ["public_c"]. When setting nat_gateway_id, the code uses keys(var.vpc_info.cidr_blocks_public)[count.index] to pick the public subnet key at the same position, selecting the corresponding NAT gateway and using its ID to configure the route.
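This positional matching only works because keys() returns map keys in lexical order, so the two key lists line up index by index. A sketch of what the expressions evaluate to (shown as comments, not output captured from this repository):

```hcl
# keys() sorts map keys lexically, which is what keeps the pairing stable:
#   keys(var.vpc_info.cidr_blocks_private)  =>  ["private_a", "private_b", "private_c"]
#   keys(var.vpc_info.cidr_blocks_public)   =>  ["public_a", "public_b", "public_c"]
# count.index = 0 therefore routes private_a through public_a's NAT gateway,
# and so on. Renaming a key on only one side would silently re-pair the routes,
# which is the risk the commented-out private_to_public_map variant avoids.
```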
==Even after the resources are created, checking them in the console is a must. A NAT gateway once ended up in a private subnet, and troubleshooting that took quite a while.==
variable "common_info" {
description = "common_info"
type = any
default = null
}
variable "common_tags" {
description = "common_tags"
type = any
default = null
}
variable "vpc_info" {
description = "vpc_info"
type = any
default = null
}
variable "eks_cluster_info" {
description = "eks_cluster_info"
type = any
default = null
}
output "vpc_id" {
value = aws_vpc.vpc.id
}
output "subnets_private_ids" {
value = values(aws_subnet.subnets_private)[*].id
}
These values are exposed through output blocks so that main.tf can pass them along when calling the eks module.
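The directory tree at the top also lists an outputs.tf under environments/dev, whose contents are not shown in this post. A minimal sketch of what it might contain, simply re-exporting the module outputs (an assumption, not taken from the repository):

```hcl
# Hypothetical environments/dev/outputs.tf: surface the module outputs at the root.
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "subnets_private_ids" {
  value = module.vpc.subnets_private_ids
}
```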
resource "aws_eks_cluster" "cluster" {
name = var.eks_cluster_info.cluster_name
role_arn = aws_iam_role.master_role.arn
version = var.eks_cluster_info.cluster_version
kubernetes_network_config {
service_ipv4_cidr = var.eks_cluster_info.cluster_service_ipv4_cidr
}
vpc_config {
security_group_ids = [aws_security_group.cluster_sg.id]
subnet_ids = var.subnets_private_ids
endpoint_public_access = var.eks_cluster_info.cluster_endpoint_public_access
endpoint_private_access = var.eks_cluster_info.cluster_endpoint_private_access
}
enabled_cluster_log_types = var.eks_cluster_info.cluster_enabled_cluster_log_types
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.AmazonEKSVPCResourceController,
]
tags = {
Name = var.eks_cluster_info.cluster_name
}
}
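One gap worth noting: cluster_endpoint_public_access_cidrs is defined in eks_cluster_info.yaml but never referenced in cluster.tf, so the public endpoint stays open to 0.0.0.0/0. vpc_config does accept a public_access_cidrs list if you want to wire it up; a sketch, assuming the YAML value is changed to a list:

```hcl
vpc_config {
  security_group_ids      = [aws_security_group.cluster_sg.id]
  subnet_ids              = var.subnets_private_ids
  endpoint_public_access  = var.eks_cluster_info.cluster_endpoint_public_access
  endpoint_private_access = var.eks_cluster_info.cluster_endpoint_private_access
  # public_access_cidrs expects a list, so the YAML entry would need to become
  # cluster_endpoint_public_access_cidrs: ["0.0.0.0/0"] (ideally something tighter).
  public_access_cidrs     = var.eks_cluster_info.cluster_endpoint_public_access_cidrs
}
```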
resource "aws_iam_role" "master_role" {
name = "${var.eks_cluster_info.cluster_name}-master-role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.master_role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.master_role.name
}
resource "aws_security_group" "cluster_sg" {
name = "terraform-eks-cluster"
description = "Cluster communication with worker nodes"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform-eks-cluster"
}
}
resource "aws_security_group_rule" "cluster_sg_rule" {
cidr_blocks = ["0.0.0.0/0"]
description = "Allow workstation to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = aws_security_group.cluster_sg.id
to_port = 443
type = "ingress"
}
resource "aws_eks_node_group" "eks_nodegroup" {
cluster_name = aws_eks_cluster.eks_cluster.name
node_group_name = var.eks_cluster_info.nodegroup_name
node_role_arn = aws_iam_role.nodegroup_role.arn
subnet_ids = var.subnets_private_ids
ami_type = var.eks_cluster_info.nodegroup_ami_type
disk_size = var.eks_cluster_info.nodegroup_disk_size
instance_types = var.eks_cluster_info.nodegroup_instance_types
labels = {
nodegroup-type = var.eks_cluster_info.nodegroup_name
}
scaling_config {
desired_size = var.eks_cluster_info.nodegroup_desired_size
max_size = var.eks_cluster_info.nodegroup_max_size
min_size = var.eks_cluster_info.nodegroup_min_size
}
remote_access {
ec2_ssh_key = var.eks_cluster_info.remote_access_key
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
tags = {
Name = var.eks_cluster_info.nodegroup_name
}
}
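If a cluster autoscaler later manages the node count, desired_size will drift from what Terraform recorded and every plan will try to reset it. A common mitigation (a sketch, not part of the original code) is to ignore that attribute inside the node group resource:

```hcl
# Sketch only: add inside resource "aws_eks_node_group" "eks_nodegroup"
# if an autoscaler owns the node count.
lifecycle {
  ignore_changes = [scaling_config[0].desired_size]
}
```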
resource "aws_iam_role" "nodegroup_role" {
name = "${var.eks_cluster_info.cluster_name}-nodegroup-role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.nodegroup_role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.nodegroup_role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.nodegroup_role.name
}
resource "aws_security_group" "nodegroup_sg" {
name = "${var.eks_cluster_info.cluster_name}-nodegroup-role"
description = "Security group for all nodes in the cluster"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.eks_cluster_info.cluster_name}-nodegroup-role"
}
}
resource "aws_security_group_rule" "nodes" {
description = "Allow nodes to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = aws_security_group.nodegroup_sg.id
source_security_group_id = aws_security_group.nodegroup_sg.id
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "nodes_inbound" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = aws_security_group.nodegroup_sg.id
source_security_group_id = aws_security_group.cluster_sg.id
to_port = 65535
type = "ingress"
}
variable "common_info" {
description = "common_info"
type = any
}
variable "common_tags" {
description = "common_tags"
type = any
}
variable "vpc_info" {
description = "vpc_info"
type = any
}
variable "eks_cluster_info" {
description = "eks_cluster_info"
type = any
default = null
}
variable "vpc_id" {
description = "vpc_id"
type = any
}
variable "subnets_private_ids" {
description = "subnets_private_ids"
type = any
}