Terraform - EKS - practice1

Terraform EKS cluster setup.

First, I tried to write the Terraform files for my cluster without importing any modules (such as terraform-aws-eks).
I realized that creating the cluster itself is easy, but wiring the cluster up to other Kubernetes objects and EKS services involves too many details to handle by hand.

In my case, I tried to create a node group backed by an ASG (Auto Scaling Group). The node group and its EC2 node were created, but the EC2 node could not join the cluster; I suspect the Kubernetes tags and the VPC settings were the cause. After hitting several similar problems, I decided to use 'terraform-aws-eks'.

module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "19.13.1"

    cluster_name    = var.cluster_name
    cluster_version = var.cluster_version
    
    cluster_endpoint_private_access = true
    cluster_endpoint_public_access  = true
    
    cluster_addons = {
        coredns = {
            resolve_conflicts = "OVERWRITE"
        }
        kube-proxy = {}
        vpc-cni = {
            resolve_conflicts = "OVERWRITE"
        }
    }

    vpc_id      = var.vpc_id
    subnet_ids  = var.subnet_ids
    enable_irsa = true
    # create_kms_key = false
    # cluster_encryption_config = ""

    eks_managed_node_groups = var.eks_managed_node_groups
    # create_aws_auth_configmap = true
    manage_aws_auth_configmap = true

    aws_auth_roles = [
        {
            rolearn  = var.eks_access_role_arn
            username = "accessrole"
            groups   = ["system:masters"]
        }
    ]

    node_security_group_additional_rules = {
        egress_all = {
            description = "Node all egress"
            protocol    = "-1"
            from_port   = 0
            to_port     = 0
            type        = "egress"
            cidr_blocks = ["0.0.0.0/0"]
        }
        ingress_self_all = {
            description = "Node to node all ports/protocols"
            protocol    = "-1"
            from_port   = 0
            to_port     = 0
            type        = "ingress"
            self        = true
        }
        ingress_allow_access_from_control_plane = {
            type                          = "ingress"
            protocol                      = "-1"
            from_port                     = 0
            to_port                       = 0
            source_cluster_security_group = true
            description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
        }
    }
}
  • cluster_endpoint_private_access / cluster_endpoint_public_access = true: these control where the EKS API endpoint can be reached from.
    Private - resources inside the VPC, such as EC2 instances.
    Public - the public internet; useful when external access is needed (e.g., management tools). The public side can also be narrowed, as sketched below.
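
If the public endpoint has to stay open, the module also accepts an allow-list of CIDRs. A minimal sketch, assuming a placeholder office range (203.0.113.0/24 is not my real network):

    cluster_endpoint_public_access       = true
    # assumption: replace with your own office/VPN CIDR range
    cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]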

  • cluster_addons : add-ons are optional Kubernetes components that can be deployed alongside the EKS cluster to enhance its functionality.
    CoreDNS: provides DNS resolution for the cluster. resolve_conflicts = "OVERWRITE" means that if the supplied configuration conflicts with an existing CoreDNS configuration, the supplied one overwrites it.
    kube-proxy: maintains the network rules that route and load-balance service traffic within the EKS cluster; an empty block keeps the default settings.
    vpc-cni: provides networking for the cluster; it enables communication between pods and lets them reach other AWS services/resources. Add-on versions can also be pinned, as sketched below.
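
Each entry in cluster_addons can also request a specific or latest compatible version. A small sketch, assuming we simply want the newest compatible builds:

    cluster_addons = {
        coredns = {
            most_recent       = true   # assumption: track the latest compatible version
            resolve_conflicts = "OVERWRITE"
        }
        vpc-cni = {
            most_recent       = true
            resolve_conflicts = "OVERWRITE"
        }
    }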

  • eks_managed_node_groups : defines the managed node groups for the EKS cluster. Managed node groups are Auto Scaling Groups that AWS manages and associates with the EKS cluster.

    eks_managed_node_groups = {
        api_node_group = {
            use_custom_launch_template = false
            instance_types = ["t2.micro"]
            desired_size = 1   # the v19 module expects desired_size, not desired_capacity
            min_size = 1
            max_size = 5
            subnet_ids = module.vpc.private_subnets
        }
    }

This is the node scaling config: we can specify the AMI, instance type, desired number of nodes, min/max node counts, and other spec. A fuller sketch follows below.
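
For instance, a more detailed node group could look like this. This is only a sketch; the AMI type, capacity type, instance type, and label are illustrative values, not what I actually deployed:

    eks_managed_node_groups = {
        api_node_group = {
            use_custom_launch_template = false
            ami_type       = "AL2_x86_64"   # assumption: default Amazon Linux 2 AMI
            capacity_type  = "SPOT"         # assumption: spot instances to reduce cost
            instance_types = ["t3.medium"]
            desired_size   = 2
            min_size       = 1
            max_size       = 5
            labels = {
                role = "api"                # hypothetical node label
            }
            subnet_ids = module.vpc.private_subnets
        }
    }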

  • enable_irsa = true : enables IAM Roles for Service Accounts (IRSA), which allows an IAM role to be associated with a Kubernetes service account in the EKS cluster.
    Pods running under such a service account assume the associated IAM role and can access AWS resources using the AWS SDK/CLI.
    Note that aws_auth_roles is a separate setting: it maps IAM roles into the cluster's aws-auth ConfigMap for Kubernetes API authentication (here var.eks_access_role_arn is mapped to system:masters). An IRSA example is sketched below.
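
As a concrete IRSA example, the companion IAM module can create a role bound to the cluster's OIDC provider. A sketch, assuming the EBS CSI driver's service account; the role name and module version are placeholders:

    module "ebs_csi_irsa" {
        source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
        version = "5.20.0"   # assumption: any recent 5.x release

        role_name             = "ebs-csi-irsa"   # hypothetical role name
        attach_ebs_csi_policy = true

        oidc_providers = {
            main = {
                provider_arn               = module.eks.oidc_provider_arn
                namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
            }
        }
    }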

  • node_security_group_additional_rules: defines additional security group rules for the worker nodes.
    protocol - IP protocol for the rule ("-1" means all protocols).
    from/to_port - the port range the rule covers.
    type - ingress for inbound, egress for outbound.
    self - allows traffic from the same security group, enabling communication between worker nodes in the same SG (inter-node communication).
    source_cluster_security_group - allows traffic from the control plane's security group (control plane to node communication).

Further Improvement
Apply the Cluster Autoscaler for dynamic scaling; a tagging sketch for its auto-discovery follows.
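
The Cluster Autoscaler's auto-discovery finds ASGs by tag. A sketch, assuming the module's eks_managed_node_groups_autoscaling_group_names output, with one aws_autoscaling_group_tag resource per tag key:

resource "aws_autoscaling_group_tag" "cluster_autoscaler_enabled" {
    for_each = toset(module.eks.eks_managed_node_groups_autoscaling_group_names)

    autoscaling_group_name = each.value

    tag {
        key                 = "k8s.io/cluster-autoscaler/enabled"
        value               = "true"
        propagate_at_launch = false
    }
}

resource "aws_autoscaling_group_tag" "cluster_autoscaler_owned" {
    for_each = toset(module.eks.eks_managed_node_groups_autoscaling_group_names)

    autoscaling_group_name = each.value

    tag {
        key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
        value               = "owned"
        propagate_at_launch = false
    }
}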
