
Week 1 - EKS Introduction and Deployment

by gaji3 2026. 3. 19.

Deploying and Verifying Amazon EKS

  • Prepare your local PC for the hands-on labs (macOS)
1) Install the AWS CLI and configure IAM (principal) credentials
# Install aws cli
brew install awscli
aws --version

# Configure IAM (principal) credentials
aws configure
AWS Access Key ID : <enter your access key>
AWS Secret Access Key : <enter your secret key>
Default region name : ap-northeast-2

# Verify
aws sts get-caller-identity

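If you work with multiple AWS accounts, the same setup also works with a named profile instead of the default credentials (the profile name aews-study is just an example):

# (Optional) Keep the lab credentials in a named profile
aws configure --profile aews-study
aws sts get-caller-identity --profile aews-study

# Or make the profile the default for the current shell session
export AWS_PROFILE=aews-study
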
=============================================
2) (Reference) Create an EC2 key pair with the AWS CLI
# Create a basic key pair (writes a .pem file)
aws ec2 create-key-pair \
  --key-name my-keypair \
  --query 'KeyMaterial' \
  --output text > my-keypair.pem

# Set file permissions
chmod 400 my-keypair.pem

# Verify
aws ec2 describe-key-pairs --key-names my-keypair

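If you already have a local SSH key, importing its public half is an alternative to creating a new pair (the path below assumes the usual default key location):

# (Alternative) Register an existing public key as an EC2 key pair
aws ec2 import-key-pair \
  --key-name my-keypair \
  --public-key-material fileb://~/.ssh/id_rsa.pub
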
=============================================
3) Install essential k8s management tools
# Install kubectl
brew install kubernetes-cli
kubectl version --client=true

# Install Helm
brew install helm
helm version

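Since the shell used here is zsh, this is also a good point to enable kubectl completion (a minimal sketch, assuming compinit is already initialized in your ~/.zshrc):

# kubectl autocompletion for zsh
echo 'source <(kubectl completion zsh)' >> ~/.zshrc
source ~/.zshrc
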
=============================================
4) (Recommended) Install useful k8s management tools
# Install krew
brew install krew

# Install k9s
brew install k9s

# Install kube-ps1
brew install kube-ps1

# Install kubectx
brew install kubectx


# Highlight kubectl output
brew install kubecolor
echo "alias k=kubectl" >> ~/.zshrc
echo "alias kubectl=kubecolor" >> ~/.zshrc
echo "compdef kubecolor=kubectl" >> ~/.zshrc

# krew path: add the line below to ~/.zshrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

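kube-ps1 and krew do nothing until they are wired in; hooking the prompt and installing a first plugin looks roughly like this (the kube-ps1 path follows Homebrew's layout, and the plugin name is just an example):

# Show the current context/namespace in the zsh prompt via kube-ps1
setopt PROMPT_SUBST
source "$(brew --prefix kube-ps1)/share/kube-ps1.sh"
PROMPT='$(kube_ps1)'$PROMPT

# Install a kubectl plugin through krew (e.g. neat, which trims noisy YAML)
kubectl krew install neat
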
=============================================
Install Terraform with tfenv (recommended) - [link](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started)

# Install tfenv
brew install tfenv

# List installable versions
tfenv list-remote

# Install a specific Terraform version
tfenv install 1.14.6

# Use a specific Terraform version
tfenv use 1.14.6

# List versions installed via tfenv
tfenv list

# Check the Terraform version
terraform version

# Autocompletion
terraform -install-autocomplete

## Note: the lines below get appended to .zshrc
cat ~/.zshrc
autoload -U +X bashcompinit && bashcompinit
complete -o nospace -C /usr/local/bin/terraform terraform
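
tfenv can also pin the version per project, so tfenv use isn't needed in every new shell (assuming terraform is run from the project root):

# Pin the Terraform version for this project
echo "1.14.6" > .terraform-version
terraform version   # tfenv resolves the version from the file automatically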

Verify the actual applied values

1) Install the AWS CLI and configure IAM (principal) credentials
# Install aws cli
brew install awscli
aws --version
aws-cli/2.13.32 Python/3.11.6 Darwin/22.2.0 exe/x86_64 prompt/off

# Configure IAM (principal) credentials
aws configure
AWS Access Key ID : <enter your access key>
AWS Secret Access Key : <enter your secret key>
Default region name : ap-northeast-2

# Verify
aws sts get-caller-identity
{
    "UserId": "AIxxxxxxxxxxx",
    "Account": "123123123123",
    "Arn": "arn:aws:iam::123123123123:user/testuser"
}
=============================================
2) (Reference) Create an EC2 key pair with the AWS CLI
# Create a basic key pair (writes a .pem file)
aws ec2 create-key-pair \
  --key-name my-keypair \
  --query 'KeyMaterial' \
  --output text > my-keypair.pem

# Set file permissions
chmod 400 my-keypair.pem

# Verify (the key actually used here is test-key)
aws ec2 describe-key-pairs --key-names test-key
{
    "KeyPairs": [
        {
            "KeyPairId": "key-xxxxxxxxxx",
            "KeyFingerprint": "3c:xxxxxxxxxx",
            "KeyName": "test-key",
            "KeyType": "rsa",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "test-key"
                }
            ],
            "CreateTime": "xxxxxxxxxx"
        }
    ]
}
=============================================
3) Install essential k8s management tools
# Install kubectl
brew install kubernetes-cli
kubectl version --client=true
Client Version: v1.34.1
Kustomize Version: v5.7.1
Kubecolor Version: v0.5.2

# Install Helm
brew install helm
helm version
version.BuildInfo{Version:"v3.19.0", GitCommit:"3d8990f0836691f0229297773f3524598f46bda6", GitTreeState:"clean", GoVersion:"go1.25.1"}
=============================================
4) (Recommended) Install useful k8s management tools
# Install krew
brew install krew

# Install k9s
brew install k9s

# Install kube-ps1
brew install kube-ps1

# Install kubectx
brew install kubectx


# Highlight kubectl output
brew install kubecolor
echo "alias k=kubectl" >> ~/.zshrc
echo "alias kubectl=kubecolor" >> ~/.zshrc
echo "compdef kubecolor=kubectl" >> ~/.zshrc

# krew path: add the line below to ~/.zshrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

=============================================
Install Terraform with tfenv (recommended) - [link](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started)

# Install tfenv
brew install tfenv

# List installable versions
tfenv list-remote

# Install a specific Terraform version
tfenv install 1.14.6

# Use a specific Terraform version
tfenv use 1.14.6

# List versions installed via tfenv
tfenv list
* 1.14.6 (set by /opt/homebrew/Cellar/tfenv/3.0.0/version)
  1.11.0
  
# Check the Terraform version
terraform version
Terraform v1.14.6
on darwin_arm64

# Autocompletion
terraform -install-autocomplete

## Note: the lines below get appended to .zshrc
cat ~/.zshrc
autoload -U +X bashcompinit && bashcompinit
complete -o nospace -C /usr/local/bin/terraform terraform

# Download the code
$ git clone https://github.com/gasida/aews.git
Cloning into 'aews'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 15 (delta 1), reused 13 (delta 1), pack-reused 0 (from 0)
Receiving objects: 100% (15/15), 7.61 KiB | 3.80 MiB/s, done.
Resolving deltas: 100% (1/1), done.

s-aews $ tree aews
aews
├── 1w
│   ├── eks.tf
│   ├── var.tf
│   └── vpc.tf
└── eks-private
    ├── ec2.tf
    ├── main.tf
    ├── outputs.tf
    └── versions.tf
# Move to the working directory
$ cd aews/1w

  • Deploy the VPC and EKS (takes about 12 minutes) → set up k8s credentials
# Set variables
v:Documents:s-aews $ aws ec2 describe-key-pairs --query "KeyPairs[].KeyName" --output text

# Set the variable manually
# Original assignment: export TF_VAR_KeyName=$(aws ec2 describe-key-pairs --query "KeyPairs[].KeyName" --output text)
v:Documents:s-aews $ export TF_VAR_KeyName=test-key
v:Documents:s-aews $ export TF_VAR_ssh_access_cidr=$(curl -s ipinfo.io/ip)/32

v:Documents:s-aews $ echo $TF_VAR_KeyName $TF_VAR_ssh_access_cidr
test-key x.x.x.x/32


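The TF_VAR_ prefix is how Terraform picks variables up from the environment; the same values can also be passed per invocation (equivalent, just less convenient for repeated runs):

# Equivalent one-off form without exported variables
terraform plan \
  -var "KeyName=test-key" \
  -var "ssh_access_cidr=$(curl -s ipinfo.io/ip)/32"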

# Deploy: takes about 12 minutes
v:Documents:s-aews:aews:1w $ terraform init
Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 21.15.1 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 4.0.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 6.6.0 for vpc...
- vpc in .terraform/modules/vpc
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 6.0.0, >= 6.28.0"...
- Finding hashicorp/tls versions matching ">= 4.0.0"...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/null versions matching ">= 3.0.0"...
- Installing hashicorp/aws v6.36.0...
- Installed hashicorp/aws v6.36.0 (signed by HashiCorp)
- Installing hashicorp/tls v4.2.1...
- Installed hashicorp/tls v4.2.1 (signed by HashiCorp)
- Installing hashicorp/time v0.13.1...
- Installed hashicorp/time v0.13.1 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.3.7...
- Installed hashicorp/cloudinit v2.3.7 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.


v:Documents:s-aews:aews:1w $ terraform plan
(earlier output omitted)
  + resource "aws_eks_addon" "before_compute" {
      + addon_name                  = "vpc-cni"
      + addon_version               = (known after apply)
      + arn                         = (known after apply)
      + cluster_name                = (known after apply)
      + configuration_values        = (known after apply)
      + created_at                  = (known after apply)
      + id                          = (known after apply)
      + modified_at                 = (known after apply)
      + preserve                    = true
      + region                      = "ap-northeast-2"
      + resolve_conflicts_on_create = "NONE"
      + resolve_conflicts_on_update = "OVERWRITE"
      + tags                        = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all                    = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }

      + timeouts {}
    }

  # module.eks.aws_eks_addon.this["coredns"] will be created
  + resource "aws_eks_addon" "this" {
      + addon_name                  = "coredns"
      + addon_version               = (known after apply)
      + arn                         = (known after apply)
      + cluster_name                = (known after apply)
      + configuration_values        = (known after apply)
      + created_at                  = (known after apply)
      + id                          = (known after apply)
      + modified_at                 = (known after apply)
      + preserve                    = true
      + region                      = "ap-northeast-2"
      + resolve_conflicts_on_create = "NONE"
      + resolve_conflicts_on_update = "OVERWRITE"
      + tags                        = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all                    = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }

      + timeouts {}
    }

  # module.eks.aws_eks_addon.this["kube-proxy"] will be created
  + resource "aws_eks_addon" "this" {
      + addon_name                  = "kube-proxy"
      + addon_version               = (known after apply)
      + arn                         = (known after apply)
      + cluster_name                = (known after apply)
      + configuration_values        = (known after apply)
      + created_at                  = (known after apply)
      + id                          = (known after apply)
      + modified_at                 = (known after apply)
      + preserve                    = true
      + region                      = "ap-northeast-2"
      + resolve_conflicts_on_create = "NONE"
      + resolve_conflicts_on_update = "OVERWRITE"
      + tags                        = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all                    = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }

      + timeouts {}
    }

  # module.eks.aws_eks_cluster.this[0] will be created
  + resource "aws_eks_cluster" "this" {
      + arn                           = (known after apply)
      + bootstrap_self_managed_addons = false
      + certificate_authority         = (known after apply)
      + cluster_id                    = (known after apply)
      + created_at                    = (known after apply)
      + deletion_protection           = (known after apply)
      + endpoint                      = (known after apply)
      + id                            = (known after apply)
      + identity                      = (known after apply)
      + name                          = "myeks"
      + platform_version              = (known after apply)
      + region                        = "ap-northeast-2"
      + role_arn                      = (known after apply)
      + status                        = (known after apply)
      + tags                          = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all                      = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + version                       = "1.34"

      + access_config {
          + authentication_mode                         = "API_AND_CONFIG_MAP"
          + bootstrap_cluster_creator_admin_permissions = false
        }

      + compute_config (known after apply)

      + control_plane_scaling_config (known after apply)

      + encryption_config {
          + resources = [
              + "secrets",
            ]

          + provider {
              + key_arn = (known after apply)
            }
        }

      + kubernetes_network_config {
          + ip_family         = "ipv4"
          + service_ipv4_cidr = (known after apply)
          + service_ipv6_cidr = (known after apply)

          + elastic_load_balancing (known after apply)
        }

      + storage_config (known after apply)

      + upgrade_policy (known after apply)

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = false
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "0.0.0.0/0",
            ]
          + security_group_ids        = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # module.eks.aws_iam_openid_connect_provider.oidc_provider[0] will be created
  + resource "aws_iam_openid_connect_provider" "oidc_provider" {
      + arn             = (known after apply)
      + client_id_list  = [
          + "sts.amazonaws.com",
        ]
      + id              = (known after apply)
      + tags            = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-eks-irsa"
          + "Terraform"   = "true"
        }
      + tags_all        = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-eks-irsa"
          + "Terraform"   = "true"
        }
      + thumbprint_list = (known after apply)
      + url             = (known after apply)
    }

  # module.eks.aws_iam_policy.cluster_encryption[0] will be created
  + resource "aws_iam_policy" "cluster_encryption" {
      + arn              = (known after apply)
      + attachment_count = (known after apply)
      + description      = "Cluster encryption policy to allow cluster role to utilize CMK provided"
      + id               = (known after apply)
      + name             = (known after apply)
      + name_prefix      = "myeks-cluster-ClusterEncryption"
      + path             = "/"
      + policy           = (known after apply)
      + policy_id        = (known after apply)
      + tags             = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all         = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
    }

  # module.eks.aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = [
                          + "sts:TagSession",
                          + "sts:AssumeRole",
                        ]
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "eks.amazonaws.com"
                        }
                      + Sid       = "EKSClusterAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "myeks-cluster-"
      + path                  = "/"
      + tags                  = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all              = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.cluster_encryption[0] will be created
  + resource "aws_iam_role_policy_attachment" "cluster_encryption" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = (known after apply)
    }

  # module.eks.aws_security_group.cluster[0] will be created
  + resource "aws_security_group" "cluster" {
      + arn                    = (known after apply)
      + description            = "EKS cluster security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "myeks-cluster-"
      + owner_id               = (known after apply)
      + region                 = "ap-northeast-2"
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-cluster"
          + "Terraform"   = "true"
        }
      + tags_all               = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-cluster"
          + "Terraform"   = "true"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group.node[0] will be created
  + resource "aws_security_group" "node" {
      + arn                    = (known after apply)
      + description            = "EKS node shared security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "myeks-node-"
      + owner_id               = (known after apply)
      + region                 = "ap-northeast-2"
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Environment"                 = "cloudneta-lab"
          + "Name"                        = "myeks-node"
          + "Terraform"                   = "true"
          + "kubernetes.io/cluster/myeks" = "owned"
        }
      + tags_all               = {
          + "Environment"                 = "cloudneta-lab"
          + "Name"                        = "myeks-node"
          + "Terraform"                   = "true"
          + "kubernetes.io/cluster/myeks" = "owned"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group_rule.cluster["ingress_nodes_443"] will be created
  + resource "aws_security_group_rule" "cluster" {
      + description              = "Node groups to cluster API"
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["egress_all"] will be created
  + resource "aws_security_group_rule" "node" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + description              = "Allow all egress"
      + from_port                = 0
      + id                       = (known after apply)
      + protocol                 = "-1"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 0
      + type                     = "egress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_10251_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 10251/tcp webhook"
      + from_port                = 10251
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 10251
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_443"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node groups"
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_4443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 4443/tcp webhook"
      + from_port                = 4443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 4443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_6443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 6443/tcp webhook"
      + from_port                = 6443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 6443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_8443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 8443/tcp webhook"
      + from_port                = 8443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 8443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_9443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 9443/tcp webhook"
      + from_port                = 9443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 9443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node kubelets"
      + from_port                = 10250
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 10250
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_nodes_ephemeral"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node ingress on ephemeral ports"
      + from_port                = 1025
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS"
      + from_port                = 53
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS UDP"
      + from_port                = 53
      + id                       = (known after apply)
      + protocol                 = "udp"
      + region                   = "ap-northeast-2"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.time_sleep.this[0] will be created
  + resource "time_sleep" "this" {
      + create_duration = "30s"
      + id              = (known after apply)
      + triggers        = {
          + "certificate_authority_data" = (known after apply)
          + "endpoint"                   = (known after apply)
          + "kubernetes_version"         = "1.34"
          + "name"                       = (known after apply)
          + "service_cidr"               = (known after apply)
        }
    }

  # module.vpc.aws_default_route_table.default[0] will be created
  + resource "aws_default_route_table" "default" {
      + arn                    = (known after apply)
      + default_route_table_id = (known after apply)
      + id                     = (known after apply)
      + owner_id               = (known after apply)
      + region                 = "ap-northeast-2"
      + route                  = (known after apply)
      + tags                   = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-default"
        }
      + tags_all               = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-default"
        }
      + vpc_id                 = (known after apply)

      + timeouts {
          + create = "5m"
          + update = "5m"
        }
    }

  # module.vpc.aws_default_security_group.this[0] will be created
  + resource "aws_default_security_group" "this" {
      + arn                    = (known after apply)
      + description            = (known after apply)
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + region                 = "ap-northeast-2"
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-default"
        }
      + tags_all               = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-default"
        }
      + vpc_id                 = (known after apply)
    }

  # module.vpc.aws_internet_gateway.this[0] will be created
  + resource "aws_internet_gateway" "this" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + region   = "ap-northeast-2"
      + tags     = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-IGW"
        }
      + tags_all = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-IGW"
        }
      + vpc_id   = (known after apply)
    }

  # module.vpc.aws_route.public_internet_gateway[0] will be created
  + resource "aws_route" "public_internet_gateway" {
      + destination_cidr_block = "0.0.0.0/0"
      + gateway_id             = (known after apply)
      + id                     = (known after apply)
      + instance_id            = (known after apply)
      + instance_owner_id      = (known after apply)
      + network_interface_id   = (known after apply)
      + origin                 = (known after apply)
      + region                 = "ap-northeast-2"
      + route_table_id         = (known after apply)
      + state                  = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route_table.public[0] will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + region           = "ap-northeast-2"
      + route            = (known after apply)
      + tags             = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-public"
        }
      + tags_all         = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC-public"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[0] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + region         = "ap-northeast-2"
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[1] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + region         = "ap-northeast-2"
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[2] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + region         = "ap-northeast-2"
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_subnet.public[0] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "ap-northeast-2a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "192.168.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block                                = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + region                                         = "ap-northeast-2"
      + tags                                           = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[1] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "ap-northeast-2b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "192.168.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block                                = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + region                                         = "ap-northeast-2"
      + tags                                           = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[2] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "ap-northeast-2c"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "192.168.3.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block                                = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + region                                         = "ap-northeast-2"
      + tags                                           = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"            = "cloudneta-lab"
          + "Name"                   = "myeks-PublicSubnet"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_vpc.this[0] will be created
  + resource "aws_vpc" "this" {
      + arn                                  = (known after apply)
      + cidr_block                           = "192.168.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + region                               = "ap-northeast-2"
      + tags                                 = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC"
        }
      + tags_all                             = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-VPC"
        }
    }

  # module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0] will be created
  + resource "aws_eks_node_group" "this" {
      + ami_type               = "AL2023_x86_64_STANDARD"
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = (known after apply)
      + disk_size              = (known after apply)
      + id                     = (known after apply)
      + instance_types         = [
          + "t3.medium",
        ]
      + node_group_name        = "myeks-node-group"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = (known after apply)
      + region                 = "ap-northeast-2"
      + release_version        = "1.34.4-20260311"
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = (known after apply)
      + tags                   = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-node-group"
          + "Terraform"   = "true"
        }
      + tags_all               = {
          + "Environment" = "cloudneta-lab"
          + "Name"        = "myeks-node-group"
          + "Terraform"   = "true"
        }
      + version                = "1.34"

      + launch_template {
          + id      = (known after apply)
          + name    = (known after apply)
          + version = (known after apply)
        }

      + node_repair_config (known after apply)

      + scaling_config {
          + desired_size = 2
          + max_size     = 4
          + min_size     = 1
        }

      + update_config {
          + max_unavailable_percentage = 33
        }
    }

  # module.eks.module.eks_managed_node_group["default"].aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                      + Sid       = "EKSNodeAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + description           = "EKS managed node group IAM role"
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "myeks-node-group-eks-node-group-"
      + path                  = "/"
      + tags                  = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all              = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # module.eks.module.eks_managed_node_group["default"].aws_iam_role_policy_attachment.this["AmazonEC2ContainerRegistryReadOnly"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = (known after apply)
    }

  # module.eks.module.eks_managed_node_group["default"].aws_iam_role_policy_attachment.this["AmazonEKSWorkerNodePolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = (known after apply)
    }

  # module.eks.module.eks_managed_node_group["default"].aws_iam_role_policy_attachment.this["AmazonEKS_CNI_Policy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = (known after apply)
    }

  # module.eks.module.eks_managed_node_group["default"].aws_launch_template.this[0] will be created
  + resource "aws_launch_template" "this" {
      + arn                    = (known after apply)
      + default_version        = (known after apply)
      + description            = "Custom launch template for myeks-node-group EKS managed node group"
      + id                     = (known after apply)
      + key_name               = "voieul-key"
      + latest_version         = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "default-"
      + region                 = "ap-northeast-2"
      + tags                   = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all               = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + update_default_version = true
      + user_data              = "Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSJNSU1FQk9VTkRBUlkiCk1JTUUtVmVyc2lvbjogMS4wDQoNCi0tTUlNRUJPVU5EQVJZDQpDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0DQpDb250ZW50LVR5cGU6IHRleHQveC1zaGVsbHNjcmlwdA0KTWltZS1WZXJzaW9uOiAxLjANCg0KIyEvYmluL2Jhc2gKZWNobyAiU3RhcnRpbmcgY3VzdG9tIGluaXRpYWxpemF0aW9uLi4uIgpkbmYgdXBkYXRlIC15CmRuZiBpbnN0YWxsIC15IHRyZWUgYmluZC11dGlscwplY2hvICJDdXN0b20gaW5pdGlhbGl6YXRpb24gY29tcGxldGVkLiIKDQotLU1JTUVCT1VOREFSWS0tDQo="
      + vpc_security_group_ids = (known after apply)
        # (1 unchanged attribute hidden)

      + metadata_options {
          + http_endpoint               = "enabled"
          + http_protocol_ipv6          = (known after apply)
          + http_put_response_hop_limit = 1
          + http_tokens                 = "required"
          + instance_metadata_tags      = (known after apply)
        }

      + tag_specifications {
          + resource_type = "instance"
          + tags          = {
              + "Environment" = "cloudneta-lab"
              + "Name"        = "myeks-node-group"
              + "Terraform"   = "true"
            }
        }
      + tag_specifications {
          + resource_type = "network-interface"
          + tags          = {
              + "Environment" = "cloudneta-lab"
              + "Name"        = "myeks-node-group"
              + "Terraform"   = "true"
            }
        }
      + tag_specifications {
          + resource_type = "volume"
          + tags          = {
              + "Environment" = "cloudneta-lab"
              + "Name"        = "myeks-node-group"
              + "Terraform"   = "true"
            }
        }
    }

  # module.eks.module.kms.data.aws_iam_policy_document.this[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "this" {
      + id                        = (known after apply)
      + json                      = (known after apply)
      + minified_json             = (known after apply)
      + override_policy_documents = []
      + source_policy_documents   = []

      + statement {
          + actions   = [
              + "kms:*",
            ]
          + resources = [
              + "*",
            ]
          + sid       = "Default"

          + principals {
              + identifiers = [
                  + "arn:aws:iam::143649248460:root",
                ]
              + type        = "AWS"
            }
        }
      + statement {
          + actions   = [
              + "kms:CancelKeyDeletion",
              + "kms:Create*",
              + "kms:Delete*",
              + "kms:Describe*",
              + "kms:Disable*",
              + "kms:Enable*",
              + "kms:Get*",
              + "kms:ImportKeyMaterial",
              + "kms:List*",
              + "kms:Put*",
              + "kms:ReplicateKey",
              + "kms:Revoke*",
              + "kms:ScheduleKeyDeletion",
              + "kms:TagResource",
              + "kms:UntagResource",
              + "kms:Update*",
            ]
          + resources = [
              + "*",
            ]
          + sid       = "KeyAdministration"

          + principals {
              + identifiers = [
                  + "arn:aws:iam::143649248460:user/voieul-user",
                ]
              + type        = "AWS"
            }
        }
      + statement {
          + actions   = [
              + "kms:Decrypt",
              + "kms:DescribeKey",
              + "kms:Encrypt",
              + "kms:GenerateDataKey*",
              + "kms:ReEncrypt*",
            ]
          + resources = [
              + "*",
            ]
          + sid       = "KeyUsage"

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "AWS"
            }
        }
    }

  # module.eks.module.kms.aws_kms_alias.this["cluster"] will be created
  + resource "aws_kms_alias" "this" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + name           = "alias/eks/myeks"
      + name_prefix    = (known after apply)
      + region         = "ap-northeast-2"
      + target_key_arn = (known after apply)
      + target_key_id  = (known after apply)
    }

  # module.eks.module.kms.aws_kms_key.this[0] will be created
  + resource "aws_kms_key" "this" {
      + arn                                = (known after apply)
      + bypass_policy_lockout_safety_check = false
      + customer_master_key_spec           = "SYMMETRIC_DEFAULT"
      + description                        = "myeks cluster encryption key"
      + enable_key_rotation                = true
      + id                                 = (known after apply)
      + is_enabled                         = true
      + key_id                             = (known after apply)
      + key_usage                          = "ENCRYPT_DECRYPT"
      + multi_region                       = false
      + policy                             = (known after apply)
      + region                             = "ap-northeast-2"
      + rotation_period_in_days            = (known after apply)
      + tags                               = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
      + tags_all                           = {
          + "Environment" = "cloudneta-lab"
          + "Terraform"   = "true"
        }
    }

  # module.eks.module.eks_managed_node_group["default"].module.user_data.null_resource.validate_cluster_service_cidr will be created
  + resource "null_resource" "validate_cluster_service_cidr" {
      + id = (known after apply)
    }

Plan: 52 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

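As the note says, saving the plan guarantees that apply executes exactly what was reviewed; the two-step form is:

# Save the reviewed plan, then apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan
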

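The user_data in the launch template above is base64-encoded MIME; decoding it shows the node bootstrap script (dnf update plus a tree/bind-utils install):

# Decode the launch template user_data from the plan output
echo '<user_data base64 string>' | base64 --decode
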


v:Documents:s-aews:aews:1w $ nohup sh -c "terraform apply -auto-approve" > create.log 2>&1 &

[1] 81707

(earlier output omitted)
module.eks.time_sleep.this[0]: Still creating... [00m20s elapsed]
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Still creating... [00m20s elapsed]
module.eks.time_sleep.this[0]: Still creating... [00m30s elapsed]
module.eks.time_sleep.this[0]: Creation complete after 30s [id=2026-03-18T12:37:02Z]
module.eks.module.eks_managed_node_group["default"].module.user_data.null_resource.validate_cluster_service_cidr: Creating...
module.eks.module.eks_managed_node_group["default"].module.user_data.null_resource.validate_cluster_service_cidr: Creation complete after 0s [id=5648872249672722725]
module.eks.module.eks_managed_node_group["default"].aws_launch_template.this[0]: Creating...
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Still creating... [00m30s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_launch_template.this[0]: Creation complete after 6s [id=lt-0c9986db3ebbdd005]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Creating...
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Still creating... [00m40s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [00m10s elapsed]
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Still creating... [00m50s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [00m20s elapsed]
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Still creating... [01m00s elapsed]
module.eks.aws_eks_addon.before_compute["vpc-cni"]: Creation complete after 1m5s [id=myeks:vpc-cni]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [00m30s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [00m40s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [00m50s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [01m00s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [01m10s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [01m20s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [01m30s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Still creating... [01m40s elapsed]
module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Creation complete after 1m48s [id=myeks:myeks-node-group]
module.eks.aws_eks_addon.this["coredns"]: Creating...
module.eks.aws_eks_addon.this["kube-proxy"]: Creating...
module.eks.aws_eks_addon.this["coredns"]: Still creating... [00m10s elapsed]
module.eks.aws_eks_addon.this["kube-proxy"]: Still creating... [00m10s elapsed]
module.eks.aws_eks_addon.this["coredns"]: Creation complete after 14s [id=myeks:coredns]
module.eks.aws_eks_addon.this["kube-proxy"]: Still creating... [00m20s elapsed]
module.eks.aws_eks_addon.this["kube-proxy"]: Creation complete after 24s [id=myeks:kube-proxy]

Apply complete! Resources: 52 added, 0 changed, 0 destroyed.
[1]  + 81707 done


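With the apply done, the state file gives a quick sanity check that the resource count matches the plan (52 here):

# Number of resources now tracked in state
terraform state list | wc -l
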


v:Documents:s-aews:aews:1w $ tail -f create.log

module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]: Creation complete after 1m48s [id=myeks:myeks-node-group]
module.eks.aws_eks_addon.this["coredns"]: Creating...
module.eks.aws_eks_addon.this["kube-proxy"]: Creating...
module.eks.aws_eks_addon.this["coredns"]: Still creating... [00m10s elapsed]
module.eks.aws_eks_addon.this["kube-proxy"]: Still creating... [00m10s elapsed]
module.eks.aws_eks_addon.this["coredns"]: Creation complete after 14s [id=myeks:coredns]
module.eks.aws_eks_addon.this["kube-proxy"]: Still creating... [00m20s elapsed]
module.eks.aws_eks_addon.this["kube-proxy"]: Creation complete after 24s [id=myeks:kube-proxy]

Apply complete! Resources: 52 added, 0 changed, 0 destroyed.



# Set up credentials
v:Documents:s-aews:aews:1w $ aws eks update-kubeconfig --region ap-northeast-2 --name myeks
Added new context arn:aws:eks:ap-northeast-2:123123123123:cluster/myeks to /Users/test-user/.kube/config

# Check the kubeconfig and rename the context
v:Documents:s-aews:aews:1w $ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJV1F2NXVIS2d2aEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRFd01UZ3hPRE15TVRoYUZ3MHpOVEV3TVRZeE9ETTNNVGhhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURRYWttb1NWSkZPeS9acFlCZFlZbDYvWnUxNHJjcVFwN2huSWVITlFRWmJwcTh4b2FTMkl6TmRuSEwKeUtpMmJPNTZJNGFzVk10RWpEQkZyK3h2M0VwOWYrTVRFNmJkUTRaMmZvbG1uWXpGT1lkZ2pMN2Y2cXM0bVBCYwp5ejVFZXpzQ1AvSlJHZCtINzg3TnpCekk4ZjYrV09wODl2QVVVYmQ2aTNJSDNtTncyekgwSkFFWjU1c3B2OEtlCjJ6Q05ocmU3Y1dEQlYrUS9jZCs2RVBtK1hieU1aRStPenF4ZlVxR1FxRmJ5dmNHK3VHbkpuaGdlVThHYXlRQXoKcG43WXNIditnWVFPQkZYajQxeEM2U0Rhc2xsS25EWGQrVUpBd3VqdnZZSFR3ZEdRYVlBVWhLV1pCTWxmM2NFNAphZUl5TUZ6dEJ4Zkh5dVRKQXRVN2tJU2xmbjM1QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSL2twbDFoSzB0bjQ4KzhIRjJPaFEwcFRiLzB6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2p0YXJ2Y1NnUwpaeWtxMVZraUVCU0E4dTZOM1ZPazNmN1F0dTJrVG5tS1dGaGY3M0NWR0lxNXkxOWJoVzBNeUIxVHEzcE5ySVY1Cm0vRS96alp5RnRacEdNbHJDRCt0YjZvN1gxcnBLeitGVkR5TTEvR3plelI0Z2FVZ0ltbld4R2tlaStheEFkcFcKTmFjdGRTcWxGZExxcEpSQjJQRFM3MW5HUUF1cGlZMXpDVnRxM0FTdHBsWHk3TCtRd2NuNk83WUFQU1VvT2R0YQpJTWkyY0RKQi9VS09idEUwaldQczlLUG85cVY0WjVudWpwZ1RpbUttdDdSQUhRQUxvcHA3aGRDdG1na3liQVhqCkJWaFdJdVp4cmtrZUZnbEMvUXZaWmRidUZjSWZVYlZJSnVHcGo2Vmx6cTlMQ201UmpBYXMzM2NqUFV3emlrNWkKbjdlczBSdUdBWk9HCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://127.0.0.1:57280
  name: kind-myk8s
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQU5uL3ovTUFlaWd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBek1UZ3hNakk1TWpoYUZ3MHpOakF6TVRVeE1qTTBNamhhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUMwUTYyK1ZudnV6aXdDTmx2SlIrRTNFUTY5OXpRNVB6V3JHb0dXNVN1L2VPczlsTklmYjVSVmU4M3EKRXNUWVpCUmY3YXluVFdVOHRVVGYzRFZWNzA5VTJPTW1mRWIzU0lCakhUb1lVS2xCN1RWaWVPY0NrdkVKRmRCUgpuTnBqYnpPVW5HMFNRcHFjMG5XSTMzcmxuTXhGM2xWMXRpV3FKSFovRlFkZE8zNmFoSldMQkRIbWUvL2tyTHZQClN1Z1NaeWVwWVQ3am9PRzhBWEVSeTFnMXBJdFVFSzkvd09QTk13R3J1S3hoU1RWUUorVFRyWTJpS2poVFllT24KTmJ0UGlnK0tHVjEvT2pzL0pWcFpoNHRxaCtObmVjd3ZjYmQ5U1lwbVhvVmJGSEVCb1FrbkMwOWxVOHJYYkM3Ngo4M1dyaFIxU2Rzdmpaai9HMWJjdy9qMXlwK2xMQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRS1hLY0h6Ym16cXIwR1dGVTk4Q1NVdU9uTHJ6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2pocUpjMUdPcQp3U0NVNWF1OWJlVXliRG5ZWXZQa0xLNDJDd2lWY3p0TzhRZ1Z6amptUk5OSjQxYWJrbnUrV0l4blJ0Tkp3amljCkNrN3ZFN0dEeTJGU0pPLzRjdXBvYkhDcUlPRXlNdHZtVFRoWEFvVDJST2JNaG1DTGtBY2p1YUMxS2FuV1BrMkkKN1ZVQzcvY3pDanVXb1oyWWpSVTVlSG5tUk5oUk9VckZSMlRhamt4SEFxZ2N6VVMybEpoOTZmc1RhR09pdjE3WQpnRis2aTZqdkNlSnhUV0xyOHY2NUpMcFFmcHBWVzloYVhQTVNuOUlsUXc2NmUrOUdRdlRvalhkTTVEUFl3dm51CmxHTGJtWm5UOFo2c0h3UzZjb3NIaU1LSUV4WGk0RXZPZG13QUNIQTZLTE5Ba3BzdWg4b0R0dXBVUnFjdE9lK2wKSjRrWElpQjR5ekg1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
  name: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
contexts:
- context:
    cluster: kind-myk8s
    namespace: default
    user: kind-myk8s
  name: kind-myk8s
- context:
    cluster: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
    user: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
  name: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
current-context: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
kind: Config
users:
- name: kind-myk8s
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJVjh5ZmJvWkxCYnN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRFd01UZ3hPRE15TVRoYUZ3MHlOakV3TVRneE9ETTNNVGhhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDc2NqRFEKUnI5UENxY1BpMnRxMGhIRHlYZ2hIRDdXOXh0R2tteHFuai92cVk2cGFqRlEraSs1TXpaUDRJbERxVXVSZVIwZgpSZ3Q5cTJENyszcVJYM3p2VlMyRldpK1VEK2VJQnN3Y3J3U0pycFJSUzZGYkQwNmxuVzBzNmtnQlZLMXNZbjdsCjdkYmZDM0Zhd3VzMmtwQTFCeTJUQ3dOeHlkaDM2Y0FLMmlmNzVlV0ZmbTErMjNsckE4NzZPZCtIcHkrRlEwSEYKcWlYMC9LWXluWXduQi9zUE9tTzVKdHRxcFNzY0V2SjZQRUQ3b1FFUS81eEFMaVY3QWMxbUZ1ZUVBckJyQS9qQgpFd2hEY2V3V1pJTmZxSkc0TVUyZUQxWS8zWHpEZitvbTRSNmlJTUpFdW8wUFlienlMcWJuMldTeDV0NDFoQ2dHCnhVVGxtMi9jaC9ESnAvdmZBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkgrU21YV0VyUzJmano3dwpjWFk2RkRTbE52L1RNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJqT3RPUWxHU2FnZnV6NkErSDloZkM0NEtYClIzbXZzTkhSeHB0NW4rOHFqU1lWSlBJUTAvUU5CTElZK3NBN3hXLzc4STh2MmZudFRiZmhVNjA4aXRTZjZnblgKSkpiSFJ1TCtCeXY0ZnVjSlFEQ2gyc0dtbEZGM29UaDZkVTJkWmdXVXhvM2pHaDlmbmlHMjlaWDY5RDQyNXMrcQpTajRiNFhxa3FxaklETVkveFhqamorRW10bkQ3WXFOSjBjZnNnelg3eTBwbzJBQXZsUGF4eGJySkJkNEMzME1SClhvQ1hkSXhra3AzaERTU3VGb3hpS0lkejMrZ1c1UkZ5UzF5ME0zdnh4TkRJMWZSREFsVTZJNkt4bHo0Z21vZDYKWWU5NXBibk1wTmgwclRRY3piN2VDL0YzeHAvUWN5ejhDaWRyKzZCTUg3TytnV3dpUnFIV1pCZ1ZxR2I2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBckhJdzBFYS9Ud3FuRDR0cmF0SVJ3OGw0SVJ3KzF2Y2JScEpzYXA0Lzc2bU9xV294ClVQb3Z1VE0yVCtDSlE2bExrWGtkSDBZTGZhdGcrL3Q2a1Y5ODcxVXRoVm92bEEvbmlBYk1ISzhFaWE2VVVVdWgKV3c5T3BaMXRMT3BJQVZTdGJHSis1ZTNXM3d0eFdzTHJOcEtRTlFjdGt3c0RjY25ZZCtuQUN0b24rK1hsaFg1dApmdHQ1YXdQTytqbmZoNmN2aFVOQnhhb2w5UHltTXAyTUp3ZjdEenBqdVNiYmFxVXJIQkx5ZWp4QSs2RUJFUCtjClFDNGxld0hOWmhibmhBS3dhd1A0d1JNSVEzSHNGbVNEWDZpUnVERk5uZzlXUDkxOHczL3FKdUVlb2lEQ1JMcU4KRDJHODhpNm01OWxrc2ViZU5ZUW9Cc1ZFNVp0djNJZnd5YWY3M3dJREFRQUJBb0lCQVFDQjFITVZ5NzNxeDIxaAprYWpzd24ybmR2NWZoMEYwWEpTSGZHUHRuWGtyZWUremN3VHdIM3hncGNMbFBucDVtM01PY2kzUHhzK043TUpXCjFFM0NOeTc3alpoNUJwNDlqZi9WOUxBbGhFc1pVWHZPL083ZGZOZk1ib3FzdnpJNDlrU2ZEa1RWM1V2aG4xN1gKWTFydE9ra2g4MmFIaDBvdm1EVEdpeEVQMnBFeDN2WC9BQytEaDV4VkY5SmpuTkJTZXY0SWNhdHhBUVdHNVY1dQpmdzhBNEZIRUZQUGVZKy9FVnlzRVR5T1FnTDJLa2hybjg3Z2lZRzBJMjkyY0VuYWlBY1RjMVF2WS81bHJMaEtBCndzQ3ZpTSt6WWxpQ0FGdzNoQ3lLdmZIZGYxTVgyM0g3MUJ1enJXNS9ZSERlUUtEc3RJNVFFdEVqR0Vna29mb2EKVUNiMmpnZlpBb0dCQU4rK3ROUTR0TmljQXprTEE4bEZHa3kxUDZBaFNjOFpoZ2ZqbTFMeXNaaDRubGZRQ0dhWQpNcjRYazNaMmR4SHhSWlBsRUkvWFFaOWxyQlU3OHpIS1dHOWtjSjI4NEtQZGRzMTlHS0FXRTJVaW1XdW1BNnpTCjFRc2lHUE5DWVVkZTZ6ZlVRcUJSUkVtN3Y1cVJ4TjFSMnRFWksrWDluQTE3d0J4MVVGR0lmdHhOQW9HQkFNVk8KVFVyem9RY3I5NGplS1VSTTFsY25reXNxeE1OTGZ5UW1HeXFtdi84YXE4SFlQS3JmaWFCZXBUUml3ZkRYSzU1RAppRkZyaCtDYjJCeUV4UkNlTzJPcVY1cERObTZJVEs2N294T0lZQWlVeGZZV2NlblFlMXh1WWQ4WXF1YkJEUEtjCk9nK1piQTd1dkYva1BVTUszZTBvU3YxN2JZS0pXdE9VcCtNSzBwN2JBb0dCQUp6czk0MEUvS29UdWhydkE4Zk4KWktXNlZaYXM0a1NUcFRLeFMwWkJHNWhSdU5Uai9wQmVYUENBUHBmT2ZMS2o0dVhZdWVYNDFuakNhWkEzRE5tMgpEcEtLQW9aUGE4cmlVQ25OZkZFRFNyVWJNRG1WSld5NExsM3htMGc2SFZwZVUyRkR5VHNCNUlCR1l4czQ4N2M2CmF0dE82VUFVd0xlZ1BOeDQxMDFvQzNuZEFvR0FiVWFkeGxwQ29CODR2SVFXcE81TmMvM0dJNDFQWnI2RWp6ZlAKcWdLcXFaWlM5RXhYNVdkaTZRQWlUVzQ0N2JPdVE3d3hYcTdJbFp5YXg4aTlBQ1F5emxORXEzcDRSaVdWR3QxdgpSMTBybXZVUzR1V3hkNGJ4RzlOQ3YzWUJDVVo0YmxJYVVoTnQ1cU5RajJkd2lwWVZMY2s0SjBYWjlBY3cxNmdvCmg3V3h5eXNDZ1lCRkdYZXRzMGorSkRKVWZWRGlXeW1VOEp4OWZudk5rMTUzQW5ibm1RYkdvNUNrVTE2T2xWaEgKNm9sc29aM3M3bmZqajFVdkhwZWlFeWZzY0tVaUNSL3pkVUM0UWRQME00VW5maUdxU0JKQThtNUl6d1NZZk9XcgpZcVFVK1ROYU1FRXEzL1YraUlnb3htMlI3b2VYZGtCU3lBS3Nyb0VFRlZ4M1IvZTRDa3cxL3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- name: arn:aws:eks:ap-northeast-2:123123123:cluster/myeks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      
      


v:Documents:s-aews:aews:1w $ cat ~/.kube/config | grep current-context | awk '{print $2}'
arn:aws:eks:ap-northeast-2:123123123:cluster/myeks

v:Documents:s-aews:aews:1w $ kubectl config rename-context $(cat ~/.kube/config | grep current-context | awk '{print $2}') myeks
Context "arn:aws:eks:ap-northeast-2:123123123:cluster/myeks" renamed to "myeks".

v:Documents:s-aews:aews:1w $ cat ~/.kube/config | grep current-context
current-context: myeks
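(Tip) The grep/awk pipeline above works, but kubectl has built-in subcommands for the same checks, and aws eks update-kubeconfig accepts an --alias flag that sets the short context name up front — a minimal alternative sketch:

# show the current context / list all contexts without grepping the raw file
kubectl config current-context
kubectl config get-contexts

# or write the kubeconfig with the short context name in one step
aws eks update-kubeconfig --region ap-northeast-2 --name myeks --alias myeks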

 

 

  • Check EKS info vs vanilla k8s

# Control plane

# Check EKS cluster info
v:Documents:s-aews:aews:1w $ kubectl cluster-info
Kubernetes control plane is running at https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


# Check the API server endpoint
v:Documents:s-aews:aews:1w $ CLUSTER_NAME=myeks
v:Documents:s-aews:aews:1w $ aws eks describe-cluster --name $CLUSTER_NAME | jq
{
  "cluster": {
    "name": "myeks",
    "arn": "arn:aws:eks:ap-northeast-2:123123123:cluster/myeks",
    "createdAt": "2026-03-18T21:30:05.697000+09:00",
    "version": "1.34",
    "endpoint": "https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com",
    "roleArn": "arn:aws:iam::123123123:role/myeks-cluster-20260318122941951100000001",
    "resourcesVpcConfig": {
      "subnetIds": [
        "subnet-06b33fd18193e693b",
        "subnet-028f4b3ae2da413cf",
        "subnet-077140ca4e1985111"
      ],
      "securityGroupIds": [
        "sg-0e6e1c9e649185332"
      ],
      "clusterSecurityGroupId": "sg-0d6d36714729e81b1",
      "vpcId": "vpc-045dd0f66fad655bf",
      "endpointPublicAccess": true,
      "endpointPrivateAccess": false,
      "publicAccessCidrs": [
        "0.0.0.0/0"
      ]
    },
    "kubernetesNetworkConfig": {
      "serviceIpv4Cidr": "10.100.0.0/16",
      "ipFamily": "ipv4"
    },
    "logging": {
      "clusterLogging": [
        {
          "types": [
            "api",
            "audit",
            "authenticator",
            "controllerManager",
            "scheduler"
          ],
          "enabled": false
        }
      ]
    },
    "identity": {
      "oidc": {
        "issuer": "https://oidc.eks.ap-northeast-2.amazonaws.com/id/CC5D719ACF5FB0EC4C92959793A4488F"
      }
    },
    "status": "ACTIVE",
    "certificateAuthority": {
      "data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQU5uL3ovTUFlaWd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBek1UZ3hNakk1TWpoYUZ3MHpOakF6TVRVeE1qTTBNamhhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUMwUTYyK1ZudnV6aXdDTmx2SlIrRTNFUTY5OXpRNVB6V3JHb0dXNVN1L2VPczlsTklmYjVSVmU4M3EKRXNUWVpCUmY3YXluVFdVOHRVVGYzRFZWNzA5VTJPTW1mRWIzU0lCakhUb1lVS2xCN1RWaWVPY0NrdkVKRmRCUgpuTnBqYnpPVW5HMFNRcHFjMG5XSTMzcmxuTXhGM2xWMXRpV3FKSFovRlFkZE8zNmFoSldMQkRIbWUvL2tyTHZQClN1Z1NaeWVwWVQ3am9PRzhBWEVSeTFnMXBJdFVFSzkvd09QTk13R3J1S3hoU1RWUUorVFRyWTJpS2poVFllT24KTmJ0UGlnK0tHVjEvT2pzL0pWcFpoNHRxaCtObmVjd3ZjYmQ5U1lwbVhvVmJGSEVCb1FrbkMwOWxVOHJYYkM3Ngo4M1dyaFIxU2Rzdmpaai9HMWJjdy9qMXlwK2xMQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRS1hLY0h6Ym16cXIwR1dGVTk4Q1NVdU9uTHJ6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2pocUpjMUdPcQp3U0NVNWF1OWJlVXliRG5ZWXZQa0xLNDJDd2lWY3p0TzhRZ1Z6amptUk5OSjQxYWJrbnUrV0l4blJ0Tkp3amljCkNrN3ZFN0dEeTJGU0pPLzRjdXBvYkhDcUlPRXlNdHZtVFRoWEFvVDJST2JNaG1DTGtBY2p1YUMxS2FuV1BrMkkKN1ZVQzcvY3pDanVXb1oyWWpSVTVlSG5tUk5oUk9VckZSMlRhamt4SEFxZ2N6VVMybEpoOTZmc1RhR09pdjE3WQpnRis2aTZqdkNlSnhUV0xyOHY2NUpMcFFmcHBWVzloYVhQTVNuOUlsUXc2NmUrOUdRdlRvalhkTTVEUFl3dm51CmxHTGJtWm5UOFo2c0h3UzZjb3NIaU1LSUV4WGk0RXZPZG13QUNIQTZLTE5Ba3BzdWg4b0R0dXBVUnFjdE9lK2wKSjRrWElpQjR5ekg1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
    },
    "platformVersion": "eks.18",
    "tags": {
      "Terraform": "true",
      "Environment": "cloudneta-lab"
    },
    "encryptionConfig": [
      {
        "resources": [
          "secrets"
        ],
        "provider": {
          "keyArn": "arn:aws:kms:ap-northeast-2:123123123:key/8c81b765-2233-44e4-84ee-2bbcce038fb3"
        }
      }
    ]
  }
}


v:Documents:s-aews:aews:1w $ aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint
https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
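If jq is not installed, the CLI's built-in JMESPath --query can pull the same fields; for example, the endpoint plus the endpoint access settings:

aws eks describe-cluster --name $CLUSTER_NAME --query 'cluster.endpoint' --output text
aws eks describe-cluster --name $CLUSTER_NAME \
  --query 'cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess,cidrs:publicAccessCidrs}'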


v:Documents:s-aews:aews:1w $ APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
dig +short $APIDNS
43.202.134.241
15.164.74.222
v:Documents:s-aews:aews:1w $ curl -s ipinfo.io/43.202.134.241                                    
{
  "ip": "43.202.134.241",
  "hostname": "ec2-43-202-134-241.ap-northeast-2.compute.amazonaws.com",
  "city": "Incheon",
  "region": "Incheon",
  "country": "KR",
  "loc": "37.4565,126.7052",
  "org": "AS16509 Amazon.com, Inc.",
  "postal": "21505",
  "timezone": "Asia/Seoul",
  "readme": "https://ipinfo.io/missingauth"
}%
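The endpoint resolves to two public IPs for availability; a small loop sketch checks every resolved address at once:

for ip in $(dig +short $APIDNS); do
  curl -s ipinfo.io/$ip | jq -r '[.ip, .city, .org] | join(" / ")'
done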



# Check EKS node group info
v:Documents:s-aews:aews:1w $ aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name $CLUSTER_NAME-node-group | jq

{
  "nodegroup": {
    "nodegroupName": "myeks-node-group",
    "nodegroupArn": "arn:aws:eks:ap-northeast-2:143649248460:nodegroup/myeks/myeks-node-group/0ace8079-9855-fe64-5622-36bda851f604",
    "clusterName": "myeks",
    "version": "1.34",
    "releaseVersion": "1.34.4-20260311",
    "createdAt": "2026-03-18T21:37:13.964000+09:00",
    "modifiedAt": "2026-03-18T21:58:46.540000+09:00",
    "status": "ACTIVE",
    "capacityType": "ON_DEMAND",
    "scalingConfig": {
      "minSize": 1,
      "maxSize": 4,
      "desiredSize": 2
    },
    "instanceTypes": [
      "t3.medium"
    ],
    "subnets": [
      "subnet-06b33fd18193e693b",
      "subnet-028f4b3ae2da413cf",
      "subnet-077140ca4e1985111"
    ],
    "amiType": "AL2023_x86_64_STANDARD",
    "nodeRole": "arn:aws:iam::143649248460:role/myeks-node-group-eks-node-group-20260318122955045200000005",
    "labels": {},
    "resources": {
      "autoScalingGroups": [
        {
          "name": "eks-myeks-node-group-0ace8079-9855-fe64-5622-36bda851f604"
        }
      ]
    },
    "health": {
      "issues": []
    },
    "updateConfig": {
      "maxUnavailablePercentage": 33
    },
    "launchTemplate": {
      "name": "default-20260318123703010100000008",
      "version": "1",
      "id": "lt-0c9986db3ebbdd005"
    },
    "tags": {
      "Terraform": "true",
      "Environment": "cloudneta-lab",
      "Name": "myeks-node-group"
    }
  }
}
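The node group's EC2 instances can also be listed directly, assuming the eks:nodegroup-name tag that managed node groups apply to their instances — a sketch worth verifying in your own account:

aws ec2 describe-instances \
  --filters "Name=tag:eks:nodegroup-name,Values=$CLUSTER_NAME-node-group" \
  --query 'Reservations[].Instances[].{id:InstanceId,ip:PrivateIpAddress,type:InstanceType}' \
  --output table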




# Check node info: OS and container runtime
v:Documents:s-aews:aews:1w $ kubectl get node -owide

NAME                                               STATUS   ROLES    AGE   VERSION               INTERNAL-IP     EXTERNAL-IP      OS-IMAGE                        KERNEL-VERSION                   CONTAINER-RUNTIME
ip-192-168-1-31.ap-northeast-2.compute.internal    Ready    <none>   25m   v1.34.4-eks-f69f56f   192.168.1.31    43.202.52.69     Amazon Linux 2023.10.20260302   6.12.73-95.123.amzn2023.x86_64   containerd://2.1.5
ip-192-168-2-173.ap-northeast-2.compute.internal   Ready    <none>   25m   v1.34.4-eks-f69f56f   192.168.2.173   13.125.148.230   Amazon Linux 2023.10.20260302   6.12.73-95.123.amzn2023.x86_64   containerd://2.1.5
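Each node's AZ and instance type are exposed as well-known labels (set by the managed node group) and can be shown as extra columns with -L:

kubectl get node -L topology.kubernetes.io/zone,node.kubernetes.io/instance-type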


# Check authentication info: covered in detail in the security week
v:Documents:s-aews:aews:1w $ kubectl get node -v=6

I0318 22:04:35.324760   64149 cmd.go:527] kubectl command headers turned on
I0318 22:04:35.335149   64149 loader.go:402] Config loaded from file:  /Users/test-user/.kube/config
I0318 22:04:35.335881   64149 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0318 22:04:35.335898   64149 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0318 22:04:35.335900   64149 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0318 22:04:35.335903   64149 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0318 22:04:35.335905   64149 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0318 22:04:36.563534   64149 round_trippers.go:632] "Response" verb="GET" url="https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500" status="200 OK" milliseconds=1214
NAME                                               STATUS   ROLES    AGE   VERSION
ip-192-168-1-31.ap-northeast-2.compute.internal    Ready    <none>   26m   v1.34.4-eks-f69f56f
ip-192-168-2-173.ap-northeast-2.compute.internal   Ready    <none>   26m   v1.34.4-eks-f69f56f



## Get a token for authentication with an Amazon EKS cluster
v:Documents:s-aews:aews:1w $ AWS_DEFAULT_REGION=ap-northeast-2
v:Documents:s-aews:aews:1w $ aws eks get-token help


v:Documents:s-aews:aews:1w $ aws eks get-token --cluster-name $CLUSTER_NAME --region $AWS_DEFAULT_REGION | jq
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2026-03-18T13:21:08Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYXAtbm9ydGhlYXN0LTIuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFTQzRSSlNUR09RM1pJSEVZJTJGMjAyNjAzMTglMkZhcC1ub3J0aGVhc3QtMiUyRnN0cyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMzE4VDEzMDcwOFomWC1BbXotRXhwaXJlcz02MCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QlM0J4LWs4cy1hd3MtaWQmWC1BbXotU2lnbmF0dXJlPWZiZWY3ODE3NzFiMjM1MTEwYTllNDY1MmIyYzdmNzFjZDM0NWYyNDcxZjUyNjI2ZjBlOWIzZTVlMjI5ODY1NWU"
  }
}
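The part after the k8s-aws-v1. prefix is a base64url-encoded presigned STS GetCallerIdentity URL, which the API server uses to verify the caller. A rough sketch to peek inside (padding handling may differ across base64 implementations, so the decode can be lossy at the tail):

TOKEN=$(aws eks get-token --cluster-name $CLUSTER_NAME --region $AWS_DEFAULT_REGION | jq -r .status.token)
# strip the prefix, map base64url characters back to standard base64, then decode
echo "${TOKEN#k8s-aws-v1.}" | tr '_-' '/+' | base64 -d 2>/dev/null; echo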

 

  • Hitting "API server endpoint URL + /version" no longer exposes version info (it used to be exposed)
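A quick unauthenticated request confirms this (sketch; -k only skips CA verification for the test):

# expect an Unauthorized error body rather than the version JSON
curl -sk https://$APIDNS/version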

 

 

 

# Check system pods

# Check pod info: how does pod placement differ from on-prem Kubernetes? What stands out about the pod IPs? Networking details are covered in week 2
v:Documents:s-aews:aews:1w $ kubectl get pod -n kube-system

NAME                      READY   STATUS    RESTARTS   AGE
aws-node-b5tvm            2/2     Running   0          34m
aws-node-bln4t            2/2     Running   0          34m
coredns-d487b6fcb-77lxz   1/1     Running   0          34m
coredns-d487b6fcb-ng874   1/1     Running   0          34m
kube-proxy-fg2zs          1/1     Running   0          34m
kube-proxy-t4pvk          1/1     Running   0          34m
v:Documents:s-aews:aews:1w $ kubectl get pod -A

NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-b5tvm            2/2     Running   0          35m
kube-system   aws-node-bln4t            2/2     Running   0          35m
kube-system   coredns-d487b6fcb-77lxz   1/1     Running   0          34m
kube-system   coredns-d487b6fcb-ng874   1/1     Running   0          34m
kube-system   kube-proxy-fg2zs          1/1     Running   0          34m
kube-system   kube-proxy-t4pvk          1/1     Running   0          34m
v:Documents:s-aews:aews:1w $ kubectl get pod -n kube-system -o wide

NAME                      READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
aws-node-b5tvm            2/2     Running   0          35m   192.168.2.173   ip-192-168-2-173.ap-northeast-2.compute.internal   <none>           <none>
aws-node-bln4t            2/2     Running   0          35m   192.168.1.31    ip-192-168-1-31.ap-northeast-2.compute.internal    <none>           <none>
coredns-d487b6fcb-77lxz   1/1     Running   0          34m   192.168.1.100   ip-192-168-1-31.ap-northeast-2.compute.internal    <none>           <none>
coredns-d487b6fcb-ng874   1/1     Running   0          34m   192.168.2.230   ip-192-168-2-173.ap-northeast-2.compute.internal   <none>           <none>
kube-proxy-fg2zs          1/1     Running   0          34m   192.168.2.173   ip-192-168-2-173.ap-northeast-2.compute.internal   <none>           <none>
kube-proxy-t4pvk          1/1     Running   0          34m   192.168.1.31    ip-192-168-1-31.ap-northeast-2.compute.internal    <none>           <none>
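One answer to the pod-IP question above: aws-node and kube-proxy run with hostNetwork, so their pod IP equals the node IP, while coredns gets its own VPC-routable secondary IP from the node's ENI. The two can be compared directly:

kubectl get pod -n kube-system -o custom-columns=NAME:.metadata.name,POD-IP:.status.podIP,HOST-IP:.status.hostIP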



# Check all resources in the kube-system namespace
v:Documents:s-aews:aews:1w $ kubectl get deploy,ds,pod,cm,secret,svc,ep,endpointslice,pdb,sa,role,rolebinding -n kube-system

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           35m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          36m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          35m

NAME                          READY   STATUS    RESTARTS   AGE
pod/aws-node-b5tvm            2/2     Running   0          35m
pod/aws-node-bln4t            2/2     Running   0          35m
pod/coredns-d487b6fcb-77lxz   1/1     Running   0          35m
pod/coredns-d487b6fcb-ng874   1/1     Running   0          35m
pod/kube-proxy-fg2zs          1/1     Running   0          35m
pod/kube-proxy-t4pvk          1/1     Running   0          35m

NAME                                                             DATA   AGE
configmap/amazon-vpc-cni                                         7      36m
configmap/aws-auth                                               1      36m
configmap/coredns                                                1      35m
configmap/extension-apiserver-authentication                     6      39m
configmap/kube-apiserver-legacy-service-account-token-tracking   1      39m
configmap/kube-proxy                                             1      35m
configmap/kube-proxy-config                                      1      35m
configmap/kube-root-ca.crt                                       1      39m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/eks-extension-metrics-api   ClusterIP   10.100.83.92   <none>        443/TCP                  39m
service/kube-dns                    ClusterIP   10.100.0.10    <none>        53/UDP,53/TCP,9153/TCP   35m

NAME                                  ENDPOINTS                                                        AGE
endpoints/eks-extension-metrics-api   172.0.32.0:10443                                                 39m
endpoints/kube-dns                    192.168.1.100:53,192.168.2.230:53,192.168.1.100:53 + 3 more...   35m

NAME                                                             ADDRESSTYPE   PORTS        ENDPOINTS                     AGE
endpointslice.discovery.k8s.io/eks-extension-metrics-api-zpgqq   IPv4          10443        172.0.32.0                    39m
endpointslice.discovery.k8s.io/kube-dns-l2d4m                    IPv4          9153,53,53   192.168.2.230,192.168.1.100   35m

NAME                                 MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
poddisruptionbudget.policy/coredns   N/A             1                 1                     35m

NAME                                                         SECRETS   AGE
serviceaccount/attachdetach-controller                       0         39m
serviceaccount/aws-cloud-provider                            0         39m
serviceaccount/aws-node                                      0         36m
serviceaccount/certificate-controller                        0         39m
serviceaccount/clusterrole-aggregation-controller            0         39m
serviceaccount/coredns                                       0         35m
serviceaccount/cronjob-controller                            0         39m
serviceaccount/daemon-set-controller                         0         39m
serviceaccount/default                                       0         39m
serviceaccount/deployment-controller                         0         39m
serviceaccount/disruption-controller                         0         39m
serviceaccount/endpoint-controller                           0         39m
serviceaccount/endpointslice-controller                      0         39m
serviceaccount/endpointslicemirroring-controller             0         39m
serviceaccount/ephemeral-volume-controller                   0         39m
serviceaccount/expand-controller                             0         39m
serviceaccount/generic-garbage-collector                     0         39m
serviceaccount/horizontal-pod-autoscaler                     0         39m
serviceaccount/job-controller                                0         39m
serviceaccount/kube-proxy                                    0         35m
serviceaccount/legacy-service-account-token-cleaner          0         39m
serviceaccount/namespace-controller                          0         39m
serviceaccount/node-controller                               0         39m
serviceaccount/persistent-volume-binder                      0         39m
serviceaccount/pod-garbage-collector                         0         39m
serviceaccount/pv-protection-controller                      0         39m
serviceaccount/pvc-protection-controller                     0         39m
serviceaccount/replicaset-controller                         0         39m
serviceaccount/replication-controller                        0         39m
serviceaccount/resource-claim-controller                     0         39m
serviceaccount/resourcequota-controller                      0         39m
serviceaccount/root-ca-cert-publisher                        0         39m
serviceaccount/service-account-controller                    0         39m
serviceaccount/service-cidrs-controller                      0         39m
serviceaccount/service-controller                            0         39m
serviceaccount/statefulset-controller                        0         39m
serviceaccount/tagging-controller                            0         39m
serviceaccount/ttl-after-finished-controller                 0         39m
serviceaccount/ttl-controller                                0         39m
serviceaccount/validatingadmissionpolicy-status-controller   0         39m
serviceaccount/volumeattributesclass-protection-controller   0         39m

NAME                                                                            CREATED AT
role.rbac.authorization.k8s.io/eks-vpc-resource-controller-role                 2026-03-18T12:35:17Z
role.rbac.authorization.k8s.io/eks:addon-manager                                2026-03-18T12:35:16Z
role.rbac.authorization.k8s.io/eks:authenticator                                2026-03-18T12:35:14Z
role.rbac.authorization.k8s.io/eks:az-poller                                    2026-03-18T12:35:14Z
role.rbac.authorization.k8s.io/eks:coredns-autoscaler                           2026-03-18T12:35:14Z
role.rbac.authorization.k8s.io/eks:fargate-manager                              2026-03-18T12:35:16Z
role.rbac.authorization.k8s.io/eks:network-policy-controller                    2026-03-18T12:35:17Z
role.rbac.authorization.k8s.io/eks:node-manager                                 2026-03-18T12:35:16Z
role.rbac.authorization.k8s.io/eks:service-operations-configmaps                2026-03-18T12:35:15Z
role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader        2026-03-18T12:35:13Z
role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager   2026-03-18T12:35:13Z
role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler            2026-03-18T12:35:13Z
role.rbac.authorization.k8s.io/system:controller:bootstrap-signer               2026-03-18T12:35:13Z
role.rbac.authorization.k8s.io/system:controller:cloud-provider                 2026-03-18T12:35:13Z
role.rbac.authorization.k8s.io/system:controller:token-cleaner                  2026-03-18T12:35:13Z

NAME                                                                                      ROLE                                                  AGE
rolebinding.rbac.authorization.k8s.io/eks-vpc-resource-controller-rolebinding             Role/eks-vpc-resource-controller-role                 39m
rolebinding.rbac.authorization.k8s.io/eks:addon-manager                                   Role/eks:addon-manager                                39m
rolebinding.rbac.authorization.k8s.io/eks:authenticator                                   Role/eks:authenticator                                39m
rolebinding.rbac.authorization.k8s.io/eks:az-poller                                       Role/eks:az-poller                                    39m
rolebinding.rbac.authorization.k8s.io/eks:coredns-autoscaler                              Role/eks:coredns-autoscaler                           39m
rolebinding.rbac.authorization.k8s.io/eks:fargate-manager                                 Role/eks:fargate-manager                              39m
rolebinding.rbac.authorization.k8s.io/eks:network-policy-controller                       Role/eks:network-policy-controller                    39m
rolebinding.rbac.authorization.k8s.io/eks:node-manager                                    Role/eks:node-manager                                 39m
rolebinding.rbac.authorization.k8s.io/eks:service-operations                              Role/eks:service-operations-configmaps                39m
rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader   Role/extension-apiserver-authentication-reader        39m
rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager      Role/system::leader-locking-kube-controller-manager   39m
rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler               Role/system::leader-locking-kube-scheduler            39m
rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer                  Role/system:controller:bootstrap-signer               39m
rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider                    Role/system:controller:cloud-provider                 39m
rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner                     Role/system:controller:token-cleaner                  39m



# Check container images of all pods: note the dkr.ecr registry!
v:Documents:s-aews:aews:1w $ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c

   2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5
   2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1
   2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3
   2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2
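602401143452 is the regional account that hosts EKS add-on images for ap-northeast-2 (the account differs per region). The add-on versions available for this cluster version can be cross-checked with, for example:

aws eks describe-addon-versions --kubernetes-version 1.34 --addon-name coredns \
  --query 'addons[].addonVersions[].addonVersion' --output text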
   
   
# kube-proxy: iptables mode, bind 0.0.0.0, conntrack, etc.
v:Documents:s-aews:aews:1w $ kubectl describe pod -n kube-system -l k8s-app=kube-proxy

Name:                 kube-proxy-fg2zs
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 ip-192-168-2-173.ap-northeast-2.compute.internal/192.168.2.173
Start Time:           Wed, 18 Mar 2026 21:39:05 +0900
Labels:               controller-revision-hash=f7bb99b97
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.2.173
IPs:
  IP:           192.168.2.173
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  containerd://308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy@sha256:839e8625a1b230e9cd90323a484ed2ec0c2dbb2dca63cc8cbe086e8b1252d8a5
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-proxy
      --v=2
      --config=/var/lib/kube-proxy-config/config
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Wed, 18 Mar 2026 21:39:08 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy-config/ from config (rw)
      /var/lib/kube-proxy/ from kubeconfig (rw)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhthb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kubeconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy-config
    Optional:  false
  kube-api-access-rhthb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  36m   default-scheduler  Successfully assigned kube-system/kube-proxy-fg2zs to ip-192-168-2-173.ap-northeast-2.compute.internal
  Normal  Pulling    36m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2"
  Normal  Pulled     36m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2" in 1.938s (1.938s including waiting). Image size: 31766225 bytes.
  Normal  Created    36m   kubelet            Created container: kube-proxy
  Normal  Started    36m   kubelet            Started container kube-proxy


Name:                 kube-proxy-t4pvk
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 ip-192-168-1-31.ap-northeast-2.compute.internal/192.168.1.31
Start Time:           Wed, 18 Mar 2026 21:39:05 +0900
Labels:               controller-revision-hash=f7bb99b97
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.1.31
IPs:
  IP:           192.168.1.31
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  containerd://c381e7cc96cb8fce49d55c8517dc673dbdec6f441d673700d4f9360f17c83bc1
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy@sha256:839e8625a1b230e9cd90323a484ed2ec0c2dbb2dca63cc8cbe086e8b1252d8a5
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-proxy
      --v=2
      --config=/var/lib/kube-proxy-config/config
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Wed, 18 Mar 2026 21:39:08 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy-config/ from config (rw)
      /var/lib/kube-proxy/ from kubeconfig (rw)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mvmkt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kubeconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy-config
    Optional:  false
  kube-api-access-mvmkt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  36m   default-scheduler  Successfully assigned kube-system/kube-proxy-t4pvk to ip-192-168-1-31.ap-northeast-2.compute.internal
  Normal  Pulling    36m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2"
  Normal  Pulled     36m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2" in 1.994s (1.994s including waiting). Image size: 31766225 bytes.
  Normal  Created    36m   kubelet            Created container: kube-proxy
  Normal  Started    36m   kubelet            Started container kube-proxy
  
  
v:Documents:s-aews:aews:1w $ kubectl get cm -n kube-system kube-proxy-config -o yaml

apiVersion: v1
data:
  config: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
    mode: "iptables"
    nodePortAddresses: null
    oomScoreAdj: -998
    portRange: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2026-03-18T12:39:05Z"
  labels:
    eks.amazonaws.com/component: kube-proxy
    k8s-app: kube-proxy
  name: kube-proxy-config
  namespace: kube-system
  resourceVersion: "1049"
  uid: ab166bde-beec-4947-95c0-3b6b3f789267
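A one-liner sketch to pull just the proxy mode out of that ConfigMap (handy when comparing clusters):

kubectl get cm -n kube-system kube-proxy-config -o jsonpath='{.data.config}' | grep '^mode:'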
  
  
  
# coredns 
v:Documents:s-aews:aews:1w $ kubectl describe pod -n kube-system -l k8s-app=kube-dns

Name:                 coredns-d487b6fcb-77lxz
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 ip-192-168-1-31.ap-northeast-2.compute.internal/192.168.1.31
Start Time:           Wed, 18 Mar 2026 21:39:06 +0900
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=d487b6fcb
Annotations:          <none>
Status:               Running
IP:                   192.168.1.100
IPs:
  IP:           192.168.1.100
Controlled By:  ReplicaSet/coredns-d487b6fcb
Containers:
  coredns:
    Container ID:  containerd://93169998c7bbb205700f274e8dc1fd65a77f14ecee811ee9c36e27bda2627028
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns@sha256:1be6df71365ceee2da14e9e408b6303854e9e5bf5a461357a3f02e4223b82e7d
    Ports:         53/UDP (dns), 53/TCP (dns-tcp), 9153/TCP (metrics)
    Host Ports:    0/UDP (dns), 0/TCP (dns-tcp), 0/TCP (metrics)
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Wed, 18 Mar 2026 21:39:09 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k9kdf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-k9kdf:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    Optional:                 false
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               <none>
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector k8s-app=kube-dns
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37m   default-scheduler  Successfully assigned kube-system/coredns-d487b6fcb-77lxz to ip-192-168-1-31.ap-northeast-2.compute.internal
  Normal  Pulling    37m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3"
  Normal  Pulled     37m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3" in 1.805s (1.805s including waiting). Image size: 25070066 bytes.
  Normal  Created    37m   kubelet            Created container: coredns
  Normal  Started    37m   kubelet            Started container coredns


Name:                 coredns-d487b6fcb-ng874
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 ip-192-168-2-173.ap-northeast-2.compute.internal/192.168.2.173
Start Time:           Wed, 18 Mar 2026 21:39:06 +0900
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=d487b6fcb
Annotations:          <none>
Status:               Running
IP:                   192.168.2.230
IPs:
  IP:           192.168.2.230
Controlled By:  ReplicaSet/coredns-d487b6fcb
Containers:
  coredns:
    Container ID:  containerd://2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns@sha256:1be6df71365ceee2da14e9e408b6303854e9e5bf5a461357a3f02e4223b82e7d
    Ports:         53/UDP (dns), 53/TCP (dns-tcp), 9153/TCP (metrics)
    Host Ports:    0/UDP (dns), 0/TCP (dns-tcp), 0/TCP (metrics)
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Wed, 18 Mar 2026 21:39:09 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pzps (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-4pzps:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    Optional:                 false
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               <none>
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector k8s-app=kube-dns
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37m   default-scheduler  Successfully assigned kube-system/coredns-d487b6fcb-ng874 to ip-192-168-2-173.ap-northeast-2.compute.internal
  Normal  Pulling    37m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3"
  Normal  Pulled     37m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3" in 1.88s (1.88s including waiting). Image size: 25070066 bytes.
  Normal  Created    37m   kubelet            Created container: coredns
  Normal  Started    37m   kubelet            Started container coredns
  
  
### When replacing or updating nodes, all but one of the total CoreDNS replicas must stay alive at all times!
v:Documents:s-aews:aews:1w $ kubectl get pdb -n kube-system coredns -o jsonpath='{.spec}' | jq

{
  "maxUnavailable": 1,
  "selector": {
    "matchLabels": {
      "eks.amazonaws.com/component": "coredns",
      "k8s-app": "kube-dns"
    }
  }
}
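maxUnavailable: 1 means the eviction API refuses (HTTP 429) to evict a second CoreDNS pod while one is already down, which is what throttles kubectl drain during node rotation. The live budget can be watched with:

kubectl get pdb -n kube-system coredns -w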



# aws-node: 2 containers - aws-node (CNI plugin), aws-eks-nodeagent (network policy agent)
v:Documents:s-aews:aews:1w $ kubectl describe pod -n kube-system -l k8s-app=aws-node

Name:                 aws-node-b5tvm
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      aws-node
Node:                 ip-192-168-2-173.ap-northeast-2.compute.internal/192.168.2.173
Start Time:           Wed, 18 Mar 2026 21:38:31 +0900
Labels:               app.kubernetes.io/instance=aws-vpc-cni
                      app.kubernetes.io/name=aws-node
                      controller-revision-hash=79c879494
                      k8s-app=aws-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.2.173
IPs:
  IP:           192.168.2.173
Controlled By:  DaemonSet/aws-node
Init Containers:
  aws-vpc-cni-init:
    Container ID:   containerd://fc515ebae5f8792d83f954bea7e2635d19382a926564db7db6beb7c2efa11811
    Image:          602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5
    Image ID:       602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init@sha256:541f4e7f6d67f7b19d20a9a4b507548088bcf7998e2d9f178b8e71d2b3b3ab9e
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 18 Mar 2026 21:38:34 +0900
      Finished:     Wed, 18 Mar 2026 21:38:34 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      DISABLE_TCP_EARLY_DEMUX:  false
      ENABLE_IPv6:              false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s885g (ro)
Containers:
  aws-node:
    Container ID:   containerd://00cb7651f1f654da44d3a9fe0281e247fa5ac7e2e63169bc36d2e51789680a85
    Image:          602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5
    Image ID:       602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni@sha256:1a4e6837f385273d9078de75160a4736d1e8efbadaec55279b24d3ef252f4a87
    Port:           61678/TCP (metrics)
    Host Port:      61678/TCP (metrics)
    State:          Running
      Started:      Wed, 18 Mar 2026 21:38:38 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      25m
    Liveness:   exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=60s timeout=10s period=10s #success=1 #failure=3
    Readiness:  exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=1s timeout=10s period=10s #success=1 #failure=3
    Environment:
      ADDITIONAL_ENI_TAGS:                    {}
      ANNOTATE_POD_IP:                        false
      AWS_VPC_CNI_NODE_PORT_SUPPORT:          true
      AWS_VPC_ENI_MTU:                        9001
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG:     false
      AWS_VPC_K8S_CNI_EXTERNALSNAT:           false
      AWS_VPC_K8S_CNI_LOGLEVEL:               DEBUG
      AWS_VPC_K8S_CNI_LOG_FILE:               /host/var/log/aws-routed-eni/ipamd.log
      AWS_VPC_K8S_CNI_RANDOMIZESNAT:          prng
      AWS_VPC_K8S_CNI_VETHPREFIX:             eni
      AWS_VPC_K8S_PLUGIN_LOG_FILE:            /var/log/aws-routed-eni/plugin.log
      AWS_VPC_K8S_PLUGIN_LOG_LEVEL:           DEBUG
      CLUSTER_ENDPOINT:                       https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
      CLUSTER_NAME:                           myeks
      DISABLE_INTROSPECTION:                  false
      DISABLE_METRICS:                        false
      DISABLE_NETWORK_RESOURCE_PROVISIONING:  false
      ENABLE_IMDS_ONLY_MODE:                  false
      ENABLE_IPv4:                            true
      ENABLE_IPv6:                            false
      ENABLE_MULTI_NIC:                       false
      ENABLE_POD_ENI:                         false
      ENABLE_PREFIX_DELEGATION:               false
      ENABLE_SUBNET_DISCOVERY:                true
      NETWORK_POLICY_ENFORCING_MODE:          standard
      VPC_CNI_VERSION:                        v1.21.1
      VPC_ID:                                 vpc-045dd0f66fad655bf
      WARM_ENI_TARGET:                        1
      WARM_PREFIX_TARGET:                     1
      MY_NODE_NAME:                            (v1:spec.nodeName)
      MY_POD_NAME:                            aws-node-b5tvm (v1:metadata.name)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/var/log/aws-routed-eni from log-dir (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s885g (ro)
  aws-eks-nodeagent:
    Container ID:  containerd://fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent@sha256:f7bdccebe1209ca05dc7279f7015a3c6d6a181cd631a244fb71ee779b39f04e2
    Port:          8162/TCP (agentmetrics)
    Host Port:     8162/TCP (agentmetrics)
    Args:
      --enable-ipv6=false
      --enable-network-policy=false
      --enable-cloudwatch-logs=false
      --enable-policy-event-logs=false
      --log-file=/var/log/aws-routed-eni/network-policy-agent.log
      --metrics-bind-addr=:8162
      --health-probe-bind-addr=:8163
      --conntrack-cache-cleanup-period=300
      --log-level=debug
    State:          Running
      Started:      Wed, 18 Mar 2026 21:38:40 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      MY_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /sys/fs/bpf from bpf-pin-path (rw)
      /var/log/aws-routed-eni from log-dir (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s885g (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  bpf-pin-path:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/aws-routed-eni
    HostPathType:  DirectoryOrCreate
  run-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/aws-node
    HostPathType:  DirectoryOrCreate
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-s885g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  41m   default-scheduler  Successfully assigned kube-system/aws-node-b5tvm to ip-192-168-2-173.ap-northeast-2.compute.internal
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5" in 2.328s (2.328s including waiting). Image size: 70086422 bytes.
  Normal  Created    41m   kubelet            Created container: aws-vpc-cni-init
  Normal  Started    41m   kubelet            Started container aws-vpc-cni-init
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5" in 1.874s (1.874s including waiting). Image size: 53907715 bytes.
  Normal  Created    41m   kubelet            Created container: aws-node
  Normal  Started    41m   kubelet            Started container aws-node
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1" in 1.791s (1.791s including waiting). Image size: 35639644 bytes.
  Normal  Created    41m   kubelet            Created container: aws-eks-nodeagent
  Normal  Started    41m   kubelet            Started container aws-eks-nodeagent


Name:                 aws-node-bln4t
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      aws-node
Node:                 ip-192-168-1-31.ap-northeast-2.compute.internal/192.168.1.31
Start Time:           Wed, 18 Mar 2026 21:38:31 +0900
Labels:               app.kubernetes.io/instance=aws-vpc-cni
                      app.kubernetes.io/name=aws-node
                      controller-revision-hash=79c879494
                      k8s-app=aws-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.1.31
IPs:
  IP:           192.168.1.31
Controlled By:  DaemonSet/aws-node
Init Containers:
  aws-vpc-cni-init:
    Container ID:   containerd://efd077017556b5366501ded6c85c2303f2c95a8e9957ca5bdec75fb5a6de1fd5
    Image:          602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5
    Image ID:       602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init@sha256:541f4e7f6d67f7b19d20a9a4b507548088bcf7998e2d9f178b8e71d2b3b3ab9e
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 18 Mar 2026 21:38:34 +0900
      Finished:     Wed, 18 Mar 2026 21:38:34 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      DISABLE_TCP_EARLY_DEMUX:  false
      ENABLE_IPv6:              false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxmgp (ro)
Containers:
  aws-node:
    Container ID:   containerd://590f663e0e21d2c66f9236ff4970d1fce9f2f8accaef9cb4f1dedcb42a338678
    Image:          602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5
    Image ID:       602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni@sha256:1a4e6837f385273d9078de75160a4736d1e8efbadaec55279b24d3ef252f4a87
    Port:           61678/TCP (metrics)
    Host Port:      61678/TCP (metrics)
    State:          Running
      Started:      Wed, 18 Mar 2026 21:38:37 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      25m
    Liveness:   exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=60s timeout=10s period=10s #success=1 #failure=3
    Readiness:  exec [/app/grpc-health-probe -addr=:50051 -connect-timeout=5s -rpc-timeout=5s] delay=1s timeout=10s period=10s #success=1 #failure=3
    Environment:
      ADDITIONAL_ENI_TAGS:                    {}
      ANNOTATE_POD_IP:                        false
      AWS_VPC_CNI_NODE_PORT_SUPPORT:          true
      AWS_VPC_ENI_MTU:                        9001
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG:     false
      AWS_VPC_K8S_CNI_EXTERNALSNAT:           false
      AWS_VPC_K8S_CNI_LOGLEVEL:               DEBUG
      AWS_VPC_K8S_CNI_LOG_FILE:               /host/var/log/aws-routed-eni/ipamd.log
      AWS_VPC_K8S_CNI_RANDOMIZESNAT:          prng
      AWS_VPC_K8S_CNI_VETHPREFIX:             eni
      AWS_VPC_K8S_PLUGIN_LOG_FILE:            /var/log/aws-routed-eni/plugin.log
      AWS_VPC_K8S_PLUGIN_LOG_LEVEL:           DEBUG
      CLUSTER_ENDPOINT:                       https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
      CLUSTER_NAME:                           myeks
      DISABLE_INTROSPECTION:                  false
      DISABLE_METRICS:                        false
      DISABLE_NETWORK_RESOURCE_PROVISIONING:  false
      ENABLE_IMDS_ONLY_MODE:                  false
      ENABLE_IPv4:                            true
      ENABLE_IPv6:                            false
      ENABLE_MULTI_NIC:                       false
      ENABLE_POD_ENI:                         false
      ENABLE_PREFIX_DELEGATION:               false
      ENABLE_SUBNET_DISCOVERY:                true
      NETWORK_POLICY_ENFORCING_MODE:          standard
      VPC_CNI_VERSION:                        v1.21.1
      VPC_ID:                                 vpc-045dd0f66fad655bf
      WARM_ENI_TARGET:                        1
      WARM_PREFIX_TARGET:                     1
      MY_NODE_NAME:                            (v1:spec.nodeName)
      MY_POD_NAME:                            aws-node-bln4t (v1:metadata.name)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /host/var/log/aws-routed-eni from log-dir (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxmgp (ro)
  aws-eks-nodeagent:
    Container ID:  containerd://92cecba4bffa75d5ce877cc6b02e3a9cbf29f4bc50ab0094d17c28d2708e9c39
    Image:         602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1
    Image ID:      602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent@sha256:f7bdccebe1209ca05dc7279f7015a3c6d6a181cd631a244fb71ee779b39f04e2
    Port:          8162/TCP (agentmetrics)
    Host Port:     8162/TCP (agentmetrics)
    Args:
      --enable-ipv6=false
      --enable-network-policy=false
      --enable-cloudwatch-logs=false
      --enable-policy-event-logs=false
      --log-file=/var/log/aws-routed-eni/network-policy-agent.log
      --metrics-bind-addr=:8162
      --health-probe-bind-addr=:8163
      --conntrack-cache-cleanup-period=300
      --log-level=debug
    State:          Running
      Started:      Wed, 18 Mar 2026 21:38:40 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  25m
    Environment:
      MY_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /sys/fs/bpf from bpf-pin-path (rw)
      /var/log/aws-routed-eni from log-dir (rw)
      /var/run/aws-node from run-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxmgp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  bpf-pin-path:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/aws-routed-eni
    HostPathType:  DirectoryOrCreate
  run-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/aws-node
    HostPathType:  DirectoryOrCreate
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-pxmgp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  41m   default-scheduler  Successfully assigned kube-system/aws-node-bln4t to ip-192-168-1-31.ap-northeast-2.compute.internal
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.21.1-eksbuild.5" in 2.348s (2.348s including waiting). Image size: 70086422 bytes.
  Normal  Created    41m   kubelet            Created container: aws-vpc-cni-init
  Normal  Started    41m   kubelet            Started container aws-vpc-cni-init
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5" in 1.883s (1.883s including waiting). Image size: 53907715 bytes.
  Normal  Created    41m   kubelet            Created container: aws-node
  Normal  Started    41m   kubelet            Started container aws-node
  Normal  Pulling    41m   kubelet            Pulling image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1"
  Normal  Pulled     41m   kubelet            Successfully pulled image "602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1" in 1.918s (1.918s including waiting). Image size: 35639644 bytes.
  Normal  Created    41m   kubelet            Created container: aws-eks-nodeagent
  Normal  Started    41m   kubelet            Started container aws-eks-nodeagent
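
The describe output above is long; to pull out just the CNI environment variables, the DaemonSet spec can be queried directly (a minimal sketch, using the same jsonpath-plus-jq pattern as earlier; the first container in the pod template is aws-node here):

# Print only the aws-node container's environment variables
kubectl get ds -n kube-system aws-node -o jsonpath='{.spec.template.spec.containers[0].env}' | jq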

# Check add-on information

# List the add-ons installed on the cluster
v:Documents:s-aews:aews:1w $ aws eks list-addons --cluster-name myeks | jq

{
  "addons": [
    "coredns",
    "kube-proxy",
    "vpc-cni"
  ]
}

# Details for a specific add-on
v:Documents:s-aews:aews:1w $ aws eks describe-addon --cluster-name myeks --addon-name vpc-cni | jq

{
  "addon": {
    "addonName": "vpc-cni",
    "clusterName": "myeks",
    "status": "ACTIVE",
    "addonVersion": "v1.21.1-eksbuild.5",
    "health": {
      "issues": []
    },
    "addonArn": "arn:aws:eks:ap-northeast-2:143649248460:addon/myeks/vpc-cni/64ce8079-52a0-cae5-d626-31d075f2d024",
    "createdAt": "2026-03-18T21:36:35.522000+09:00",
    "modifiedAt": "2026-03-18T21:37:33.494000+09:00",
    "tags": {
      "Terraform": "true",
      "Environment": "cloudneta-lab"
    }
  }
}
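
To dump details for every add-on at once, a simple loop over list-addons works (a sketch; assumes the cluster name myeks used above):

# Describe each installed add-on in turn
for ADDON in $(aws eks list-addons --cluster-name myeks --query 'addons[]' --output text); do
  echo "=== ${ADDON} ==="
  aws eks describe-addon --cluster-name myeks --addon-name "${ADDON}" \
    --query 'addon.{name:addonName,version:addonVersion,status:status}' --output table
done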

  • Check worker node information - Docs vs vanilla k8s comparison - Blog
    • Managed node group: check the EKS compute node group and the EC2 Auto Scaling group → then check the associated launch template

# SSH into the nodes

# Check the node IPs and set the public IPs as variables
v:Documents:s-aews:aews:1w $ aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

--------------------------------------------------------------------
|                         DescribeInstances                        |
+------------------+-----------------+------------------+----------+
|   InstanceName   |  PrivateIPAdd   |   PublicIPAdd    | Status   |
+------------------+-----------------+------------------+----------+
|  myeks-node-group|  192.168.2.173  |  13.125.148.230  |  running |
|  myeks-node-group|  192.168.1.31   |  43.202.52.69    |  running |
+------------------+-----------------+------------------+----------+


v:Documents:s-aews:aews:1w $ NODE1=13.125.148.230
NODE2=43.202.52.69

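Rather than copying the addresses by hand, the same describe-instances query can fill the variables (a sketch; assumes the Name tag myeks-node-group shown in the table above):

# Populate NODE1/NODE2 from the API instead of copy-pasting
IPS=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=myeks-node-group" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" --output text)
NODE1=$(echo $IPS | awk '{print $1}')
NODE2=$(echo $IPS | awk '{print $2}')
echo $NODE1 $NODE2
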

# Ping test to the node IPs
v:Documents:s-aews:aews:1w $ ping -c 1 $NODE1
PING 13.125.148.230 (13.125.148.230): 56 data bytes
64 bytes from 13.125.148.230: icmp_seq=0 ttl=116 time=8.237 ms

--- 13.125.148.230 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.237/8.237/8.237/0.000 ms
v:Documents:s-aews:aews:1w $ ping -c 1 $NODE2
PING 43.202.52.69 (43.202.52.69): 56 data bytes
64 bytes from 43.202.52.69: icmp_seq=0 ttl=116 time=6.834 ms

--- 43.202.52.69 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 6.834/6.834/6.834/0.000 ms



# Check the node security group
v:Documents:s-aews:aews:1w $ aws ec2 describe-security-groups --filters "Name=tag:Name,Values=myeks-node-group-sg" | jq

{
  "SecurityGroups": [
    {
      "Description": "Security group for EKS Node Group",
      "GroupName": "myeks-node-group-sg",
      "IpPermissions": [
        {
          "IpProtocol": "-1",
          "IpRanges": [
            {
              "CidrIp": "59.10.176.51/32"
            },
            {
              "CidrIp": "192.168.1.100/32"
            }
          ],
          "Ipv6Ranges": [],
          "PrefixListIds": [],
          "UserIdGroupPairs": []
        }
      ],
      "OwnerId": "143649248460",
      "GroupId": "sg-0653995465c8630b3",
      "IpPermissionsEgress": [],
      "Tags": [
        {
          "Key": "Name",
          "Value": "myeks-node-group-sg"
        }
      ],
      "VpcId": "vpc-045dd0f66fad655bf"
    }
  ]
}
  
  
# SSH into a worker node
v:Documents:s-aews:aews:1w $ ssh -i ~/Documents/aws_keypair/test-key.pem -o StrictHostKeyChecking=no ec2-user@$NODE1 hostname
ip-192-168-2-173.ap-northeast-2.compute.internal


# Tip. ssh config (if missing, create ~/.ssh/config with the content shown below)
cat ~/.ssh/config
No such file or directory

v:Documents:s-aews:aews:1w $ cat ~/.ssh/config       
Host *
    User ec2-user
    IdentityFile ~/Documents/aws_keypair/voieul-key.pem
    StrictHostKeyChecking no
    



v:Documents:s-aews:aews:1w $ ssh ec2-user@$NODE1

   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Wed Mar 11 20:33:07 2026 from 52.94.123.202
[ec2-user@ip-192-168-2-173 ~]$ exit
logout
Connection to 13.125.148.230 closed.


v:Documents:s-aews:aews:1w $ ssh ec2-user@$NODE2
Warning: Permanently added '43.202.52.69' (ED25519) to the list of known hosts.
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Wed Mar 11 20:33:07 2026 from 52.94.123.202
[ec2-user@ip-192-168-1-31 ~]$


-> Alternatively, "ssh $NODE1" alone works, since the ssh config file already specifies the ec2-user account.

# Required settings for k8s node operation - Blog

# Connect to Node1 or Node2
# Switch to root
[ec2-user@ip-192-168-2-173 ~]$ sudo su - 
[root@ip-192-168-2-173 ~]# whoami
root


# Check host information
[root@ip-192-168-2-173 ~]# hostnamectl
 Static hostname: ip-192-168-2-173.ap-northeast-2.compute.internal
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: ec2c1a2556aabb1b129f3399dd63fffc
         Boot ID: 0cf9e509b6284c56ba447bbe0a7e03fe
  Virtualization: amazon
Operating System: Amazon Linux 2023.10.20260302
     CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
          Kernel: Linux 6.12.73-95.123.amzn2023.x86_64
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.medium
Firmware Version: 1.0


# SELinux setting: Permissive is recommended for Kubernetes
[root@ip-192-168-2-173 ~]# getenforce
Permissive
[root@ip-192-168-2-173 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33


# Verify swap is disabled
[root@ip-192-168-2-173 ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.7Gi       347Mi       2.2Gi       1.0Mi       1.2Gi       3.2Gi
Swap:             0B          0B          0B
[root@ip-192-168-2-173 ~]# cat /etc/fstab
#
UUID=d306b125-f320-4f7c-8e41-c19d118b25e5     /           xfs    defaults,noatime  1   1
UUID=3D07-3F7F        /boot/efi       vfat    defaults,noatime,uid=0,gid=0,umask=0077,shortname=winnt,x-systemd.automount 0 2



# Check cgroup: version 2
[root@ip-192-168-2-173 ~]# stat -fc %T /sys/fs/cgroup/
cgroup2fs
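
On cgroup v2, the resource controllers available for delegation are listed at the root of the unified hierarchy (a quick extra check):

# List the controllers exposed by the cgroup v2 hierarchy
cat /sys/fs/cgroup/cgroup.controllers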


# Verify the overlay kernel module is loaded : https://interlude-3.tistory.com/47
[root@ip-192-168-2-173 ~]# lsmod | grep overlay
overlay               217088  7


# List containerd snapshots
[root@ip-192-168-2-173 ~]# ctr -n k8s.io snapshots ls
WARN[0000] DEPRECATION: The `bin_dir` property of `[plugins."io.containerd.cri.v1.runtime".cni`] is deprecated since containerd v2.1 and will be removed in containerd v2.3. Use `bin_dirs` in the same section instead. 
KEY                                                                     PARENT                                                                  KIND      
00cb7651f1f654da44d3a9fe0281e247fa5ac7e2e63169bc36d2e51789680a85        sha256:e8c96cf7003a14c5cc198ab010604cf65abc33bca4317693411c842b89afdaf1 Active    
097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5        sha256:d04a986803441bd3b5d521f093922b292e6adf38b1d473f320f3175b564ac95c Active    
2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a        sha256:7aadb4f9fcfff538563b51bad2fa2eb5e44e4fdf55b5712720b09260f755e084 Active    
308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4        sha256:7dfc9d66fb3a8ec16a46f28c48abf0d9faa15fefdcc68c72ce01ec6d94713549 Active    
7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb        sha256:d04a986803441bd3b5d521f093922b292e6adf38b1d473f320f3175b564ac95c Active    
983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f        sha256:d04a986803441bd3b5d521f093922b292e6adf38b1d473f320f3175b564ac95c Active    
fc515ebae5f8792d83f954bea7e2635d19382a926564db7db6beb7c2efa11811        sha256:f6c6aa42805569daa5f5d803fd7643be78a870742638e425a4c6e91246954f82 Active    
fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332        sha256:1d29ddbe3fde80e61545aa67618e2c9f9c66c6c10514bf0b25c462e6c7ca95ea Active    
sha256:1d29ddbe3fde80e61545aa67618e2c9f9c66c6c10514bf0b25c462e6c7ca95ea sha256:ef780f5b36fc0627e5662be1a6fb38fecf3b631c7ce925c4f62c65210d336dc4 Committed 
sha256:2e44cea7eafc09df80c20c0e58a28bde2b7cb1e60aed38a016e2b6f61a33fb27 sha256:98dcdcf9d7ee1aa0961ea54b9e53c4aaa1f65ae15e7a905e46ee5652513e0500 Committed 
sha256:332cd5791f6431b7da15c3d7f47cc734d62aaeda09959775946713a4e20ee88c sha256:ef48011b800002de4779d49a85a8a17b8aca9a3eb720a76d1c4903120469d652 Committed 
sha256:3cabed2eb6ffb1a83cd997f0dd66ae73abc3707a7686e4e2851f369f30e5e558                                                                         Committed 
sha256:462f2eff945a51557aee8bc26a666488b770aa1d248de1ba0ddfac74248643f1 sha256:dd7fa3fc215f9dd3f6e54f7a781c8d5e07cbe2cf16fa906aad7838ef8cb58837 Committed 
sha256:4caa8c1f170ef5eabe7d8021700b197dbc6ec2de6b84e8cf7e7fb3341c4d1f03 sha256:916d785d4fc30e97613c0f173dab29e61b979965040d2c5172afa162f68794d3 Committed 
sha256:53a62a8cd216d79469b5fa82ff265302c5c00878bdeaad00042da40a049311b9                                                                         Committed 
sha256:67f338cd7df51b2f9676ed831148e777203375df2ec0621c06f0d0041ce44954 sha256:c72e8ab6e7367f40f2372dfb6b6ccd957f951078f05844e80cd568fc90129847 Committed 
sha256:6978733d716fbedf6708fb8062a8aaf2236c77356ed8b5f583b54e758bf3cd2f sha256:8da4d76fd64eb2a33751853a1f60f672fcc204d5c5e6d84a8350890b1aad0eee Committed 
sha256:7aadb4f9fcfff538563b51bad2fa2eb5e44e4fdf55b5712720b09260f755e084 sha256:53a62a8cd216d79469b5fa82ff265302c5c00878bdeaad00042da40a049311b9 Committed 
sha256:7dfc9d66fb3a8ec16a46f28c48abf0d9faa15fefdcc68c72ce01ec6d94713549 sha256:8820b64830a725575a2d0da0e661ceb440f613c8e01f74eff4fb210b9466434b Committed 
sha256:8820b64830a725575a2d0da0e661ceb440f613c8e01f74eff4fb210b9466434b sha256:4caa8c1f170ef5eabe7d8021700b197dbc6ec2de6b84e8cf7e7fb3341c4d1f03 Committed 
sha256:8da4d76fd64eb2a33751853a1f60f672fcc204d5c5e6d84a8350890b1aad0eee sha256:941b379f245719934f261bfb63698e81488a254d2ce58adbcbdbf6ed08c623c2 Committed 
sha256:916d785d4fc30e97613c0f173dab29e61b979965040d2c5172afa162f68794d3                                                                         Committed 
sha256:941b379f245719934f261bfb63698e81488a254d2ce58adbcbdbf6ed08c623c2 sha256:f51db364c6b29a572234d37e29bac943ca56de484a471a241f098192bafba7c0 Committed 
sha256:95a63f1edf818bcf7b36746df6fdeeee85140cc8838295f0f092f31b180b23aa sha256:c72e8ab6e7367f40f2372dfb6b6ccd957f951078f05844e80cd568fc90129847 Committed 
sha256:98dcdcf9d7ee1aa0961ea54b9e53c4aaa1f65ae15e7a905e46ee5652513e0500 sha256:95a63f1edf818bcf7b36746df6fdeeee85140cc8838295f0f092f31b180b23aa Committed 
sha256:c72e8ab6e7367f40f2372dfb6b6ccd957f951078f05844e80cd568fc90129847 sha256:3cabed2eb6ffb1a83cd997f0dd66ae73abc3707a7686e4e2851f369f30e5e558 Committed 
sha256:d04a986803441bd3b5d521f093922b292e6adf38b1d473f320f3175b564ac95c                                                                         Committed 
sha256:d79971e8aedb959f9d850b85a4e182a770b3e885841b36e93dca6ac2457e9095                                                                         Committed 
sha256:dd7fa3fc215f9dd3f6e54f7a781c8d5e07cbe2cf16fa906aad7838ef8cb58837 sha256:6978733d716fbedf6708fb8062a8aaf2236c77356ed8b5f583b54e758bf3cd2f Committed 
sha256:e8c96cf7003a14c5cc198ab010604cf65abc33bca4317693411c842b89afdaf1 sha256:2e44cea7eafc09df80c20c0e58a28bde2b7cb1e60aed38a016e2b6f61a33fb27 Committed 
sha256:ef48011b800002de4779d49a85a8a17b8aca9a3eb720a76d1c4903120469d652 sha256:d79971e8aedb959f9d850b85a4e182a770b3e885841b36e93dca6ac2457e9095 Committed 
sha256:ef780f5b36fc0627e5662be1a6fb38fecf3b631c7ce925c4f62c65210d336dc4 sha256:462f2eff945a51557aee8bc26a666488b770aa1d248de1ba0ddfac74248643f1 Committed 
sha256:f51db364c6b29a572234d37e29bac943ca56de484a471a241f098192bafba7c0 sha256:332cd5791f6431b7da15c3d7f47cc734d62aaeda09959775946713a4e20ee88c Committed 
sha256:f6c6aa42805569daa5f5d803fd7643be78a870742638e425a4c6e91246954f82 sha256:67f338cd7df51b2f9676ed831148e777203375df2ec0621c06f0d0041ce44954 Committed 


[root@ip-192-168-2-173 ~]# ls -la /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/
total 16
drwx------. 36 root root 16384 Mar 18 12:39 .
drwx------.  3 root root    42 Mar 11 20:30 ..
drwx------.  4 root root    28 Mar 11 20:33 10
drwx------.  4 root root    28 Mar 18 12:38 11
drwx------.  4 root root    28 Mar 18 12:38 12
drwx------.  4 root root    28 Mar 18 12:38 13
drwx------.  4 root root    28 Mar 18 12:38 14
drwx------.  4 root root    28 Mar 18 12:38 15
drwx------.  4 root root    28 Mar 18 12:38 16
drwx------.  4 root root    28 Mar 18 12:38 17
drwx------.  4 root root    28 Mar 18 12:38 18
drwx------.  4 root root    28 Mar 18 12:38 19
drwx------.  4 root root    28 Mar 18 12:38 20
drwx------.  4 root root    28 Mar 18 12:38 21
drwx------.  4 root root    28 Mar 18 12:38 22
drwx------.  4 root root    28 Mar 18 12:38 23
drwx------.  4 root root    28 Mar 18 12:38 24
drwx------.  4 root root    28 Mar 18 12:38 25
drwx------.  4 root root    28 Mar 18 12:38 26
drwx------.  4 root root    28 Mar 18 12:38 27
drwx------.  4 root root    28 Mar 18 12:38 28
drwx------.  4 root root    28 Mar 18 12:38 29
drwx------.  4 root root    28 Mar 18 12:38 30
drwx------.  4 root root    28 Mar 18 12:38 31
drwx------.  4 root root    28 Mar 18 12:38 32
drwx------.  4 root root    28 Mar 18 12:38 33
drwx------.  4 root root    28 Mar 18 12:39 34
drwx------.  4 root root    28 Mar 18 12:39 35
drwx------.  4 root root    28 Mar 18 12:39 36
drwx------.  4 root root    28 Mar 18 12:39 37
drwx------.  4 root root    28 Mar 18 12:39 38
drwx------.  4 root root    28 Mar 18 12:39 39
drwx------.  4 root root    28 Mar 18 12:39 40
drwx------.  4 root root    28 Mar 18 12:39 41
drwx------.  4 root root    28 Mar 18 12:39 42
drwx------.  4 root root    28 Mar 18 12:39 43


[root@ip-192-168-2-173 ~]# tree /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/ -L 3
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/
├── 10
│   ├── fs
│   │   └── pause
│   └── work
├── 11
│   ├── fs
│   │   ├── dev
│   │   ├── etc
│   │   ├── proc
│   │   └── sys
│   └── work
│       └── work
├── 12
│   ├── fs
│   │   ├── bin -> usr/bin
│   │   ├── boot
│   │   ├── dev
│   │   ├── etc
│   │   ├── home
│   │   ├── lib -> usr/lib
│   │   ├── lib64 -> usr/lib64
│   │   ├── media
│   │   ├── mnt
│   │   ├── opt
│   │   ├── proc
│   │   ├── root
│   │   ├── run
│   │   ├── sbin -> usr/sbin
│   │   ├── srv
│   │   ├── sys
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 13
│   ├── fs
│   │   ├── etc
│   │   ├── usr
│   │   └── var
│   └── work
├── 14
│   ├── fs
│   │   └── init
│   └── work
├── 15
│   ├── fs
│   │   └── init
│   └── work
├── 16
│   ├── fs
│   │   ├── etc
│   │   ├── host
│   │   └── run
│   └── work
│       └── work
├── 17
│   ├── fs
│   │   ├── dev
│   │   ├── etc
│   │   ├── usr
│   │   └── var
│   └── work
├── 18
│   ├── fs
│   │   └── app
│   └── work
├── 19
│   ├── fs
│   │   └── app
│   └── work
├── 20
│   ├── fs
│   │   ├── etc
│   │   ├── usr
│   │   └── var
│   └── work
├── 21
│   ├── fs
│   │   ├── etc
│   │   ├── host
│   │   ├── run
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
│       └── work
├── 22
│   ├── fs
│   │   ├── bin -> usr/bin
│   │   ├── boot
│   │   ├── dev
│   │   ├── etc
│   │   ├── home
│   │   ├── lib -> usr/lib
│   │   ├── lib64 -> usr/lib64
│   │   ├── media
│   │   ├── mnt
│   │   ├── opt
│   │   ├── proc
│   │   ├── root
│   │   ├── run
│   │   ├── sbin -> usr/sbin
│   │   ├── srv
│   │   ├── sys
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 23
│   ├── fs
│   │   ├── etc
│   │   ├── usr
│   │   └── var
│   └── work
├── 24
│   ├── fs
│   │   └── controller
│   └── work
├── 25
│   ├── fs
│   │   └── aws-eks-na-cli
│   └── work
├── 26
│   ├── fs
│   │   └── aws-eks-na-cli-v6
│   └── work
├── 27
│   ├── fs
│   │   └── tc.v4ingress.bpf.o
│   └── work
├── 28
│   ├── fs
│   │   └── tc.v4egress.bpf.o
│   └── work
├── 29
│   ├── fs
│   │   └── tc.v6ingress.bpf.o
│   └── work
├── 30
│   ├── fs
│   │   └── tc.v6egress.bpf.o
│   └── work
├── 31
│   ├── fs
│   │   └── v4events.bpf.o
│   └── work
├── 32
│   ├── fs
│   │   └── v6events.bpf.o
│   └── work
├── 33
│   ├── fs
│   │   ├── etc
│   │   ├── host
│   │   ├── run
│   │   └── var
│   └── work
│       └── work
├── 34
│   ├── fs
│   │   ├── dev
│   │   ├── etc
│   │   ├── proc
│   │   └── sys
│   └── work
│       └── work
├── 35
│   ├── fs
│   │   ├── bin -> usr/bin
│   │   ├── boot
│   │   ├── dev
│   │   ├── etc
│   │   ├── home
│   │   ├── lib -> usr/lib
│   │   ├── lib64 -> usr/lib64
│   │   ├── media
│   │   ├── mnt
│   │   ├── opt
│   │   ├── proc
│   │   ├── root
│   │   ├── run
│   │   ├── sbin -> usr/sbin
│   │   ├── srv
│   │   ├── sys
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 36
│   ├── fs
│   │   ├── etc
│   │   ├── run
│   │   ├── usr
│   │   └── var
│   └── work
├── 37
│   ├── fs
│   │   ├── dev
│   │   ├── etc
│   │   ├── proc
│   │   └── sys
│   └── work
│       └── work
├── 38
│   ├── fs
│   │   ├── etc
│   │   ├── usr
│   │   └── var
│   └── work
├── 39
│   ├── fs
│   │   └── usr
│   └── work
├── 40
│   ├── fs
│   │   ├── bin -> usr/bin
│   │   ├── boot
│   │   ├── dev
│   │   ├── etc
│   │   ├── home
│   │   ├── lib -> usr/lib
│   │   ├── lib64 -> usr/lib64
│   │   ├── media
│   │   ├── mnt
│   │   ├── opt
│   │   ├── proc
│   │   ├── root
│   │   ├── run
│   │   ├── sbin -> usr/sbin
│   │   ├── srv
│   │   ├── sys
│   │   ├── tmp
│   │   ├── usr
│   │   └── var
│   └── work
├── 41
│   ├── fs
│   │   ├── etc
│   │   ├── run
│   │   └── var
│   └── work
│       └── work
├── 42
│   ├── fs
│   │   └── coredns
│   └── work
└── 43
    ├── fs
    │   ├── etc
    │   └── run
    └── work
        └── work

241 directories, 11 files
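
These numbered snapshot directories are the layers containerd stacks into each container's root filesystem; the lowerdir/upperdir chain is visible in the live overlay mounts (run as root on the node):

# Show how the snapshot dirs are assembled into overlayfs mounts
mount -t overlay | head -3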



# Check kernel parameters
[root@ip-192-168-2-173 ~]# tree /etc/sysctl.d/
/etc/sysctl.d/
├── 00-defaults.conf
├── 99-amazon.conf
├── 99-kubernetes-cri.conf
└── 99-sysctl.conf -> ../sysctl.conf

0 directories, 4 files
[root@ip-192-168-2-173 ~]# cat /etc/sysctl.d/00-defaults.conf
# Maximize console logging level for kernel printk messages
kernel.printk = 8 4 1 7

# Wait 5 seconds and then reboot
kernel.panic = 5

# Allow neighbor cache entries to expire even when the cache is not full
net.ipv4.neigh.default.gc_thresh1 = 0
net.ipv6.neigh.default.gc_thresh1 = 0

# Avoid neighbor table contention in large subnets
net.ipv4.neigh.default.gc_thresh2 = 15360
net.ipv6.neigh.default.gc_thresh2 = 15360
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv6.neigh.default.gc_thresh3 = 16384

# Increasing to account for skb structure growth since the 3.4.x kernel series
net.ipv4.tcp_wmem = 4096 20480 4194304

# Set default TTL to 127.
net.ipv4.ip_default_ttl = 127

# Disable unprivileged access to bpf
kernel.unprivileged_bpf_disabled = 1

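The values in these files can be cross-checked against the running kernel (a minimal sketch picking two of the parameters above):

# Confirm the live kernel picked up the settings
sysctl net.ipv4.ip_default_ttl kernel.unprivileged_bpf_disabled
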
# Check time synchronization

# Check the configuration
[root@ip-192-168-2-173 ~]# grep "^[^#]" /etc/chrony.conf
sourcedir /run/chrony.d
confdir /etc/chrony.d
sourcedir /etc/chrony.d
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
ntsdumpdir /var/lib/chrony
logdir /var/log/chrony
log measurements statistics tracking


[root@ip-192-168-2-173 ~]# tree /run/chrony.d/
/run/chrony.d/
├── amazon-pool.sources -> /usr/share/amazon-chrony-config/amazon-pool_aws.sources
└── link-local-ipv4.sources -> /usr/share/amazon-chrony-config/link-local-ipv4_unspecified.sources

0 directories, 2 files


# Check the time server pool
[root@ip-192-168-2-173 ~]# cat /usr/share/amazon-chrony-config/link-local-ipv4_unspecified.sources
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
[root@ip-192-168-2-173 ~]# cat /usr/share/amazon-chrony-config/amazon-pool_aws.sources
# Use Amazon Public NTP leap-second smearing time sources
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-time-sync
pool time.aws.com iburst


[root@ip-192-168-2-173 ~]# nslookup time.aws.com
Server:         192.168.0.2
Address:        192.168.0.2#53

Non-authoritative answer:
Name:   time.aws.com
Address: 54.90.191.9
Name:   time.aws.com
Address: 54.81.127.33
Name:   time.aws.com
Address: 44.201.148.133
Name:   time.aws.com
Address: 54.197.201.248
Name:   time.aws.com
Address: 3.87.127.143
Name:   time.aws.com
Address: 2600:1f18:4a3:6901:f191:59e4:8a22:4973
Name:   time.aws.com
Address: 2600:1f18:4a3:6902:b95e:13c8:ea01:fda0
Name:   time.aws.com
Address: 2600:1f18:4a3:6900:d7a7:7caa:d10e:8ea
Name:   time.aws.com
Address: 2600:1f18:4a3:6902:9df1:2d37:68ce:e611
Name:   time.aws.com
Address: 2600:1f18:4a3:6900:e76f:fcda:a6d1:b940


# Check status
[root@ip-192-168-2-173 ~]# timedatectl status
               Local time: Wed 2026-03-18 13:52:41 UTC
           Universal time: Wed 2026-03-18 13:52:41 UTC
                 RTC time: Wed 2026-03-18 13:52:41
                Time zone: n/a (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
          
          
[root@ip-192-168-2-173 ~]# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 169.254.169.123               3   4   377     4  +6915ns[+7759ns] +/-  345us
^- ec2-13-218-199-213.compu>     4  10   377   451  -1417us[-1412us] +/-   91ms
^- ec2-52-207-222-50.comput>     4  10   377   255  -4681us[-4678us] +/-   90ms
^- ec2-54-210-225-137.compu>     4  10   377   263  +1448us[+1449us] +/-   90ms
^- ec2-3-86-4-106.compute-1>     4  10   377   256   -498us[ -495us] +/-   89ms

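chronyc can also summarize the overall sync state, reference source, and current offset in one command:

# One-shot summary of clock synchronization
chronyc tracking
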
# Check container information

https://iximiuz.com/en/tags/?tag=crictl


# Check basic info
[root@ip-192-168-2-173 ~]# nerdctl info
Client:
 Namespace:     k8s.io
 Debug Mode:    false

Server:
 Server Version: 2.1.5
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Log:     fluentd journald json-file none syslog
  Storage: native overlayfs
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version:   6.12.73-95.123.amzn2023.x86_64
 Operating System: Amazon Linux 2023.10.20260302
 OSType:           linux
 Architecture:     x86_64
 CPUs:             2
 Total Memory:     3.745GiB
 Name:             ip-192-168-2-173.ap-northeast-2.compute.internal
 ID:               00eabdb5-6e77-42b9-91c9-05c528908b82


# List running containers
[root@ip-192-168-2-173 ~]# nerdctl ps
CONTAINER ID    IMAGE                                                                                                  COMMAND                   CREATED              STATUS    PORTS    NAMES
2f6927f14d2b    602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.13.2-eksbuild.3                       "/coredns -conf /etc…"    About an hour ago    Up                 k8s://kube-system/coredns-d487b6fcb-ng874/coredns
308970dfd717    602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.34.5-eksbuild.2                    "kube-proxy --v=2 --…"    About an hour ago    Up                 k8s://kube-system/kube-proxy-fg2zs/kube-proxy
097a4f34e0f4    602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.10                                            "/pause"                  About an hour ago    Up                 k8s://kube-system/coredns-d487b6fcb-ng874
983a2ad5fd13    602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.10                                            "/pause"                  About an hour ago    Up                 k8s://kube-system/kube-proxy-fg2zs
fd81f3ed458a    602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.3.1-eksbuild.1    "/controller --enabl…"    About an hour ago    Up                 k8s://kube-system/aws-node-b5tvm/aws-eks-nodeagent
00cb7651f1f6    602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.21.1-eksbuild.5                    "/app/aws-vpc-cni"        About an hour ago    Up                 k8s://kube-system/aws-node-b5tvm/aws-node
7cdcd0c59513    602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.10                                            "/pause"                  About an hour ago    Up                 k8s://kube-system/aws-node-b5tvm


[root@ip-192-168-2-173 ~]# nerdctl images
REPOSITORY                                                                           TAG                   IMAGE ID        CREATED              PLATFORM       SIZE       BLOB SIZE
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns                        <none>                1be6df71365c    About an hour ago    linux/amd64    86.71MB    25.07MB
<none>                                                                               <none>                1be6df71365c    About an hour ago    linux/amd64    86.71MB    25.07MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns                        v1.13.2-eksbuild.3    1be6df71365c    About an hour ago    linux/amd64    86.71MB    25.07MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy                     <none>                839e8625a1b2    About an hour ago    linux/amd64    93.9MB     31.77MB
<none>                                                                               <none>                839e8625a1b2    About an hour ago    linux/amd64    93.9MB     31.77MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy                     v1.34.5-eksbuild.2    839e8625a1b2    About an hour ago    linux/amd64    93.9MB     31.77MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent    <none>                f7bdccebe120    About an hour ago    linux/amd64    110.4MB    35.64MB
<none>                                                                               <none>                f7bdccebe120    About an hour ago    linux/amd64    110.4MB    35.64MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent    v1.3.1-eksbuild.1     f7bdccebe120    About an hour ago    linux/amd64    110.4MB    35.64MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni                     <none>                1a4e6837f385    About an hour ago    linux/amd64    185.7MB    53.91MB
<none>                                                                               <none>                1a4e6837f385    About an hour ago    linux/amd64    185.7MB    53.91MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni                     v1.21.1-eksbuild.5    1a4e6837f385    About an hour ago    linux/amd64    185.7MB    53.91MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init                <none>                541f4e7f6d67    About an hour ago    linux/amd64    141.5MB    70.09MB
<none>                                                                               <none>                541f4e7f6d67    About an hour ago    linux/amd64    141.5MB    70.09MB
602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init                v1.21.1-eksbuild.5    541f4e7f6d67    About an hour ago    linux/amd64    141.5MB    70.09MB
localhost/kubernetes/pause                                                           latest                76040a49ba6f    6 days ago           linux/amd64    737.3kB    318kB
localhost/kubernetes/pause                                                           latest                76040a49ba6f    6 days ago           linux/arm64    0B         265.6kB
<none>                                                                               <none>                76040a49ba6f    6 days ago           linux/arm64    0B         265.6kB
<none>                                                                               <none>                76040a49ba6f    6 days ago           linux/amd64    737.3kB    318kB
602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause                               3.10                  76040a49ba6f    6 days ago           linux/amd64    737.3kB    318kB
602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause                               3.10                  76040a49ba6f    6 days ago           linux/arm64    0B         265.6kB



[root@ip-192-168-2-173 ~]# nerdctl images | grep localhost
localhost/kubernetes/pause                                                           latest                76040a49ba6f    6 days ago           linux/amd64    737.3kB    318kB
localhost/kubernetes/pause                                                           latest                76040a49ba6f    6 days ago           linux/arm64    0B         265.6kB

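If crictl is installed on the node, the same containers and images can be listed through the CRI socket, i.e. exactly the view the kubelet has (a sketch):

# CRI-level view of the same containers and images
crictl ps
crictl images
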
# Check containerd information - Blog

https://labs.iximiuz.com/courses/containerd-cli/ctr/image-management


# Check processes
[root@ip-192-168-2-173 ~]# pstree -a
systemd --switched-root --system --deserialize=32
  ├─agetty -o -p -- \\u --noclear - linux
  ├─agetty -o -p -- \\u --keep-baud 115200,57600,38400,9600 - vt220
  ├─amazon-ssm-agen
  │   └─8*[{amazon-ssm-agen}]
  ├─auditd
  │   └─{auditd}
  ├─chronyd -F 2
  ├─containerd
  │   └─11*[{containerd}]
  ├─containerd-shim -namespace k8s.io -id 7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb -address /run/containerd/containerd.sock
  │   ├─aws-vpc-cni
  │   │   ├─aws-k8s-agent
  │   │   │   └─8*[{aws-k8s-agent}]
  │   │   └─3*[{aws-vpc-cni}]
  │   ├─controller --enable-ipv6=false --enable-network-policy=false --enable-cloudwatch-logs=false --enable-policy-event-logs=false --log-file=/var/log/aws-routed-eni/network-policy-agent.log...
  │   │   └─7*[{controller}]
  │   ├─pause
  │   └─14*[{containerd-shim}]
  ├─containerd-shim -namespace k8s.io -id 983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f -address /run/containerd/containerd.sock
  │   ├─kube-proxy --v=2 --config=/var/lib/kube-proxy-config/config --hostname-override=ip-192-168-2-173.ap-northeast-2.compute.internal
  │   │   └─4*[{kube-proxy}]
  │   ├─pause
  │   └─12*[{containerd-shim}]
  ├─containerd-shim -namespace k8s.io -id 097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5 -address /run/containerd/containerd.sock
  │   ├─coredns -conf /etc/coredns/Corefile
  │   │   └─7*[{coredns}]
  │   ├─pause
  │   └─12*[{containerd-shim}]
  ├─dbus-broker-lau --scope system --audit
  │   └─dbus-broker --log 4 --controller 9 --machine-id ec2c1a2556aabb1b129f3399dd63fffc --max-bytes 536870912 --max-fds 4096 --max-matches 16384 --audit
  ├─gssproxy -D
  │   └─5*[{gssproxy}]
  ├─irqbalance --foreground
  │   └─{irqbalance}
  ├─kubelet --node-ip=192.168.2.173 --runtime-cgroups=/runtime.slice/containerd.service --config=/etc/kubernetes/kubelet/config.json --config-dir=/etc/kubernetes/kubelet/config.json.d --kubeconf
  │   └─13*[{kubelet}]
  ├─sshd
  │   └─sshd
  │       └─sshd
  │           └─bash
  │               └─sudo su -
  │                   └─sudo su -
  │                       └─su -
  │                           └─bash
  │                               └─pstree -a
  ├─systemd --user
  │   └─(sd-pam)
  ├─systemd-homed
  ├─systemd-journal
  ├─systemd-logind
  ├─systemd-network
  ├─systemd-resolve
  ├─systemd-udevd
  └─systemd-userdbd
      ├─systemd-userwor
      ├─systemd-userwor
      └─systemd-userwor
      
      
      
[root@ip-192-168-2-173 ~]# systemctl status containerd --no-pager -l
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; preset: disabled)
    Drop-In: /etc/systemd/system/containerd.service.d
             └─00-runtime-slice.conf
     Active: active (running) since Wed 2026-03-18 12:38:29 UTC; 1h 20min ago
       Docs: https://containerd.io
   Main PID: 2195 (containerd)
      Tasks: 53
     Memory: 649.0M
        CPU: 1min 2.796s
     CGroup: /runtime.slice/containerd.service
             ├─2195 /usr/bin/containerd
             ├─2295 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb -address /run/containerd/containerd.sock
             ├─2711 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f -address /run/containerd/containerd.sock
             └─2815 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5 -address /run/containerd/containerd.sock

Mar 18 12:43:40 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:43:40.491112948Z" level=info msg="container event discarded" container=fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332 type=CONTAINER_CREATED_EVENT
Mar 18 12:43:40 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:43:40.583498965Z" level=info msg="container event discarded" container=fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332 type=CONTAINER_STARTED_EVENT
Mar 18 12:44:06 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:06.140752384Z" level=info msg="container event discarded" container=983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f type=CONTAINER_CREATED_EVENT
Mar 18 12:44:06 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:06.140836981Z" level=info msg="container event discarded" container=983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f type=CONTAINER_STARTED_EVENT
Mar 18 12:44:07 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:07.255594498Z" level=info msg="container event discarded" container=097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5 type=CONTAINER_CREATED_EVENT
Mar 18 12:44:07 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:07.255668685Z" level=info msg="container event discarded" container=097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5 type=CONTAINER_STARTED_EVENT
Mar 18 12:44:08 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:08.105095003Z" level=info msg="container event discarded" container=308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4 type=CONTAINER_CREATED_EVENT
Mar 18 12:44:08 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:08.219812420Z" level=info msg="container event discarded" container=308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4 type=CONTAINER_STARTED_EVENT
Mar 18 12:44:09 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:09.152994215Z" level=info msg="container event discarded" container=2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a type=CONTAINER_CREATED_EVENT
Mar 18 12:44:09 ip-192-168-2-173.ap-northeast-2.compute.internal containerd[2195]: time="2026-03-18T12:44:09.224364756Z" level=info msg="container event discarded" container=2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a type=CONTAINER_STARTED_EVENT




[root@ip-192-168-2-173 ~]# cat /usr/lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target dbus.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target





# Check the daemon configuration
[root@ip-192-168-2-173 ~]# cat /etc/containerd/config.toml
version = 3
root = "/var/lib/containerd"
state = "/run/containerd"

[grpc]
address = "/run/containerd/containerd.sock"

[plugins.'io.containerd.cri.v1.images']
discard_unpacked_layers = true

[plugins.'io.containerd.cri.v1.images'.pinned_images]
sandbox = "localhost/kubernetes/pause"

[plugins."io.containerd.cri.v1.images".registry]
config_path = "/etc/containerd/certs.d:/etc/docker/certs.d"

[plugins.'io.containerd.cri.v1.runtime']
enable_cdi = true

[plugins.'io.containerd.cri.v1.runtime'.containerd]
default_runtime_name = "runc"

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
base_runtime_spec = "/etc/containerd/base-runtime-spec.json"

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
BinaryName = "/usr/sbin/runc"
SystemdCgroup = true

[plugins.'io.containerd.cri.v1.runtime'.cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"

# Check the default OCI runtime spec used when creating containers
[root@ip-192-168-2-173 ~]# cat /etc/containerd/base-runtime-spec.json  | jq
{
  "linux": {
    "maskedPaths": [
      "/proc/acpi",
      "/proc/asound",
      "/proc/kcore",
      "/proc/keys",
      "/proc/latency_stats",
      "/proc/sched_debug",
      "/proc/scsi",
      "/proc/timer_list",
      "/proc/timer_stats",
      "/sys/firmware"
    ],
    "namespaces": [
      {
        "type": "ipc"
      },
      {
        "type": "mount"
      },
      {
        "type": "network"
      },
      {
        "type": "pid"
      },
      {
        "type": "uts"
      }
    ],
    "readonlyPaths": [
      "/proc/bus",
      "/proc/fs",
      "/proc/irq",
      "/proc/sys",
      "/proc/sysrq-trigger"
    ],
    "resources": {
      "devices": [
        {
          "access": "rwm",
          "allow": false
        }
      ]
    }
  },
  "mounts": [
    {
      "destination": "/dev",
      "options": [
        "nosuid",
        "strictatime",
        "mode=755",
        "size=65536k"
      ],
      "source": "tmpfs",
      "type": "tmpfs"
    },
    {
      "destination": "/dev/mqueue",
      "options": [
        "nosuid",
        "noexec",
        "nodev"
      ],
      "source": "mqueue",
      "type": "mqueue"
    },
    {
      "destination": "/dev/pts",
      "options": [
        "nosuid",
        "noexec",
        "newinstance",
        "ptmxmode=0666",
        "mode=0620",
        "gid=5"
      ],
      "source": "devpts",
      "type": "devpts"
    },
    {
      "destination": "/proc",
      "options": [
        "nosuid",
        "noexec",
        "nodev"
      ],
      "source": "proc",
      "type": "proc"
    },
    {
      "destination": "/sys",
      "options": [
        "nosuid",
        "noexec",
        "nodev",
        "ro"
      ],
      "source": "sysfs",
      "type": "sysfs"
    }
  ],
  "ociVersion": "1.1.0",
  "process": {
    "capabilities": {
      "bounding": [
        "CAP_AUDIT_WRITE",
        "CAP_CHOWN",
        "CAP_DAC_OVERRIDE",
        "CAP_FOWNER",
        "CAP_FSETID",
        "CAP_KILL",
        "CAP_MKNOD",
        "CAP_NET_BIND_SERVICE",
        "CAP_NET_RAW",
        "CAP_SETFCAP",
        "CAP_SETGID",
        "CAP_SETPCAP",
        "CAP_SETUID",
        "CAP_SYS_CHROOT"
      ],
      "effective": [
        "CAP_AUDIT_WRITE",
        "CAP_CHOWN",
        "CAP_DAC_OVERRIDE",
        "CAP_FOWNER",
        "CAP_FSETID",
        "CAP_KILL",
        "CAP_MKNOD",
        "CAP_NET_BIND_SERVICE",
        "CAP_NET_RAW",
        "CAP_SETFCAP",
        "CAP_SETGID",
        "CAP_SETPCAP",
        "CAP_SETUID",
        "CAP_SYS_CHROOT"
      ],
      "permitted": [
        "CAP_AUDIT_WRITE",
        "CAP_CHOWN",
        "CAP_DAC_OVERRIDE",
        "CAP_FOWNER",
        "CAP_FSETID",
        "CAP_KILL",
        "CAP_MKNOD",
        "CAP_NET_BIND_SERVICE",
        "CAP_NET_RAW",
        "CAP_SETFCAP",
        "CAP_SETGID",
        "CAP_SETPCAP",
        "CAP_SETUID",
        "CAP_SYS_CHROOT"
      ]
    },
    "cwd": "/",
    "noNewPrivileges": true,
    "rlimits": [
      {
        "type": "RLIMIT_NOFILE",
        "soft": 65536,
        "hard": 1048576
      }
    ],
    "user": {
      "gid": 0,
      "uid": 0
    }
  },
  "root": {
    "path": "rootfs"
  }
}
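
To see these defaults on a live container, one option is to read a container process's bounding capability set and decode the mask; a minimal sketch (the container ID is a placeholder, and capsh from the libcap tools may need to be installed):

# Resolve a container's main PID via CRI, then read its capability mask
PID=$(crictl inspect -o go-template --template '{{.info.pid}}' <container-id>)
grep CapBnd /proc/$PID/status
# 00000000a80425fb should decode to exactly the 14 capabilities listed above
capsh --decode=00000000a80425fb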


# Check containerd's unix domain socket: used by kubelet, and by the three containerd clients (ctr, nerdctl, crictl)
[root@ip-192-168-2-173 ~]# containerd config dump | grep -n containerd.sock
11:  address = '/run/containerd/containerd.sock'
[root@ip-192-168-2-173 ~]# ls -l /run/containerd/containerd.sock
srw-rw----. 1 root root 0 Mar 18 12:38 /run/containerd/containerd.sock
[root@ip-192-168-2-173 ~]# ss -xl | grep containerd
u_str LISTEN 0      4096   /run/containerd/s/dd17e43902c8743c66da9f74bc13bfb4f76b077fc8df4a37069c12d42bbb658c 8075             * 0   
u_str LISTEN 0      4096                                                /run/containerd/containerd.sock.ttrpc 4775             * 0   
u_str LISTEN 0      4096                                                      /run/containerd/containerd.sock 5725             * 0   
u_str LISTEN 0      4096   /run/containerd/s/ecb1f690b9a2bc232ac2618c464d0f21a620f1720972d8948eb18c5f9ad04409 5930             * 0   
u_str LISTEN 0      4096   /run/containerd/s/690d1bf652fade8901e2349b56a5fec11ea190501bbacd4656dd3a517abac07f 7878             * 0   
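
The same socket is what the CRI clients dial. For example, pointing crictl at it explicitly (a sketch; crictl is typically preinstalled on EKS optimized AMIs):

# List sandboxes and containers over the CRI socket kubelet uses
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps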



# Check the version and plugins
[root@ip-192-168-2-173 ~]# ctr --address /run/containerd/containerd.sock version
Client:
  Version:  2.1.5
  Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
  Go version: go1.24.12

WARN[0000] DEPRECATION: The `bin_dir` property of `[plugins."io.containerd.cri.v1.runtime".cni`] is deprecated since containerd v2.1 and will be removed in containerd v2.3. Use `bin_dirs` in the same section instead. 
Server:
  Version:  2.1.5
  Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
  UUID: 00eabdb5-6e77-42b9-91c9-05c528908b82
  
  

[root@ip-192-168-2-173 ~]# ctr plugins ls
WARN[0000] DEPRECATION: The `bin_dir` property of `[plugins."io.containerd.cri.v1.runtime".cni`] is deprecated since containerd v2.1 and will be removed in containerd v2.3. Use `bin_dirs` in the same section instead. 
TYPE                                      ID                       PLATFORMS      STATUS    
io.containerd.content.v1                  content                  -              ok        
io.containerd.image-verifier.v1           bindir                   -              ok        
io.containerd.internal.v1                 opt                      -              ok        
io.containerd.warning.v1                  deprecations             -              ok        
io.containerd.snapshotter.v1              blockfile                linux/amd64    skip      
io.containerd.snapshotter.v1              devmapper                linux/amd64    skip      
io.containerd.snapshotter.v1              erofs                    linux/amd64    skip      
io.containerd.snapshotter.v1              native                   linux/amd64    ok        
io.containerd.snapshotter.v1              overlayfs                linux/amd64    ok        
io.containerd.snapshotter.v1              zfs                      linux/amd64    skip      
io.containerd.event.v1                    exchange                 -              ok        
io.containerd.monitor.task.v1             cgroups                  linux/amd64    ok        
io.containerd.metadata.v1                 bolt                     -              ok        
io.containerd.gc.v1                       scheduler                -              ok        
io.containerd.differ.v1                   erofs                    -              skip      
io.containerd.differ.v1                   walking                  linux/amd64    ok        
io.containerd.lease.v1                    manager                  -              ok        
io.containerd.streaming.v1                manager                  -              ok        
io.containerd.transfer.v1                 local                    -              ok        
io.containerd.service.v1                  containers-service       -              ok        
io.containerd.service.v1                  content-service          -              ok        
io.containerd.service.v1                  diff-service             -              ok        
io.containerd.service.v1                  images-service           -              ok        
io.containerd.service.v1                  introspection-service    -              ok        
io.containerd.service.v1                  namespaces-service       -              ok        
io.containerd.service.v1                  snapshots-service        -              ok        
io.containerd.shim.v1                     manager                  -              ok        
io.containerd.runtime.v2                  task                     linux/amd64    ok        
io.containerd.service.v1                  tasks-service            -              ok        
io.containerd.grpc.v1                     containers               -              ok        
io.containerd.grpc.v1                     content                  -              ok        
io.containerd.grpc.v1                     diff                     -              ok        
io.containerd.grpc.v1                     events                   -              ok        
io.containerd.grpc.v1                     images                   -              ok        
io.containerd.grpc.v1                     introspection            -              ok        
io.containerd.grpc.v1                     leases                   -              ok        
io.containerd.grpc.v1                     namespaces               -              ok        
io.containerd.sandbox.store.v1            local                    -              ok        
io.containerd.cri.v1                      images                   -              ok        
io.containerd.cri.v1                      runtime                  linux/amd64    ok        
io.containerd.podsandbox.controller.v1    podsandbox               -              ok        
io.containerd.sandbox.controller.v1       shim                     -              ok        
io.containerd.grpc.v1                     sandbox-controllers      -              ok        
io.containerd.grpc.v1                     sandboxes                -              ok        
io.containerd.grpc.v1                     snapshots                -              ok        
io.containerd.grpc.v1                     streaming                -              ok        
io.containerd.grpc.v1                     tasks                    -              ok        
io.containerd.grpc.v1                     transfer                 -              ok        
io.containerd.grpc.v1                     version                  -              ok        
io.containerd.monitor.container.v1        restart                  -              ok        
io.containerd.tracing.processor.v1        otlp                     -              skip      
io.containerd.internal.v1                 tracing                  -              skip      
io.containerd.ttrpc.v1                    otelttrpc                -              ok        
io.containerd.grpc.v1                     healthcheck              -              ok        
io.containerd.nri.v1                      nri                      -              ok        
io.containerd.grpc.v1                     cri                      -              ok

# Check kubelet information

# Check processes
[root@ip-192-168-2-173 ~]# ps afxuwww
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           2  0.0  0.0      0     0 ?        S    12:38   0:00 [kthreadd]
root           3  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [pool_workqueue_release]
root           4  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-kvfree_rcu_reclaim]
root           5  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-rcu_gp]
root           6  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-sync_wq]
root           7  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-slub_flushwq]
root           8  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-netns]
root          10  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/0:0H-kblockd]
root          12  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [kworker/u8:0-events_unbound]
root          13  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-mm_percpu_wq]
root          14  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [rcu_tasks_kthread]
root          15  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [rcu_tasks_rude_kthread]
root          16  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [rcu_tasks_trace_kthread]
root          17  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [ksoftirqd/0]
root          18  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [rcu_preempt]
root          19  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [rcu_exp_par_gp_kthread_worker/0]
root          20  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [rcu_exp_gp_kthread_worker]
root          21  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [migration/0]
root          22  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [cpuhp/0]
root          23  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [cpuhp/1]
root          24  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [migration/1]
root          25  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [ksoftirqd/1]
root          27  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/1:0H-events_highpri]
root          30  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [kdevtmpfs]
root          31  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-inet_frag_wq]
root          32  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [kauditd]
root          33  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [khungtaskd]
root          34  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [oom_reaper]
root          36  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-writeback]
root          37  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [kcompactd0]
root          38  0.0  0.0      0     0 ?        SN   12:38   0:00  \_ [khugepaged]
root          39  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-cryptd]
root          40  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-kintegrityd]
root          41  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-kblockd]
root          42  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-blkcg_punt_bio]
root          43  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [irq/9-acpi]
root          45  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-tpm_dev_wq]
root          46  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-md]
root          47  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-md_bitmap]
root          48  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-edac-poller]
root          49  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [watchdogd]
root          50  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-quota_events_unbound]
root          52  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/0:1H]
root          60  0.0  0.0      0     0 ?        S    12:38   0:00  \_ [kswapd0]
root          76  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfsalloc]
root          78  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs_mru_cache]
root          81  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-kthrotld]
root         142  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-nvme-wq]
root         143  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-nvme-reset-wq]
root         145  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-nvme-delete-wq]
root         164  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-mld]
root         171  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-ipv6_addrconf]
root         172  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/1:1H-kblockd]
root         193  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-kstrp]
root         506  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/u9:0]
root        1134  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-buf/nvme0n1p1]
root        1135  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-conv/nvme0n1p1]
root        1136  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-reclaim/nvme0n1p1]
root        1137  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-blockgc/nvme0n1p1]
root        1138  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-inodegc/nvme0n1p1]
root        1139  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-log/nvme0n1p1]
root        1140  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xfs-cil/nvme0n1p1]
root        1141  0.0  0.0      0     0 ?        S    12:38   0:01  \_ [xfsaild/nvme0n1p1]
root        1677  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-rpciod]
root        1678  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-xprtiod]
root        1721  0.0  0.0      0     0 ?        I<   12:38   0:00  \_ [kworker/R-ena]
root        2411  0.0  0.0      0     0 ?        I    12:38   0:00  \_ [kworker/u8:8-events_unbound]
root       21641  0.0  0.0      0     0 ?        I    13:38   0:00  \_ [kworker/0:0-events]
root       23444  0.0  0.0      0     0 ?        I    13:43   0:00  \_ [kworker/1:1-events]
root       24547  0.0  0.0      0     0 ?        I    13:47   0:00  \_ [kworker/0:1-events]
root       25576  0.0  0.0      0     0 ?        I    13:50   0:00  \_ [kworker/u8:1-events_unbound]
root       26470  0.0  0.0      0     0 ?        I    13:53   0:00  \_ [kworker/1:0-events]
root       28075  0.0  0.0      0     0 ?        I    13:58   0:00  \_ [kworker/0:2-events]
root       28908  0.0  0.0      0     0 ?        I    14:00   0:00  \_ [kworker/u8:2-events_unbound]
root       29807  0.0  0.0      0     0 ?        I    14:03   0:00  \_ [kworker/1:2-events]
root           1  0.0  0.4 107336 17632 ?        Ss   12:38   0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize=32
root        1191  0.0  0.4  53976 17308 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-journald
root        1651  0.0  0.2  31960 11504 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-udevd
systemd+    1655  0.0  0.3  22580 15008 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-resolved
root        1664  0.0  0.0  21136  2548 ?        S<sl 12:38   0:00 /sbin/auditd
root        1813  0.0  0.0  81428  3188 ?        Ssl  12:38   0:00 /usr/sbin/irqbalance --foreground
root        1814  0.0  0.2  16876  8020 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-homed
root        1817  0.0  0.2  18888 10608 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-logind
dbus        1820  0.0  0.1   8492  4144 ?        Ss   12:38   0:00 /usr/bin/dbus-broker-launch --scope system --audit
dbus        1842  0.0  0.0   5388  3100 ?        S    12:38   0:00  \_ dbus-broker --log 4 --controller 9 --machine-id ec2c1a2556aabb1b129f3399dd63fffc --max-bytes 536870912 --max-fds 4096 --max-matches 16384 --audit
systemd+    1821  0.0  0.2 236852  9952 ?        Ss   12:38   0:00 /usr/lib/systemd/systemd-networkd
root        1852  0.0  0.0 281952  3788 ?        Ssl  12:38   0:00 /usr/sbin/gssproxy -D
root        1913  0.0  0.5 1240436 20228 ?       Ssl  12:38   0:00 /usr/bin/amazon-ssm-agent
root        1918  0.0  0.2  14340  8688 ?        Ss   12:38   0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root       22680  0.0  0.2  15940 10252 ?        Ss   13:41   0:00  \_ sshd: ec2-user [priv]
ec2-user   22693  0.0  0.1  15940  6260 ?        S    13:41   0:00      \_ sshd: ec2-user@pts/0
ec2-user   22696  0.0  0.1 223236  4256 pts/0    Ss   13:41   0:00          \_ -bash
root       24681  0.0  0.2 234588  8104 pts/0    S+   13:47   0:00              \_ sudo su -
root       24683  0.0  0.0 234588  2600 pts/1    Ss   13:47   0:00                  \_ sudo su -
root       24684  0.0  0.1 225304  4684 pts/1    S    13:47   0:00                      \_ su -
root       24685  0.0  0.1 223236  4240 pts/1    S    13:47   0:00                          \_ -bash
root       30337  0.0  0.0 223700  3128 pts/1    R+   14:05   0:00                              \_ ps afxuwww
root        1924  0.0  0.0 221368  1976 tty1     Ss+  12:38   0:00 /sbin/agetty -o -p -- \u --noclear - linux
root        1925  0.0  0.0 221412  2040 ttyS0    Ss+  12:38   0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,9600 - vt220
chrony      1946  0.0  0.0  86280  3744 ?        S    12:38   0:00 /usr/sbin/chronyd -F 2
root        2195  0.6  1.6 2111852 64236 ?       Ssl  12:38   0:33 /usr/bin/containerd
root        2232  1.0  2.0 2274680 81396 ?       Ssl  12:38   0:52 /usr/bin/kubelet --node-ip=192.168.2.173 --runtime-cgroups=/runtime.slice/containerd.service --config=/etc/kubernetes/kubelet/config.json --config-dir=/etc/kubernetes/kubelet/config.json.d --kubeconfig=/var/lib/kubelet/kubeconfig --image-credential-provider-bin-dir=/etc/eks/image-credential-provider --cloud-provider=external --hostname-override=ip-192-168-2-173.ap-northeast-2.compute.internal --image-credential-provider-config=/etc/eks/image-credential-provider/config.json --node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,eks.amazonaws.com/nodegroup-image=ami-0c19bc6c6295a611b,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=myeks-node-group,eks.amazonaws.com/sourceLaunchTemplateId=lt-0c9986db3ebbdd005
root        2295  0.1  0.5 2305852 21288 ?       Sl   12:38   0:08 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb -address /run/containerd/containerd.sock
65535       2319  0.0  0.0   1020   620 ?        Ss   12:38   0:00  \_ /pause
root        2444  0.0  0.2 1234288 8540 ?        Ssl  12:38   0:00  \_ /app/aws-vpc-cni
root        2461  0.0  1.8 1324592 74188 ?       Sl   12:38   0:03  |   \_ ./aws-k8s-agent
root        2593  0.0  1.1 1817352 44768 ?       Ssl  12:38   0:00  \_ /controller --enable-ipv6=false --enable-network-policy=false --enable-cloudwatch-logs=false --enable-policy-event-logs=false --log-file=/var/log/aws-routed-eni/network-policy-agent.log --metrics-bind-addr=:8162 --health-probe-bind-addr=:8163 --conntrack-cache-cleanup-period=300 --log-level=debug
root        2711  0.0  0.5 2153780 19672 ?       Sl   12:39   0:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f -address /run/containerd/containerd.sock
65535       2739  0.0  0.0   1020   620 ?        Ss   12:39   0:00  \_ /pause
root        2878  0.0  1.1 1271720 45552 ?       Ssl  12:39   0:00  \_ kube-proxy --v=2 --config=/var/lib/kube-proxy-config/config --hostname-override=ip-192-168-2-173.ap-northeast-2.compute.internal
root        2815  0.0  0.4 2153780 19244 ?       Sl   12:39   0:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5 -address /run/containerd/containerd.sock
65535       2839  0.0  0.0   1020   616 ?        Ss   12:39   0:00  \_ /pause
65532       3043  0.1  1.5 1335752 61520 ?       Ssl  12:39   0:06  \_ /coredns -conf /etc/coredns/Corefile
root       20268  0.0  0.1  16372  6928 ?        Ss   13:34   0:00 /usr/lib/systemd/systemd-userdbd
root       30178  0.0  0.1  16732  6960 ?        S    14:04   0:00  \_ systemd-userwork: waiting...
root       30232  0.0  0.1  16732  6444 ?        S    14:05   0:00  \_ systemd-userwork: waiting...
root       30233  0.0  0.1  16732  6444 ?        S    14:05   0:00  \_ systemd-userwork: waiting...
ec2-user   22684  0.0  0.3  22112 14180 ?        Ss   13:41   0:00 /usr/lib/systemd/sys



[root@ip-192-168-2-173 ~]# systemctl status kubelet --no-pager
● kubelet.service - Kubernetes Kubelet
     Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; preset: disabled)
     Active: active (running) since Wed 2026-03-18 12:38:29 UTC; 1h 27min ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 2232 (kubelet)
      Tasks: 14 (limit: 4516)
     Memory: 81.8M
        CPU: 59.583s
     CGroup: /runtime.slice/kubelet.service
             └─2232 /usr/bin/kubelet --node-ip=192.168.2.173 --runtime-cgroups=/runtime.slice/containerd.service --config=/etc/kubernetes/kubelet/config.json --config-dir=/etc/kubernetes/kubelet/con…

Mar 18 12:39:06 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:06.841573    2232 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (U…
Mar 18 12:39:06 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:06.888853    2232 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p…
Mar 18 12:39:06 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:06.958085    2232 util.go:34] "No sandbox for pod can be found. Need to start a new one" pod="…b6fcb-ng874"
Mar 18 12:39:07 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:07.794184    2232 kubelet.go:2556] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-d…48772c68a5"}
Mar 18 12:39:08 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:08.803954    2232 kubelet.go:2556] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-prox…9f4bcaebb4"}
Mar 18 12:39:09 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:09.808215    2232 kubelet.go:2556] "SyncLoop (PLEG): event for pod" pod="kube-system/coredns-d…aa8f7c695a"}
Mar 18 12:39:09 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:09.826486    2232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kub…
Mar 18 12:39:09 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:09.828197    2232 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cor…
Mar 18 12:39:10 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:10.812127    2232 kubelet.go:2644] "SyncLoop (probe)" probe="readiness" status="not ready" pod…b6fcb-ng874"
Mar 18 12:39:10 ip-192-168-2-173.ap-northeast-2.compute.internal kubelet[2232]: I0318 12:39:10.814062    2232 kubelet.go:2644] "SyncLoop (probe)" probe="readiness" status="ready" pod="ku…b6fcb-ng874"
Hint: Some lines were ellipsized, use -l to show in full.


[root@ip-192-168-2-173 ~]# cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Wants=containerd.service

[Service]
Slice=runtime.slice
EnvironmentFile=/etc/eks/kubelet/environment
ExecStartPre=/sbin/iptables -P FORWARD ACCEPT -w 5
ExecStart=/usr/bin/kubelet $NODEADM_KUBELET_ARGS

Restart=on-failure
RestartForceExitStatus=SIGPIPE
RestartSec=5
KillMode=process
CPUAccounting=true
MemoryAccounting=true

[Install]
WantedBy=multi-user.target



# Check related files
[root@ip-192-168-2-173 ~]# tree /etc/kubernetes/
/etc/kubernetes/
├── kubelet
│   ├── config.json
│   └── config.json.d
│       └── 40-nodeadm.conf
├── manifests
└── pki
    └── ca.crt

4 directories, 3 files



# Check the k8s CA certificate: valid for 10 years
[root@ip-192-168-2-173 ~]# cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 61361338546420264 (0xd9ffcff3007a28)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Mar 18 12:29:28 2026 GMT
            Not After : Mar 15 12:34:28 2036 GMT
        Subject: CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b4:43:ad:be:56:7b:ee:ce:2c:02:36:5b:c9:47:
                    e1:37:11:0e:bd:f7:34:39:3f:35:ab:1a:81:96:e5:
                    2b:bf:78:eb:3d:94:d2:1f:6f:94:55:7b:cd:ea:12:
                    c4:d8:64:14:5f:ed:ac:a7:4d:65:3c:b5:44:df:dc:
                    35:55:ef:4f:54:d8:e3:26:7c:46:f7:48:80:63:1d:
                    3a:18:50:a9:41:ed:35:62:78:e7:02:92:f1:09:15:
                    d0:51:9c:da:63:6f:33:94:9c:6d:12:42:9a:9c:d2:
                    75:88:df:7a:e5:9c:cc:45:de:55:75:b6:25:aa:24:
                    76:7f:15:07:5d:3b:7e:9a:84:95:8b:04:31:e6:7b:
                    ff:e4:ac:bb:cf:4a:e8:12:67:27:a9:61:3e:e3:a0:
                    e1:bc:01:71:11:cb:58:35:a4:8b:54:10:af:7f:c0:
                    e3:cd:33:01:ab:b8:ac:61:49:35:50:27:e4:d3:ad:
                    8d:a2:2a:38:53:61:e3:a7:35:bb:4f:8a:0f:8a:19:
                    5d:7f:3a:3b:3f:25:5a:59:87:8b:6a:87:e3:67:79:
                    cc:2f:71:b7:7d:49:8a:66:5e:85:5b:14:71:01:a1:
                    09:27:0b:4f:65:53:ca:d7:6c:2e:fa:f3:75:ab:85:
                    1d:52:76:cb:e3:66:3f:c6:d5:b7:30:fe:3d:72:a7:
                    e9:4b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier: 
                0A:5C:A7:07:CD:B9:B3:AA:BD:06:58:55:3D:F0:24:94:B8:E9:CB:AF
            X509v3 Subject Alternative Name: 
                DNS:kubernetes
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        a3:86:a2:5c:d4:63:aa:c1:20:94:e5:ab:bd:6d:e5:32:6c:39:
        d8:62:f3:e4:2c:ae:36:0b:08:95:73:3b:4e:f1:08:15:ce:38:
        e6:44:d3:49:e3:56:9b:92:7b:be:58:8c:67:46:d3:49:c2:38:
        9c:0a:4e:ef:13:b1:83:cb:61:52:24:ef:f8:72:ea:68:6c:70:
        aa:20:e1:32:32:db:e6:4d:38:57:02:84:f6:44:e6:cc:86:60:
        8b:90:07:23:b9:a0:b5:29:a9:d6:3e:4d:88:ed:55:02:ef:f7:
        33:0a:3b:96:a1:9d:98:8d:15:39:78:79:e6:44:d8:51:39:4a:
        c5:47:64:da:8e:4c:47:02:a8:1c:cd:44:b6:94:98:7d:e9:fb:
        13:68:63:a2:bf:5e:d8:80:5f:ba:8b:a8:ef:09:e2:71:4d:62:
        eb:f2:fe:b9:24:ba:50:7e:9a:55:5b:d8:5a:5c:f3:12:9f:d2:
        25:43:0e:ba:7b:ef:46:42:f4:e8:8d:77:4c:e4:33:d8:c2:f9:
        ee:94:62:db:99:99:d3:f1:9e:ac:1f:04:ba:72:8b:07:88:c2:
        88:13:15:e2:e0:4b:ce:76:6c:00:08:70:3a:28:b3:40:92:9b:
        2e:87:ca:03:b6:ea:54:46:a7:2d:39:ef:a5:27:89:17:22:20:
        78:cb:31:f9
        
        
    
# Check the kubelet configuration file
[root@ip-192-168-2-173 ~]# cat /etc/kubernetes/kubelet/config.json | jq
{
  "address": "0.0.0.0",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.crt"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "cgroupDriver": "systemd",
  "cgroupRoot": "/",
  "clusterDNS": [
    "10.100.0.10"
  ],
  "clusterDomain": "cluster.local",
  "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
  "evictionHard": {
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "featureGates": {
    "DynamicResourceAllocation": true,
    "MutableCSINodeAllocatableCount": true,
    "RotateKubeletServerCertificate": true
  },
  "hairpinMode": "hairpin-veth",
  "kubeReserved": {
    "cpu": "70m",
    "ephemeral-storage": "1Gi",
    "memory": "442Mi"
  },
  "kubeReservedCgroup": "/runtime",
  "logging": {
    "verbosity": 2
  },
  "maxPods": 17,   ### 해당 노드에 배포할 수 있는 pod 개수
  "protectKernelDefaults": true,
  "providerID": "aws:///ap-northeast-2b/i-0ebf40cd756119cb9",
  "readOnlyPort": 0,
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "shutdownGracePeriod": "2m30s",
  "shutdownGracePeriodCriticalPods": "30s",
  "systemReservedCgroup": "/system",
  "tlsCipherSuites": [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
    "TLS_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_GCM_SHA384"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}
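
The maxPods value of 17 is not a kubelet default; it comes from the VPC CNI's ENI/IP limits for the instance type. Assuming a t3.medium (3 ENIs with 6 IPv4 addresses each, which matches the limits here), the usual formula reproduces it:

# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
echo $(( 3 * (6 - 1) + 2 ))   # t3.medium -> 17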



# Check related directories
[root@ip-192-168-2-173 ~]# tree /var/lib/kubelet -L 2
/var/lib/kubelet
├── actuated_pods_state
├── allocated_pods_state
├── checkpoints
├── cpu_manager_state
├── device-plugins
│   └── kubelet.sock
├── dra_manager_state
├── kubeconfig
├── memory_manager_state
├── pki
│   ├── kubelet-server-2026-03-18-12-38-47.pem
│   ├── kubelet-server-2026-03-18-12-39-03.pem
│   └── kubelet-server-current.pem -> /var/lib/kubelet/pki/kubelet-server-2026-03-18-12-39-03.pem
├── plugins
├── plugins_registry
├── pod-resources
│   └── kubelet.sock
└── pods
    ├── 3e815e5c-c417-4eb9-b61d-519ac847eebf
    ├── 90007953-9753-4aaf-8eee-66b72404d74e
    └── b7da62fe-6ef5-49c5-846b-144244722118

10 directories, 11 files



# When kubelet (client) calls the EKS API server (server)
[root@ip-192-168-2-173 ~]# cat /var/lib/kubelet/kubeconfig 
---
apiVersion: v1
kind: Config
clusters:
  - name: kubernetes
    cluster:
      certificate-authority: /etc/kubernetes/pki/ca.crt
      server: https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com
current-context: kubelet
contexts:
  - name: kubelet
    context:
      cluster: kubernetes
      user: kubelet
users:
  - name: kubelet
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - "eks"
          - "get-token"
          - "--cluster-name"
          - "myeks"
          - "--region"
          - "ap-northeast-2"
          
          


# When the EKS API server (client) calls kubelet (server): the TLS server certificate kubelet presents when serving HTTPS
# The SAN entries below, 'IP Address:13.125.148.230, IP Address:192.168.2.173', are this node's public and private IPs
[root@ip-192-168-2-173 ~]# curl ipinfo.io/ip ; echo
13.125.148.230


# kubelet also plays the server role, not just the client role; this is the certificate it uses then
# When an admin runs commands such as kubectl exec, the API server calls kubelet, and that connection is secured by the certificate below
[root@ip-192-168-2-173 ~]# cat /var/lib/kubelet/pki/kubelet-server-current.pem | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3f:8d:10:ce:f2:4d:0f:91:fb:05:b3:84:d4:c1:16:7a:34:43:28:f0
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Mar 18 12:34:00 2026 GMT
            Not After : May  2 12:34:00 2026 GMT
        Subject: O=system:nodes, CN=system:node:ip-192-168-2-173.ap-northeast-2.compute.internal
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:1f:3d:fb:9a:b3:b8:ab:0c:af:01:e2:ec:9c:e7:
                    1e:37:c8:d5:c1:d9:5f:1a:7c:64:26:ad:72:eb:cf:
                    ab:08:ed:92:8e:43:43:cd:a2:fa:11:f0:3d:58:3a:
                    53:91:a2:2f:7c:96:84:dd:c9:4c:e4:67:0e:b0:87:
                    6e:a7:3d:05:4b
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                2D:B6:F2:1A:AA:7A:3C:EB:08:3F:A8:15:9F:DF:00:34:26:91:25:9B
            X509v3 Authority Key Identifier: 
                0A:5C:A7:07:CD:B9:B3:AA:BD:06:58:55:3D:F0:24:94:B8:E9:CB:AF
            X509v3 Subject Alternative Name: 
                DNS:ec2-13-125-148-230.ap-northeast-2.compute.amazonaws.com, DNS:ip-192-168-2-173.ap-northeast-2.compute.internal, IP Address:13.125.148.230, IP Address:192.168.2.173
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        3a:de:62:ec:c5:6a:47:9f:cb:65:62:0f:83:54:bc:f0:0d:68:
        42:18:06:71:f2:47:4f:ea:c3:b1:7d:88:f5:ef:f5:09:b8:62:
        2f:cc:eb:bb:44:12:9b:8b:65:24:2a:89:9b:5b:5d:56:c0:6c:
        c8:a3:71:6f:8c:a7:be:9a:7c:38:75:c5:c2:39:14:a3:93:64:
        e8:91:65:7d:84:84:a0:4d:4f:95:2e:dd:b2:0a:48:ea:e7:c4:
        9e:82:3e:13:6a:ed:53:19:d3:4d:8e:b6:cb:d4:94:08:99:22:
        39:bb:96:e3:9a:aa:5e:e5:01:4a:72:37:22:ad:cb:45:c7:31:
        2a:ea:5f:c2:9b:c2:d1:f6:6c:3c:3f:25:76:fb:22:de:b8:d5:
        ef:8c:4b:54:62:00:39:d1:c3:94:15:25:2f:42:44:ea:25:fd:
        ec:99:90:70:0e:93:ef:6c:22:27:8c:58:ca:30:ff:ea:83:eb:
        94:57:0e:8e:7f:62:8f:95:b4:a3:bd:e1:df:fd:bd:4f:7c:28:
        75:ef:bd:0d:87:4c:79:6d:aa:c8:c0:a8:fc:de:c6:e9:8d:4b:
        b2:57:a6:1e:86:c3:37:54:b9:95:87:68:90:4b:24:2b:7c:cb:
        b5:12:69:55:10:1e:ef:94:18:cf:6d:21:e1:ef:c2:aa:7c:31:
        6e:86:b3:3f
        
        
[root@ip-192-168-2-173 ~]# ip -br -c -4 addr
lo               UNKNOWN        127.0.0.1/8 
ens5             UP             192.168.2.173/24 metric 512 
ens6             UP             192.168.2.233/24 


[root@ip-192-168-2-173 ~]# curl ipinfo.io/ip ; echo
13.125.148.230

# (Reference) check the CSRs used when issuing the certificate: run this from your own PC, not from the EC2 node!
v:Documents:aws_keypair $ kubectl get csr
NAME        AGE    SIGNERNAME                      REQUESTOR                                                      REQUESTEDDURATION   CONDITION
csr-5dm2f   107m   kubernetes.io/kubelet-serving   system:node:ip-192-168-2-173.ap-northeast-2.compute.internal   <none>              Approved,Issued
csr-5wlf4   107m   kubernetes.io/kubelet-serving   system:node:ip-192-168-2-173.ap-northeast-2.compute.internal   <none>              Approved,Issued
csr-7thrw   107m   kubernetes.io/kubelet-serving   system:node:ip-192-168-1-31.ap-northeast-2.compute.internal    <none>              Approved,Issued
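
To confirm what the node actually asked for, one of the CSRs above can be decoded (a sketch run from your PC; the CSR name is taken from the listing):

# .spec.request is a base64-encoded PEM CSR
kubectl get csr csr-5dm2f -o jsonpath='{.spec.request}' | base64 -d | openssl req -text -noout | grep -A1 'Alternative Name'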

# Check storage information

[root@ip-192-168-2-173 ~]# lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1       259:0    0  20G  0 disk 
├─nvme0n1p1   259:1    0  20G  0 part /
├─nvme0n1p127 259:2    0   1M  0 part 
└─nvme0n1p128 259:3    0  10M  0 part /boot/efi

[root@ip-192-168-2-173 ~]# df -hT
Filesystem       Type      Size  Used Avail Use% Mounted on
devtmpfs         devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs            tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs            tmpfs     767M  1.1M  766M   1% /run
efivarfs         efivarfs  128K  3.2K  120K   3% /sys/firmware/efi/efivars
/dev/nvme0n1p1   xfs        20G  3.4G   17G  17% /
tmpfs            tmpfs     1.9G     0  1.9G   0% /tmp
/dev/nvme0n1p128 vfat       10M  1.3M  8.7M  13% /boot/efi
tmpfs            tmpfs     3.3G   12K  3.3G   1% /var/lib/kubelet/pods/90007953-9753-4aaf-8eee-66b72404d74e/volumes/kubernetes.io~projected/kube-api-access-s885g
shm              tmpfs      64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb/shm
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb/rootfs
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/00cb7651f1f654da44d3a9fe0281e247fa5ac7e2e63169bc36d2e51789680a85/rootfs
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332/rootfs
tmpfs            tmpfs     3.3G   12K  3.3G   1% /var/lib/kubelet/pods/3e815e5c-c417-4eb9-b61d-519ac847eebf/volumes/kubernetes.io~projected/kube-api-access-rhthb
shm              tmpfs      64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f/shm
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f/rootfs
tmpfs            tmpfs     170M   12K  170M   1% /var/lib/kubelet/pods/b7da62fe-6ef5-49c5-846b-144244722118/volumes/kubernetes.io~projected/kube-api-access-4pzps
shm              tmpfs      64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5/shm
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5/rootfs
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4/rootfs
overlay          overlay    20G  3.4G   17G  17% /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a/rootfs
tmpfs            tmpfs     384M     0  384M   0% /run/user/1000

[root@ip-192-168-2-173 ~]# findmnt
TARGET                                                  SOURCE         FSTYPE     OPTIONS
/                                                       /dev/nvme0n1p1 xfs        rw,noatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=1024,noquota
├─/proc                                                 proc           proc       rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc                            systemd-1      autofs     rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2042
├─/sys                                                  sysfs          sysfs      rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/security                                securityfs     securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                                      cgroup2        cgroup2    rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/fs/pstore                                      pstore         pstore     rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/firmware/efi/efivars                           efivarfs       efivarfs   rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                                         bpf            bpf        rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/fs/selinux                                     selinuxfs      selinuxfs  rw,nosuid,noexec,relatime
│ ├─/sys/kernel/debug                                   debugfs        debugfs    rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/tracing                                 tracefs        tracefs    rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/config                                  configfs       configfs   rw,nosuid,nodev,noexec,relatime
│ └─/sys/fs/fuse/connections                            fusectl        fusectl    rw,nosuid,nodev,noexec,relatime
├─/dev                                                  devtmpfs       devtmpfs   rw,nosuid,seclabel,size=4096k,nr_inodes=487417,mode=755
│ ├─/dev/shm                                            tmpfs          tmpfs      rw,nosuid,nodev,seclabel
│ ├─/dev/pts                                            devpts         devpts     rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
│ ├─/dev/hugepages                                      hugetlbfs      hugetlbfs  rw,relatime,seclabel,pagesize=2M
│ └─/dev/mqueue                                         mqueue         mqueue     rw,nosuid,nodev,noexec,relatime,seclabel
├─/run                                                  tmpfs          tmpfs      rw,nosuid,nodev,seclabel,size=785292k,nr_inodes=819200,mode=755
│ ├─/run/credentials/systemd-sysctl.service             ramfs          ramfs      ro,nosuid,nodev,noexec,relatime,seclabel,mode=700
│ ├─/run/credentials/systemd-tmpfiles-setup-dev.service ramfs          ramfs      ro,nosuid,nodev,noexec,relatime,seclabel,mode=700
│ ├─/run/credentials/systemd-tmpfiles-setup.service     ramfs          ramfs      ro,nosuid,nodev,noexec,relatime,seclabel,mode=700
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd81f3ed458acb7653840a113cd7c987147f05285fbbe5db87922d88d2195332/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32/fs:/var/lib/con
│ ├─/run/containerd/io.containerd.grpc.v1.cri/sandboxes/983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f/shm
│ │                                                     shm            tmpfs      rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/00cb7651f1f654da44d3a9fe0281e247fa5ac7e2e63169bc36d2e51789680a85/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/20/fs:/var/lib/con
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/983a2ad5fd13a575246626f4059f8a617e4ffd2f7c748170d8f2b7eb184f1d3f/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/10/fs,upperdir=/va
│ ├─/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb/shm
│ │                                                     shm            tmpfs      rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cdcd0c595130e9eb209466cf71e89576e483a5874508dfce4183e48de6f61bb/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/10/fs,upperdir=/va
│ ├─/run/netns/cni-ea70dd72-1d18-5fc2-6426-24a54e56672d nsfs[net:[4026532207]]
│ │                                                                    nsfs       rw
│ ├─/run/containerd/io.containerd.grpc.v1.cri/sandboxes/097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5/shm
│ │                                                     shm            tmpfs      rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/097a4f34e0f44c8a496acd29426747add2918602af81961cd64b1148772c68a5/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/10/fs,upperdir=/va
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f6927f14d2b80b6cd38f05cf04e0181eee3e86ec7be2716744c58aa8f7c695a/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42/fs:/var/lib/con
│ ├─/run/containerd/io.containerd.runtime.v2.task/k8s.io/308970dfd717b2b93e5341ea872e92ef972ca66b2a38aa864afa119f4bcaebb4/rootfs
│ │                                                     overlay        overlay    rw,relatime,seclabel,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/39/fs:/var/lib/con
│ └─/run/user/1000                                      tmpfs          tmpfs      rw,nosuid,nodev,relatime,seclabel,size=392644k,nr_inodes=98161,mode=700,uid=1000,gid=1000
├─/tmp                                                  tmpfs          tmpfs      rw,nosuid,nodev,seclabel,size=1963224k,nr_inodes=1048576
├─/boot/efi                                             systemd-1      autofs     rw,relatime,fd=56,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=3318
│ └─/boot/efi                                           /dev/nvme0n1p128
│                                                                      vfat       rw,noatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
├─/var/lib/nfs/rpc_pipefs                               sunrpc         rpc_pipefs rw,relatime
├─/var/lib/kubelet/pods/3e815e5c-c417-4eb9-b61d-519ac847eebf/volumes/kubernetes.io~projected/kube-api-access-rhthb
│                                                       tmpfs          tmpfs      rw,relatime,seclabel,size=3371436k,noswap
├─/var/lib/kubelet/pods/90007953-9753-4aaf-8eee-66b72404d74e/volumes/kubernetes.io~projected/kube-api-access-s885g
│                                                       tmpfs          tmpfs      rw,relatime,seclabel,size=3371436k,noswap
└─/var/lib/kubelet/pods/b7da62fe-6ef5-49c5-846b-144244722118/volumes/kubernetes.io~projected/kube-api-access-4pzps
                                                        tmpfs          tmpfs      rw,relatime,seclabel,size=174080k,noswap

# Check cgroup information

# Check the cgroup version: v2
[root@ip-192-168-2-173 ~]# stat -fc %T /sys/fs/cgroup/
cgroup2fs

[root@ip-192-168-2-173 ~]# findmnt |grep -i cgroup
│ ├─/sys/fs/cgroup                                                                                                               cgroup2                cgroup2    rw,nosuid,nodev,noexec,relatime,seclabel


# Overall cgroup hierarchy on the EKS node
[root@ip-192-168-2-173 ~]# tree /sys/fs/cgroup/ -L 1
/sys/fs/cgroup/
├── cgroup.controllers
├── cgroup.max.depth
├── cgroup.max.descendants
├── cgroup.pressure
├── cgroup.procs
├── cgroup.stat
├── cgroup.subtree_control
├── cgroup.threads
├── cpu.pressure
├── cpu.stat
├── cpu.stat.local
├── cpuset.cpus.effective
├── cpuset.cpus.isolated
├── cpuset.mems.effective
├── dev-hugepages.mount
├── dev-mqueue.mount
├── init.scope
├── io.cost.model
├── io.cost.qos
├── io.pressure
├── io.stat
├── kubepods.slice
├── memory.numa_stat
├── memory.pressure
├── memory.reclaim
├── memory.stat
├── memory.zswap.writeback
├── misc.capacity
├── misc.current
├── misc.peak
├── runtime.slice
├── sys-fs-fuse-connections.mount
├── sys-kernel-config.mount
├── sys-kernel-debug.mount
├── sys-kernel-tracing.mount
├── system.slice
└── user.slice

11 directories, 26 files


# Related tools
systemd-cgls
systemd-cgtop
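
Both tools can be pointed at the pod subtree directly; a quick sketch:

# Walk the pod/container cgroup hierarchy, then read aggregate CPU usage for all pods
systemd-cgls --no-pager kubepods.slice
cat /sys/fs/cgroup/kubepods.slice/cpu.stat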

  • Deploy sample apps

# kube-ops-view by nodePort 30000

# kube-ops-view
v:Documents:s-aews:aews $ helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
"geek-cookbook" has been added to your repositories

v:Documents:s-aews:aews $ helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30000 --set env.TZ="Asia/Seoul" --namespace kube-system
NAME: kube-ops-view
LAST DEPLOYED: Wed Mar 18 23:32:41 2026
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
  export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT



# Verify
v:Documents:s-aews:aews $ kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-ops-view   1/1     1            1           39s

NAME                                READY   STATUS    RESTARTS   AGE
pod/kube-ops-view-97fd86569-kqhdf   1/1     Running   0          39s

NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kube-ops-view   NodePort   10.100.214.144   <none>        8080:30000/TCP   39s

NAME                      ENDPOINTS           AGE
endpoints/kube-ops-view   192.168.2.93:8080   39s
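
$NODE1 below is assumed to hold a worker node's public IP from the earlier provisioning step; if the variable is not set, one way to populate it (assuming the nodes have public IPs, as here):

# Grab the worker nodes' public IPs (ExternalIP)
NODE1=$(kubectl get node -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
NODE2=$(kubectl get node -o jsonpath='{.items[1].status.addresses[?(@.type=="ExternalIP")].address}')
echo $NODE1 $NODE2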


# Access kube-ops-view
open "http://$NODE1:30000/#scale=1.5"
open "http://$NODE1:30000/#scale=1.3"

# Deploy a game pod by nodePort 30001

# Deploy the sample application
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mario
  labels:
    app: mario
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mario
  template:
    metadata:
      labels:
        app: mario
    spec:
      containers:
      - name: mario
        image: pengbai/docker-supermario
---
apiVersion: v1
kind: Service
metadata:
  name: mario
spec:
  selector:
    app: mario
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30001
  type: NodePort
EOF


# Verify
v:Documents:s-aews:aews $ kubectl get deploy,pod,svc,ep
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mario   1/1     1            1           24s

NAME                         READY   STATUS    RESTARTS   AGE
pod/mario-868699b58f-9mfkz   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.100.0.1       <none>        443/TCP        124m
service/mario        NodePort    10.100.194.124   <none>        80:30001/TCP   24s

NAME                   ENDPOINTS                            AGE
endpoints/kubernetes   192.168.1.90:443,192.168.2.113:443   124m
endpoints/mario        192.168.1.236:8080

# Access
curl http://$NODE1:30001 -I
open http://$NODE1:30001
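
When done, the deployment and service created above can be removed together:

# (Optional) clean up the game pod
kubectl delete deploy,svc mario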

EKS Cluster Endpoint Access Overview

  • EKS control plane: distributed components, resiliency, uptime (SLA), AWS managed VPC (3 AZs, an NLB for the API servers, an ELB for etcd - link)

There are two control plane instances (CPI, the API servers) by default.

  • What exactly is etcd? And why is the API server load-balanced with an NLB rather than an ALB?

https://aws.github.io/aws-eks-best-practices/reliability/docs/controlplane/

  • EKS data plane: Customer VPC - EKS owned ENI?, node types (Managed node groups, Self-managed nodes, EKS Auto Mode, AWS Fargate, Karpenter, AWS Hybrid Nodes) - link

Since there are two CPIs (API servers) by default, two EKS owned ENIs are provided as well.

  • EKS Cluster Endpoint - Public :
    • Control Plane → worker node kubelet (reached through the EKS owned ENI, for kubectl logs, exec, and similar calls)
    • Worker node → Control Plane (via the API server public domain)
    • User kubectl → Control Plane (via the API server public domain)

  • EKS Cluster Endpoint - Public Private :
    • Control Plane → (EKS owned ENI) worker node kubelet
    • Worker node (kubelet, kube-proxy) → Control Plane (private domain, called through the EKS owned ENI)
    • User kubectl → Control Plane (public domain)

  • Inside the customer VPC, queries for the API endpoint domain address are handled by a private hosted zone, which answers with the internal (private) IPs.
    • From the external network (internet), of course, queries for the same domain address are answered with the public IPs.

  • EKS Cluster Endpoint - Private :
    • Control Plane → (EKS owned ENI) worker node kubelet
    • Worker node and user kubectl → Control Plane (private domain, EKS owned ENI)

  • Inside the customer VPC, queries for the API endpoint domain address are handled by a private hosted zone that answers with the internal (private) IPs; from the external network (internet), the domain cannot be queried at all.

EKS Cluster Endpoint Access Hands-on

  • Check the EKS Cluster Endpoint details
    • Check the EKS access endpoint: public
    • Check the API server endpoint access setting: EKS console → Networking tab

Anyone who knows the API server endpoint address can call it:

v:Documents:s-aews:aews $ curl -sk https://CC5D719ACF5FB0EC4C92959793A4488F.yl4.ap-northeast-2.eks.amazonaws.com/version
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}%
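
The 401 shows the endpoint is reachable by anyone, but authentication is still enforced; with valid credentials the same path answers, e.g.:

# Same /version path, authenticated through your kubeconfig
kubectl get --raw /version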

# Check the EKS owned ENI

https://docs.aws.amazon.com/ko_kr/eks/latest/best-practices/control-plane.html

  • EKS owned ENI: the worker nodes in the managed node group are mine, but the instance attached to this ENI (the NIC itself sits in my VPC) is a control plane instance owned by AWS.

  • Connection info when the node (client) calls the EKS API (server)
# Where is the Peer Address of the kubelet and kube-proxy connections?
v:Documents:s-aews:aews $ for i in $NODE1 $NODE2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ss -tnp | grep -v ssh; echo; done

>> node 13.125.148.230 <<
State Recv-Q Send-Q Local Address:Port    Peer Address:Port Process
ESTAB 0      0      192.168.2.173:57078 43.202.134.241:443   users:(("kubelet",pid=2232,fd=29))
ESTAB 0      0      192.168.2.173:50922 43.202.134.241:443   users:(("kube-proxy",pid=2878,fd=6))
ESTAB 0      0      192.168.2.173:57128 43.202.134.241:443   users:(("aws-k8s-agent",pid=2461,fd=7))

>> node 43.202.52.69 <<
State Recv-Q Send-Q Local Address:Port    Peer Address:Port Process
ESTAB 0      0       192.168.1.31:41522 43.202.134.241:443   users:(("kubelet",pid=2242,fd=29))
ESTAB 0      0       192.168.1.31:55734 43.202.134.241:443   users:(("kube-proxy",pid=2870,fd=6))
ESTAB 0      0       192.168.1.31:41554 43.202.134.241:443   users:(("aws-k8s-agent",pid=2464,fd=7))


# Check the EKS endpoint IPs
v:Documents:s-aews:aews $ CLUSTER_NAME=myeks
v:Documents:s-aews:aews $ APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
v:Documents:s-aews:aews $ dig +short $APIDNS
15.164.74.222
43.202.134.241
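
Note that the Peer Address 43.202.134.241 in the ss output above is one of the two public endpoint IPs just resolved: with a public-only endpoint, even kubelet/kube-proxy traffic from inside the VPC targets the public endpoint.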


  • Checking an ‘EKS API (client) → node kubelet (server)’ request
# Keep a bash session open in one pod!
v:Documents:s-aews:aews $ kubectl exec -it -n kube-system deploy/kube-ops-view -- bash
I have no name!@kube-ops-view-97fd86569-kqhdf:/$
I have no name!@kube-ops-view-97fd86569-kqhdf:/$
I have no name!@kube-ops-view-97fd86569-kqhdf:/$
I have no name!@kube-ops-view-97fd86569-kqhdf:/$

# Check: where does the Peer Address of the connection added by the exec point? Also look up that IP among the ENIs (AWS network interfaces) in the console
v:Documents:s-aews:aews $ for i in $NODE1 $NODE2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ss -tnp | grep -v ssh; echo; done
>> node 13.125.148.230 <<
State Recv-Q Send-Q          Local Address:Port            Peer Address:Port Process
ESTAB 0      0                   127.0.0.1:54200              127.0.0.1:33759 users:(("kubelet",pid=2232,fd=19))
ESTAB 0      0               192.168.2.173:57078         43.202.134.241:443   users:(("kubelet",pid=2232,fd=29))
ESTAB 0      0               192.168.2.173:50922         43.202.134.241:443   users:(("kube-proxy",pid=2878,fd=6))
ESTAB 0      0                   127.0.0.1:33759              127.0.0.1:54200 users:(("containerd",pid=2195,fd=40))
ESTAB 0      0               192.168.2.173:57128         43.202.134.241:443   users:(("aws-k8s-agent",pid=2461,fd=7))
ESTAB 0      0      [::ffff:192.168.2.173]:10250 [::ffff:192.168.2.113]:54192 users:(("kubelet",pid=2232,fd=13))

>> node 43.202.52.69 <<
State Recv-Q Send-Q Local Address:Port    Peer Address:Port Process
ESTAB 0      0       192.168.1.31:41522 43.202.134.241:443   users:(("kubelet",pid=2242,fd=29))
ESTAB 0      0       192.168.1.31:55734 43.202.134.241:443   users:(("kube-proxy",pid=2870,fd=6))
ESTAB 0      0       192.168.1.31:41554 43.202.134.241:443   users:(("aws-k8s-agent",pid=2464,fd=7))
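
The new inbound connection is the last line on node 13.125.148.230: kubelet's :10250 with peer 192.168.2.113, an EKS owned ENI IP (it also appeared in endpoints/kubernetes earlier). These ENIs can be listed by description; a sketch, assuming the default 'Amazon EKS <cluster-name>' description format:

# EKS owned ENIs live in the Customer VPC but are created and managed by EKS
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=Amazon EKS $CLUSTER_NAME" \
  --query 'NetworkInterfaces[].{eni:NetworkInterfaceId,ip:PrivateIpAddress,az:AvailabilityZone}' \
  --output table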

# Change the eks access endpoint via Terraform: public and private + access source CIDRs, takes about 7 minutes

  • Edit the code, then apply with "terraform apply -auto-approve" (a CLI equivalent is sketched below)
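
For reference, the same change can be made without Terraform; a hedged AWS CLI sketch (the rollout likewise takes several minutes):

# Enable public+private access and restrict the public side to my IP
aws eks update-cluster-config --name $CLUSTER_NAME \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=$(curl -s ipinfo.io/ip)/32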


  • Monitoring
# Repeated calls (run each in a separate terminal window)
while true; do ssh ec2-user@$NODE1 dig +short $APIDNS ; date ; echo "-----" ; sleep 1 ; done
watch -d kubectl get node


  • Verify

EKS Fully Private Cluster Hands-on

# Deploy an EKS Fully Private Cluster: takes about 16 minutes - Docs , Link

# Enter the lab directory
cd eks-private

# Set variables
v:Documents:s-aews:aews:eks-private $ export TF_VAR_KeyName=test-key
v:Documents:s-aews:aews:eks-private $ export TF_VAR_ssh_access_cidr=$(curl -s ipinfo.io/ip)/32
v:Documents:s-aews:aews:eks-private $ echo $TF_VAR_KeyName $TF_VAR_ssh_access_cidr

test-key x.x.x.x/32



# Initialize
v:Documents:s-aews:aews:eks-private $ terraform init

Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 20.37.2 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 2.1.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 5.21.0 for vpc...
- vpc in .terraform/modules/vpc
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 5.21.0 for vpc_endpoints...
- vpc_endpoints in .terraform/modules/vpc_endpoints/modules/vpc-endpoints
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 4.33.0, >= 5.34.0, >= 5.79.0, >= 5.83.0, >= 5.95.0, < 6.0.0"...
- Finding hashicorp/tls versions matching ">= 3.0.0"...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/null versions matching ">= 3.0.0"...
- Installing hashicorp/tls v4.2.1...
- Installed hashicorp/tls v4.2.1 (signed by HashiCorp)
- Installing hashicorp/time v0.13.1...
- Installed hashicorp/time v0.13.1 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.3.7...
- Installed hashicorp/cloudinit v2.3.7 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
- Installing hashicorp/aws v5.100.0...
- Installed hashicorp/aws v5.100.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.



terraform plan

# Deploy the VPC: about 2 minutes -> then check the VPC details in AWS!
terraform apply -target="module.vpc" -auto-approve


# Deploy EKS and the rest: about 14 minutes
terraform apply -auto-approve


# Configure kubeconfig credentials
v:Documents:s-aews:aews:eks-private $ aws eks --region ap-northeast-2 update-kubeconfig --name eks-private
Added new context arn:aws:eks:ap-northeast-2:143649248460:cluster/eks-private to /Users/mzc01-voieul/.kube/config

v:Documents:s-aews:aews:eks-private $ kubectl config rename-context $(cat ~/.kube/config | grep current-context | awk '{print $2}') eks-private
Context "arn:aws:eks:ap-northeast-2:143649248460:cluster/eks-private" renamed to "eks-private".


# Try a kubectl query
# From the PC terminal the request reaches for the API server from outside, but this cluster's endpoint is private-only, so the call cannot get through
v:Documents:s-aews:aews:eks-private $ kubectl get node -v=7
I0319 01:11:45.700521   18863 cmd.go:527] kubectl command headers turned on
I0319 01:11:45.721448   18863 loader.go:402] Config loaded from file:  /Users/mzc01-voieul/.kube/config
I0319 01:11:45.724569   18863 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0319 01:11:45.724580   18863 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0319 01:11:45.724583   18863 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0319 01:11:45.724586   18863 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0319 01:11:45.724588   18863 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0319 01:11:45.726498   18863 round_trippers.go:527] "Request" verb="GET" url="https://ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s" headers=<
        Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json
        User-Agent: kubectl/v1.34.1 (darwin/arm64) kubernetes/93248f9
 >

# Remove the kubeconfig
rm -rf ~/.kube/config

# Connect to the bastion EC2 and verify

  • Security group eks-private-cluster (description: EKS cluster security group): add an HTTPS rule sourced from the bastion-ec2 SG
# SSH into the bastion EC2
v:Documents:s-aews:aews:eks-private $ ssh -o StrictHostKeyChecking=no ubuntu@$(terraform output -raw bastion_ec2-public_ip)
Warning: Permanently added '3.36.119.203' (ED25519) to the list of known hosts.
Welcome to Ubuntu 24.04.4 LTS (GNU/Linux 6.17.0-1007-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Thu Mar 19 01:14:13 KST 2026

  System load:  0.0                Temperature:           -273.1 C
  Usage of /:   10.5% of 28.02GB   Processes:             111
  Memory usage: 9%                 Users logged in:       0
  Swap usage:   0%                 IPv4 address for ens5: 10.0.67.28

Expanded Security Maintenance for Applications is not enabled.

17 updates can be applied immediately.
17 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

root@bastion-EC2:~# 

============================================
# Configure the admin IAM user credentials
aws configure


# Check the eks access endpoint
root@bastion-EC2:~# APIDNS=$(aws eks describe-cluster --name eks-private | jq -r .cluster.endpoint | cut -d '/' -f 3)
echo $APIDNS
ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com

root@bastion-EC2:~# dig +short $APIDNS
10.0.8.53
10.0.20.38


# Try a kubectl query: kubeconfig is not configured yet, so kubectl falls back to localhost:8080 and the call fails
root@bastion-EC2:~# kubectl cluster-info
E0319 01:18:37.894993    2124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E0319 01:18:37.895269    2124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E0319 01:18:37.896552    2124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E0319 01:18:37.896770    2124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E0319 01:18:37.898149    2124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?



root@bastion-EC2:~# kubectl get node -v=9
I0319 01:18:41.676123    2131 cmd.go:527] kubectl command headers turned on
I0319 01:18:41.680741    2131 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=false
I0319 01:18:41.680770    2131 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0319 01:18:41.680802    2131 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0319 01:18:41.680836    2131 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0319 01:18:41.680859    2131 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0319 01:18:41.680867    2131 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0319 01:18:41.680917    2131 discovery_client.go:252] "Request Body" body=""
I0319 01:18:41.681000    2131 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'http://localhost:8080/api?timeout=32s'
 >
I0319 01:18:41.681459    2131 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="localhost" address=[{"IP":"127.0.0.1","Zone":""}]
I0319 01:18:41.681614    2131 round_trippers.go:560] "HTTP Trace: Dial failed" network="tcp" address="127.0.0.1:8080" err="dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.681662    2131 round_trippers.go:632] "Response" verb="GET" url="http://localhost:8080/api?timeout=32s" status="" headers="" milliseconds=0 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=0
E0319 01:18:41.681760    2131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.681787    2131 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.681838    2131 discovery_client.go:252] "Request Body" body=""
I0319 01:18:41.681918    2131 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'http://localhost:8080/api?timeout=32s'
 >
I0319 01:18:41.682086    2131 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="localhost" address=[{"IP":"127.0.0.1","Zone":""}]
I0319 01:18:41.682197    2131 round_trippers.go:560] "HTTP Trace: Dial failed" network="tcp" address="127.0.0.1:8080" err="dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.682232    2131 round_trippers.go:632] "Response" verb="GET" url="http://localhost:8080/api?timeout=32s" status="" headers="" milliseconds=0 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=0
E0319 01:18:41.682322    2131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.683479    2131 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.683520    2131 shortcut.go:103] Error loading discovery information: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.683574    2131 discovery_client.go:252] "Request Body" body=""
I0319 01:18:41.683666    2131 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'http://localhost:8080/api?timeout=32s'
 >
I0319 01:18:41.683870    2131 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="localhost" address=[{"IP":"127.0.0.1","Zone":""}]
I0319 01:18:41.683975    2131 round_trippers.go:560] "HTTP Trace: Dial failed" network="tcp" address="127.0.0.1:8080" err="dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.684018    2131 round_trippers.go:632] "Response" verb="GET" url="http://localhost:8080/api?timeout=32s" status="" headers="" milliseconds=0 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=0
E0319 01:18:41.684068    2131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.684084    2131 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.684134    2131 discovery_client.go:252] "Request Body" body=""
I0319 01:18:41.684186    2131 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'http://localhost:8080/api?timeout=32s'
 >
I0319 01:18:41.684380    2131 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="localhost" address=[{"IP":"127.0.0.1","Zone":""}]
I0319 01:18:41.684489    2131 round_trippers.go:560] "HTTP Trace: Dial failed" network="tcp" address="127.0.0.1:8080" err="dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.684523    2131 round_trippers.go:632] "Response" verb="GET" url="http://localhost:8080/api?timeout=32s" status="" headers="" milliseconds=0 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=0
E0319 01:18:41.684566    2131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.685659    2131 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.685712    2131 discovery_client.go:252] "Request Body" body=""
I0319 01:18:41.685779    2131 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2;as=APIGroupDiscoveryList,application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'http://localhost:8080/api?timeout=32s'
 >
I0319 01:18:41.685917    2131 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="localhost" address=[{"IP":"127.0.0.1","Zone":""}]
I0319 01:18:41.686020    2131 round_trippers.go:560] "HTTP Trace: Dial failed" network="tcp" address="127.0.0.1:8080" err="dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.686115    2131 round_trippers.go:632] "Response" verb="GET" url="http://localhost:8080/api?timeout=32s" status="" headers="" milliseconds=0 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=0
E0319 01:18:41.686204    2131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
I0319 01:18:41.686218    2131 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0319 01:18:41.686307    2131 helpers.go:264] Connection error: Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?


# Retry after updating the security group (see the CLI sketch below)
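
The console change can also be scripted; a sketch, where BASTION_SG is a hypothetical placeholder for the bastion EC2's security group ID:

# The cluster security group ID is exposed in the cluster's VPC config
CLUSTER_SG=$(aws eks describe-cluster --name eks-private \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text)
BASTION_SG=sg-xxxxxxxxxxxxxxxxx   # hypothetical: the bastion-ec2 SG ID

# Allow HTTPS from the bastion SG to the cluster endpoint ENIs
aws ec2 authorize-security-group-ingress \
  --group-id $CLUSTER_SG --protocol tcp --port 443 --source-group $BASTION_SG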



# Configure kubeconfig credentials
root@bastion-EC2:~# aws eks --region ap-northeast-2 update-kubeconfig --name eks-private
Added new context arn:aws:eks:ap-northeast-2:143649248460:cluster/eks-private to /root/.kube/config

(arn:aws:eks:ap-northeast-2:143649248460:cluster/eks-private:N/A) root@bastion-EC2:~# kubectl config rename-context $(cat ~/.kube/config | grep current-context | awk '{print $2}') eks-private
Context "arn:aws:eks:ap-northeast-2:143649248460:cluster/eks-private" renamed to "eks-private".

(eks-private:N/A) root@bastion-EC2:~# APIDNS=$(aws eks describe-cluster --name eks-private | jq -r .cluster.endpoint | cut -d '/' -f 3)
(eks-private:N/A) root@bastion-EC2:~# echo $APIDNS
ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com

(eks-private:N/A) root@bastion-EC2:~# dig +short $APIDNS
10.0.20.38
10.0.8.53


# Try the kubectl query again: the API endpoint DNS now resolves to the private IPs (see the 'HTTP Trace: DNS Lookup resolved' line below)
(eks-private:N/A) root@bastion-EC2:~# kubectl cluster-info
Kubernetes control plane is running at https://ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(eks-private:N/A) root@bastion-EC2:~# kubectl get node -v=9
I0319 01:27:49.848813    2513 cmd.go:527] kubectl command headers turned on
I0319 01:27:49.855292    2513 loader.go:402] Config loaded from file:  /root/.kube/config
I0319 01:27:49.855734    2513 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0319 01:27:49.855758    2513 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=false
I0319 01:27:49.855768    2513 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0319 01:27:49.855779    2513 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0319 01:27:49.855787    2513 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0319 01:27:49.855794    2513 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0319 01:27:49.861726    2513 helper.go:113] "Request Body" body=""
I0319 01:27:49.861795    2513 round_trippers.go:527] "Request" curlCommand=<
        curl -v -XGET  -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.34.2 (linux/amd64) kubernetes/0ea4984" 'https://ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500'
 >
I0319 01:27:50.821473    2513 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com" address=[{"IP":"10.0.20.38","Zone":""},{"IP":"10.0.8.53","Zone":""}]
I0319 01:27:50.822832    2513 round_trippers.go:562] "HTTP Trace: Dial succeed" network="tcp" address="10.0.20.38:443"
I0319 01:27:51.157903    2513 round_trippers.go:632] "Response" verb="GET" url="https://ADC36ED88971506380A730AAAAE7D6DB.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500" status="200 OK" headers=<
        Audit-Id: a409ccae-0c5b-4e3e-b278-62fb7f0aaa84
        Cache-Control: no-cache, private
        Content-Type: application/json
        Date: Wed, 18 Mar 2026 16:27:51 GMT
        X-Kubernetes-Pf-Flowschema-Uid: a9a90a4a-f697-4d9a-9b23-19f07acfb19f
        X-Kubernetes-Pf-Prioritylevel-Uid: 909a6ed1-920e-4177-95c9-25f9affe2f43
 > milliseconds=1295 dnsLookupMilliseconds=1 dialMilliseconds=1 tlsHandshakeMilliseconds=38 serverProcessingMilliseconds=296
I0319 01:27:51.158375    2513 helper.go:113] "Response Body" body="{\"kind\":\"Table\",\"apiVersion\":\"meta.k8s.io/v1\",\"metadata\":{\"resourceVersion\":\"4443\"},\"columnDefinitions\":[{\"name\":\"Name\",\"type\":\"string\",\"format\":\"name\",\"description\":\"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names\",\"priority\":0},{\"name\":\"Status\",\"type\":\"string\",\"format\":\"\",\"description\":\"The status of the node\",\"priority\":0},{\"name\":\"Roles\",\"type\":\"string\",\"format\":\"\",\"description\":\"The roles of the node\",\"priority\":0},{\"name\":\"Age\",\"type\":\"string\",\"format\":\"\",\"description\":\"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\\n\\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\",\"priority\":0},{\"name\":\"Version\",\"type\":\"string\",\"format\":\"\",\"description\":\"Kubelet Version reported by the node.\",\"priority\":0},{\"name\":\"Internal-IP\",\"type\":\"string\",\"format\":\"\",\"description\":\"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/reference/node/node-status/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP).\",\"priority\":1},{\"name\":\"External-IP\",\"type\":\"string\",\"format\":\"\",\"description\":\"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/reference/node/node-status/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP).\",\"priority\":1},{\"name\":\"OS-Image\",\"type\":\"string\",\"format\":\"\",\"description\":\"OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).\",\"priority\":1},{\"name\":\"Kernel-Version\",\"type\":\"string\",\"format\":\"\",\"description\":\"Kernel Version reported by the node from 'uname -r' (e.g. 
3.16.0-0.bpo.4-amd64).\",\"priority\":1},{\"name\":\"Container-Runtime\",\"type\":\"string\",\"format\":\"\",\"description\":\"ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2).\",\"priority\":1}],\"rows\":[{\"cells\":[\"ip-10-0-28-96.ap-northeast-2.compute.internal\",\"Ready\",\"\\u003cnone\\u003e\",\"18m\",\"v1.34.4-eks-f69f56f\",\"10.0.28.96\",\"\\u003cnone\\u003e\",\"Amazon Linux 2023.10.20260302\",\"6.12.73-95.123.amzn2023.x86_64\",\"containerd://2.1.5\"],\"object\":{\"kind\":\"PartialObjectMetadata\",\"apiVersion\":\"meta.k8s.io/v1\",\"metadata\":{\"name\":\"ip-10-0-28-96.ap-northeast-2.compute.internal\",\"uid\":\"885859e3-5cfc-45ea-b817-bf409bae225d\",\"resourceVersion\":\"4007\",\"creationTimestamp\":\"2026-03-18T16:08:53Z\",\"labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"t3.medium\",\"beta.kubernetes.io/os\":\"linux\",\"eks.amazonaws.com/capacityType\":\"ON_DEMAND\",\"eks.amazonaws.com/nodegroup\":\"initial-2026031816075454430000001e\",\"eks.amazonaws.com/nodegroup-image\":\"ami-0c19bc6c6295a611b\",\"eks.amazonaws.com/sourceLaunchTemplateId\":\"lt-07932b4ce76f3470c\",\"eks.amazonaws.com/sourceLaunchTemplateVersion\":\"1\",\"failure-domain.beta.kubernetes.io/region\":\"ap-northeast-2\",\"failure-domain.beta.kubernetes.io/zone\":\"ap-northeast-2b\",\"k8s.io/cloud-provider-aws\":\"67f02c5fb8cbbdff68ce44913a998586\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"ip-10-0-28-96.ap-northeast-2.compute.internal\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"t3.medium\",\"topology.k8s.aws/zone-id\":\"apne2-az2\",\"topology.kubernetes.io/region\":\"ap-northeast-2\",\"topology.kubernetes.io/zone\":\"ap-northeast-2b\"},\"annotations\":{\"alpha.kubernetes.io/provided-node-ip\":\"10.0.28.96\",\"node.alpha.kubernetes.io/ttl\":\"0\",\"volumes.kubernetes.io/controller-managed-attach-detach\":\"true\"},\"managedFields\":[{\"manager\":\"aws-cloud-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:53Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:labels\":{\"f:beta.kubernetes.io/instance-type\":{},\"f:failure-domain.beta.kubernetes.io/region\":{},\"f:failure-domain.beta.kubernetes.io/zone\":{},\"f:k8s.io/cloud-provider-aws\":{},\"f:node.kubernetes.io/instance-type\":{},\"f:topology.k8s.aws/zone-id\":{},\"f:topology.kubernetes.io/region\":{},\"f:topology.kubernetes.io/zone\":{}}}}},{\"manager\":\"aws-cloud-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:53Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:status\":{\"f:addresses\":{\"k:{\\\"type\\\":\\\"InternalDNS\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}}}}},\"subresource\":\"status\"},{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:53Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\"f:node.alpha.kubernetes.io/ttl\":{}}}}},{\"manager\":\"kubelet\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:53Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:alpha.kubernetes.io/provided-node-ip\":{},\"f:volumes.kubernetes.io/controller-managed-attach-detach\":{}},\"f:labels\":{\".\":{},\"f:beta.kubernetes.io/arch\":{},\"f:beta.kubernetes.io/os\":{},\"f:eks.amazonaws.com/capacityType\":{},\"f:eks.amazonaws.com/nodegroup\":{},\"f:eks.amazonaws.com/nodegr
oup-image\":{},\"f:eks.amazonaws.com/sourceLaunchTemplateId\":{},\"f:eks.amazonaws.com/sourceLaunchTemplateVersion\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubernetes.io/os\":{}}},\"f:spec\":{\"f:providerID\":{}}}},{\"manager\":\"kubelet\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:25:12Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"DiskPressure\\\"}\":{\"f:lastHeartbeatTime\":{}},\"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\":{\"f:lastHeartbeatTime\":{}},\"k:{\\\"type\\\":\\\"PIDPressure\\\"}\":{\"f:lastHeartbeatTime\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{}}},\"f:images\":{}}},\"subresource\":\"status\"}]}}},{\"cells\":[\"ip-10-0-40-52.ap-northeast-2.compute.internal\",\"Ready\",\"\\u003cnone\\u003e\",\"18m\",\"v1.34.4-eks-f69f56f\",\"10.0.40.52\",\"\\u003cnone\\u003e\",\"Amazon Linux 2023.10.20260302\",\"6.12.73-95.123.amzn2023.x86_64\",\"containerd://2.1.5\"],\"object\":{\"kind\":\"PartialObjectMetadata\",\"apiVersion\":\"meta.k8s.io/v1\",\"metadata\":{\"name\":\"ip-10-0-40-52.ap-northeast-2.compute.internal\",\"uid\":\"c624fc9d-565a-4617-9d63-019334b3605e\",\"resourceVersion\":\"4119\",\"creationTimestamp\":\"2026-03-18T16:08:54Z\",\"labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"t3.medium\",\"beta.kubernetes.io/os\":\"linux\",\"eks.amazonaws.com/capacityType\":\"ON_DEMAND\",\"eks.amazonaws.com/nodegroup\":\"initial-2026031816075454430000001e\",\"eks.amazonaws.com/nodegroup-image\":\"ami-0c19bc6c6295a611b\",\"eks.amazonaws.com/sourceLaunchTemplateId\":\"lt-07932b4ce76f3470c\",\"eks.amazonaws.com/sourceLaunchTemplateVersion\":\"1\",\"failure-domain.beta.kubernetes.io/region\":\"ap-northeast-2\",\"failure-domain.beta.kubernetes.io/zone\":\"ap-northeast-2c\",\"k8s.io/cloud-provider-aws\":\"67f02c5fb8cbbdff68ce44913a998586\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"ip-10-0-40-52.ap-northeast-2.compute.internal\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"t3.medium\",\"topology.k8s.aws/zone-id\":\"apne2-az3\",\"topology.kubernetes.io/region\":\"ap-northeast-2\",\"topology.kubernetes.io/zone\":\"ap-northeast-2c\"},\"annotations\":{\"alpha.kubernetes.io/provided-node-ip\":\"10.0.40.52\",\"node.alpha.kubernetes.io/ttl\":\"0\",\"volumes.kubernetes.io/controller-managed-attach-detach\":\"true\"},\"managedFields\":[{\"manager\":\"aws-cloud-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:54Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:labels\":{\"f:beta.kubernetes.io/instance-type\":{},\"f:failure-domain.beta.kubernetes.io/region\":{},\"f:failure-domain.beta.kubernetes.io/zone\":{},\"f:k8s.io/cloud-provider-aws\":{},\"f:node.kubernetes.io/instance-type\":{},\"f:topology.k8s.aws/zone-id\":{},\"f:topology.kubernetes.io/region\":{},\"f:topology.kubernetes.io/zone\":{}}}}},{\"manager\":\"aws-cloud-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:54Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:status\":{\"f:addresses\":{\"k:{\\\"type\\\":\\\"InternalDNS\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}}}}},\"subresource\":\"status\"},{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:54Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":
{\"f:metadata\":{\"f:annotations\":{\"f:node.alpha.kubernetes.io/ttl\":{}}}}},{\"manager\":\"kubelet\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2026-03-18T16:08:54Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:alpha.kubernetes.io/provided-node-ip\":{},\"f:volumes.kubernetes.io/controller-managed-attach-detach\":{}},\"f:labels\":{\".\":{},\"f:beta.kubernetes.io/arch\":{},\"f:beta.kubernetes.io/os\":{},\"f:eks.amazonaws.com/capacityType\":{},\"f:eks.amazonaws.com/nodegroup\":{},\"f:eks.amazonaws.com/nodegroup-image\":{},\"f:eks.amazonaws.com/sourceLaunchTemplateId\":{},\"f:eks.amazonaws.com/sourceLaunchTemplateVersion\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubern [truncated 548 chars]"
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-28-96.ap-northeast-2.compute.internal   Ready    <none>   18m   v1.34.4-eks-f69f56f
ip-10-0-40-52.ap-northeast-2.compute.internal   Ready    <none>   18m   v1.34.4-eks-f69f56f


(eks-private:N/A) root@bastion-EC2:~# kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-28-96.ap-northeast-2.compute.internal   Ready    <none>   20m   v1.34.4-eks-f69f56f
ip-10-0-40-52.ap-northeast-2.compute.internal   Ready    <none>   20m   v1.34.4-eks-f69f56f


  • Security group reference (the second rule was added)


# [Update] Create a pod and open a shell on the node - Blog

# Check node info
(eks-private:N/A) root@bastion-EC2:~# kubectl get node -owide
NAME                                            STATUS   ROLES    AGE   VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                   CONTAINER-RUNTIME
ip-10-0-28-96.ap-northeast-2.compute.internal   Ready    <none>   21m   v1.34.4-eks-f69f56f   10.0.28.96    <none>        Amazon Linux 2023.10.20260302   6.12.73-95.123.amzn2023.x86_64   containerd://2.1.5
ip-10-0-40-52.ap-northeast-2.compute.internal   Ready    <none>   21m   v1.34.4-eks-f69f56f   10.0.40.52    <none>        Amazon Linux 2023.10.20260302   6.12.73-95.123.amzn2023.x86_64   containerd://2.1.5


# Set an environment variable
NODE1NAME=ip-10-0-28-96.ap-northeast-2.compute.internal


# Deploy the node-shell pod
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  containers:
  - name: debug
    image: public.ecr.aws/docker/library/alpine:latest
    command: ["/bin/sh", "-c", "sleep 36000"]   # keep the pod alive for 10 hours
    securityContext:
      privileged: true        # full privileges, needed to chroot into the host
    volumeMounts:
      - mountPath: /host      # node root filesystem mounted at /host
        name: hostfs
  hostNetwork: true           # share the node's network namespace
  hostPID: true               # see the node's process table
  hostIPC: true               # share the node's IPC namespace
  volumes:
    - name: hostfs
      hostPath:
        path: /               # the node's entire root filesystem
EOF
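
Note: the manifest above does not pin the pod to a node, so the scheduler may place it on either worker (below it landed on ip-10-0-40-52, not the node saved in NODE1NAME). To target a specific node, one option is adding nodeName: <node-name> under spec.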



# Check pod info
(eks-private:N/A) root@bastion-EC2:~# kubectl get pods -A -o wide
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                            NOMINATED NODE   READINESS GATES
default       node-shell                1/1     Running   0          59s   10.0.40.52    ip-10-0-40-52.ap-northeast-2.compute.internal   <none>           <none>
kube-system   aws-node-k6gtk            2/2     Running   0          23m   10.0.40.52    ip-10-0-40-52.ap-northeast-2.compute.internal   <none>           <none>
kube-system   aws-node-lp4pk            2/2     Running   0          23m   10.0.28.96    ip-10-0-28-96.ap-northeast-2.compute.internal   <none>           <none>
kube-system   coredns-cc56d5f8b-6tvrf   1/1     Running   0          27m   10.0.17.112   ip-10-0-28-96.ap-northeast-2.compute.internal   <none>           <none>
kube-system   coredns-cc56d5f8b-tpd9z   1/1     Running   0          27m   10.0.31.3     ip-10-0-28-96.ap-northeast-2.compute.internal   <none>           <none>
kube-system   kube-proxy-6x72p          1/1     Running   0          24m   10.0.40.52    ip-10-0-40-52.ap-northeast-2.compute.internal   <none>           <none>
kube-system   kube-proxy-qm2rv          1/1     Running   0          24m   10.0.28.96    ip-10-0-28-96.ap-northeast-2.compute.internal   <none>           <none>


# chroot into the host filesystem from inside the pod to get a shell on the node!
(eks-private:N/A) root@bastion-EC2:~# kubectl exec -it node-shell -- chroot /host /bin/bash


[root@ip-10-0-40-52 /]# hostnamectl
 Static hostname: ip-10-0-40-52.ap-northeast-2.compute.internal
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: ec2ae0d1619079333a8c577c9664f2ee
         Boot ID: b052dfbd78714190ba923e4626aea7bb
  Virtualization: amazon
Operating System: Amazon Linux 2023.10.20260302
     CPE OS Name: cpe:2.3:o:amazon:amazon_linux:2023
          Kernel: Linux 6.12.73-95.123.amzn2023.x86_64
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.medium
Firmware Version: 1.0


[root@ip-10-0-40-52 /]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(cdrom),20(games),26,27 context=system_u:system_r:unconfined_service_t:s0


[root@ip-10-0-40-52 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:7e:91:4b:55:03 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 10.0.40.52/20 metric 512 brd 10.0.47.255 scope global dynamic ens5
       valid_lft 2086sec preferred_lft 2086sec
    inet6 fe80::87e:91ff:fe4b:5503/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever


[root@ip-10-0-40-52 /]# ss -tnp
State     Recv-Q     Send-Q               Local Address:Port                 Peer Address:Port     Process                                      
ESTAB     0          0                       10.0.40.52:46320                   10.0.8.53:443       users:(("kube-proxy",pid=2223,fd=6))        
ESTAB     0          0                       10.0.40.52:37868                   10.0.8.53:443       users:(("aws-k8s-agent",pid=2874,fd=6))     
ESTAB     0          0                        127.0.0.1:57882                   127.0.0.1:33901     users:(("kubelet",pid=2017,fd=23))          
ESTAB     0          0                       10.0.40.52:56514                   10.0.8.53:443       users:(("kubelet",pid=2017,fd=30))          
ESTAB     0          0                        127.0.0.1:33901                   127.0.0.1:57882     users:(("containerd",pid=1982,fd=28))       
ESTAB     0          0              [::ffff:10.0.40.52]:10250         [::ffff:10.0.20.38]:57356     users:(("kubelet",pid=2017,fd=18))  
[root@ip-10-0-40-52 /]#
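
The last line is the one to notice: kubelet's inbound :10250 connection has peer 10.0.20.38, one of the two private endpoint IPs resolved earlier, i.e. the control plane reaching the kubelet through the EKS owned ENI.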

# Clean up after the lab

  • Security group eks-private-cluster (description: EKS cluster security group): remove the HTTPS bastion-ec2 SG rule
  • terraform destroy -auto-approve
