[Week 7 - K8S Deploy] RKE2 & Cluster API (26.02.15)

2026. 2. 21. 06:12

Introduction

Hello! This is Devlos.

This post summarizes "RKE2 & Cluster API", the week 7 topic of the K8S Deploy study run by the CloudNet@ community.

Week 7 covers RKE2, an enterprise-grade Kubernetes distribution developed by Rancher, and Cluster API, which manages the cluster lifecycle declaratively.

We walk through the RKE2 architecture, its components (Helm Controller, CNI, containerd, and so on), and its lifecycle and bootstrap process, and then go hands-on with everything from installing an RKE2 cluster to upgrading it. Finally, we look at the core concepts of Cluster API, including Management/Workload Clusters and Infrastructure Providers, and its declarative cluster-management model that treats "nodes like Pods".


 

RKE2 Overview

 

RKE2 (Rancher Kubernetes Engine 2) is an enterprise-grade Kubernetes distribution developed by Rancher. It was designed to meet the strict security and compliance requirements of the U.S. federal government sector, and it makes Kubernetes easy to deploy securely in environments where security comes first.

https://docs.rke2.io/

 

Introduction | RKE2 (docs.rke2.io): "RKE2 is Rancher's enterprise-ready next-generation Kubernetes distribution. It has also been known as RKE Government."

 

RKE2's defining characteristic is its security-first design. It complies with the FIPS 140-2 Level 1 cryptography standard and applies the CIS Kubernetes Benchmark by default. It also supports hardened security policies such as SELinux and AppArmor, and Pod Security Standards are enabled out of the box. These security features matter a great deal in environments with high security requirements such as government agencies, financial services, and healthcare.
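As a rough sketch of how this is switched on, the CIS hardening profile is enabled through RKE2's standard config file. This is only a minimal sketch assuming a recent RKE2 release; the exact accepted value of the profile key differs between versions (cis, cis-1.23, and so on), so check the docs for the version you run.

# /etc/rancher/rke2/config.yaml (sketch only; not part of the lab later in this post)
profile: cis                   # CIS hardening profile; accepted value depends on RKE2 version
write-kubeconfig-mode: "0600"  # keep the admin kubeconfig readable by root only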

Government- and enterprise-specific capabilities are also central to RKE2. It satisfies the security standards demanded by government agencies, including STIG (Security Technical Implementation Guide) compliance, Common Criteria certification support, and FedRAMP requirements. It also provides extensive auditing, enabling the logging and monitoring needed for compliance.

Compared with RKE1, there are several major improvements. RKE2 uses containerd as its default runtime, removing the Docker dependency, and manages its services with systemd for better stability. etcd can run either embedded or external, and a range of CNI plugins such as Calico, Cilium, and Canal is supported.

Compared with other Kubernetes distributions, RKE2's differentiators are clear. With kubeadm or Kubespray, security hardening has to be done manually or with extra configuration, whereas RKE2 applies FIPS 140-2 and the CIS baseline by default. Installation is also simpler: RKE2 installs from a single binary, and it complies with government certifications such as FedRAMP and STIG out of the box. On the management side, it integrates with Rancher for unified management and supports automated rolling upgrades.

The RKE2 architecture is shown below.

RKE2 Architecture

 

The RKE2 architecture is divided into Server Nodes and Agent Nodes. This mirrors Kubernetes' master/worker node split, but adds RKE2's own management layer on top.

 

RKE2 Server Node

The server node runs the Kubernetes control plane. Its key component is the RKE Supervisor, which manages and supervises the entire cluster. kubelet manages the node, etcd stores cluster state, the api-server exposes the Kubernetes API endpoint, and the controller-manager and cloud-controller-manager handle cluster resources and cloud integration, respectively.

The scheduler handles Pod scheduling, and these control plane components run as static pods.

 

RKE2 Agent Node

The agent node acts as a worker node. Here too the RKE Supervisor manages the node, and kubelet runs with containerd as the CRI (Container Runtime Interface). These components run as managed processes to keep the node stable.

Agent nodes also host the various Kubernetes system components. The RKE2 K8s Deployments area contains kube-proxy, the CNI (Canal), helm-controller, CoreDNS, metrics-server, Ingress, and so on, while the user-defined workloads area runs the applications actually deployed by users, such as a service mesh and other apps.

A defining trait of this architecture is that the RKE Supervisor performs centralized management on every node. This keeps inter-node communication and state management efficient and enables fast recovery when failures occur. In addition, containerd as the default runtime provides a stable container execution environment without any Docker dependency.

RKE2 Components

Helm Controller
RKE2 embeds the Helm Controller taken from K3s. It is a lightweight GitOps-style deployer responsible for automatically installing and maintaining the essential Helm charts when an RKE2 cluster starts.

The Helm Controller watches the /var/lib/rancher/rke2/server/manifests directory; when a HelmChart YAML file is placed there, it automatically downloads and installs the corresponding Helm chart. The manifest files installed by default are:

tree /var/lib/rancher/rke2/server/manifests
├── rke2-canal-config.yaml
├── rke2-canal.yaml
├── rke2-coredns-config.yaml
├── rke2-coredns.yaml
├── rke2-metrics-server.yaml
└── rke2-runtimeclasses.yaml

The Helm Controller is used to install the required add-ons automatically at bootstrap time, which is a different purpose from Argo CD, which handles application deployment. A minimal HelmChart example is sketched below.
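As a minimal sketch of the mechanism, a HelmChart custom resource dropped into the manifests directory is picked up and installed automatically. The chart name, repository, and values below are hypothetical and only illustrate the shape of the resource.

cat << EOF > /var/lib/rancher/rke2/server/manifests/demo-nginx.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart                              # watched by the embedded Helm Controller
metadata:
  name: demo-nginx                           # hypothetical chart name
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami   # example repository (assumption)
  chart: nginx
  targetNamespace: default
  valuesContent: |-
    replicaCount: 1
EOF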

 

Core Kubernetes components
RKE2 ships the standard Kubernetes components. The control plane consists of the API Server, Controller Manager, and Scheduler, and each node runs kube-proxy and kubelet. etcd stores cluster state, and runc plus containerd/CRI serve as the container runtime.

 

Networking

RKE2 supports several CNI options. By default it uses Canal, a hybrid CNI in which Flannel provides network connectivity and Calico enforces security policy. Inside the Canal DaemonSet pods, a flannel container handles the VXLAN overlay and a calico-node container manages network policies and iptables.

kubectl describe pod -n kube-system -l k8s-app=canal | grep Image: | uniq
Image:         rancher/hardened-calico:v3.31.3-build20260119
Image:         rancher/hardened-flannel:v0.28.0-build20260119

 

Also note that the Canal images carry a "hardened-" prefix. This marks builds that Rancher has officially hardened, patched for vulnerabilities, and validated for stability. RKE2 uses these hardened images instead of the stock open-source images to improve security and reliability.

 

Other CNI plugins such as Cilium, Calico, and Flannel can be selected instead, and Multus CNI is supported on top of them; the CNI is chosen in the RKE2 config file, as sketched below.
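For illustration, the packaged CNI is selected in /etc/rancher/rke2/config.yaml before the first start of rke2-server. This is a sketch; canal is the default, and switching the CNI on an already-running cluster is not a simple restart.

# /etc/rancher/rke2/config.yaml (sketch)
cni: cilium        # packaged options include canal (default), calico, cilium, flannel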

 

Additional services
CoreDNS is provided by default for DNS resolution, and either the NGINX controller or Traefik can be chosen as the ingress controller. A Metrics Server for resource metrics and Helm for package management are also included.

RKE2 Lifecycle

Content bootstrap

RKE2 uses an unusual bootstrap approach. It extracts the components it needs from the rke2-runtime container image, which bundles all required binaries and manifests, and uses them to run the server and agent nodes.

 

Runtime image layout
The rke2-runtime image is roughly 91.6 MB and contains two main directories:

crictl images
IMAGE                                           TAG                            IMAGE ID            SIZE
docker.io/rancher/rke2-runtime                  v1.34.3-rke2r3                 30afde048693a       91.6MB

tree
├── bin
│   ├── containerd
│   ├── containerd-shim-runc-v2
│   ├── crictl
│   ├── ctr
│   ├── kubectl
│   ├── kubelet
│   └── runc
├── charts
│   ├── harvester-cloud-provider.yaml
│   ├── harvester-csi-driver.yaml
│   ├── rancher-vsphere-cpi.yaml
│   ├── rancher-vsphere-csi.yaml
│   ├── rke2-calico-crd.yaml
│   ├── rke2-calico.yaml
│   ├── rke2-canal.yaml
│   ├── rke2-cilium.yaml
│   ├── rke2-coredns.yaml
│   ├── rke2-flannel.yaml
│   ├── rke2-ingress-nginx.yaml
│   ├── rke2-metrics-server.yaml
│   ├── rke2-multus.yaml
│   ├── rke2-runtimeclasses.yaml
│   ├── rke2-snapshot-controller-crd.yaml
│   ├── rke2-snapshot-controller.yaml
│   ├── rke2-snapshot-validation-webhook.yaml
│   ├── rke2-traefik-crd.yaml
│   └── rke2-traefik.yaml

The bin directory holds the core binaries such as containerd, containerd-shim-runc-v2, crictl, ctr, kubectl, kubelet, and runc. The charts directory contains the Helm charts for the various CNI plugins (Calico, Canal, Cilium, Flannel), the ingress controllers (NGINX, Traefik), the metrics server, the snapshot controller, and more.

Image lookup and extraction
RKE2 first reads the runtime-image.txt file in the /var/lib/rancher/rke2/agent/images/ directory to identify which runtime image to use. If the image is not found locally, it is pulled from Docker Hub.

tree /var/lib/rancher/rke2/agent/images/
├── etcd-image.txt
├── kube-apiserver-image.txt
├── kube-controller-manager-image.txt
├── kube-proxy-image.txt
├── kube-scheduler-image.txt
└── runtime-image.txt

cat /var/lib/rancher/rke2/agent/images/runtime-image.txt
index.docker.io/rancher/rke2-runtime:v1.34.3-rke2r3

This file contains the image reference, for example index.docker.io/rancher/rke2-runtime:v1.34.3-rke2r3.
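As an aside, this same directory is how air-gapped installs pre-stage images: RKE2 imports any image tarballs it finds there into containerd instead of pulling from Docker Hub. A rough sketch; the exact tarball name follows the release artifacts for your version and architecture.

# Air-gapped image pre-staging (sketch)
mkdir -p /var/lib/rancher/rke2/agent/images/
cp ./rke2-images.linux-amd64.tar.zst /var/lib/rancher/rke2/agent/images/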

Once the image is available, RKE2 extracts the binaries from the runtime image into the /var/lib/rancher/rke2/data/${RKE2_DATA_KEY}/bin directory, where RKE2_DATA_KEY is a unique string identifying the image version.

tree /var/lib/rancher/rke2/data/
└── v1.34.3-rke2r3-5b8349de68df
    ├── bin
    │   ├── containerd
    │   ├── containerd-shim-runc-v2
    │   ├── crictl
    │   ├── ctr
    │   ├── kubectl
    │   ├── kubelet
    │   └── runc
    └── charts
        ├── harvester-cloud-provider.yaml
        ├── harvester-csi-driver.yaml
        ├── rancher-vsphere-cpi.yaml
        ├── rancher-vsphere-csi.yaml
        ├── rke2-calico-crd.yaml
        ├── rke2-calico.yaml
        ├── rke2-canal.yaml
        ├── rke2-cilium.yaml
        ├── rke2-coredns.yaml
        ├── rke2-flannel.yaml
        ├── rke2-ingress-nginx.yaml
        ├── rke2-metrics-server.yaml
        ├── rke2-multus.yaml
        ├── rke2-runtimeclasses.yaml
        ├── rke2-snapshot-controller-crd.yaml
        ├── rke2-snapshot-controller.yaml
        ├── rke2-snapshot-validation-webhook.yaml
        ├── rke2-traefik-crd.yaml
        └── rke2-traefik.yaml

For example, the directory is created with a name like v1.34.3-rke2r3-5b8349de68df.

Required components
For RKE2 to work properly, certain binaries must be present in the runtime image. The core components are containerd (the CRI implementation), containerd-shim (the shim that wraps runc tasks), kubelet (the Kubernetes node agent), and runc (the OCI runtime).

Operational tools such as ctr (low-level containerd management), crictl (CRI-level management), kubectl (Kubernetes cluster management), and socat (used for port forwarding) are included as well; a couple of usage examples are sketched below.
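For reference, both CLIs can be pointed at RKE2's containerd socket, which (as shown later in this post) lives under /run/k3s/containerd/:

# Talk to RKE2's containerd directly via its socket
crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps
ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io images ls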

Manifest deployment
Once the binaries are extracted, RKE2 extracts the Helm charts from the runtime image and places them in the /var/lib/rancher/rke2/server/manifests directory.

tree /var/lib/rancher/rke2/server/manifests
├── rke2-canal-config.yaml
├── rke2-canal.yaml
├── rke2-coredns-config.yaml
├── rke2-coredns.yaml
├── rke2-metrics-server.yaml
└── rke2-runtimeclasses.yaml

Default manifests such as rke2-canal.yaml, rke2-coredns.yaml, and rke2-metrics-server.yaml are placed in this directory. These charts are installed automatically by the Helm Controller described earlier, providing the cluster's baseline functionality.

Server initialization

RKE2 server initialization proceeds through several well-defined stages. The embedded K3s engine server includes a special agent process, and the next stage is deferred until the node's container runtime has started.

Static Pod manifest generation
RKE2 runs the Kubernetes control plane components as static pods. Their definitions are written as YAML files into the /var/lib/rancher/rke2/agent/pod-manifests/ directory.

ls -ltr /var/lib/rancher/rke2/agent/pod-manifests/
total 32
-rw-r--r--. 1 root root 3279 Feb 14 16:32 etcd.yaml
-rw-r--r--. 1 root root 2325 Feb 14 16:32 kube-proxy.yaml
-rw-r--r--. 1 root root 9337 Feb 14 16:33 kube-apiserver.yaml
-rw-r--r--. 1 root root 3724 Feb 14 16:33 kube-scheduler.yaml
-rw-r--r--. 1 root root 6325 Feb 14 16:33 kube-controller-manager.yaml

Component readiness stages
Each Kubernetes component is prepared in dependency order. RKE2 uses goroutines to check each component's readiness asynchronously and generate its static pod definition.

Component readiness dependencies

kube-apiserver waits for etcd to become ready before writing its static pod definition. kube-controller-manager and kube-scheduler wait for kube-apiserver to become ready before writing theirs. Managing these sequential dependencies ensures a stable cluster initialization.

Cluster start
Once all components are ready, RKE2 runs an HTTP server in a goroutine and waits for connections from other cluster servers or agents. This is how a cluster is initialized or how nodes join an existing cluster.

etcd waits for kubelet to become ready before writing its static pod definition. If the etcd image is not present, RKE2 pulls it, then spins up a goroutine that waits for kubelet and writes the static pod definition into /var/lib/rancher/rke2/agent/pod-manifests/.

helm-controller starts the embedded Helm controller in a goroutine once kube-apiserver is ready and automatically installs the required add-ons.

Inspecting the deployed RKE2-Server processes looks like this:

# Inspect the RKE2 process tree
pstree -al
  ├─rke2                                    # main RKE2 server process
  │   ├─containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml  # container runtime
  │   │   └─12*[{containerd}]               # containerd worker threads
  │   ├─kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins ...        # Kubernetes node agent
  │   │   └─14*[{kubelet}]                  # kubelet worker threads
  │   └─12*[{rke2}]                         # RKE2 main process threads

# Check the RKE2 service status
systemctl status rke2-server.service --no-pager
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
     Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; enabled; preset: disabled)
     Active: active (running) since Sat 2026-02-14 16:33:49 KST; 1h 37min ago
     ...
   Main PID: 6348 (rke2)                    # main RKE2 process ID
      Tasks: 146                            # total number of tasks
     Memory: 1.1G                           # memory usage
     CGroup: /system.slice/rke2-server.service
             ├─6348 "/usr/bin/rke2 server"  # RKE2 server process
             ├─6367 containerd ...           # containerd process
             ├─6426 kubelet ...              # kubelet process
             ├─6483 containerd-shim-runc-v2 ... # container shim processes
             └─...

# Check the systemd unit file
cat /usr/lib/systemd/system/rke2-server.service
[Unit]
Description=Rancher Kubernetes Engine v2 (server)
Documentation=https://github.com/rancher/rke2#readme
Wants=network-online.target               # wants the network to be online
After=network-online.target               # start after network-online
Conflicts=rke2-agent.service              # prevent running alongside rke2-agent

[Install]
WantedBy=multi-user.target                # start in the multi-user target

[Service]
Type=notify                               # signals readiness to systemd when ready
EnvironmentFile=-/etc/default/%N          # load environment variable files
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/usr/lib/systemd/system/%N.env
KillMode=process                          # kill only the main process; keep child processes
Delegate=yes                              # delegate cgroup management to RKE2
LimitNOFILE=1048576                       # file descriptor limit
LimitNPROC=infinity                       # no process count limit
LimitCORE=infinity                        # no core dump size limit
TasksMax=infinity                         # no task count limit
TimeoutStartSec=0                         # no startup timeout
Restart=always                            # always restart
RestartSec=5s                             # delay before restart
ExecStartPre=-/sbin/modprobe br_netfilter # load the bridge netfilter module
ExecStartPre=-/sbin/modprobe overlay      # load the overlay filesystem module
ExecStart=/usr/bin/rke2 server            # run the RKE2 server
ExecStopPost=-/bin/sh -c "..."            # clean up leftover processes after stop

RKE2 process management and the container lifecycle

Source: https://tech-recipe.tistory.com/52 [김마늘의 테크 레시피, Tistory]

To understand how the KillMode=process and Delegate=yes options in the systemd unit examined above affect container lifecycle management, let's look at the actual process tree and how containers are created.

Process adoption mechanism

The diagram above shows how containerd-shim gets adopted by systemd. A process spawned by containerd (PID 1224700) creates containerd-shim (PID 1224703) and then exits; containerd-shim becomes an orphan, systemd (PID 1) adopts it, and it ends up as a sibling of containerd.

This is exactly what the KillMode=process setting is for. Even if the main RKE2 process exits, child processes such as containerd-shim are protected by systemd and keep running. And with Delegate=yes, cgroup management is delegated to RKE2, so the container processes' resources are managed efficiently.
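A practical consequence, sketched below: stopping the rke2-server unit only kills the supervisor process, so workloads keep running, and the installer ships a separate cleanup script for when everything really should be torn down.

# Because of KillMode=process, stopping the service leaves shims, pause containers,
# and workloads running.
systemctl stop rke2-server.service
crictl ps                # containers are still Running

# To stop every container and clean up mounts/interfaces, use the bundled script
# (installed as rke2-killall.sh; its path depends on the install method).
rke2-killall.sh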

The relationship between Pods and the pause container

Looking closely at the process tree, containerd-shim (PID 1224703) has nginx (PID 1224754) and pause (PID 1224727) as children. Notably, the Pod itself does not appear as a process. That is because a Pod is only a logical grouping in Kubernetes; it does not exist as an object in the Linux kernel.

To implement the logical grouping called a Pod, Kubernetes uses an infra container (also called the sandbox container). The pause container (PID 1224727) is the actual realization of the Pod. Kubernetes implements Pods with Linux namespaces, and a namespace only stays alive while at least one process lives inside it. The pause container does nothing itself, but it keeps those namespaces alive, thereby providing the sandbox environment (the Pod).
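You can observe this on an RKE2 node with crictl and lsns; a short sketch, where POD_ID and PAUSE_PID are placeholders for whatever happens to be running on your node:

# Find a pod sandbox and the PID of its pause process
crictl pods
crictl inspectp <POD_ID> | grep -i '"pid"'

# The namespaces held open by that pause process are what the Pod really is
lsns -p <PAUSE_PID>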

How the pause container is created

Creating the pause container is a fairly involved three-step process.

Step 1: process creation and orphaning
Starting from the containerd-shim process, several intermediate processes and threads are spawned to create the pause container process. Once the pause container exists, every intermediate process and thread between containerd-shim and the pause process exits, leaving the pause container process an orphan.

Step 2: subreaper registration
Another containerd-shim thread declares PR_SET_CHILD_SUBREAPER, giving containerd-shim the subreaper attribute. This happens almost simultaneously with step 1.

Step 3: process adoption
The orphaned pause container looks for the nearest reaper, and containerd-shim, now marked as a subreaper, adopts the pause container process.

This elaborate sequence is closely tied to RKE2's KillMode=process setting: it provides a robust process-management mechanism that keeps containers safe even when the main process exits.

Running the RKE2 agent process

Agent process entry point
The RKE2 agent process is a core component that runs on both server and agent nodes. On a server, the embedded K3s engine invokes it directly.

Container runtime management
RKE2 spawns the containerd process and manages its lifecycle. If containerd exits, the RKE2 process exits with it, guaranteeing tight coupling between the container runtime and RKE2. This is a deliberate design decision for the stability of the container environment.

Node agent management
The kubelet process is spawned and supervised by RKE2. If kubelet exits unexpectedly, RKE2 automatically restarts it to keep the node stable. Once kubelet is healthy, all available static pods are started.

On server nodes, etcd and kube-apiserver start in sequence, after which the remaining components running as static pods connect to kube-apiserver and the cluster becomes operational.

Server chart deployment
On server nodes, helm-controller applies every chart in the /var/lib/rancher/rke2/server/manifests directory to the cluster. The main components deployed are:

  • Networking: CNI plugins such as Canal, Cilium, Calico, Flannel, and Multus (DaemonSet, bootstrap)
  • DNS: CoreDNS (Deployment, bootstrap)
  • Ingress: NGINX Controller or Traefik with their CRDs (Deployment)
  • Monitoring: Metrics Server (Deployment)
  • Runtime: Runtime Classes (Deployment)
  • Storage: Snapshot Controller, its CRDs, and the Validation Webhook (Deployment)

Daemon process
Once all initialization is complete, the RKE2 process switches to daemon mode and runs indefinitely until it receives SIGTERM or SIGKILL, or until the container runtime process exits. This provides a stable environment for cluster operation.

RKE2 Hands-on Lab

Vagrantfile: Rocky Linux 9

# Base Image  https://portal.cloud.hashicorp.com/vagrant/discover/bento/rockylinux-9
BOX_IMAGE = "bento/rockylinux-9" # "bento/rockylinux-10.0"
BOX_VERSION = "202510.26.0"
N = 2 # max number of Node

Vagrant.configure("2") do |config|

# Nodes 
  (1..N).each do |i|
    config.vm.define "k8s-node#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/RKE2-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-node#{i}"
        vb.cpus = 4
        vb.memory = 4096
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.1#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "init_cfg.sh" , args: [ N ]
    end
  end

end

init_cfg.sh

#!/usr/bin/env bash

echo ">>>> Initial Config Start <<<<"


echo "[TASK 1] Change Timezone and Enable NTP"
timedatectl set-local-rtc 0
timedatectl set-timezone Asia/Seoul


echo "[TASK 2] Disable firewalld and selinux"
systemctl disable --now firewalld >/dev/null 2>&1
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config


echo "[TASK 3] Disable and turn off SWAP & Delete swap partitions"
swapoff -a
sed -i '/swap/d' /etc/fstab
sfdisk --delete /dev/sda 2 >/dev/null 2>&1
partprobe /dev/sda >/dev/null 2>&1


echo "[TASK 4] Config kernel & module"
cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
#vxlan
EOF
modprobe overlay >/dev/null 2>&1
modprobe br_netfilter >/dev/null 2>&1
#modprobe vxlan >/dev/null 2>&1

cat << EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system >/dev/null 2>&1


echo "[TASK 5] Setting Local DNS Using Hosts file"
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
for (( i=1; i<=$1; i++  )); do echo "192.168.10.1$i k8s-node$i" >> /etc/hosts; done


echo "[TASK 6] Install Helm"
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | DESIRED_VERSION=v3.20.0 bash >/dev/null 2>&1


echo "[TASK 7] Setting SSHD"
cat << EOF >> /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
EOF
systemctl restart sshd >/dev/null 2>&1


echo "[TASK 8] Install packages"
dnf install -y conntrack python3-pip git >/dev/null 2>&1


echo "[TASK 9] NetworkManager to ignore calico/flannel related network interfaces"
# https://docs.rke2.io/known_issues#networkmanager
cat << EOF > /etc/NetworkManager/conf.d/k8s.conf
[keyfile]
unmanaged-devices=interface-name:flannel*;interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
EOF
systemctl reload NetworkManager


echo "[TASK 11] Install K9s"
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
wget -P /tmp https://github.com/derailed/k9s/releases/latest/download/k9s_linux_${CLI_ARCH}.tar.gz  >/dev/null 2>&1
tar -xzf /tmp/k9s_linux_${CLI_ARCH}.tar.gz -C /tmp
chown root:root /tmp/k9s
mv /tmp/k9s /usr/local/bin/
chmod +x /usr/local/bin/k9s


echo "[TASK 12] ETC"
echo "sudo su -" >> /home/vagrant/.bashrc


echo ">>>> Initial Config End <<<<"

Provisioning the virtual machines (nodes)

mkdir k8s-rke2
cd k8s-rke2

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-rke2/Vagrantfile
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-rke2/init_cfg.sh

vagrant up
vagrant status

vagrant ssh k8s-node1

Installing the server node

# An install script is provided to set RKE2 up conveniently as a service on systemd-based systems: the service unit and binaries are installed on the machine
# https://docs.rke2.io/install/methods
curl -sfL https://get.rke2.io --output install.sh
chmod +x install.sh

# Default install pinned to the v1.33 channel (installs the binaries, repos, packages, etc. needed for the actual install)
INSTALL_RKE2_CHANNEL=v1.33 ./install.sh 
# ...
# Installed:
#   rke2-common-1.33.7~rke2r3-0.el9.aarch64             
#   rke2-selinux-0.22-1.el9.noarch             
#   rke2-server-1.33.7~rke2r3-0.el9.aa

# check the rke2 version
rke2 --version
# rke2 version v1.33.7+rke2r3 (7e4fd1a82edf497cab91c220144619bbad659cf4)
# go version go1.24.11 X:boringcrypto

# confirm the added repos
dnf repolist
rancher-rke2-1.33-stable                                                  Rancher RKE2 1.33 (v1.33)
rancher-rke2-common-stable                                                Rancher RKE2 Common (v1.33)

tree /etc/yum.repos.d/
# /etc/yum.repos.d/
# ├── rancher-rke2.repo
# ├── rocky-addons.repo
# ├── rocky-devel.repo
# ├── rocky-extras.repo
# └── rocky.repo
cat /etc/yum.repos.d/rancher-rke2.repo
# [rancher-rke2-common-stable]
# name=Rancher RKE2 Common (v1.33)
# baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
# enabled=1
# gpgcheck=1
# repo_gpgcheck=1
# gpgkey=https://rpm.rancher.io/public.key
# [rancher-rke2-1.33-stable]
# name=Rancher RKE2 1.33 (v1.33)
# baseurl=https://rpm.rancher.io/rke2/stable/1.33/centos/9/aarch64
# enabled=1
# gpgcheck=1
# repo_gpgcheck=1
# gpgkey=https://rpm.rancher.io/public.key

# confirm the created directories
tree /etc/rancher/
# /etc/rancher/
# └── rke2
tree /var/lib/rancher/
# /var/lib/rancher/
# └── rke2
#     ├── agent
#     │   ├── containerd
#     │   │   └── io.containerd.snapshotter.v1.overlayfs
#     │   │       └── snapshots
#     │   └── logs
#     ├── data
# └── server # the contents of this directory are used to build the server

# check the rke2 command
# https://docs.rke2.io/install/configuration#running-the-binary-directly
# rke2 server : Run the RKE2 management server, which will also launch the Kubernetes control plane components such as the API server, controller-manager, and scheduler. Only Supported on Linux.
# rke2 agent : Run the RKE2 node agent. This will cause RKE2 to run as a worker node, launching the Kubernetes node services kubelet and kube-proxy. Supported on Linux and Windows.
rke2 --h
   server           Run management server
   agent            Run node agent


# RKE2 configuration: CNI plugin (canal), etc.
# https://docs.rke2.io/install/configuration
# https://docs.rke2.io/advanced
cat << EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"

debug: true

cni: canal

bind-address: 192.168.10.11
advertise-address: 192.168.10.11
node-ip: 192.168.10.11

disable-cloud-controller: true # disable CSP load balancer integration

disable: # turned off for a lightweight install
  - servicelb
  - rke2-coredns-autoscaler
  - rke2-ingress-nginx
  - rke2-snapshot-controller
  - rke2-snapshot-controller-crd
  - rke2-snapshot-validation-webhook
EOF
cat /etc/rancher/rke2/config.yaml

# write the helm chart values file for the canal CNI plugin
# https://docs.rke2.io/networking/basic_network_options
# https://github.com/rancher/rke2-charts/blob/main-source/packages/rke2-canal/charts/values.yaml
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
apiVersion: helm.cattle.io/v1 # custom resource watched by rke2
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "enp0s9"
EOF

# write the helm chart values file to skip installing the coredns autoscaler
# https://docs.rke2.io/add-ons/helm#customizing-packaged-components-with-helmchartconfig
# https://github.com/rancher/rke2-charts/tree/main/charts/rke2-coredns/rke2-coredns/1.45.200
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    autoscaler:
      enabled: false
EOF


# monitoring: run in a new terminal
watch -d pstree -a
### baseline (before install)
systemd no_timer_check --switched-root --system --deserialize 31
#   |-NetworkManager --no-daemon
#   |   `-2*[{NetworkManager}]
#   |-VBoxService --pidfile /var/run/vboxadd-service.sh
#   |   `-8*[{VBoxService}]
#   |-agetty -o -p -- \\u --noclear - linux
#   |-atd -f
#   |-auditd
#   |   |-sedispatch
#   |   `-2*[{auditd}]
#   |-chronyd -F 2
#   |-crond -n
#   |-dbus-broker-lau --scope system --audit
#   |   `-dbus-broker --log 4 --controller 9 --machine-id b1e58edb470b46c09649e684035da916 --max-bytes ...
#   |-gpg-agent --homedir /var/cache/dnf/rancher-rke2-common-stable-38dcbd8c1e621b96/pubring ...
#   |   |-scdaemon --multi-server --homedir ...
#   |   |   `-{scdaemon}
#   |   `-{gpg-agent}
#   |-gpg-agent --homedir /var/cache/dnf/rancher-rke2-1.33-stable-ecd687a1e3012961/pubring ...
#   |   |-scdaemon --multi-server --homedir ...
#   |   |   `-{scdaemon}
#   |   `-{gpg-agent}
#   |-gssproxy -D
#   |   `-5*[{gssproxy}]
#   |-irqbalance
#   |   `-{irqbalance}
#   |-lsmd -d
#   |-polkitd --no-debug
#   |   `-7*[{polkitd}]
#   |-rpcbind -w -f
#   |-rsyslogd -n
#   |   `-2*[{rsyslogd}]
#   |-sshd
#   |   |-sshd
#   |   |   `-sshd
#   |   |       `-bash
#   |   |           `-sudo su -
#   |   |               `-su -
#   |   |                   `-bash
#   |   `-sshd
### during install

#   |-rke2
#   |   |-containerd -c...
#   |   |   `-8*[{containerd}]
#   |   `-9*[{rke2}]
#   |-rpcbind -w -f     
#   |-rsyslogd -n
#   |   `-2*[{rsyslogd}]
#   |-sshd        
#   |   |-sshd        
#   |   |   `-sshd             
#   |   |       `-bash        
#   |   |           `-sudo su -   
#   |   |               `-su -                              
#   |   |                   `-bash                       
#   |   |                       `-systemctl enable --now ...
#   |   |                           `-systemd-tty-ask ...
#   |   |-sshd        
#   |   |   `-sshd             
#   |   |       `-bash        
#   |   |           `-sudo su -   
#   |   |               `-su -                      
#   |   |                   `-bash                      
#   |   |                       `-watch -d pstree -a
#   |   |                           `-watch -d pstree -a
#   |   |                               `-pstree -a
#   |   `-sshd        
#   |       `-sshd             
#   |           `-bash        

journalctl -u rke2-server -f
# ...
# Feb 21 01:29:09 k8s-node1 rke2[6682]: time="2026-02-21T01:29:09.793314123+09:00" level=info msg="connecting to shim 32544c3c59293fa49350ff619364851f9f7bd4c6fb5d29768de62ec0a354893e" address="unix:///run/containerd/s/3cb2eacb931daa4ec632837dc6fe1ad629295cc5593c51e5ccaa9ae16d7235fa" protocol=ttrpc version=3
# Feb 21 01:29:09 k8s-node1 rke2[6682]: time="2026-02-21T01:29:09.867783785+09:00" level=info msg="StartContainer for \"32544c3c59293fa49350ff619364851f9f7bd4c6fb5d29768de62ec0a354893e\" returns successfully"
# Feb 21 01:29:10 k8s-node1 rke2[6682]: time="2026-02-21T01:29:10+09:00" level=debug msg="Tunnel server egress proxy updating Node k8s-node1 IP 192.168.10.11/32"
# Feb 21 01:29:21 k8s-node1 rke2[6682]: time="2026-02-21T01:29:21+09:00" level=debug msg="Node k8s-node1-3298a913 is not changing etcd status condition"

# Start RKE2: takes about 2 minutes -> roughly another 1-2 minutes until the coredns pod is healthy
systemctl enable --now rke2-server.service
systemctl status rke2-server --no-pager
# ● rke2-server.service - Rancher Kubernetes Engine v2 (server)
#      Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; enabled; preset: disabled)
#      Active: active (running) since Sat 2026-02-21 01:28:29 KST; 1min 18s ago
#        Docs: https://github.com/rancher/rke2#readme
#     Process: 6680 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
#     Process: 6681 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
#    Main PID: 6682 (rke2)
#       Tasks: 136
#      Memory: 2.4G
#         CPU: 41.505s
#      CGroup: /system.slice/rke2-server.service
#              ├─6682 "/usr/bin/rke2 server"
#              ├─6709 containerd -c /var/lib/rancher/rke2/ag…
#              ├─6790 kubelet --volume-plugin-dir=/var/lib/k…
#              ├─6855 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─6857 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─7027 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─7167 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─7185 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─7941 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              ├─9691 /var/lib/rancher/rke2/data/v1.33.8-rke…
#              └─9789 /var/lib/rancher/rke2/data/v1.33.8-rke…

# Feb 21 01:29:29 k8s-node1 rke2[6682]: time="2026-02-21T0…0"

# check the processes
pstree -a | grep -v color | grep 'rke2$' -A5
#   |-rke2
#   |   |-containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml
#   |   |   `-9*[{containerd}]
#   |   |-kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s...
#   |   |   `-17*[{kub
pstree -a | grep -v color | grep 'containerd-shim ' -A2
#   |-containerd-shim -namespace k8s.io -id5094bf3a769627150290346dabf
#   |   |-etcd --config-file=/var/lib/rancher/rke2/server/db/etcd/config
#   |   |   `-9*[{etcd}]
# --
#   |-containerd-shim -namespace k8s.io -id6fd9161864791d1bfded7946c7c
#   |   |-kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s...
#   |   |   `-7*[{kube-proxy}]
# --
#   |-containerd-shim -namespace k8s.io -id40a4f51e503bf82d8d9ce5782ec
#   |   |-kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --advertise-address=192.168.10.11...
#   |   |   `-9*[{kube-apiserver}]
# --
#   |-containerd-shim -namespace k8s.io -id37307180057deda066c98e9fe57
#   |   |-kube-controller --permit-port-sharing=true --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins--terminated-pod-gc-thres
#   |   |   `-7*[{kube-controller}]
# --
#   |-containerd-shim -namespace k8s.io -id3d02bc2d58c8df56b80983dfb00
#   |   |-kube-scheduler --permit-port-sharing=true ...
#   |   |   `-8*[{kube-scheduler}]
# --
#   |-containerd-shim -namespace k8s.io -idc04e83786a7c2d596b381ca78bd
#   |   |-flanneld --ip-masq --kube-subnet-mgr --iptables-forward-rules=false --ip-blackhole-route
#   |   |   |-(timeout)
# --
#   |-containerd-shim -namespace k8s.io -ide08b6f1279e4e0f02a99f628251
#   |   |-metrics-server --secure-port=10250 --cert-dir=/tmp --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname...
#   |   |   `-9*[{metrics-server}]
# --
#   |-containerd-shim -namespace k8s.io -id55eeb747a6570293638c15cee61
#   |   |-coredns -conf /etc/coredns/Corefile
#   |   |   `-7*[{coredns}]

# copy the credentials file (same as copying the kubeconfig on vanilla Kubernetes)
mkdir ~/.kube
ls -l /etc/rancher/rke2/rke2.yaml
cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
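# (sketch) alternative: instead of copying, point kubectl at the generated kubeconfig directly
# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml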

# check the /etc/rancher directory
tree /etc/rancher/
# /etc/rancher/
# ├── node
# │   └── password
# └── rke2
#     ├── config.yaml
#     ├── rke2-pss.yaml
#     └── rke2.yaml
cat /etc/rancher/node/password
# 00ec5c2ba31c0c9db0a3f049464c31dd
cat /etc/rancher/rke2/config.yaml
# write-kubeconfig-mode: "0644"

# debug: true

# cni: canal

# bind-address: 192.168.10.11
# advertise-address: 192.168.10.11
# node-ip: 192.168.10.11

# disable-cloud-controller: true # disable CSP load balancer integration

# disable:
#   - servicelb
#   - rke2-coredns-autoscaler
#   - rke2-ingress-nginx
#   - rke2-snapshot-controller
#   - rke2-snapshot-controller-crd
#   - rke2-snapshot-validation-webhook
cat /etc/rancher/rke2/rke2-pss.yaml 
# apiVersion: apiserver.config.k8s.io/v1
# kind: AdmissionConfiguration
# plugins:
# - name: PodSecurity
#   configuration:
#     apiVersion: pod-security.admission.config.k8s.io/v1beta1
#     kind: PodSecurityConfiguration
#     defaults:
#       enforce: "privileged"
#       enforce-version: "latest"
#     exemptions:
#       usernames: []
#       runtimeClasses: []
#       namespaces: []
# check the binary files
tree /var/lib/rancher/rke2/bin/
# ├── containerd
# ├── containerd-shim-runc-v2
# ├── crictl
# ├── ctr
# ├── kubectl
# ├── kubelet
# └── runc

# expose the binaries at standard locations without touching PATH: symlink approach
ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
ln -s /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/kubectl
ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
ln -s /var/lib/rancher/rke2/bin/runc /usr/local/bin/runc
ln -s /var/lib/rancher/rke2/bin/ctr /usr/local/bin/ctr
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml

runc --version
# runc version 1.4.0
# commit: v1.4.0-0-g8bd78a99
# spec: 1.3.0
# go: go1.24.11 X:boringcrypto
# libseccomp: 2.5.4
containerd --version
# containerd github.com/k3s-io/containerd v2.1.5-k3s1 e77c15f30e5162d6abab671b0d74ca2243e2916e
kubectl version
# Client Version: v1.33.8+rke2r1
# Kustomize Version: v5.6.0
# Server Version: v1.33.8+rke2r1

# convenience settings
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile

k9s

# verify
kubectl cluster-info -v=6
# Kubernetes control plane is running at https://192.168.10.11:6443
# CoreDNS is running at https://192.168.10.11:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy

# check node and pod information
kubectl get node -owide
# NAME        STATUS   ROLES                       AGE     VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                  CONTAINER-RUNTIME
# k8s-node1   Ready    control-plane,etcd,master   5m41s   v1.33.8+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1
helm list -A
# NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS     CHART                                    APP VERSION
# rke2-canal              kube-system     1               2026-02-20 16:28:39.127255545 +0000 UTC deployed   rke2-canal-v3.31.3-build2026020600       v3.31.3    
# rke2-coredns            kube-system     1               2026-02-20 16:28:39.114846341 +0000 UTC deployed   rke2-coredns-1.45.201                    1.13.1     
# rke2-metrics-server     kube-system     1               2026-02-20 16:29:02.42313151 +0000 UTC  deployed   rke2-metrics-server-3.13.007             0.8.0      
# rke2-runtimeclasses     kube-system     1               2026-02-20 16:29:01.465855689 +0000 UTC deployed   rke2-runtimeclasses-0.1.000              0.1.0    

kubectl get pod -A
NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE
kube-system   etcd-k8s-node1                               1/1     Running     0          6m26s
kube-system   helm-install-rke2-canal-v477h                0/1     Completed   0          6m20s
kube-system   helm-install-rke2-coredns-zmzhd              0/1     Completed   0          6m20s
kube-system   helm-install-rke2-metrics-server-stbfg       0/1     Completed   0          6m20s
kube-system   helm-install-rke2-runtimeclasses-lp4ph       0/1     Completed   0          6m20s
kube-system   kube-apiserver-k8s-node1                     1/1     Running     0          6m26s
kube-system   kube-controller-manager-k8s-node1            1/1     Running     0          6m26s
kube-system   kube-proxy-k8s-node1                         1/1     Running     0          6m26s
kube-system   kube-scheduler-k8s-node1                     1/1     Running     0          6m26s
kube-system   rke2-canal-2gzwn                             2/2     Running     0          6m13s
kube-system   rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0          6m13s
kube-system   rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0          5m50s

 

The /var/lib/rancher/rke2 directory: add-on helm charts and certificates

# check the directory layout
tree /var/lib/rancher/rke2 -L 1
# ├── agent
# ├── bin -> /var/lib/rancher/rke2/data/v1.34.3-rke2r3-5b8349de68df/bin
# ├── data
# └── server

# server directory
tree /var/lib/rancher/rke2/server/
tree /var/lib/rancher/rke2/server/ -L 1
# ├── agent-token -> /var/lib/rancher/rke2/server/token
# ├── cred           # kubeconfig files
# ├── db             # etcd snapshots, etcd member snap/wal
# ├── etc            
# ├── manifests      # helm chart manifests by helm controller
# ├── node-token -> /var/lib/rancher/rke2/server/token
# ├── tls            # certificate files
# └── token

## A token that can be used to register other server or agent nodes is created at /var/lib/rancher/rke2/server/node-token.
ls -l /var/lib/rancher/rke2/server/
# total 12
# lrwxrwxrwx. 1 root root   34 Feb 21 01:27 agent-token -> /var/lib/rancher/rke2/server/token
# drwx------. 2 root root 4096 Feb 21 01:27 cred
# drwx------. 4 root root   35 Feb 21 01:28 db
# drwx------. 2 root root   66 Feb 21 01:27 etc
# drwxr-xr-x. 2 root root  180 Feb 21 01:28 manifests
# lrwxrwxrwx. 1 root root   34 Feb 21 01:27 node-token -> /var/lib/rancher/rke2/server/token
# drwx------. 6 root root 4096 Feb 21 01:27 tls
# -rw-------. 1 root root  109 Feb 21 01:27 token
cat /var/lib/rancher/rke2/server/node-token
# K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d
cat /var/lib/rancher/rke2/server/token
# K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d
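
## (sketch) joining another node with this token: on the new node, install the agent and point it
## at this server's supervisor port (9345). IPs follow this lab's 192.168.10.x addressing.
# INSTALL_RKE2_CHANNEL=v1.33 INSTALL_RKE2_TYPE=agent ./install.sh
# mkdir -p /etc/rancher/rke2
# cat << EOF > /etc/rancher/rke2/config.yaml
# server: https://192.168.10.11:9345
# token: <contents of /var/lib/rancher/rke2/server/node-token>
# EOF
# systemctl enable --now rke2-agent.service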

## includes helm chart manifests + values
cat /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml
# apiVersion: helm.cattle.io/v1
# kind: HelmChart
# ...
#   set:
#     global.clusterCIDR: 10.42.0.0/16
#     global.clusterCIDRv4: 10.42.0.0/16
#     global.clusterDNS: 10.43.0.10
#     global.clusterDomain: cluster.local
#     global.rke2DataDir: /var/lib/rancher/rke2
#     global.serviceCIDR: 10.43.0.0/16
#     global.systemDefaultIngressClass: ingress-nginx

# check the CRDs related to addons and the helm controller
kubectl get crd | grep -E 'helm|addon'
# addons.k3s.cattle.io                                    2026-02-20T16:28:26Z
# helmchartconfigs.helm.cattle.io                         2026-02-20T16:28:26Z
# helmcharts.helm.cattle.io                               2026-02-20T16:28:26Z

kubectl get helmcharts.helm.cattle.io -n kube-system -owide
# NAME                  REPO   CHART   VERSION   TARGETNAMESPACE   BOOTSTRAP   FAILED   JOB
# rke2-canal                                                       true        False    helm-install-rke2-canal
# rke2-coredns                                                     true        False    helm-install-rke2-coredns
# rke2-metrics-server                                                          False    helm-install-rke2-metrics-server
# rke2-runtimeclasses                                                          False    helm-install-rke2-runtimeclasses

kubectl get job -n kube-system
# NAME                               STATUS     COMPLETIONS   DURATION   AGE
# helm-install-rke2-canal            Complete   1/1           10s        10m
# helm-install-rke2-coredns          Complete   1/1           10s        10m
# helm-install-rke2-metrics-server   Complete   1/1           33s        10m
# helm-install-rke2-runtimeclasses   Complete   1/1           32s        10m

kubectl get helmchartconfigs -n kube-system
# NAME           AGE
# rke2-canal     10m
# rke2-coredns   10m

kubectl describe helmchartconfigs -n kube-system rke2-canal
# Name:         rke2-canal
# Namespace:    kube-system
# Labels:       objectset.rio.cattle.io/hash=58336a78decd5ad820da966205884079ad2003fb
# Annotations:  objectset.rio.cattle.io/applied:
#                 H4sIAAAAAAAA/4SPQUvDQBBG/0r4zkldE9umCx6kF8G7p14mu5NmzWY2ZLcVKfnvEkRQpPY4M7zHmwtodK88RRcEGh37YWUoJc8rF+7O98jRO7HQeGY/7Dua0j5I647IMXAiS4mgLy...
#               objectset.rio.cattle.io/id: 
#               objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
#               objectset.rio.cattle.io/owner-name: rke2-canal-config
#               objectset.rio.cattle.io/owner-namespace: kube-system
# API Version:  helm.cattle.io/v1
# Kind:         HelmChartConfig
# Metadata:
#   Creation Timestamp:  2026-02-20T16:28:31Z
#   Generation:          1
#   Resource Version:    348
#   UID:                 2a915091-e46e-4d1a-a222-8386a0ac8678
# Spec:
#   Failure Policy:  reinstall
#   Values Content:  flannel:
#   iface: "enp0s9"
# Events:  <none>

kubectl get addons.k3s.cattle.io -n kube-system
# NAME                  SOURCE                                                            CHECKSUM
# rke2-canal            /var/lib/rancher/rke2/server/manifests/rke2-canal.yaml            16ed8be707904f57f3bd5fe62e2eec2e218212a74dece20ee532e0dfcfa83559
# rke2-canal-config     /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml     edd129b8e27e10dbc37f0cbe994901d188ade870e862becfab90e5dc96615f95
# rke2-coredns          /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml          82b3259844f08d9c2d9e910ce07f1ddc695e1465ae3e0f3262a750b208758adc
# rke2-coredns-config   /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml   9c8e2bbb7603c69c233c9343c516d3d302fd59b758e2e7084c7ab08b5bfba0e4
# rke2-metrics-server   /var/lib/rancher/rke2/server/manifests/rke2-metrics-server.yaml   484334bd1ef1710ceb48194dc771b00cf2a071e3e3a1226e927110228a0acce7
# rke2-runtimeclasses   /var/lib/rancher/rke2/server/manifests/rke2-runtimeclasses.yaml   b324ff85ddfdcebead4da492f852c2a963d41ca6eab6b68a2ad00cda2fdfd34b


## certificate files
cat /var/lib/rancher/rke2/server/tls/server-ca.crt | openssl x509 -text -noout # k8s CA certificate
# Certificate:
#     Data:
#         Version: 3 (0x2)
#         Serial Number: 0 (0x0)
#         Signature Algorithm: ecdsa-with-SHA256
#         Issuer: CN=rke2-server-ca@1771604865
#         Validity
#             Not Before: Feb 20 16:27:45 2026 GMT
#             Not After : Feb 18 16:27:45 2036 GMT
#         Subject: CN=rke2-server-ca@1771604865
#         Subject Public Key Info:
#             Public Key Algorithm: id-ecPublicKey
#                 Public-Key: (256 bit)
#                 pub:
#                     04:6a:26:f2:c1:0f:d8:4d:30:4d:ea:46:e9:ab:93:
#                     30:7a:09:ab:28:a2:93:5f:9d:d9:36:f7:07:b3:d0:
#                     4e:1f:93:34:78:a4:80:63:84:55:ab:2e:06:8f:78:
#                     84:fb:95:b2:69:ec:4c:1e:2f:86:a1:9e:5e:7e:5e:
#                     0b:a0:03:74:3a
#                 ASN1 OID: prime256v1
#                 NIST CURVE: P-256
#         X509v3 extensions:
#             X509v3 Key Usage: critical
#                 Digital Signature, Key Encipherment, Certificate Sign
#             X509v3 Basic Constraints: critical
#                 CA:TRUE
#             X509v3 Subject Key Identifier: 
#                 C0:A3:92:51:A8:4C:F5:2F:29:33:06:1E:15:2F:91:A6:2E:2A:18:BB
#     Signature Algorithm: ecdsa-with-SHA256
#     Signature Value:
#         30:45:02:21:00:de:0e:d8:8b:79:8e:6d:76:49:7d:51:88:e7:
#         bc:85:5b:3c:8d:ed:1d:86:b4:ce:65:4a:9c:5c:80:ce:53:25:
#         80:02:20:4a:e0:6e:3c:4f:63:a6:9a:0a:66:3f:ab:c2:5e:86:
#         27:92:70:d1:e4:9e:8f:7e:5d:3f:1c:c9:89:ce:97:27:f7
cat /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt | openssl x509 -text -noout # apiserver web server certificate
# Certificate:
#     Data:
#         Version: 3 (0x2)
#         Serial Number: 7569014125602326395 (0x690a8a3eb070c77b)
#         Signature Algorithm: ecdsa-with-SHA256
#         Issuer: CN=rke2-server-ca@1771604865
#         Validity
#             Not Before: Feb 20 16:27:45 2026 GMT
#             Not After : Feb 20 16:27:45 2027 GMT
#         Subject: CN=kube-apiserver
#         Subject Public Key Info:
#             Public Key Algorithm: id-ecPublicKey
#                 Public-Key: (256 bit)
#                 pub:
#                     04:65:a2:10:b5:6d:bb:68:9e:a5:3a:2b:36:68:fa:
#                     9d:af:0b:69:e4:60:b5:a9:cc:34:a6:4f:66:e8:d3:
#                     4f:73:87:b0:f8:3e:55:98:40:a5:a7:92:11:4c:25:
#                     2f:54:39:27:a8:12:2d:27:47:d5:b0:e0:e7:27:97:
#                     bc:d5:2e:95:b5
#                 ASN1 OID: prime256v1
#                 NIST CURVE: P-256
#         X509v3 extensions:
#             X509v3 Key Usage: critical
#                 Digital Signature, Key Encipherment
#             X509v3 Extended Key Usage: 
#                 TLS Web Server Authentication
#             X509v3 Authority Key Identifier: 
#                 C0:A3:92:51:A8:4C:F5:2F:29:33:06:1E:15:2F:91:A6:2E:2A:18:BB
#             X509v3 Subject Alternative Name: 
#                 DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:k8s-node1, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, IP Address:192.168.10.11, IP Address:192.168.10.11, IP Address:10.43.0.1
#     Signature Algorithm: ecdsa-with-SHA256
#     Signature Value:
#         30:46:02:21:00:bb:1b:7b:58:dd:c3:01:c0:d6:53:a2:64:26:
#         a2:92:69:ec:55:a5:a9:ce:b2:93:64:25:64:7b:5c:a4:95:34:
#         28:02:21:00:a5:b4:d6:c8:1a:48:8f:5a:8e:af:d8:89:99:2d:
#         41:9d:48:a5:f3:07:b6:a1:40:03:dd:46:a6:25:d0:6b:f4:c8


# data directory
tree /var/lib/rancher/rke2/data/
# /var/lib/rancher/rke2/data/
# └── v1.33.8-rke2r1-1b2872361ec5
#     ├── bin
#     │   ├── containerd
#     │   ├── containerd-shim-runc-v2
#     │   ├── crictl
#     │   ├── ctr
#     │   ├── kubectl
#     │   ├── kubelet
#     │   └── runc
#     └── charts
#         ├── harvester-cloud-provider.yaml
#         ├── harvester-csi-driver.yaml
#         ├── rancher-vsphere-cpi.yaml
#         ├── rancher-vsphere-csi.yaml
#         ├── rke2-calico-crd.yaml
#         ├── rke2-calico.yaml
#         ├── rke2-canal.yaml
#         ├── rke2-cilium.yaml
#         ├── rke2-coredns.yaml
#         ├── rke2-flannel.yaml
#         ├── rke2-ingress-nginx.yaml
#         ├── rke2-metrics-server.yaml
#         ├── rke2-multus.yaml
#         ├── rke2-runtimeclasses.yaml
#         ├── rke2-snapshot-controller-crd.yaml
#         ├── rke2-snapshot-controller.yaml
#         ├── rke2-snapshot-validation-webhook.yaml
#         ├── rke2-traefik-crd.yaml
#         └── rke2-traefik.yaml
tree /var/lib/rancher/rke2/data/ -L 2
# └── v1.34.3-rke2r3-5b8349de68df
#     ├── bin        # binaries for the core components
#     └── charts     # helm charts by helm controller

## helm chart manifests
cat /var/lib/rancher/rke2/data/v1.34.3-rke2r3-5b8349de68df/charts/rke2-coredns.yaml


# agent directory (present on every node, whether used or not)
tree /var/lib/rancher/rke2/agent/ | more
tree /var/lib/rancher/rke2/agent/ -L 3
# ├── client-ca.crt                             # client kubeconfig and certificate files
# ...
# ├── containerd                                # containerd root directory, (config) root = "/var/lib/rancher/rke2/agent/containerd"
# │   ├── bin
# │   ├── containerd.log
# │   ├── io.containerd.content.v1.content
# │   │   ├── blobs
# │   │   └── ingest
#     ...
# │   └── tmpmounts
# ├── etc                                       # main configuration files
# │   ├── containerd
# │   │   └── config.toml
# │   ├── crictl.yaml
# │   └── kubelet.conf.d
# │       └── 00-rke2-defaults.conf
# ├── images
# │   ├── cloud-controller-manager-image.txt
# │   ├── etcd-image.txt
# │   ├── kube-apiserver-image.txt
# │   ├── kube-controller-manager-image.txt
# │   ├── kube-proxy-image.txt
# │   ├── kube-scheduler-image.txt
# │   └── runtime-image.txt
# ├── kubelet.kubeconfig                        # client kubeconfig and certificate files
# ...
# ├── logs                                      # kubelet logs
# │   └── kubelet.log
# ├── pod-manifests                             # static pod manifests, the path kubelet watches
# │   ├── cloud-controller-manager.yaml
# │   ├── etcd.yaml
# │   ├── kube-apiserver.yaml
# │   ├── kube-controller-manager.yaml
# │   ├── kube-proxy.yaml
# │   └── kube-scheduler.yaml
# ...

## check the crictl.yaml file: note that the path includes k3s!
crictl ps
# CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
# 32544c3c59293       4b1ae962e7ba6       16 minutes ago      Running             coredns                   0                   55eeb747a6570       rke2-coredns-rke2-coredns-559595db99-dslf8   kube-system
# 25a78dcf5c447       e6154bb99b6d4       16 minutes ago      Running             metrics-server            0                   e08b6f1279e4e       rke2-metrics-server-fdcdf575d-rd92v          kube-system
# 77d44a4a654b0       fc62334b90cf6       16 minutes ago      Running             kube-flannel              0                   c04e83786a7c2       rke2-canal-2gzwn                             kube-system
# 1af2f60082662       3b9613c95d89e       16 minutes ago      Running             calico-node               0                   c04e83786a7c2       rke2-canal-2gzwn                             kube-system
# 745b41379556c       603f9fc02b584       17 minutes ago      Running             kube-controller-manager   0                   37307180057de       kube-controller-manager-k8s-node1            kube-system
# a35a1c5db2394       603f9fc02b584       17 minutes ago      Running             kube-scheduler            0                   3d02bc2d58c8d       kube-scheduler-k8s-node1                     kube-system
# caad74d3a334a       603f9fc02b584       17 minutes ago      Running             kube-apiserver            0                   40a4f51e503bf       kube-apiserver-k8s-node1                     kube-system
# f772f09071914       45d834c35b2c8       17 minutes ago      Running             etcd                      0                   5094bf3a76962       etcd-k8s-node1                               kube-system
# 78e814851db5e       603f9fc02b584       17 minutes ago      Running             kube-proxy                0                   6fd9161864791       kube-proxy-k8s-node1                         kube-system
cat /var/lib/rancher/rke2/agent/etc/crictl.yaml
# runtime-endpoint: unix:///run/k3s/containerd/containerd.sock

ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
crictl ps
# CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
# 32544c3c59293       4b1ae962e7ba6       17 minutes ago      Running             coredns                   0                   55eeb747a6570       rke2-coredns-rke2-coredns-559595db99-dslf8   kube-system
# 25a78dcf5c447       e6154bb99b6d4       17 minutes ago      Running             metrics-server            0                   e08b6f1279e4e       rke2-metrics-server-fdcdf575d-rd92v          kube-system
# 77d44a4a654b0       fc62334b90cf6       17 minutes ago      Running             kube-flannel              0                   c04e83786a7c2       rke2-canal-2gzwn                             kube-system
# 1af2f60082662       3b9613c95d89e       17 minutes ago      Running             calico-node               0                   c04e83786a7c2       rke2-canal-2gzwn                             kube-system
# 745b41379556c       603f9fc02b584       17 minutes ago      Running             kube-controller-manager   0                   37307180057de       kube-controller-manager-k8s-node1            kube-system
# a35a1c5db2394       603f9fc02b584       17 minutes ago      Running             kube-scheduler            0                   3d02bc2d58c8d       kube-scheduler-k8s-node1                     kube-system
# caad74d3a334a       603f9fc02b584       17 minutes ago      Running             kube-apiserver            0                   40a4f51e503bf       kube-apiserver-k8s-node1                     kube-system
# f772f09071914       45d834c35b2c8       18 minutes ago      Running             etcd                      0                   5094bf3a76962       etcd-k8s-node1                               kube-system
# 78e814851db5e       603f9fc02b584       18 minutes ago      Running             kube-proxy                0                   6fd9161864791       kube-proxy-k8s-node1                         kube-system
crictl images
# IMAGE                                           TAG                            IMAGE ID            SIZE
# docker.io/rancher/hardened-calico               v3.31.3-build20260206          3b9613c95d89e       217MB
# docker.io/rancher/hardened-coredns              v1.14.1-build20260206          4b1ae962e7ba6       27.2MB
# docker.io/rancher/hardened-etcd                 v3.5.26-k3s1-build20260126     45d834c35b2c8       17.1MB
# docker.io/rancher/hardened-flannel              v0.28.1-build20260206          fc62334b90cf6       19.8MB
# docker.io/rancher/hardened-k8s-metrics-server   v0.8.1-build20260206           e6154bb99b6d4       19.4MB
# docker.io/rancher/hardened-kubernetes           v1.33.8-rke2r1-build20260210   603f9fc02b584       187MB
# docker.io/rancher/klipper-helm                  v0.9.14-build20260210          7e898467f8520       59MB
# docker.io/rancher/mirrored-pause                3.6                            7d46a07936af9       253kB
# docker.io/rancher/rke2-runtime                  v1.33.8-rke2r1                 35592a070625a       91.3MB

## in rke2, editing config.toml directly is discouraged -> manual edits are overwritten when rke2/k3s restarts!
cat /var/lib/rancher/rke2/agent/etc/containerd/config.toml
# # File generated by rke2. DO NOT EDIT. Use config.toml.tmpl instead.
# version = 3
# root = "/var/lib/rancher/rke2/agent/containerd"
# state = "/run/k3s/containerd"

# [grpc]
#   address = "/run/k3s/containerd/containerd.sock"

# [plugins.'io.containerd.internal.v1.opt']
#   path = "/var/lib/rancher/rke2/agent/containerd"

# [plugins.'io.containerd.grpc.v1.cri']
#   stream_server_address = "127.0.0.1"
#   stream_server_port = "10010"

# [plugins.'io.containerd.cri.v1.runtime']
#   enable_selinux = true
#   enable_unprivileged_ports = true
#   enable_unprivileged_icmp = true
#   device_ownership_from_security_context = false

# [plugins.'io.containerd.cri.v1.images']
#   snapshotter = "overlayfs"
#   disable_snapshot_annotations = true
#   use_local_image_pull = true

# [plugins.'io.containerd.cri.v1.images'.pinned_images]
#   sandbox = "index.docker.io/rancher/mirrored-pause:3.6"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
#   runtime_type = "io.containerd.runc.v2"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
#   SystemdCgroup = true

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runhcs-wcow-process]
#   runtime_type = "io.containerd.runhcs.v1"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.'crun']
#   runtime_type = "io.containerd.runc.v2"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.'crun'.options]
#   BinaryName = "/usr/bin/crun"
#   SystemdCgroup = true

# [plugins.'io.containerd.cri.v1.images'.registry]
#   config_path = "/var/lib/rancher/rke2/agent/etc/containerd/certs.d"
# File generated by rke2. DO NOT EDIT. Use config.toml.tmpl instead.

## the containerd registry config directory/file does not exist yet
ls -l /var/lib/rancher/rke2/agent/etc/containerd/certs.d
# ls: cannot access '/var/lib/rancher/rke2/agent/etc/containerd/certs.d': No such file or directory
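
## (sketch) private registry mirrors are configured via /etc/rancher/rke2/registries.yaml;
## on restart rke2 renders it into the certs.d directory above. The endpoint below is a hypothetical example.
# cat << EOF > /etc/rancher/rke2/registries.yaml
# mirrors:
#   docker.io:
#     endpoint:
#       - "https://registry.example.local:5000"
# EOF
# systemctl restart rke2-server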

## container image names
grep -H '' /var/lib/rancher/rke2/agent/images/*
# /var/lib/rancher/rke2/agent/images/etcd-image.txt:index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
# /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt:index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127
# /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt:index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127
# /var/lib/rancher/rke2/agent/images/kube-proxy-image.txt:index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127
# /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt:index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127
# /var/lib/rancher/rke2/agent/images/runtime-image.txt:index.docker.io/rancher/rke2-runtime:v1.34.3-rke2r3

crictl images
# IMAGE                                           TAG                            IMAGE ID            SIZE
# docker.io/rancher/hardened-calico               v3.31.3-build20260206          3b9613c95d89e       217MB
# docker.io/rancher/hardened-coredns              v1.14.1-build20260206          4b1ae962e7ba6       27.2MB
# docker.io/rancher/hardened-etcd                 v3.5.26-k3s1-build20260126     45d834c35b2c8       17.1MB
# docker.io/rancher/hardened-flannel              v0.28.1-build20260206          fc62334b90cf6       19.8MB
# docker.io/rancher/hardened-k8s-metrics-server   v0.8.1-build20260206           e6154bb99b6d4       19.4MB
# docker.io/rancher/hardened-kubernetes           v1.33.8-rke2r1-build20260210   603f9fc02b584       187MB
# docker.io/rancher/klipper-helm                  v0.9.14-build20260210          7e898467f8520       59MB
# docker.io/rancher/mirrored-pause                3.6                            7d46a07936af9       253kB
# docker.io/rancher/rke2-runtime                  v1.33.8-rke2r1                 35592a070625a       91.3MB

## kubelet config : 보안 권장 설정 및 엔터프라이즈 권장 설정 반영되어 있음
cat /var/lib/rancher/rke2/agent/etc/kubelet.conf.d/00-rke2-defaults.conf
# address: 192.168.10.11
# apiVersion: kubelet.config.k8s.io/v1beta1
# authentication:
#   anonymous:
#     enabled: false
#   webhook:
#     cacheTTL: 2m0s
#     enabled: true
#   x509:
#     clientCAFile: /var/lib/rancher/rke2/agent/client-ca.crt
# authorization:
#   mode: Webhook
#   webhook:
#     cacheAuthorizedTTL: 5m0s
#     cacheUnauthorizedTTL: 30s
# cgroupDriver: systemd
# clusterDNS:
# - 10.43.0.10
# clusterDomain: cluster.local
# containerRuntimeEndpoint: unix:///run/k3s/containerd/containerd.sock   # 차이점 : RKE2가 자체 기동/관리하는 containerd 소켓 경로(/run/k3s/...)를 사용
# cpuManagerReconcilePeriod: 10s
# crashLoopBackOff: {}
# evictionHard:
#   imagefs.available: 5%
#   nodefs.available: 5%
# evictionMinimumReclaim:
#   imagefs.available: 10%
#   nodefs.available: 10%
# evictionPressureTransitionPeriod: 5m0s
# failSwapOn: false  # 원래 쿠버네티스는 스왑(Swap) 메모리가 켜져 있으면 실행되지 않지만, RKE2는 이를 허용하도록 설정되어 있습니다.
# fileCheckFrequency: 20s
# healthzBindAddress: 127.0.0.1
# httpCheckFrequency: 20s
# imageMaximumGCAge: 0s
# imageMinimumGCAge: 2m0s
# kind: KubeletConfiguration
# logging:
#   flushFrequency: 5s
#   format: text
#   options:
#     json:
#       infoBufferSize: "0"
#     text:
#       infoBufferSize: "0"
#   verbosity: 0
# memorySwap: {}
# nodeStatusReportFrequency: 5m0s
# nodeStatusUpdateFrequency: 10s     # 10초마다 "나 살아있어요"라고 컨트롤플레인 노드에 보고
# resolvConf: /etc/resolv.conf
# runtimeRequestTimeout: 2m0s
# serializeImagePulls: false         # 이미지 병렬 pull 허용
# shutdownGracePeriod: 0s
# shutdownGracePeriodCriticalPods: 0s
# staticPodPath: /var/lib/rancher/rke2/agent/pod-manifests
# streamingConnectionIdleTimeout: 4h0m0s
# syncFrequency: 1m0s
# tlsCertFile: /var/lib/rancher/rke2/agent/serving-kubelet.crt
# tlsPrivateKeyFile: /var/lib/rancher/rke2/agent/serving-kubelet.key
# volumeStatsAggPeriod: 1m0s

## kubelet 로그
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log
# I0221 01:29:10.256887    6790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/rke2-metrics-server-fdcdf575d-rd92v" podStartSLOduration=3.569063962 podStartE2EDuration="8.256873454s" podCreationTimestamp="2026-02-21 01:29:02 +0900 KST" firstStartedPulling="2026-02-21 01:29:03.023852794 +0900 KST m=+56.990693519" lastFinishedPulling="2026-02-21 01:29:07.711662328 +0900 KST m=+61.678503011" observedRunningTime="2026-02-21 01:29:08.253960391 +0900 KST m=+62.220801074" watchObservedRunningTime="2026-02-21 01:29:10.256873454 +0900 KST m=+64.223714179"
# I0221 01:29:10.257146    6790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/rke2-coredns-rke2-coredns-559595db99-dslf8" podStartSLOduration=24.83519902 podStartE2EDuration="31.257143539s" podCreationTimestamp="2026-02-21 01:28:39 +0900 KST" firstStartedPulling="2026-02-21 01:29:03.359699735 +0900 KST m=+57.326540460" lastFinishedPulling="2026-02-21 01:29:09.781644254 +0900 KST m=+63.748484979" observedRunningTime="2026-02-21 01:29:10.256827204 +0900 KST m=+64.223667887" watchObservedRunningTime="2026-02-21 01:29:10.257143539 +0900 KST m=+64.223984222"

 

static pod manifests
/var/lib/rancher/rke2/agent/pod-manifests : static pod manifests ← 보안 권장 설정 및 엔터프라이즈 권장 설정 반영되어 있습니다.

tree /var/lib/rancher/rke2/agent/pod-manifests
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
├── kube-proxy.yaml
└── kube-scheduler.yaml
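
# (참고) static pod는 API 오브젝트가 아니라 위 디렉터리의 매니페스트 파일로 관리됨
# 파일을 다른 곳으로 옮기면 해당 파드가 내려가고, 되돌려 놓으면 kubelet이 다시 띄움 (컨트롤 플레인 구성요소이므로 실습 환경에서만 시도)
# mv /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml /tmp/ && sleep 5 && kubectl get pod -n kube-system | grep scheduler
# mv /tmp/kube-scheduler.yaml /var/lib/rancher/rke2/agent/pod-manifests/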

# kube-apiserver : 기존 kind, kubeadm, kubespray와 다른 부분을 찾아보자!
kubectl describe pod -n kube-system kube-apiserver-k8s-node1
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
# ...
# spec:
#   containers:
#   - args:
#     - --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml
#     - --advertise-address=192.168.10.11
#     - --allow-privileged=true
#     - --anonymous-auth=false
#     - --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 # 보안 강화
#     - --authorization-mode=Node,RBAC
#     - --bind-address=0.0.0.0
#     - --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs
#     - --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt
#     - --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
#     - --enable-admission-plugins=NodeRestriction
#     - --enable-aggregator-routing=true
#     - --enable-bootstrap-token-auth=true
#     - --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json  # etcd 저장 데이터 암호화(encryption at rest)
#     - --encryption-provider-config-automatic-reload=true #업데이트시 자동으로 암호화 키 리로드
#     - --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
#     - --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt
#     - --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key
#     - --etcd-servers=https://127.0.0.1:2379
#     - --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
#     - --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
#     - --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
#     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
#     - --profiling=false
#     - --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt
#     - --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key
#     - --requestheader-allowed-names=system:auth-proxy
#     - --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt
#     - --requestheader-extra-headers-prefix=X-Remote-Extra-
#     - --requestheader-group-headers=X-Remote-Group
#     - --requestheader-username-headers=X-Remote-User
#     - --secure-port=6443
#     - --service-account-issuer=https://kubernetes.default.svc.cluster.local
#     - --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key
#     - --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key
#     - --service-cluster-ip-range=10.43.0.0/16
#     - --service-node-port-range=30000-32767
#     - --storage-backend=etcd3
#     - --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
#     - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 #사용하는 암호군 명시
#     - --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
#     command:
#     - kube-apiserver
#     env:
#     - name: FILE_HASH
#       value: 6fa5251708b3bc2cb032195cc31ae26e8550da13563ddacc90d723fa807594ab
#     - name: NO_PROXY
#       value: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
#     image: index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127
#     imagePullPolicy: IfNotPresent
#     livenessProbe:  # 헬스체크 엔드포인트 호출 시에도 mTLS로 클라이언트 인증 확인
#       exec:
#         command:
#         - kubectl
#         - get
#         - --server=https://localhost:6443/
#         # 아래 클라이언트 인증서/키를 함께 제시 → mTLS 기반 쌍방향 인증으로 헬스체크
#         - --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
#         - --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
#         - --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
#         - --raw=/livez
#       failureThreshold: 8
#       initialDelaySeconds: 10
#       periodSeconds: 10
#       timeoutSeconds: 15
# ...

## etcd 저장 데이터 암호화(encryption at rest)
cat /var/lib/rancher/rke2/server/cred/encryption-config.json | jq
{
  "kind": "EncryptionConfiguration",
  "apiVersion": "apiserver.config.k8s.io/v1",
  "resources": [
    {
      "resources": [
        "secrets"      # Kubernetes Secret 객체만 암호화 대상
      ],
      "providers": [
        {
          "aescbc": {  # 암호화 방식: AES-CBC
            "keys": [
              {
                "name": "aescbckey",
                "secret": "OmLRJEFMVSmN4f4wtLTkPG12nOor+uAn0iHBLMZxX8U="
              }
            ]
          }
        },
        {
          "identity": {} # identity (fallback) : aescbc로 먼저 복호화 시도 -> 실패하면 identity(평문)로 읽기 시도
        }
      ]
    }
  ]
}

# etcd : host volume 마운트 시 디렉터리 타입을 최소화하고, 개별 파일 타입을 적용
kubectl describe pod -n kube-system etcd-k8s-node1
cat /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
# apiVersion: v1
# kind: Pod
# metadata:
#   annotations:
#     etcd.k3s.io/initial: '{"initial-advertise-peer-urls":"https://192.168.10.11:2380","initial-cluster":"k8s-node1-f5a5f8ea=https://192.168.10.11:2380","initial-cluster-state":"new"}'
#   labels:
#     component: etcd
#     tier: control-plane
#   name: etcd
#   namespace: kube-system
#   uid: deaa2c2683cf77ad1e5af65cc8130d37
# spec:
#   containers:
#   - args:
#     - --config-file=/var/lib/rancher/rke2/server/db/etcd/config
#     command:
#     - etcd
#     env:
#     - name: FILE_HASH
#       value: 27d68451c57d7082781b6f65e1f927cf356aacf8980819076c9f807fec8d082d
#     - name: NO_PROXY
#       value: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
#     image: index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
#   ...
#   volumes:
#   - hostPath:
#       path: /var/lib/rancher/rke2/server/db/etcd
#       type: DirectoryOrCreate
#     name: dir0
#   - hostPath:
#       path: /var/lib/rancher/rke2/server/tls/etcd/server-client.crt
#       type: File
#     name: file0
#   - hostPath:
#       path: /var/lib/rancher/rke2/server/tls/etcd/server-client.key
#       type: File
#     name: file1

## etcdctl
find / -name etcdctl 2>/dev/null
# /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/f772f0907191460fdbf9251817b210d687e55fbfa308511f8abfb30e70039544/rootfs/usr/local/bin/etcdctl
# /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs/usr/local/bin/etcdctl

ln -s /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs/usr/local/bin/etcdctl /usr/local/bin/etcdctl
etcdctl version
# etcdctl version: 3.5.26
# API version: 3.5

etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
  endpoint status --write-out=table
# +------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
# |        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
# +------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
# | https://127.0.0.1:2379 | 6571fb7574e87dba |  3.5.26 |  7.1 MB |      true |      false |         2 |       4497 |               4497 |        |
# +------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
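
## (검증 예시) 위 encryption-provider-config 동작 확인 : 임의 이름의 Secret(enc-test)을 만들고 etcd에 저장된 원본 값을 직접 조회
kubectl create secret generic enc-test --from-literal=key1=supersecret
etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
  get /registry/secrets/default/enc-test | hexdump -C | head
# 평문(supersecret)이 아니라 k8s:enc:aescbc:v1:... 프리픽스로 시작하면 encryption at rest 적용 중
kubectl delete secret enc-test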


# kube-scheduler : 
kubectl describe pod -n kube-system kube-scheduler-k8s-node1
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
# ...
# spec:
#   containers:
#   - args:
#     - --permit-port-sharing=true   # SO_REUSEPORT로 바인드하여 같은 호스트에서 동일 포트에 여러 프로세스가 바인드(공유)할 수 있게 허용
#     - --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
#     - --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
#     - --bind-address=127.0.0.1
#     - --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
#     - --profiling=false
#     - --secure-port=10259
#     - --tls-cert-file=/var/lib/rancher/rke2/server/tls/kube-scheduler/kube-scheduler.crt
#     - --tls-private-key-file=/var/lib/rancher/rke2/server/tls/kube-scheduler/kube-scheduler.key
#     command:
#     - kube-scheduler
#     env:
#     - name: FILE_HASH
#       value: 06ce3e872a26bcbe1809733397e9233a7a35db8e5a4288c9123eafc5dedfe813
#     - name: NO_PROXY
#       value: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
#     image: index.docker.io/rancher/hardened-kubernetes:v1.34.3-rke2r3-build20260127 #보안 강화된 이미지 사용
# ...

# kube-controller-manager
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
# ...
# spec:
#   containers:
#   - args:
#     - --permit-port-sharing=true
#     - --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins    # 볼륨 플러그인 (레거시)
#     - --terminated-pod-gc-threshold=1000       # 완료된 Pod가 1000개 이상 쌓이면 자동 정리(GC), 파드 로그/히스토리 과도 누적 방지
#     - --allocate-node-cidrs=true
#     - --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
#     - --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
#     - --bind-address=127.0.0.1
#     - --cluster-cidr=10.42.0.0/16
#     - --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt
#     - --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key
#     - --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt
#     - --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key
#     - --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt
#     - --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key
#     - --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt
#     - --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key
#     - --controllers=*,tokencleaner
#     - --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig
#     - --profiling=false
#     - --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt
#     - --secure-port=10257
#     - --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key
#     - --service-cluster-ip-range=10.43.0.0/16
#     - --tls-cert-file=/var/lib/rancher/rke2/server/tls/kube-controller-manager/kube-controller-manager.crt
#     - --tls-private-key-file=/var/lib/rancher/rke2/server/tls/kube-controller-manager/kube-controller-manager.key
#     - --use-service-account-credentials=true
#     command:
#     - kube-controller-manager

## 인증서 발급 및 서명 (CA 역할) : RKE2의 컨트롤러 매니저는 클러스터 내부의 인증서 발급기 역할도 수행합니다.
# --cluster-signing-...: Kubelet이 서버 인증서를 요청하거나, API 서버 클라이언트가 인증서를 요청할 때 어떤 CA 키와 인증서로 서명해줄지 지정합니다.
# --root-ca-file: 서비스 계정(Service Account) 토큰 등을 검증할 때 사용할 루트 인증서입니다.
# --service-account-private-key-file: Pod 내에서 API 서버와 통신할 때 사용하는 **서비스 계정 토큰(JWT)**을 생성하고 서명하는 데 사용되는 키입니다.
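
# (확인 예시) kube-apiserver의 --service-account-issuer 값은 발급자 디스커버리 엔드포인트로도 확인 가능
kubectl get --raw /.well-known/openid-configuration | jq
# "issuer": "https://kubernetes.default.svc.cluster.local" 등 토큰 발급자/검증 정보가 출력됨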


# kube-proxy
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
# ...
# spec:
#   containers:
#   - args:
#     - --cluster-cidr=10.42.0.0/16
#     - --conntrack-max-per-core=0
#     - --conntrack-tcp-timeout-close-wait=0s
#     - --conntrack-tcp-timeout-established=0s
#     - --healthz-bind-address=127.0.0.1
#     - --hostname-override=k8s-node1    # 호스트네임 오버라이드, 노드 이름 강제 지정, kubelet 노드 이름과 반드시 일치해야 함
#     - --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
#     - --proxy-mode=iptables            # Service 라우팅 방식 : iptables 기반 NAT

## conntrack 관련 설정 : 값 0은 kube-proxy가 해당 항목을 변경하지 않고 커널(시스템) 설정을 그대로 사용한다는 의미 (실제 적용 값은 아래 sysctl로 확인)
# --conntrack-max-per-core=0 : 코어당 conntrack 최대값을 kube-proxy가 재설정하지 않음 (커널 기본값 유지)
# --conntrack-tcp-timeout-established=0s : ESTABLISHED 상태 timeout을 커널 기본값 그대로 사용
# --conntrack-tcp-timeout-close-wait=0s : CLOSE_WAIT 상태 timeout을 커널 기본값 그대로 사용
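
# (확인 예시) 노드에 실제 적용되어 있는 conntrack 커널 파라미터 확인
sysctl net.netfilter.nf_conntrack_max
sysctl net.netfilter.nf_conntrack_tcp_timeout_established
sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait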

워커노드 추가

[k8s-node1]

# A token that can be used to register other server or agent nodes
cat /var/lib/rancher/rke2/server/node-token
K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d

# 노드(서버/에이전트)가 RKE2 클러스터에 조인할 때 사용하는 전용 관리/부트스트랩 API 포트 확인
ss -tnlp | grep 9345
# LISTEN 0      4096   192.168.10.11:9345       0.0.0.0:*    users:(("rke2",pid=6348,fd=6))

# 모니터링
watch -d 'kubectl get node; echo; kubectl get pod -n kube-system'
### 조인 전
# NAME        STATUS   ROLES                       AGE   VERSION
# k8s-node1   Ready    control-plane,etcd,master   28m   v1.33.8+rke2r1

# NAME                                         READY   STATUS      RESTARTS   AGE
# etcd-k8s-node1                               1/1     Running     0          27m
# helm-install-rke2-canal-v477h                0/1     Completed   0          27m
# helm-install-rke2-coredns-zmzhd              0/1     Completed   0          27m
# helm-install-rke2-metrics-server-stbfg       0/1     Completed   0          27m
# helm-install-rke2-runtimeclasses-lp4ph       0/1     Completed   0          27m
# kube-apiserver-k8s-node1                     1/1     Running     0          27m
# kube-controller-manager-k8s-node1            1/1     Running     0          27m
# kube-proxy-k8s-node1                         1/1     Running     0          27m
# kube-scheduler-k8s-node1                     1/1     Running     0          27m
# rke2-canal-2gzwn                             2/2     Running     0          27m
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0          27m
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0          27m

# 워커노드에서 추가
[k8s-node2]

# Run the installer
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
# ...
# Installed:
#   rke2-agent-1.33.8~rke2r1-0.el9.aarch64                   
#   rke2-common-1.33.8~rke2r1-0.el9.aarch64                  
#   rke2-selinux-0.22-1.el9.noarch                           

# Complete!

# Configure the rke2-agent service : rke2-agent 설정 파일 작성
## The rke2 server process listens on port 9345 for new nodes to register.
TOKEN=K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d

mkdir -p /etc/rancher/rke2/
cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
cat /etc/rancher/rke2/config.yaml
# server: https://192.168.10.11:9345
# token: K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d
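
## (참고) 같은 config.yaml에 워커 노드 라벨/테인트 등도 함께 지정할 수 있음 (아래 값은 예시)
# server: https://192.168.10.11:9345
# token: <TOKEN>
# node-label:
#   - "env=study"
# node-taint:
#   - "dedicated=worker:NoSchedule"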

# Enabled/Start the service
systemctl enable --now rke2-agent.service
# NAME        STATUS   ROLES                       AGE   VERSION
# k8s-node1   Ready    control-plane,etcd,master   39m   v1.33.8+rke2r1
# k8s-node2   Ready    <none>                      29s   v1.33.8+rke2r1

# NAME                                         READY   STATUS      RESTARTS   AGE
# etcd-k8s-node1                               1/1     Running     0          39m
# helm-install-rke2-canal-v477h                0/1     Completed   0          39m
# helm-install-rke2-coredns-zmzhd              0/1     Completed   0          39m
# helm-install-rke2-metrics-server-stbfg       0/1     Completed   0          39m
# helm-install-rke2-runtimeclasses-lp4ph       0/1     Completed   0          39m
# kube-apiserver-k8s-node1                     1/1     Running     0          39m
# kube-controller-manager-k8s-node1            1/1     Running     0          39m
# kube-proxy-k8s-node1                         1/1     Running     0          39m
# kube-proxy-k8s-node2                         1/1     Running     0          29s
# kube-scheduler-k8s-node1                     1/1     Running     0          39m
# rke2-canal-2gzwn                             2/2     Running     0          39m
# rke2-canal-dvzkg                             2/2     Running     0          29s
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0          39m
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0          38m
journalctl -u rke2-agent -f
# ...
# Feb 21 02:07:37 k8s-node2 rke2[6558]: time="2026-02-21T02:07:37+09:00" level=info msg="Tunnel authorizer set Kubelet Port 0.0.0.0:10250"

kubectl get node -owide
# NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                  CONTAINER-RUNTIME
# k8s-node1   Ready    control-plane,etcd,master   40m   v1.33.8+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1
# k8s-node2   Ready    <none>                      79s   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1

kubectl get pod -n kube-system -owide | grep k8s-node2
# kube-proxy-k8s-node2                         1/1     Running     0          92s   192.168.10.12   k8s-node2   <none>           <none>
# rke2-canal-dvzkg                             2/2     Running     0          92s   192.168.10.12   k8s-node2   <none>           <none>

[k8s-node2]

# 디렉터리 확인
tree /var/lib/rancher/rke2 -L 1
# ├── agent
# ├── bin -> /var/lib/rancher/rke2/data/v1.34.3-rke2r3-5b8349de68df/bin # 바이너리 파일
# ├── data         # bin, charts
# └── server       # 서버 역할이 아니어서 아무것도 없음

# rke2-agent
systemctl status rke2-agent.service --no-pager
# ● rke2-agent.service - Rancher Kubernetes Engine v2 (agent)
#      Loaded: loaded (/usr/lib/systemd/system/rke2-agent.service; enabled; preset: disabled)
#      Active: active (running) since Sat 2026-02-21 02:07:26 KST; 2min 14s ago
#        Docs: https://github.com/rancher/rke2#readme
#     Process: 6556 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
#     Process: 6557 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
#    Main PID: 6558 (rke2)
#       Tasks: 58
#      Memory: 2.2G
#         CPU: 18.695s
#      CGroup: /system.slice/rke2-agent.service
#              ├─6558 "/usr/bin/rke2 agent"
#              ├─6577 containerd -c /var/lib/rancher/rke2/agent/etc/contai…
#              ├─6635 kubelet --volume-plugin-dir=/var/lib/kubelet/volumep…
#              ├─6693 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361e…
#              └─6702 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361e…

# Feb 21 02:07:21 k8s-node2 rke2[6558]: time="2026-02-21T02:07:21+09:00" l…
# Feb 21 02:07:21 k8s-node2 rke2[6558]: time="2026-02-21T02:07:21+09:00…xt"
# Feb 21 02:07:21 k8s-node2 rke2[6558]: time="2026-02-21T02:07:21+09:00…r1"
# Feb 21 02:07:21 k8s-node2 rke2[6558]: time="2026-02-21T02:07:21+09:00" l…
# Feb 21 02:07:21 k8s-node2 rke2[6558]: time="2026-02-21T02:07:21+09:00…al"
# Feb 21 02:07:26 k8s-node2 rke2[6558]: time="2026-02-21T02:07:26+09:00…e2"
# Feb 21 02:07:26 k8s-node2 rke2[6558]: time="2026-02-21T02:07:26+09:00…ng"
# Feb 21 02:07:26 k8s-node2 systemd[1]: Started Rancher Kubernetes Engi…t).
# Feb 21 02:07:28 k8s-node2 rke2[6558]: time="2026-02-21T02:07:28+09:00" l…
# Feb 21 02:07:37 k8s-node2 rke2[6558]: time="2026-02-21T02:07:37+09:00…50"
# Hint: Some lines were ellipsized, use -l to show in full.
cat /usr/lib/systemd/system/rke2-agent.service
# [Unit]
# Description=Rancher Kubernetes Engine v2 (agent)
# Documentation=https://github.com/rancher/rke2#readme
# Wants=network-online.target
# After=network-online.target
# Conflicts=rke2-server.service

# [Install]
# WantedBy=multi-user.target

# [Service]
# Type=notify
# EnvironmentFile=-/etc/default/%N
# EnvironmentFile=-/etc/sysconfig/%N
# EnvironmentFile=-/usr/lib/systemd/system/%N.env
# KillMode=process
# Delegate=yes
# LimitNOFILE=1048576
# LimitNPROC=infinity
# LimitCORE=infinity
# TasksMax=infinity
# TimeoutStartSec=0
# Restart=always
# RestartSec=5s
# ExecStartPre=-/sbin/modprobe br_netfilter
# ExecStartPre=-/sbin/modprobe overlay
# ExecStart=/usr/bin/rke2 agent
# ExecStopPost=-/bin/sh -c "systemd-cgls /system.slice/%n | grep -Eo '[0-9]+ (containerd|kubelet)' | awk '{print $1}' | xargs -r kill"
pstree -al
#   ├─rke2
#   │   ├─containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml
#   │   │   └─10*[{containerd}]
#   │   ├─kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --config-dir=/var/lib/rancher/rke2/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=k8s-node2 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-labels= --read-only-port=0
#   │   │   └─11*[{kubelet}]
#   │   └─10*[{rke2}]

# PATH 안 건드리고 표준 위치로 노출 설정 : 심볼릭 링크 방식 
ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
crictl ps
# CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                    NAMESPACE
# 7fec44e33df00       fc62334b90cf6       2 minutes ago       Running             kube-flannel        0                   7044691ea506b       rke2-canal-dvzkg       kube-system
# 2d8cc30800f90       3b9613c95d89e       2 minutes ago       Running             calico-node         0                   7044691ea506b       rke2-canal-dvzkg       kube-system
# 6b31bd8af14f5       603f9fc02b584       2 minutes ago       Running             kube-proxy          0                   8715f3d181d0b       kube-proxy-k8s-node2   kube-system
crictl images
# IMAGE                                   TAG                            IMAGE ID            SIZE
# docker.io/rancher/hardened-calico       v3.31.3-build20260206          3b9613c95d89e       217MB
# docker.io/rancher/hardened-flannel      v0.28.1-build20260206          fc62334b90cf6       19.8MB
# docker.io/rancher/hardened-kubernetes   v1.33.8-rke2r1-build20260210   603f9fc02b584       187MB
# docker.io/rancher/mirrored-pause        3.6                            7d46a07936af9       253kB
# docker.io/rancher/rke2-runtime          v1.33.8-rke2r1                 35592a070625a       91.3MB

#
tree /etc/rancher/
# ├── node
# │   └── password
# └── rke2
#     ├── config.yaml
#     └── rke2-pss.yaml

# agent 디렉터리
tree /var/lib/rancher/rke2/agent/ -L 3
# ...
# ├── containerd
# │   ├── bin
# ...
# ├── etc
# │   ├── containerd
# │   │   └── config.toml
# │   ├── crictl.yaml
# │   ├── kubelet.conf.d
# │   │   └── 00-rke2-defaults.conf
# │   ├── rke2-agent-load-balancer.json
# │   └── rke2-api-server-agent-load-balancer.json
# ├── images
# │   ├── kube-proxy-image.txt
# │   └── runtime-image.txt
# ├── kubelet.kubeconfig
# ├── kubeproxy.kubeconfig
# ├── logs
# │   └── kubelet.log
# ├── pod-manifests
# │   └── kube-proxy.yaml
# ├── rke2controller.kubeconfig
# ├── server-ca.crt
# ├── serving-kubelet.crt
# └── serving-kubelet.key

##
cat /var/lib/rancher/rke2/agent/etc/containerd/config.toml
# # File generated by rke2. DO NOT EDIT. Use config.toml.tmpl instead.
# version = 3
# root = "/var/lib/rancher/rke2/agent/containerd"
# state = "/run/k3s/containerd"

# [grpc]
#   address = "/run/k3s/containerd/containerd.sock"

# [plugins.'io.containerd.internal.v1.opt']
#   path = "/var/lib/rancher/rke2/agent/containerd"

# [plugins.'io.containerd.grpc.v1.cri']
#   stream_server_address = "127.0.0.1"
#   stream_server_port = "10010"

# [plugins.'io.containerd.cri.v1.runtime']
#   enable_selinux = true
#   enable_unprivileged_ports = true
#   enable_unprivileged_icmp = true
#   device_ownership_from_security_context = false

# [plugins.'io.containerd.cri.v1.images']
#   snapshotter = "overlayfs"
#   disable_snapshot_annotations = true
#   use_local_image_pull = true

# [plugins.'io.containerd.cri.v1.images'.pinned_images]
#   sandbox = "index.docker.io/rancher/mirrored-pause:3.6"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
#   runtime_type = "io.containerd.runc.v2"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
#   SystemdCgroup = true

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runhcs-wcow-process]
#   runtime_type = "io.containerd.runhcs.v1"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.'crun']
#   runtime_type = "io.containerd.runc.v2"

# [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.'crun'.options]
#   BinaryName = "/usr/bin/crun"
#   SystemdCgroup = true

# [plugins.'io.containerd.cri.v1.images'.registry]
#   config_path = "/var/lib/rancher/rke2/agent/etc/containerd/certs.d"

cat /var/lib/rancher/rke2/agent/etc/kubelet.conf.d/00-rke2-defaults.conf
# address: 0.0.0.0
# allowedUnsafeSysctls:
# - net.ipv4.ip_forward
# - net.ipv6.conf.all.forwarding
...

## 워커 노드를 컨트롤 플레인에 등록 시 서버 주소
cat /var/lib/rancher/rke2/agent/etc/rke2-agent-load-balancer.json  | jq
#   "ServerURL": "https://192.168.10.11:9345",
#   "ServerAddresses": [
#     "192.168.10.11:9345"

## 컨트롤 플레인 k8s apiserver 서버 주소
cat /var/lib/rancher/rke2/agent/etc/rke2-api-server-agent-load-balancer.json | jq
#   "ServerURL": "https://192.168.10.11:6443",
#   "ServerAddresses": [
    "192.168.10.11:6443"

## static pod : kube-proxy
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
# apiVersion: v1
# kind: Pod
# metadata:
#   creationTimestamp: null
#   labels:
#     component: kube-proxy
#     tier: control-plane
#   name: kube-proxy
#   namespace: kube-system
#   uid: dde5650601446eaa7ebcd457c71eedb8
# spec:
#   containers:
#   - args:
#     - --cluster-cidr=10.42.0.0/16
#     - --conntrack-max-per-core=0
#     - --conntrack-tcp-timeout-close-wait=0s
#     - --conntrack-tcp-timeout-established=0s
#     - --healthz-bind-address=127.0.0.1
#     - --hostname-override=k8s-node2
#     - --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
#     - --proxy-mode=iptables
#     command:
#     - kube-proxy
#     env:
#     - name: FILE_HASH
#       value: 2b1243d34b4dbf4b29110d2c7b538869375faafd895bbce83b36b6eef2b693b6
#     image: index.docker.io/rancher/hardened-kubernetes:v1.33.8-rke2r1-build20260210
#     imagePullPolicy: IfNotPresent
#     livenessProbe:
#       failureThreshold: 8
#       httpGet:
#         host: localhost
#         path: /livez
#         port: 10256
#         scheme: HTTP
#       initialDelaySeconds: 10
#       periodSeconds: 10
#       timeoutSeconds: 15
#     name: kube-proxy
#     ports:
#     - containerPort: 10256
#       name: metrics
#       protocol: TCP
#     resources:
#       requests:
#         cpu: 250m
#         memory: 128Mi
#     securityContext:
#       privileged: true
#     volumeMounts:
#     - mountPath: /var/lib/rancher/rke2/agent/client-kube-proxy.crt
#       name: file0
#       readOnly: true
#     - mountPath: /var/lib/rancher/rke2/agent/client-kube-proxy.key
#       name: file1
#       readOnly: true
#     - mountPath: /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
#       name: file2
#       readOnly: true
#     - mountPath: /var/lib/rancher/rke2/agent/server-ca.crt
#       name: file3
#       readOnly: true
#   hostNetwork: true
#   priorityClassName: system-cluster-critical
#   securityContext:
#     seLinuxOptions:
#       type: rke2_service_t
#   volumes:
#   - hostPath:
#       path: /var/lib/rancher/rke2/agent/client-kube-proxy.crt
#       type: File
#     name: file0
#   - hostPath:
#       path: /var/lib/rancher/rke2/agent/client-kube-proxy.key
#       type: File
#     name: file1
#   - hostPath:
#       path: /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
#       type: File
#     name: file2
#   - hostPath:
#       path: /var/lib/rancher/rke2/agent/server-ca.crt
#       type: File
#     name: file3
# status: {}

워커 노드 삭제 → 워커 노드 다시 추가

# [k8s-node1]

# 노드 스케줄 차단 + 파드 안전 이동 (Drain)
kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data
# node/k8s-node2 cordoned
# Warning: ignoring DaemonSet-managed Pods: kube-system/rke2-canal-dvzkg
# node/k8s-node2 drained

# Kubernetes 클러스터에서 노드 제거 : 파드가 다 빠진 거 확인 후
kubectl delete node k8s-node2
# node "k8s-node2" deleted

# [k8s-node2]

# RKE2 서비스 중지
systemctl stop rke2-agent

# 삭제 스크립트 확인
ls -l /usr/bin/rke2*
# -rwxr-xr-x. 1 root root 119597592 Feb  5 04:44 /usr/bin/rke2
# -rwxr-xr-x. 1 root root      3430 Feb  5 23:02 /usr/bin/rke2-killall.sh
# -rwxr-xr-x. 1 root root      5573 Feb  5 23:02 /usr/bin/rke2-uninstall.sh

# RKE2 삭제 스크립트 실행 : 컨테이너, 네트워크 인터페이스, 관련 파일을 정리
cat /usr/bin/rke2-uninstall.sh
# rke2-uninstall.sh
rke2-uninstall.sh
# ...
# + log 'Removing uninstall script'
# ++ date '+%Y-%m-%d %H:%M:%S'
# + echo '[2026-02-21 02:14:32] Removing uninstall script'
# [2026-02-21 02:14:32] Removing uninstall script
# + rm -f -- /usr/bin/rke2-uninstall.sh

# 관련 디렉터리 삭제 확인
tree /etc/rancher
# 0 directories, 0 files
tree /var/lib/rancher
# 0 directories, 0 files

추가

[k8s-node2]

# Run the installer
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -

# Configure the rke2-agent service
TOKEN=K108958903bb8ad7369a7df2faf6ce0b37738f328eaa481972a30e8e4102a534351::server:b7ae451a6e348be80caf6eb030aa517d

mkdir -p /etc/rancher/rke2/
cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
cat /etc/rancher/rke2/config.yaml

# Enabled/Start the service
systemctl enable --now rke2-agent.service
journalctl -u rke2-agent -f

샘플 애플리케이션 배포

# 샘플 애플리케이션 배포
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod   # 파드 라벨(app: webpod)과 일치해야 안티어피니티가 동작해 레플리카가 노드별로 분산됨
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
  type: NodePort
EOF

# 확인
kubectl get deploy,pod,svc,ep -owide
# Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
# NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
# deployment.apps/webpod   2/2     2            2           7s    webpod       traefik/whoami   app=webpod

# NAME                          READY   STATUS    RESTARTS   AGE   IP          NODE        NOMINATED NODE   READINESS GATES
# pod/webpod-697b545f57-4svtz   1/1     Running   0          7s    10.42.2.2   k8s-node2   <none>           <none>
# pod/webpod-697b545f57-82w4l   1/1     Running   0          7s    10.42.0.6   k8s-node1   <none>           <none>

# NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
# service/kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        48m   <none>
# service/webpod       NodePort    10.43.174.124   <none>        80:30000/TCP   7s    app=webpod

# NAME                   ENDPOINTS                   AGE
# endpoints/kubernetes   192.168.10.11:6443          48m
# endpoints/webpod       10.42.0.6:80,10.42.2.2:80   7s
# [HostPC] 반복 호출 : IP는 node 작업에 따라 변경
while true; do curl -s http://192.168.10.12:30000 | grep Hostname; date; sleep 1; done
# Sat Feb 21 02:16:50 KST 2026
# Sat Feb 21 02:16:51 KST 2026
# Sat Feb 21 02:16:52 KST 2026
# Hostname: webpod-697b545f57-4svtz
# Sat Feb 21 02:16:53 KST 2026
# Hostname: webpod-697b545f57-4svtz
# Sat Feb 21 02:16:54 KST 2026
# Hostname: webpod-697b545f57-4svtz
# Sat Feb 21 02:16:55 KST 2026

RKE2 업그레이드

인증서 관리 및 수동 갱신

RKE2에서 클라이언트와 서버 인증서는 발급일로부터 365일간 유효합니다. 인증서가 만료되었거나 만료 예정일이 120일 이내로 남은 경우, RKE2를 시작할 때마다 자동으로 인증서가 갱신됩니다. 이때 기존 키를 재사용하여 기존 인증서의 유효 기간만 연장됩니다. 만약 기존 인증서의 유효 기간 연장이 아닌, 새 인증서와 키를 생성하고 싶다면 하위 명령어인 rotate를 이용해야 합니다. 또한 인증서 만료일까지 120일 이내로 남으면 Kubernetes에서 CertificateExpirationWarning 경고 이벤트가 발생합니다.

 

노드 인증서와 만료일 확인

# 노드 인증서와 만료일 확인 : Check rke2 component certificates on disk
rke2 certificate --help
# NAME:
#    rke2 certificate - Manage RKE2 certificates

# USAGE:
#    rke2 certificate [command options]

# COMMANDS:
#    check      Check rke2 component certificates on disk
#    rotate     Rotate rke2 component certificates on disk
#    rotate-ca  Write updated rke2 CA certificates to the datastore
#    help, h    Shows a list of commands or help for one command

# OPTIONS:
#    --help, -h  show help

[k8s-node1]
rke2 certificate check --output table
FILENAME                           SUBJECT                             USAGES                  EXPIRES                  RESIDUAL TIME   STATUS
--------                           -------                             ------                  -------                  -------------   ------
client-kubelet.crt                 system:node:k8s-node1               ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-kubelet.crt                 rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
serving-kubelet.crt                k8s-node1                           ServerAuth              Feb 20, 2027 16:27 UTC   1 year          OK
serving-kubelet.crt                rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-kube-apiserver.crt          system:apiserver                    ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-kube-apiserver.crt          rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
serving-kube-apiserver.crt         kube-apiserver                      ServerAuth              Feb 20, 2027 16:27 UTC   1 year          OK
serving-kube-apiserver.crt         rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-admin.crt                   system:admin                        ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-admin.crt                   rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-rke2-cloud-controller.crt   rke2-cloud-controller-manager       ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-rke2-cloud-controller.crt   rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-kube-proxy.crt              system:kube-proxy                   ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-kube-proxy.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-rke2-controller.crt         system:rke2-controller              ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-rke2-controller.crt         rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-auth-proxy.crt              system:auth-proxy                   ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-auth-proxy.crt              rke2-request-header-ca@1771604865   CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-controller.crt              system:kube-controller-manager      ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-controller.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
kube-controller-manager.crt        kube-controller-manager             ServerAuth              Feb 20, 2027 16:27 UTC   1 year          OK
kube-controller-manager.crt        rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client.crt                         etcd-client                         ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client.crt                         etcd-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
server-client.crt                  etcd-server                         ServerAuth,ClientAuth   Feb 20, 2027 16:27 UTC   1 year          OK
server-client.crt                  etcd-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
peer-server-client.crt             etcd-peer                           ServerAuth,ClientAuth   Feb 20, 2027 16:27 UTC   1 year          OK
peer-server-client.crt             etcd-peer-ca@1771604865             CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-scheduler.crt               system:kube-scheduler               ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-scheduler.crt               rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
kube-scheduler.crt                 kube-scheduler                      ServerAuth              Feb 20, 2027 16:27 UTC   1 year          OK
kube-scheduler.crt                 rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
client-supervisor.crt              system:rke2-supervisor              ClientAuth              Feb 20, 2027 16:27 UTC   1 year          OK
client-supervisor.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK


[k8s-node2]
rke2 certificate check --output table

FILENAME                     SUBJECT                     USAGES       EXPIRES                  RESIDUAL TIME   STATUS
--------                     -------                     ------       -------                  -------------   ------
client-rke2-controller.crt   system:rke2-controller      ClientAuth   Feb 20, 2027 17:15 UTC   1 year          OK
client-rke2-controller.crt   rke2-client-ca@1771604865   CertSign     Feb 18, 2036 16:27 UTC   10 years        OK
client-kube-proxy.crt        system:kube-proxy           ClientAuth   Feb 20, 2027 17:15 UTC   1 year          OK
client-kube-proxy.crt        rke2-client-ca@1771604865   CertSign     Feb 18, 2036 16:27 UTC   10 years        OK
client-kubelet.crt           system:node:k8s-node2       ClientAuth   Feb 20, 2027 17:15 UTC   1 year          OK
client-kubelet.crt           rke2-client-ca@1771604865   CertSign     Feb 18, 2036 16:27 UTC   10 years        OK
serving-kubelet.crt          k8s-node2                   ServerAuth   Feb 20, 2027 17:15 UTC   1 year          OK
serving-kubelet.crt          rke2-server-ca@1771604865   CertSign     Feb 18, 2036 16:27 UTC   10 years        OK
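
# (참고) 개별 인증서 파일은 openssl로도 만료일을 직접 확인할 수 있음
openssl x509 -noout -enddate -in /var/lib/rancher/rke2/agent/serving-kubelet.crt
# notAfter=... 형식으로 만료일 출력 (위 표의 EXPIRES 값과 동일해야 함)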

# 인증서 수동 교체 : rke2 certificate rotate 명령 사용.
[k8s-node1]

# Stop RKE2
systemctl stop rke2-server

# Rotate certificates
rke2 certificate rotate
# INFO[0000] Server detected, rotating agent and server certificates 
# INFO[0000] Rotating dynamic listener certificate        
# INFO[0000] Rotating certificates for controller-manager 
# INFO[0000] Rotating certificates for etcd               
# INFO[0000] Rotating certificates for scheduler          
# INFO[0000] Rotating certificates for supervisor         
# INFO[0000] Rotating certificates for kubelet            
# INFO[0000] Rotating certificates for rke2-controller    
# INFO[0000] Rotating certificates for auth-proxy         
# INFO[0000] Rotating certificates for cloud-controller   
# INFO[0000] Rotating certificates for kube-proxy         
# INFO[0000] Rotating certificates for api-server         
# INFO[0000] Rotating certificates for admin              
# INFO[0000] Successfully backed up certificates to /var/lib/rancher/rke2/server/tls-1771608284, please restart rke2 server or agent to rotate certificates 

# 회전 직후에는 기존 인증서가 백업 디렉터리로 옮겨져, 재시작 전까지 check 결과가 비어 있음
rke2 certificate check --output table
# FILENAME   SUBJECT   USAGES   EXPIRES   RESIDUAL TIME   STATUS
# --------   -------   ------   -------   -------------   ------

# Start RKE2
systemctl start rke2-server

# 확인
rke2 certificate check --output table
# FILENAME                           SUBJECT                             USAGES                  EXPIRES                  RESIDUAL TIME   STATUS
# --------                           -------                             ------                  -------                  -------------   ------
# client-auth-proxy.crt              system:auth-proxy                   ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-auth-proxy.crt              rke2-request-header-ca@1771604865   CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-rke2-cloud-controller.crt   rke2-cloud-controller-manager       ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-rke2-cloud-controller.crt   rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-controller.crt              system:kube-controller-manager      ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-controller.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# kube-controller-manager.crt        kube-controller-manager             ServerAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# kube-controller-manager.crt        rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-scheduler.crt               system:kube-scheduler               ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-scheduler.crt               rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# kube-scheduler.crt                 kube-scheduler                      ServerAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# kube-scheduler.crt                 rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-supervisor.crt              system:rke2-supervisor              ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-supervisor.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-kube-proxy.crt              system:kube-proxy                   ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-kube-proxy.crt              rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client.crt                         etcd-client                         ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client.crt                         etcd-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# server-client.crt                  etcd-server                         ServerAuth,ClientAuth   Feb 20, 2027 17:25 UTC   1 year          OK
# server-client.crt                  etcd-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# peer-server-client.crt             etcd-peer                           ServerAuth,ClientAuth   Feb 20, 2027 17:25 UTC   1 year          OK
# peer-server-client.crt             etcd-peer-ca@1771604865             CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-kubelet.crt                 system:node:k8s-node1               ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-kubelet.crt                 rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# serving-kubelet.crt                k8s-node1                           ServerAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# serving-kubelet.crt                rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-rke2-controller.crt         system:rke2-controller              ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-rke2-controller.crt         rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-kube-apiserver.crt          system:apiserver                    ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-kube-apiserver.crt          rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# serving-kube-apiserver.crt         kube-apiserver                      ServerAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# serving-kube-apiserver.crt         rke2-server-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK
# client-admin.crt                   system:admin                        ClientAuth              Feb 20, 2027 17:25 UTC   1 year          OK
# client-admin.crt                   rke2-client-ca@1771604865           CertSign                Feb 18, 2036 16:27 UTC   10 years        OK

# kubeconfig 갱신
diff /etc/rancher/rke2/rke2.yaml ~/.kube/config
yes | cp /etc/rancher/rke2/rke2.yaml ~/.kube/config ; echo
kubectl cluster-info
# Kubernetes control plane is running at https://192.168.10.11:6443
# CoreDNS is running at https://192.168.10.11:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy
# rke2-agent는 기존 연결이 끊기면, server에 다시 붙으면서 새 CA 기준으로 인증서 재발급 → 기존 kubeadm 인증서 갱신 시 워커 노드 동작과 동일.
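
# (참고) 전체가 아닌 특정 컴포넌트 인증서만 회전하려면 --service 옵션 사용 (지원 서비스 목록은 rke2 certificate rotate --help 로 확인)
# systemctl stop rke2-server
# rke2 certificate rotate --service kubelet
# systemctl start rke2-server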

RKE2 수동 업그레이드 v1.33 → v1.34

RKE2 업그레이드 시에는 Kubernetes 버전 편차 정책(version skew policy)을 따라야 하며, 중간 마이너 버전을 건너뛰지 않도록 순차적으로 업그레이드를 진행해야 합니다.

 

업그레이드는 설치 스크립트를 이용한 수동 방식 또는 자동 업그레이드(Auto Upgrade) 기능으로 수행할 수 있으며, 두 방식 모두 다양한 릴리스 채널(Release Channel)을 선택하여 적용할 수 있습니다.
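
(참고) 자동 업그레이드는 system-upgrade-controller를 설치한 뒤 Plan 리소스로 선언하는 방식입니다. 아래는 공식 문서 예시를 단순화한 컨트롤 플레인용 Plan 스케치로, 네임스페이스와 필드 값은 버전에 따라 다를 수 있습니다(워커용 Plan은 별도로 정의).

# cat << EOF | kubectl apply -f -
# apiVersion: upgrade.cattle.io/v1
# kind: Plan
# metadata:
#   name: server-plan
#   namespace: system-upgrade
# spec:
#   concurrency: 1                      # 한 번에 한 노드씩
#   cordon: true                        # 업그레이드 전 cordon
#   serviceAccountName: system-upgrade
#   nodeSelector:
#     matchExpressions:
#     - key: node-role.kubernetes.io/control-plane
#       operator: Exists
#   upgrade:
#     image: rancher/rke2-upgrade
#   version: v1.34.3+rke2r3             # 또는 channel로 최신 버전 추적
# EOF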

 

최신 채널 목록 및 정보는 rke2 channel service API에서 확인할 수 있습니다.

수동 업그레이드 절차 : 먼저 컨트롤(서버) 노드를 하나씩 업그레이드 완료 → 이후 워커(에이전트) 노드를 업그레이드합니다.

 

컨트롤 플레인 업그레이드

# [k8s-node1] 컨트롤 플레인 노드 업그레이드 : Installation Script 사용 (INSTALL_RKE2_CHANNEL 또는 INSTALL_RKE2_VERSION 지정)

# 모니터링
while true; do curl -s http://192.168.10.12:30000 | grep Hostname; date; sleep 1; done
watch -d "kubectl get pod -n kube-system -owide --sort-by=.metadata.creationTimestamp | tac"
watch -d "kubectl get node"
watch -d etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
  member list --write-out=table

### 노드 상태 - 업그레이드 전 ###
# Every 2.0s: kubectl get ...  k8s-node1: Sat Feb 21 02:44:29 2026

# kube-scheduler-k8s-node1                     1/1     Running     0          18m   192.168.10.11   k8s-node1   <none>           <none>
# kube-controller-manager-k8s-node1            1/1     Running     0          18m   192.168.10.11   k8s-node1   <none>           <none>
# kube-apiserver-k8s-node1                     1/1     Running     0          18m   192.168.10.11   k8s-node1   <none>           <none>
# kube-proxy-k8s-node1                         1/1     Running     0          18m   192.168.10.11   k8s-node1   <none>           <none>
# etcd-k8s-node1                               1/1     Running     0          18m   192.168.10.11   k8s-node1   <none>           <none>
# rke2-canal-qbzkv                             2/2     Running     0          28m   192.168.10.12   k8s-node2   <none>           <none>
# kube-proxy-k8s-node2                         1/1     Running     0          28m   192.168.10.12   k8s-node2   <none>           <none>
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0          75m   10.42.0.4       k8s-node1   <none>           <none>
# rke2-canal-2gzwn                             2/2     Running     0          75m   192.168.10.11   k8s-node1   <none>           <none>
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0          75m   10.42.0.5       k8s-node1   <none>           <none>
# helm-install-rke2-runtimeclasses-lp4ph       0/1     Completed   0          75m   <none>          k8s-node1   <none>           <none>
# helm-install-rke2-metrics-server-stbfg       0/1     Completed   0          75m   <none>          k8s-node1   <none>           <none>

# Every 2.0s: kubectl get node          k8s-node1: Sat Feb 21 02:44:52 2026

# NAME        STATUS   ROLES                       AGE   VERSION
# k8s-node1   Ready    control-plane,etcd,master   76m   v1.33.8+rke2r1
# k8s-node2   Ready    <none>                      28m   v1.33.8+rke2r1

# Every 2.0s: etcdctl --endpoints=https://12...  k8s-node1: Sat Feb 21 02:45:01 2026

# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
# |        ID        | STATUS  |        NAME        |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
# | 6571fb7574e87dba | started | k8s-node1-3298a913 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
############################



# 버전 정보 확인
kubectl get node
# NAME        STATUS   ROLES                       AGE   VERSION
# k8s-node1   Ready    control-plane,etcd,master   76m   v1.33.8+rke2r1
# k8s-node2   Ready    <none>                      29m   v1.33.8+rke2r1
rke2 --version
# rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
# go version go1.24.12 X:boringcrypto

# 채널 정보 확인
curl -s https://update.rke2.io/v1-release/channels | jq .data
#   {
#     "id": "stable",
#     "type": "channel",
#     "links": {
#       "self": "https://update.rke2.io/v1-release/channels/stable"
#     },
#     "name": "stable",
#     "latest": "v1.34.3+rke2r3"
#   },
#   {
#     "id": "latest",
#     "type": "channel",
#     "links": {
#       "self": "https://update.rke2.io/v1-release/channels/latest"
#     },
#     "name": "latest",
#     "latest": "v1.35.0+rke2r3",
#     "latestRegexp": ".*",
#     "excludeRegexp": "(^[^+]+-|v1\\.25\\.5\\+rke2r1|v1\\.26\\.0\\+rke2r1)"
#   },
#   ...
#   {
#     "id": "v1.34",
#     "type": "channel",
#     "links": {
#       "self": "https://update.rke2.io/v1-release/channels/v1.34"
#     },
#     "name": "v1.34",
#     "latest": "v1.34.3+rke2r3",
#     "latestRegexp": "v1\\.34\\..*",
#     "excludeRegexp": "^[^+]+-"
#   },


# v1.34 버전 업그레이드! : 아래 Running scriptlet 과정에서 업그레이드 수행됨, app 통신 영향 없었음.
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.34 sh -
...
Running transaction
  Preparing        :                                                                                                           1/1
  Upgrading        : rke2-common-1.34.3~rke2r3-0.el9.aarch64                                                                   1/4
  Upgrading        : rke2-server-1.34.3~rke2r3-0.el9.aarch64                                                                   2/4
  Running scriptlet: rke2-server-1.34.3~rke2r3-0.el9.aarch64                                                                   2/4
  Running scriptlet: rke2-server-1.33.7~rke2r3-0.el9.aarch64                                                                   3/4
  Cleanup          : rke2-server-1.33.7~rke2r3-0.el9.aarch64                                                                   3/4
  Running scriptlet: rke2-server-1.33.7~rke2r3-0.el9.aarch64                                                                   3/4
  Running scriptlet: rke2-common-1.33.7~rke2r3-0.el9.aarch64                                                                   4/4
  Cleanup          : rke2-common-1.33.7~rke2r3-0.el9.aarch64                                                                   4/4
  Running scriptlet: rke2-common-1.33.7~rke2r3-0.el9.aarch64                                                                   4/4
  Verifying        : rke2-common-1.34.3~rke2r3-0.el9.aarch64                                                                   1/4
  Verifying        : rke2-common-1.33.7~rke2r3-0.el9.aarch64                                                                   2/4
  Verifying        : rke2-server-1.34.3~rke2r3-0.el9.aarch64                                                                   3/4
  Verifying        : rke2-server-1.33.7~rke2r3-0.el9.aarch64                                                                   4/4
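
(참고) 채널 대신 특정 버전으로 고정해서 설치/업그레이드하고 싶다면, 설치 스크립트의 INSTALL_RKE2_VERSION 환경 변수를 사용할 수 있습니다. 아래는 버전 문자열을 예시 값으로 넣은 간단한 스케치입니다.

# 특정 버전 고정 업그레이드 예시 (버전 문자열은 예시 값이므로 환경에 맞게 변경)
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.34.4+rke2r1 sh -
systemctl restart rke2-server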


### 노드 상태 - 업그레이드 후 ###
Every 2.0s: kubectl get pod -n kube-system -owide --sort-by=...  k8s-node1: Sat Feb 21 02:48:23 2026

# helm-install-rke2-canal-6l8m7                0/1     Completed   0              42s    192.168.10.11   k8s-node1   <none>           <none>
# helm-install-rke2-coredns-f4r26              0/1     Completed   0              42s    192.168.10.11   k8s-node1   <none>           <none>
# helm-install-rke2-metrics-server-9pp2v       0/1     Completed   0              42s    10.42.2.4       k8s-node2   <none>           <none>
# helm-install-rke2-runtimeclasses-tvdpm       0/1     Completed   0              42s    10.42.2.3       k8s-node2   <none>           <none>
# kube-controller-manager-k8s-node1            1/1     Running     0              61s    192.168.10.11   k8s-node1   <none>           <none>
# kube-scheduler-k8s-node1                     1/1     Running     0              65s    192.168.10.11   k8s-node1   <none>           <none>
# etcd-k8s-node1                               1/1     Running     0              69s    192.168.10.11   k8s-node1   <none>           <none>
# kube-proxy-k8s-node1                         1/1     Running     0              79s    192.168.10.11   k8s-node1   <none>           <none>
# kube-apiserver-k8s-node1                     1/1     Running     1 (115s ago)   112s   192.168.10.11   k8s-node1   <none>           <none>
# kube-proxy-k8s-node2                         1/1     Running     0              32m    192.168.10.12   k8s-node2   <none>           <none>
# rke2-canal-qbzkv                             2/2     Running     0              32m    192.168.10.12   k8s-node2   <none>           <none>
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0              79m    10.42.0.4       k8s-node1   <none>           <none>
# rke2-canal-2gzwn                             2/2     Running     0              79m    192.168.10.11   k8s-node1   <none>           <none>
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0              79m    10.42.0.5       k8s-node1   <none>           <none>
# NAME                                         READY   STATUS      RESTARTS       AGE    IP              NODE        NOMINATED NODE   READINESS GATES

# Every 2.0s: ...  k8s-node1: Sat Feb 21 02:48:44 2026

# NAME        STATUS   ROLES                       AGE   VERSION
# k8s-node1   Ready    control-plane,etcd,master   80m   v1.34.4+rke2r1
# k8s-node2   Ready    <none>                      32m   v1.33.8+rke2r1

Every 2.0s: etcdctl --endpoints=https://12...  k8s-node1: Sat Feb 21 02:48:57 2026

# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
# |        ID        | STATUS  |        NAME        |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
# | 6571fb7574e87dba | started | k8s-node1-3298a913 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
# +------------------+---------+--------------------+----------------------------+----------------------------+------------+
############################

# 확인
rke2 --version
# rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
# go version go1.24.12 X:boringcrypto

# 위 스크립트 설치 과정만으로 아래 처럼 파드들이 신규 재생성되었음
## 첫 번째(etcd, apiserver, kube-proxy) -> 두 번째(scheduler, kcm)
kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
# helm-install-rke2-canal-6l8m7                0/1     Completed   0               99s
# helm-install-rke2-coredns-f4r26              0/1     Completed   0               99s
# helm-install-rke2-metrics-server-9pp2v       0/1     Completed   0               99s
# helm-install-rke2-runtimeclasses-tvdpm       0/1     Completed   0               99s
# kube-controller-manager-k8s-node1            1/1     Running     0               118s
# kube-scheduler-k8s-node1                     1/1     Running     0               2m2s
# etcd-k8s-node1                               1/1     Running     0               2m6s
# kube-proxy-k8s-node1                         1/1     Running     0               2m16s
# kube-apiserver-k8s-node1                     1/1     Running     1 (2m52s ago)   2m49s
# kube-proxy-k8s-node2                         1/1     Running     0               33m
# rke2-canal-qbzkv                             2/2     Running     0               33m
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0               80m
# rke2-canal-2gzwn                             2/2     Running     0               80m
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0               80m

# repo 추가 및 기존 repo 삭제 확인
dnf repolist
# repo id                          repo name
# appstream                        Rocky Linux 9 - AppStream
# baseos                           Rocky Linux 9 - BaseOS
# extras                           Rocky Linux 9 - Extras
# rancher-rke2-1.34-stable         Rancher RKE2 1.34 (v1.34)
# rancher-rke2-common-stable       Rancher RKE2 Common (v1.34)
tree /etc/yum.repos.d/
# /etc/yum.repos.d/
# ├── rancher-rke2.repo
# ├── rocky-addons.repo
# ├── rocky-devel.repo
# ├── rocky-extras.repo
# └── rocky.repo

cat /etc/yum.repos.d/rancher-rke2.repo
# [rancher-rke2-common-stable]
# name=Rancher RKE2 Common (v1.34)
# baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
# enabled=1
# gpgcheck=1
# repo_gpgcheck=1
# gpgkey=https://rpm.rancher.io/public.key
# [rancher-rke2-1.34-stable]
# name=Rancher RKE2 1.34 (v1.34)
# baseurl=https://rpm.rancher.io/rke2/stable/1.34/centos/9/aarch64
# enabled=1
# gpgcheck=1
# repo_gpgcheck=1
# gpgkey=https://rpm.rancher.io/public.key
cat /etc/yum.repos.d/rancher-rke2.repo | grep -iE 'name|baseurl'
# name=Rancher RKE2 Common (v1.34)
# baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
# name=Rancher RKE2 1.34 (v1.34)
# baseurl=https://rpm.rancher.io/rke2/stable/1.34/centos/9/aarch64

# kube-system 파드 별 컨테이너 이미지 정보 출력
kubectl get pods -n kube-system -o custom-columns="POD_NAME:.metadata.name,IMAGES:.spec.containers[*].image"
# POD_NAME                                     IMAGES
# etcd-k8s-node1                               index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
# helm-install-rke2-canal-6l8m7                rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-coredns-f4r26              rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-metrics-server-9pp2v       rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-runtimeclasses-tvdpm       rancher/klipper-helm:v0.9.14-build20260210
# kube-apiserver-k8s-node1                     index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-controller-manager-k8s-node1            index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node1                         index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node2                         index.docker.io/rancher/hardened-kubernetes:v1.33.8-rke2r1-build20260210
# kube-scheduler-k8s-node1                     index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# rke2-canal-2gzwn                             rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-canal-qbzkv                             rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-coredns-rke2-coredns-559595db99-dslf8   rancher/hardened-coredns:v1.14.1-build20260206
# rke2-metrics-server-fdcdf575d-rd92v          rancher/hardened-k8s-metrics-server:v0.8.1-build20260206
kubectl get pods -n kube-system \
  -o custom-columns=\
POD:.metadata.name,\
CONTAINERS:.spec.containers[*].name,\
IMAGES:.spec.containers[*].image
# POD                                          CONTAINERS                 IMAGES
# etcd-k8s-node1                               etcd                       index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
# helm-install-rke2-canal-6l8m7                helm                       rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-coredns-f4r26              helm                       rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-metrics-server-9pp2v       helm                       rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-runtimeclasses-tvdpm       helm                       rancher/klipper-helm:v0.9.14-build20260210
# kube-apiserver-k8s-node1                     kube-apiserver             index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-controller-manager-k8s-node1            kube-controller-manager    index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node1                         kube-proxy                 index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node2                         kube-proxy                 index.docker.io/rancher/hardened-kubernetes:v1.33.8-rke2r1-build20260210
# kube-scheduler-k8s-node1                     kube-scheduler             index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# rke2-canal-2gzwn                             calico-node,kube-flannel   rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-canal-qbzkv                             calico-node,kube-flannel   rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-coredns-rke2-coredns-559595db99-dslf8   coredns                    rancher/hardened-coredns:v1.14.1-build20260206
# rke2-metrics-server-fdcdf575d-rd92v          metrics-server             rancher/hardened-k8s-metrics-server:v0.8.1-build20260206

# 설치 후 rke2 프로세스를 다시 시작해야 함 (Remember to restart the rke2 process after installing)
systemctl restart rke2-server

# 노드 정보 확인
kubectl get node -owide
# NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                  CONTAINER-RUNTIME
# k8s-node1   Ready    control-plane,etcd,master   83m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1
# k8s-node2   Ready    <none>                      35m   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1

워커노드 업그레이드

[k8s-node2]


# 특정 채널(예: 최신 버전)의 최신 버전으로 업그레이드하려면 해당 채널을 지정
rke2 --version
# rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
# go version go1.24.12 X:boringcrypto
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent INSTALL_RKE2_CHANNEL=v1.34 sh -
# Upgraded:
#   rke2-agent-1.34.4~rke2r1-0.el9.aarch64      
#   rke2-common-1.34.4~rke2r1-0.el9.aarch64     


# 확인
rke2 --version
# rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
# go version go1.24.12 X:boringcrypto

# repo 추가 및 기존 repo 삭제 확인
dnf repolist
# repo id                    repo name
# appstream                  Rocky Linux 9 - AppStream
# baseos                     Rocky Linux 9 - BaseOS
# extras                     Rocky Linux 9 - Extras
# rancher-rke2-1.34-stable   Rancher RKE2 1.34 (v1.34)
# rancher-rke2-common-stable Rancher RKE2 Common (v1.34)

# 설치 후 rke2 프로세스를 다시 시작 : Agent nodes
systemctl restart rke2-agent

[k8s-node1]

# kubeconfig 갱신 필요 없음
diff /etc/rancher/rke2/rke2.yaml ~/.kube/config
# > preferences: {}
yes | cp /etc/rancher/rke2/rke2.yaml ~/.kube/config ; echo

# 노드 확인
kubectl get node -owide
# NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                  CONTAINER-RUNTIME
# k8s-node1   Ready    control-plane,etcd,master   85m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1
# k8s-node2   Ready    <none>                      37m   v1.34.4+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1

# 워커 노드에 kube-proxy 파드만 신규 재기동됨!
kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
# kube-proxy-k8s-node2                         1/1     Running     0               49s
# kube-proxy-k8s-node1                         1/1     Running     0               3m4s
# helm-install-rke2-canal-6l8m7                0/1     Completed   0               6m38s
# helm-install-rke2-coredns-f4r26              0/1     Completed   0               6m38s
# helm-install-rke2-metrics-server-9pp2v       0/1     Completed   0               6m38s
# helm-install-rke2-runtimeclasses-tvdpm       0/1     Completed   0               6m38s
# kube-controller-manager-k8s-node1            1/1     Running     0               6m57s
# kube-scheduler-k8s-node1                     1/1     Running     0               7m1s
# etcd-k8s-node1                               1/1     Running     0               7m5s
# kube-apiserver-k8s-node1                     1/1     Running     1 (7m51s ago)   7m48s
# rke2-canal-qbzkv                             2/2     Running     0               38m
# rke2-metrics-server-fdcdf575d-rd92v          1/1     Running     0               85m
# rke2-canal-2gzwn                             2/2     Running     0               85m
# rke2-coredns-rke2-coredns-559595db99-dslf8   1/1     Running     0               85m
# NAME                                         READY   STATUS      RESTARTS        AGE

# kube-system 파드 별 컨테이너 이미지 정보 출력
kubectl get pods -n kube-system -o custom-columns="POD_NAME:.metadata.name,IMAGES:.spec.containers[*].image"
# POD_NAME                                     IMAGES
# etcd-k8s-node1                               index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
# helm-install-rke2-canal-6l8m7                rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-coredns-f4r26              rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-metrics-server-9pp2v       rancher/klipper-helm:v0.9.14-build20260210
# helm-install-rke2-runtimeclasses-tvdpm       rancher/klipper-helm:v0.9.14-build20260210
# kube-apiserver-k8s-node1                     index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-controller-manager-k8s-node1            index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node1                         index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-proxy-k8s-node2                         index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# kube-scheduler-k8s-node1                     index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
# rke2-canal-2gzwn                             rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-canal-qbzkv                             rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
# rke2-coredns-rke2-coredns-559595db99-dslf8   rancher/hardened-coredns:v1.14.1-build20260206
# rke2-metrics-server-fdcdf575d-rd92v          rancher/hardened-k8s-metrics-server:v0.8.1-build20260206

자동 업그레이드 v1.34 → v1.35

시스템 업그레이드 컨트롤러(System Upgrade Controller)로 RKE2 클러스터의 자동 업그레이드를 관리할 수 있습니다. (GitHub 링크).

 

GitHub - rancher/system-upgrade-controller: In your Kubernetes, upgrading your nodes

In your Kubernetes, upgrading your nodes. Contribute to rancher/system-upgrade-controller development by creating an account on GitHub.

github.com

 

Kubernetes CRD인 Plan 리소스를 활용해 업그레이드 대상 노드와 버전을 선언적으로 지정합니다.


Plan에서 레이블 선택기(nodeSelector)로 업그레이드 대상 노드를 지정할 수 있고, 컨트롤러가 Plan을 감시해 업그레이드 작업을 Job 형태로 실행하며, 작업이 성공하면 노드에 완료 레이블을 부여합니다.
(참고) Rancher로 관리 중인 RKE2 클러스터는 Rancher UI로 업그레이드하는 것이 권장됩니다.
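
컨트롤러는 업그레이드가 끝난 노드에 Plan 이름 기반의 완료 레이블(plan.upgrade.cattle.io/<plan 이름>, 값은 Plan 해시)을 남기므로, 업그레이드 이후에는 레이블 컬럼으로 간단히 확인해볼 수 있습니다. 아래는 이후 실습에서 만드는 server-plan / agent-plan 이름을 그대로 가정한 예시입니다.

# Plan 완료 레이블 확인 (레이블 키 형식: plan.upgrade.cattle.io/<plan 이름>)
kubectl get node -L plan.upgrade.cattle.io/server-plan -L plan.upgrade.cattle.io/agent-plan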

# 모니터링
while true; do curl -s http://192.168.10.12:30000 | grep Hostname; date; sleep 1; done
watch -d 'kubectl -n system-upgrade get plans -o wide; echo ; kubectl -n system-upgrade get jobs,pods'
watch -d "kubectl get node"
watch -d "kubectl get pod -n kube-system -owide --sort-by=.metadata.creationTimestamp | tac"

# system-upgrade-controller 설치
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
# customresourcedefinition.apiextensions.k8s.io/plans.upgrade.cattle.io created
# namespace/system-upgrade created
# serviceaccount/system-upgrade created
# role.rbac.authorization.k8s.io/system-upgrade-controller created
# clusterrole.rbac.authorization.k8s.io/system-upgrade-controller created
# clusterrole.rbac.authorization.k8s.io/system-upgrade-controller-drainer created
# rolebinding.rbac.authorization.k8s.io/system-upgrade created
# clusterrolebinding.rbac.authorization.k8s.io/system-upgrade created
# clusterrolebinding.rbac.authorization.k8s.io/system-upgrade-drainer created
# configmap/default-controller-env created
# deployment.apps/system-upgrade-controller created

# 확인
kubectl get deploy,pod,cm -n system-upgrade
# NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
# deployment.apps/system-upgrade-controller   1/1     1            1           16s

# NAME                                             READY   STATUS    RESTARTS   AGE
# pod/system-upgrade-controller-5f667989c7-c8nhb   1/1     Running   0          16s

# NAME                               DATA   AGE
# configmap/default-controller-env   11     16s
# configmap/kube-root-ca.crt         1      16s

kubectl logs -n system-upgrade -l app.kubernetes.io/name=system-upgrade-controller -f
# ...
# I0220 17:57:51.998970       1 event.go:389] "Event occurred" object="k8s-node1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="Started" message="system-upgrade-controller v0.19.0 (b0ff1f4) running as system-upgrade/system-upgrade-controller"

# 계획 작성 후 실행 및 확인
# plan 작성 및 실행
cat << EOF | kubectl apply -f -
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade
  channel: https://update.rke2.io/v1-release/channels/latest  # version: v1.35.0+rke2r3 , curl -s https://update.rke2.io/v1-release/channels | jq .data
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: DoesNotExist
  prepare:
    args:
    - prepare
    - server-plan
    image: rancher/rke2-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade
  channel: https://update.rke2.io/v1-release/channels/latest
EOF
# plan.upgrade.cattle.io/server-plan created
# plan.upgrade.cattle.io/agent-plan created
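
# (참고) Plan의 nodeSelector가 어떤 노드에 매칭되는지는 일반 kubectl 레이블 셀렉터로도 확인 가능
## 아래 셀렉터는 위 Plan 정의의 matchExpressions를 그대로 옮긴 확인용 예시
kubectl get node -l 'node-role.kubernetes.io/control-plane=true'   # server-plan 대상 노드
kubectl get node -l '!node-role.kubernetes.io/control-plane'       # agent-plan 대상 노드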

## 업그레이드 중
# Every 2.0s: ...  k8s-node1: Sat Feb 21 02:59:03 2026

# NAME        STATUS                     ROLES                       AGE   VERSION
# k8s-node1   Ready,SchedulingDisabled   control-plane,etcd,master   90m   v1.34.4+rke2r1
# k8s-node2   Ready                      <none>                      42m   v1.34.4+rke2r1

# 확인
kubectl get node -owide
NAME        STATUS   ROLES                       AGE    VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   108m   v1.35.0+rke2r3   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      97m    v1.35.0+rke2r3   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.aarch64   containerd://2.1.5-k3s1

#
kubectl -n system-upgrade get plans -o wide
# NAME                                                              STATUS     COMPLETIONS   DURATION   AGE
# apply-agent-plan-on-k8s-node2-with-58646b4639f2f26db71730-e1707   Running    0/1           55s        55s
# apply-server-plan-on-k8s-node1-with-58646b4639f2f26db7173-f5d08   Complete   1/1           51s        55s

kubectl -n system-upgrade get jobs
# NAME                                                              STATUS     COMPLETIONS   DURATION   AGE
# apply-agent-plan-on-k8s-node2-with-58646b4639f2f26db71730-e1707   Running    0/1           103s       103s
# apply-server-plan-on-k8s-node1-with-58646b4639f2f26db7173-f5d08   Complete   1/1           51s        103s

kubectl get pod -n system-upgrade -owide
# NAME                                                              READY   STATUS            RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
# apply-agent-plan-on-k8s-node2-with-58646b4639f2f26db71730-n8gkq   0/1     PodInitializing   0          6s      192.168.10.12   k8s-node2   <none>           <none>
# apply-agent-plan-on-k8s-node2-with-58646b4639f2f26db71730-vw8zd   0/1     Unknown           0          116s    192.168.10.12   k8s-node2   <none>           <none>
# apply-server-plan-on-k8s-node1-with-58646b4639f2f26db7173-c2t5b   0/1     Unknown           0          116s    192.168.10.11   k8s-node1   <none>           <none>
# apply-server-plan-on-k8s-node1-with-58646b4639f2f26db7173-x4mdq   0/1     Completed         0          72s     192.168.10.11   k8s-node1   <none>           <none>
# system-upgrade-controller-5f667989c7-c8nhb                        1/1     Running           0          2m48s   10.42.0.7       k8s-node1   <none>           <none>

# job 에 의해 생성된 파드가 업그레이드 관련 수행을 시도하니, 해당 파드는 상당한 권한의 rbac과 호스트의 모든 / 경로를 마운트로 사용 가능해야 되는 것으로 보임
# 실행되는 업그레이드 작업은 기본 노드에 변경 사항을 적용하기 위해 높은 권한이 필요합니다. 기본적으로 다음과 같이 구성됩니다.
## Host IPC, NET, and PID namespaces + The CAP_SYS_BOOT capability + Host root mounted at /host with read and write permissions
kubectl describe pod -n system-upgrade |grep ^Volumes: -A4
# Volumes:
#   host-root:
#     Type:          HostPath (bare host directory volume)
#     Path:          /
#     HostPathType:  Directory
# --
# Volumes:
#   host-root:
#     Type:          HostPath (bare host directory volume)
#     Path:          /
#     HostPathType:  Directory
# --
# Volumes:
#   host-root:
#     Type:          HostPath (bare host directory volume)
#     Path:          /
#     HostPathType:  Directory
# --
# Volumes:
#   host-root:
#     Type:          HostPath (bare host directory volume)
#     Path:          /
#     HostPathType:  Directory
# --
# Volumes:
#   etc-ssl:
#     Type:          HostPath (bare host directory volume)
#     Path:          /etc/ssl
#     HostPathType:  DirectoryOrCreate
...
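
# (참고) 볼륨 외에 호스트 네임스페이스 공유(hostIPC/hostPID/hostNetwork) 설정도 Pod 스펙에서 확인해볼 수 있음
## 일반적인 Pod 스펙 필드를 조회하는 예시이며, 값이 출력되지 않으면 해당 설정이 없는 파드
kubectl get pod -n system-upgrade -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}hostIPC={.spec.hostIPC}{"\t"}hostPID={.spec.hostPID}{"\t"}hostNetwork={.spec.hostNetwork}{"\n"}{end}'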

# 로그 확인
kubectl logs -n system-upgrade -l app.kubernetes.io/name=system-upgrade-controller
# object="system-upgrade/server-plan" fieldPath="" kind="Plan" apiVersion="upgrade.cattle.io/v1" type="Normal" reason="JobComplete" message="Job completed on Node k8s-node1"
# I0220 17:59:29.807629       1 event.go:389] "Event occurred" object="system-upgrade/server-plan" fieldPath="" kind="Plan" apiVersion="upgrade.cattle.io/v1" type="Normal" reason="Complete" message="Jobs complete for version v1.35.1-rke2r1. Hash: 58646b4639f2f26db717305a88bd392986d5f96e4cd53d78dc1852d9"
# I0220 18:00:38.810477       1 event.go:389] "Event occurred" object="system-upgrade/agent-plan" fieldPath="" kind="Plan" apiVersion="upgrade.cattle.io/v1" type="Normal" reason="JobComplete" message="Job completed on Node k8s-node2"
# I0220 18:00:38.822018       1 event.go:389] "Event occurred" object="system-upgrade/agent-plan" fieldPath="" kind="Plan" apiVersion="upgrade.cattle.io/v1" type="Normal" reason="Complete" message="Jobs complete for version v1.35.1-rke2r1. Hash: 58646b4639f2f26db717305a88bd392986d5f96e4cd53d78dc1852d9"
# ...

Cluster API

Cluster API는 Kubernetes 프로젝트에서 제공하는 선언적 API와 도구 모음으로, 여러 Kubernetes 클러스터의 프로비저닝, 업그레이드, 운영을 단순화합니다. 관리자가 YAML 매니페스트만으로 AWS, 온프레미스(IDC), 베어메탈 등 다양한 인프라 환경에 동일한 방식으로 Kubernetes 워크로드 클러스터를 생성하고 관리할 수 있게 해줍니다.

 

 

Management Plane (관리 평면) 은 Cluster API가 실행되는 컨트롤 플레인의 역할을 합니다. 이 평면의 핵심인 Cluster API Core System은 Admin이 배포한 클러스터 매니페스트를 수신하여 원하는 상태(Desired State)와 실제 상태를 지속적으로 조정(Reconciliation)합니다.

 

Cluster API Providers는 인프라별로 구분됩니다. AWS Provider는 Amazon Web Services API를 호출하여 EC2 인스턴스, VPC 등 Kubernetes 클러스터에 필요한 리소스를 프로비저닝합니다. 반면 BYOH(Bring Your Own Host) Provider는 IDC 환경이나 베어메탈 서버처럼 기존 호스트를 활용하는 온프레미스 배포에 적합합니다.

 

Administrator가 클러스터 매니페스트를 배포하면, Core System은 적절한 Provider(AWS 또는 BYOH)에 프로비저닝 작업을 위임합니다. 각 Provider는 해당 인프라의 API를 통해 Workload Kubernetes 클러스터(v1.25)를 생성하고, 생성된 클러스터의 상태는 다시 Core System으로 보고되어 선언된 상태로 유지되도록 관리됩니다.

 

이를 통해 멀티클라우드와 하이브리드 환경에서도 일관된 방식으로 Kubernetes 클러스터를 관리할 수 있으며, RKE2 같은 배포판과 함께 사용하면 클러스터 라이프사이클 관리의 자동화와 표준화를 더욱 강화할 수 있습니다.

 

Cluster API의 핵심 개념은 "쿠버네티스 라이프사이클을 쿠버네티스 스타일로" 관리한다는 것입니다. 애플리케이션 관리에서 Deployment가 ReplicaSet을, ReplicaSet이 Pod를 관리하는 것처럼, 인프라에서는 MachineDeployment가 MachineSet을, MachineSet이 Machine(노드)을 관리합니다. 즉, 노드를 마치 파드처럼 선언적으로 다룰 수 있습니다.
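
이 대응 관계 덕분에 Deployment 레플리카를 조절하듯 MachineDeployment 레플리카를 조절해 워커 노드 수를 늘리거나 줄일 수 있습니다. 아래는 뒤의 실습에서 만드는 워크로드 클러스터를 가정한 간단한 예시로, MachineDeployment 이름은 환경마다 다르므로 먼저 조회한 값을 사용해야 합니다.

# Deployment를 스케일하듯 MachineDeployment를 스케일하면 워커 노드 수가 조정됨
kubectl get machinedeployments
kubectl scale machinedeployment <machinedeployment-이름> --replicas=5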

 

 

왼쪽 그림의 apps/v1 영역은 익숙한 구조입니다. 1개 서비스 = 1개 Deployment이며, Deployment → ReplicaSet → Pod 계층으로 애플리케이션 인스턴스를 관리합니다. cluster.x-k8s.io/v1beta1 영역은 이와 대응되는 Cluster API 리소스입니다. 1개 노드 풀 = 1개 MachineDeployment이며, MachineDeployment → MachineSet → Machine 계층으로 워커 노드를 관리합니다.

 

오른쪽 그림은 하나의 클러스터가 Cluster API Custom Resource로 추상화된 구조를 보여줍니다. Cluster 객체가 최상위이며, Master Nodes는 KubeadmControlPlane이 여러 Machine을 관리하는 형태로 HA 컨트롤 플레인을 구성합니다. Worker Nodes는 MachineDeployment → MachineSet → Machine 계층으로 워커 노드 풀을 선언적으로 스케일링·업데이트할 수 있습니다.

 

출처 - https://www.youtube.com/watch?v=dWLEUfXloA8

 

Cluster API 구성요소는 다음과 같습니다.

 

Management Cluster: Cluster API 컨트롤러들이 실행되는 Kubernetes 클러스터입니다. Admin이 클러스터 매니페스트(Cluster, MachineDeployment 등)를 배포하는 대상이며, Core System과 각종 Provider 컨트롤러가 여기서 동작합니다. 이 클러스터가 다른 워크로드 클러스터의 생성·수정·삭제를 관리하는 "관리자의 역할"을 합니다.

 

Workload Cluster: Cluster API에 의해 프로비저닝되고 관리되는 실제 Kubernetes 클러스터입니다. 사용자의 애플리케이션이 배포되는 대상이므로, Management Cluster와 구분하여 Workload Cluster라 부릅니다. Management Cluster에서 정의한 Cluster 리소스의 desired state에 맞춰 Provider가 인프라에 생성한 노드들로 구성됩니다.

 

Infrastructure Provider: Management Cluster에서 정의된 Cluster, Machine 등의 리소스를 실제 인프라(EC2, VM, 베어메탈 등)로 변환하는 컨트롤러입니다. AWS Provider, vSphere Provider, BYOH Provider 등 인프라 종류별로 존재하며, 각 Provider는 해당 클라우드/온프레미스 API를 호출해 머신 생성, 네트워크 구성, 부트스트랩 등을 수행합니다. Cluster API 규격에 따라 각 Provider는 구현체를 만들어서 배포합니다.
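
이 관계를 리소스 관점에서 보면, Cluster 객체가 컨트롤 플레인 구현과 인프라 Provider 리소스를 참조(ref)로 연결하는 구조입니다. 아래는 Docker Provider(CAPD)를 가정해 v1beta1 필드 기준으로 단순화한 스케치이며, 사용하는 Cluster API 버전에 따라 apiVersion과 필드 구조는 달라질 수 있습니다.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-cluster                 # 예시 이름
spec:
  controlPlaneRef:                      # 컨트롤 플레인 구현(여기서는 kubeadm 기반)을 참조
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-control-plane
  infrastructureRef:                    # 인프라 Provider 리소스(여기서는 CAPD의 DockerCluster)를 참조
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: example-cluster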

 

Kubernetes ownerReferences는 '이 리소스를 누가 소유(owner)하고 있는가?'를 명시하여 자동 정리(Garbage Collection)를 가능하게 하는 메커니즘입니다. (아래 목록 뒤에 간단한 예시 스케치를 덧붙였습니다.)

  • 가비지 컬렉션 (GC): 부모 객체가 삭제되면 컨트롤러가 ownerReferences를 검사하여, 해당 부모를 참조하는 모든 자식 객체를 연쇄적으로 삭제합니다.
  • 자동 설정: 사용자가 직접 작성하지 않으며, 컨트롤러가 자동으로 기입합니다. 예: Deployment 생성 시 ReplicaSet에 Deployment를 owner로 설정, ReplicaSet 생성 시 Pod에 ReplicaSet을 owner로 설정합니다.
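
예를 들어 Cluster API 환경에서 워커 Machine을 조회해 보면 아래와 비슷한 형태의 ownerReferences가 자동으로 기록되어 있습니다. 이름과 uid는 자리표시자 값을 넣은 스케치입니다.

# Machine 메타데이터 스케치 : MachineSet이 owner로 기록되어, MachineSet 삭제 시 이 Machine도 함께 GC 됩니다.
metadata:
  name: capi-quickstart-md-0-example            # 예시 이름
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineSet                            # 이 Machine을 소유한 상위 리소스
    name: capi-quickstart-md-0-example
    uid: 00000000-0000-0000-0000-000000000000   # 자리표시자 uid
    controller: true
    blockOwnerDeletion: true

# 실제 값은 예를 들어 아래처럼 확인할 수 있습니다.
kubectl get machine -o jsonpath='{range .items[*]}{.metadata.name}{" -> owner: "}{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}{end}'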

Cluster API를 구축한 배경

Kubernetes는 클러스터가 정상 작동하려면 여러 구성 요소가 올바르게 설정되어야 하는 복잡한 시스템입니다. 이 점이 사용자에게 걸림돌이 될 수 있음을 인식한 커뮤니티는 부트스트래핑 프로세스 간소화에 집중했습니다.
100개가 넘는 Kubernetes 배포판과 설치 프로그램이 있으며, 각각 클러스터 및 지원 인프라에 대한 기본 구성이 다릅니다.

SIG Cluster Lifecycle은 공통 설치 문제를 해결할 수 있는 단일 도구의 필요성을 인식하고 kubeadm을 개발했습니다. kubeadm은 모범 사례에 따라 클러스터 부트스트래핑에 특화된 도구로 설계되었으며, 다른 설치 프로그램에서 재사용할 수 있도록 구성 작업량을 줄이는 것이 핵심 원칙이었습니다. 출시 이후 Kubespray, minikube, kind 등 여러 도구의 기본 부트스트래핑 엔진으로 자리 잡았습니다.

그러나 kubeadm과 같은 도구는 설치 복잡성은 줄여주지만, 클러스터의 일상적 관리나 Kubernetes 환경의 장기적 유지 관리는 다루지 않습니다. 프로덕션 환경에서는 여전히 이런 질문들에 직면합니다.

  • 다양한 인프라 제공업체·위치에 걸쳐 머신, 로드 밸런서, VPC 등을 일관되게 프로비저닝하려면?
  • 클러스터 업그레이드·삭제를 포함한 수명 주기 관리를 자동화하려면?
  • 여러 클러스터를 확장 가능하게 관리하려면?

SIG Cluster Lifecycle은 이런 격차를 메우기 위해 Cluster API를 시작했습니다. 클러스터 생성·구성·관리를 자동화하는 선언적 Kubernetes 스타일 API를 구축하고, 필요한 인프라 공급자(AWS, Azure, vSphere 등)와 부트스트랩 공급자(kubeadm 기본)를 플러그인 형태로 지원하도록 설계했습니다.

Cluster API의 목표

  • 선언적 API로 Kubernetes 표준을 준수하는 클러스터 수명 주기(생성, 확장, 업그레이드, 삭제) 관리
  • 온프레미스와 클라우드 환경 모두에서 동작
  • 공통 연산 정의, 기본 구현 제공, 구현 교체 가능한 플러그인 구조
  • 기존 생태계(노드 문제 감지기, 클러스터 자동 확장기, SIG-Multi-cluster 등) 재사용·통합
  • 기존 클러스터 라이프사이클 도구가 점진적으로 Cluster API를 채택할 수 있는 전환 경로 제공

Cluster API 실습

목표: kind로 Management 클러스터를 배포한 뒤, Docker(Docker Provider)를 사용해 Workload 클러스터를 프로비저닝합니다. (윈도우 WSL2 + Docker 환경에서 실습 가능)
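
실습 전에 docker, kind, kubectl이 준비되어 있는지 간단히 확인하고 시작하면 좋습니다(버전 값은 환경마다 다릅니다).

# 사전 준비 확인
docker version --format '{{.Server.Version}}'
kind version
kubectl version --client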

빠른 설치 방법

  1. clusterctl (CLI) – Cluster API 관리 클러스터의 수명 주기를 CLI로 관리합니다. clusterctl은 간단하고 빠른 시작을 위해 설계되었으며, 공급자 구성 요소를 정의하는 YAML을 자동으로 가져와 설치합니다. 공급자 관리 모범 사례와 업그레이드 등 2일차 운영 가이드가 문서에 포함되어 있습니다.
  2. Cluster API Operator – clusterctl 기반 Kubernetes Operator로, 선언적 방식으로 Management 클러스터 내 Cluster API 공급자 생명 주기를 관리합니다. 일상 작업 단순화와 GitOps 기반 워크플로 자동화를 목표로 합니다.

Management 클러스터 설치 및 구성

  • Cluster API 사용을 위해 kubectl로 접근 가능한 기존 Kubernetes 클러스터가 필요합니다.
  • 설치 시 해당 클러스터는 공급자 구성 요소가 설치된 Management 클러스터로 변환되므로, 애플리케이션 워크로드와 분리된 별도 클러스터로 두는 것이 권장됩니다.
  • 일반적인 절차는 로컬 임시 부트스트랩 클러스터를 먼저 만든 뒤, 이를 이용해 인프라 공급자로 목표 Management 클러스터를 프로비저닝하는 방식입니다.

버전 호환성 (Cluster API v1.12.2)

클러스터 유형  | 지원 Kubernetes 버전
Management     | v1.31.x ~ v1.35.x
Workload       | v1.29.x ~ v1.35.x

[01] kind k8s에 관리용 Management 클러스터 설치 init

# 작업 디렉터리 생성
mkdir capi-docker && cd capi-docker

# (TS)
docker context ls
NAME              DESCRIPTION                               DOCKER ENDPOINT                                ERROR
default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                    
desktop-linux *   Docker Desktop                            unix:///Users/devlos/.docker/run/docker.sock   

# 중요! 혹시 미삭제 컨테이너 있는지 확인
docker ps -a

# k8s 설치
kind create cluster --name myk8s --image kindest/node:v1.35.0 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/devlos/.docker/run/docker.sock  # 대부분 이 값 사용: /var/run/docker.sock
    containerPath: /var/run/docker.sock
  extraPortMappings:
  - containerPort: 30000     # sample
    hostPort: 30000
  - containerPort: 30001     # kube-ops-view
    hostPort: 30001
EOF

# (옵션) kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 \
  --set service.main.type=NodePort,service.main.ports.http.nodePort=30001 \
  --set env.TZ="Asia/Seoul" --namespace kube-system
# NAMESPACE: kube-system
# STATUS: deployed
# REVISION: 1
# DESCRIPTION: Install complete
# TEST SUITE: None
# NOTES:
# 1. Get the application URL by running these commands:
#   export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
#   export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
#   echo http://$NODE_IP:$NODE_PORT
open "http://127.0.0.1:30001/#scale=1.5"


# Install clusterctl
## mac 사용자
brew install clusterctl  

## Windows WSL2 사용자 (Linux amd64)
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.12.2/clusterctl-linux-amd64 -o clusterctl
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl

## 버전 확인
clusterctl version -o json | jq
# {
#   "clusterctl": {
#     "major": "1",
#     "minor": "12",
#     "gitVersion": "v1.12.3",
#     "gitCommit": "Homebrew",
#     "gitTreeState": "clean",
#     "buildDate": "2026-02-17T10:43:04Z",
#     "goVersion": "go1.26.0",
#     "compiler": "gc",
#     "platform": "darwin/arm64"
#   }
# }

# [Docker 프로바이더] Initialize the management cluster : 현재 k8s 를 관리 클러스터로 변환
## Docker 프로바이더는 프로덕션 환경에 사용하도록 설계되지 않았으며 개발 환경 전용
## ClusterTopology(관리형 토폴로지 및 ClusterClass 지원)를 활성화하는 데 필요한 기능은 다음과 같이 활성화
## https://cluster-api.sigs.k8s.io/tasks/experimental-features/experimental-features
export CLUSTER_TOPOLOGY=true # Enable the experimental Cluster topology feature
# 설치 시 cert-manager 버전 호환 문제로 오류가 발생하면 다음과 같이 버전을 강제 지정
mkdir -p ~/.cluster-api
cat > ~/.cluster-api/clusterctl.yaml << 'EOF'
cert-manager:
  version: "v1.19.1"
EOF
clusterctl init --infrastructure docker # Initialize the management cluster
# Installing cert-manager version="v1.19.1"
# Waiting for cert-manager to be available...
# spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
# Installing provider="cluster-api" version="v1.12.3" targetNamespace="capi-system"
# spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
# Installing provider="bootstrap-kubeadm" version="v1.12.3" targetNamespace="capi-kubeadm-bootstrap-system"
# spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
# Installing provider="control-plane-kubeadm" version="v1.12.3" targetNamespace="capi-kubeadm-control-plane-system"
# spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
# Installing provider="infrastructure-docker" version="v1.12.3" targetNamespace="capd-system"
# spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.

# Your management cluster has been initialized successfully!


# 확인
## 관련 crd 설치
kubectl get crd
kubectl get crd | grep x-k8s
# clusterclasses.cluster.x-k8s.io                              2026-02-20T19:45:46Z
# clusterresourcesetbindings.addons.cluster.x-k8s.io           2026-02-20T19:45:46Z
# clusterresourcesets.addons.cluster.x-k8s.io                  2026-02-20T19:45:46Z
# clusters.cluster.x-k8s.io                                    2026-02-20T19:45:46Z
# devclusters.infrastructure.cluster.x-k8s.io                  2026-02-20T19:45:48Z
# devclustertemplates.infrastructure.cluster.x-k8s.io          2026-02-20T19:45:48Z
# devmachines.infrastructure.cluster.x-k8s.io                  2026-02-20T19:45:48Z
# devmachinetemplates.infrastructure.cluster.x-k8s.io          2026-02-20T19:45:48Z
# dockerclusters.infrastructure.cluster.x-k8s.io               2026-02-20T19:45:48Z
# dockerclustertemplates.infrastructure.cluster.x-k8s.io       2026-02-20T19:45:48Z
# dockermachinepools.infrastructure.cluster.x-k8s.io           2026-02-20T19:45:48Z
# dockermachinepooltemplates.infrastructure.cluster.x-k8s.io   2026-02-20T19:45:48Z
# dockermachines.infrastructure.cluster.x-k8s.io               2026-02-20T19:45:48Z
# dockermachinetemplates.infrastructure.cluster.x-k8s.io       2026-02-20T19:45:48Z
# extensionconfigs.runtime.cluster.x-k8s.io                    2026-02-20T19:45:46Z
# ipaddressclaims.ipam.cluster.x-k8s.io                        2026-02-20T19:45:46Z
# ipaddresses.ipam.cluster.x-k8s.io                            2026-02-20T19:45:47Z
# kubeadmconfigs.bootstrap.cluster.x-k8s.io                    2026-02-20T19:45:47Z
# kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io            2026-02-20T19:45:47Z
# kubeadmcontrolplanes.controlplane.cluster.x-k8s.io           2026-02-20T19:45:48Z
# kubeadmcontrolplanetemplates.controlplane.cluster.x-k8s.io   2026-02-20T19:45:48Z
# machinedeployments.cluster.x-k8s.io                          2026-02-20T19:45:47Z
# machinedrainrules.cluster.x-k8s.io                           2026-02-20T19:45:47Z
# machinehealthchecks.cluster.x-k8s.io                         2026-02-20T19:45:47Z
# machinepools.cluster.x-k8s.io                                2026-02-20T19:45:47Z
# machines.cluster.x-k8s.io                                    2026-02-20T19:45:47Z
# machinesets.cluster.x-k8s.io                                 2026-02-20T19:45:47Z
# providers.clusterctl.cluster.x-k8s.io                        2026-02-20T19:39:14Z

## capd-system, capi-(kubeadm-X/Y, system), cert-manager 네임스페이스가 생성
kubectl get pod -A
# NAMESPACE                           NAME                                                             READY   STATUS              RESTARTS   AGE
# capd-system                         capd-controller-manager-54755cdd6-2lpxm                          0/1     ContainerCreating   0          100s
# capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-94d8964d9-m6pk8        1/1     Running             0          102s
# capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-6796744c76-xmd2h   0/1     ContainerCreating   0          101s
# capi-system                         capi-controller-manager-59c5798655-bwwh4                         1/1     Running             0          102s
# cert-manager                        cert-manager-598d877b78-nhdl7                                    1/1     Running             0          7m14s
# cert-manager                        cert-manager-cainjector-6b5777d564-stpcj                         1/1     Running             0          7m15s
# cert-manager                        cert-manager-webhook-5d9fc6b4ff-b8gzz                            1/1     Running             0          7m12s
# kube-system                         coredns-7d764666f9-wkw7v                                         1/1     Running             0          8m40s
# kube-system                         coredns-7d764666f9-x9cfr                                         1/1     Running             0          8m40s
# kube-system                         etcd-myk8s-control-plane                                         1/1     Running             0          8m47s
# kube-system                         kindnet-q58t4                                                    1/1     Running             0          8m40s
# kube-system                         kube-apiserver-myk8s-control-plane                               1/1     Running             0          8m47s
# kube-system                         kube-controller-manager-myk8s-control-plane                      1/1     Running             0          8m47s
# kube-system                         kube-ops-view-5c64986f74-m4wz9                                   1/1     Running             0          8m37s
# kube-system                         kube-proxy-k8bwj                                                 1/1     Running             0          8m40s
# kube-system                         kube-scheduler-myk8s-control-plane                               1/1     Running             0          8m48s
# local-path-storage                  local-path-provisioner-67b8995b4b-26hq6                          1/1     Running             0          8m40s

[02] 관리용 Management K8S 클러스터 정보 확인

## capd-system, capi-(kubeadm-X/Y, system), cert-manager 네임스페이스가 생성
kubectl get pod -A
NAMESPACE                           NAME                                                            READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-7c9d67ffdf-7npsd                        1/1     Running   0          3m10s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-bd5f89bbd-9c9ng       1/1     Running   0          3m10s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-55c48d9b5-bckj5   1/1     Running   0          3m10s
capi-system                         capi-controller-manager-6cc7b949c4-rmd7h                        1/1     Running   0          3m11s
cert-manager                        cert-manager-598d877b78-9lkmd                                   1/1     Running   0          3m37s
cert-manager                        cert-manager-cainjector-6b5777d564-7mfzz                        1/1     Running   0          3m37s
cert-manager                        cert-manager-webhook-5d9fc6b4ff-slscg                           1/1     Running   0          3m37s
...

## Enabling Experimental Features 정보 확인 : ClusterTopology=true , InPlaceUpdates=false 등
kubectl describe -n capi-system deployment.apps/capi-controller-manager | grep feature-gates
    #   --feature-gates=MachinePool=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false,ReconcilerRateLimiting=false,InPlaceUpdates=false,MachineTaintPropagation=false

# 프로바이더 (타입별)확인 : CAPI 구성요소가 설치된 상태
kubectl get providers.clusterctl.cluster.x-k8s.io -A
# NAMESPACE                           NAME                    AGE     TYPE                     PROVIDER      VERSION
# capd-system                         infrastructure-docker   2m47s   InfrastructureProvider   docker        v1.12.3
# capi-kubeadm-bootstrap-system       bootstrap-kubeadm       2m49s   BootstrapProvider        kubeadm       v1.12.3
# capi-kubeadm-control-plane-system   control-plane-kubeadm   2m48s   ControlPlaneProvider     kubeadm       v1.12.3
# capi-system                         cluster-api             2m49s   CoreProvider             cluster-api   v1.12.3

# CAPI의 핵심 컨트롤러 집합 : Cluster / MachineDeployment / MachineSet / Machine CRD 관리, 전체 reconcile orchestration 담당
kubectl get providers -n capi-system cluster-api -o yaml
providerName: cluster-api
type: CoreProvider
version: v1.12.3

# 노드를 Kubernetes로 부팅시키는 역할 : cloud-init user-data 생성, kubeadm join/init config 생성
kubectl get providers -n capi-kubeadm-bootstrap-system bootstrap-kubeadm -o yaml
providerName: kubeadm
type: BootstrapProvider
version: v1.12.3

# Control Plane 전용 Machine 관리 : KubeadmControlPlane 리소스 관리, Control Plane 노드 스케일링, etcd 포함 업그레이드 관리
kubectl get providers -n capi-kubeadm-control-plane-system control-plane-kubeadm -o yaml
providerName: kubeadm
type: ControlPlaneProvider
version: v1.12.3

# 실제 인프라 리소스 생성 담당 : 실제 Docker 컨테이너를 VM처럼 생성, Dev/Test 용도 (CAPD)
kubectl get providers -n capd-system infrastructure-docker -o yaml
providerName: docker
type: InfrastructureProvider
version: v1.12.3

[03] cert-manager 확인

kubectl get crd | grep cert
# certificaterequests.cert-manager.io                          2026-02-20T19:39:20Z
# certificates.cert-manager.io                                 2026-02-20T19:39:20Z
# challenges.acme.cert-manager.io                              2026-02-20T19:39:20Z
# clusterissuers.cert-manager.io                               2026-02-20T19:39:20Z
# issuers.cert-manager.io                                      2026-02-20T19:39:20Z
# orders.acme.cert-manager.io                                  2026-02-20T19:39:20Z

kubectl get deploy,pod,svc,ep,cm,secret,sa -n cert-manager
# NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
# deployment.apps/cert-manager              1/1     1            1           11m
# deployment.apps/cert-manager-cainjector   1/1     1            1           11m
# deployment.apps/cert-manager-webhook      0/1     1            0           11m

# NAME                                           READY   STATUS    RESTARTS      AGE
# pod/cert-manager-598d877b78-nhdl7              1/1     Running   1 (32s ago)   11m
# pod/cert-manager-cainjector-6b5777d564-stpcj   1/1     Running   1 (32s ago)   11m
# pod/cert-manager-webhook-5d9fc6b4ff-b8gzz      0/1     Running   1 (32s ago)   11m

# NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
# service/cert-manager              ClusterIP   10.96.77.176    <none>        9402/TCP           11m
# service/cert-manager-cainjector   ClusterIP   10.96.211.52    <none>        9402/TCP           11m
# service/cert-manager-webhook      ClusterIP   10.96.122.229   <none>        443/TCP,9402/TCP   11m

# NAME                                ENDPOINTS          AGE
# endpoints/cert-manager              10.244.0.7:9402    11m
# endpoints/cert-manager-cainjector   10.244.0.11:9402   11m
# endpoints/cert-manager-webhook                         11m

# NAME                         DATA   AGE
# configmap/kube-root-ca.crt   1      12m

# NAME                             TYPE     DATA   AGE
# secret/cert-manager-webhook-ca   Opaque   3      5m50s

# NAME                                     AGE
# serviceaccount/cert-manager              12m
# serviceaccount/cert-manager-cainjector   12m
# serviceaccount/cert-manager-webhook      12m
# serviceaccount/default                   12m

#
kubectl get issuers.cert-manager.io -A                    
# NAMESPACE                           NAME                                           READY   AGE
# capd-system                         capd-selfsigned-issuer                         True    6m1s
# capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-selfsigned-issuer       True    6m2s
# capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-selfsigned-issuer   True    6m1s
# capi-system                         capi-selfsigned-issuer                         True    6m3s

kubectl get certificaterequests.cert-manager.io -A -owide
# NAMESPACE                           NAME                                        APPROVED   DENIED   READY   ISSUER                                         REQUESTER                                         STATUS                                         AGE
# capd-system                         capd-serving-cert-1                         True                True    capd-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   6m11s
# capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert-1       True                True    capi-kubeadm-bootstrap-selfsigned-issuer       system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   6m13s
# capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert-1   True                True    capi-kubeadm-control-plane-selfsigned-issuer   system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   6m12s
# capi-system                         capi-serving-cert-1                         True                True    capi-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   6m14s

kubectl get certificates.cert-manager.io -A -owide
# NAMESPACE                           NAME                                      READY   SECRET                                            ISSUER                                         STATUS                                          AGE
# capd-system                         capd-serving-cert                         True    capd-webhook-service-cert                         capd-selfsigned-issuer                         Certificate is up to date and has not expired   6m24s
# capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert       True    capi-kubeadm-bootstrap-webhook-service-cert       capi-kubeadm-bootstrap-selfsigned-issuer       Certificate is up to date and has not expired   6m25s
# capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert   True    capi-kubeadm-control-plane-webhook-service-cert   capi-kubeadm-control-plane-selfsigned-issuer   Certificate is up to date and has not expired   6m24s
# capi-system                         capi-serving-cert                         True    capi-webhook-service-cert                         capi-selfsigned-issuer                         Certificate is up to date and has not expired   6m26s

[04] 첫 번째 워크로드(Workload) 클러스터 생성 및 확인

Cluster API에서 Workload 클러스터를 생성할 때 Cluster(실제 인스턴스), ClusterClass(설계도), Templates(부품)가 어떻게 연결되는지 아래 다이어그램으로 정리할 수 있습니다.

주요 구성 요소

구성 요소       | 리소스                        | 역할
Control Plane   | KubeadmControlPlaneTemplate   | 컨트롤 플레인 생성 방식 정의
Infrastructure  | DockerMachineTemplate         | 노드를 Docker 컨테이너(CAPD)로 생성하도록 정의
Patches         | variables 기반                | variables 값에 따라 YAML 구조를 동적으로 변경

 

Patches 예시: ClusterClass의 patches 섹션은 사용자 변수에 따라 리소스를 실시간으로 수정합니다. (변수가 전달되는 형태는 아래 목록 뒤의 스케치를 참고하세요.)

  • imageRepository – 컨테이너 이미지 저장소 변경
  • etcdImageTag, coreDNSImageTag – 특정 컴포넌트 버전 지정
  • PodSecurityStandard – API 서버 Admission Configuration에 Pod Security 정책 자동 삽입
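
예를 들어 사용자는 Cluster 리소스의 spec.topology.variables에 값을 넣고, ClusterClass의 patches가 그 값을 읽어 템플릿을 수정합니다. 아래는 뒤에서 생성하는 capi-quickstart.yaml의 구조를 v1beta1 필드 기준으로 단순화한 스케치입니다. 변수 이름과 값은 예시이며, 실제 필드명은 생성된 yaml을 따르는 것이 안전합니다.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  topology:
    class: quick-start               # 사용할 ClusterClass(설계도)
    version: v1.34.3                 # 워크로드 클러스터 Kubernetes 버전
    variables:                       # patches가 참조하는 사용자 변수
    - name: imageRepository
      value: ""                      # 예시: 비워두면 기본 이미지 저장소 사용
    - name: podSecurityStandard
      value:
        enabled: false               # 예시: PSS 비활성화
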
# 모니터링
watch -d "docker ps ; echo ; clusterctl describe cluster capi-quickstart"
# CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                                                             NAMES
# 322c18c5580f   kindest/node:v1.35.0   "/usr/local/bin/entr…"   14 minutes ago   Up 2 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane

# Error: clusters.cluster.x-k8s.io "capi-quickstart" not found
watch -d kubectl get cluster -o wide  # or kubectl get cluster -w
watch -d kubectl get machines  # watch -d "docker ps ; echo ; kubectl get machines"
watch -d kubectl get pod -A

# 첫 번째 워크로드 구성을 위한 환경 변수 설정 : 필요에 맞게 수정.
## The list of service CIDR, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.20.0.0/16"]

## The list of pod CIDR, default ["192.168.0.0/16"]
export POD_CIDR=["10.10.0.0/16"]

## The service domain, default "cluster.local"
export SERVICE_DOMAIN="myk8s-1.local"

## PSS Disable
export POD_SECURITY_STANDARD_ENABLED="false"


# Generating the cluster configuration : 워크로드 클러스터 생성 --dry-run
## https://cluster-api.sigs.k8s.io/clusterctl/commands/generate-cluster
clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.34.3 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-quickstart.yaml

# Cluster, Machines, Machine Deployments 등과 같은 Cluster API 객체의 미리 정의된 목록이 포함된 YAML 파일이 생성됨.
open capi-quickstart.yaml
# cat capi-quickstart.yaml | grep -E '^apiVersion:|^kind:'
# apiVersion: cluster.x-k8s.io/v1beta2
# kind: ClusterClass                       #  ClusterClass 정의 (설계도)
# apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
# kind: DockerClusterTemplate
# apiVersion: controlplane.cluster.x-k8s.io/v1beta2
# kind: KubeadmControlPlaneTemplate
# apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
# kind: DockerMachineTemplate
# apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
# kind: DockerMachineTemplate
# apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
# kind: DockerMachinePoolTemplate
# apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
# kind: KubeadmConfigTemplate
# apiVersion: cluster.x-k8s.io/v1beta2
# kind: Cluster                           # Cluster (Topology 기반 실제 클러스터)
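
# (참고) 위에서 export 한 값들이 생성된 yaml에 치환되어 들어갔는지, 값 자체를 grep 해서 확인해볼 수 있음
grep -nE 'myk8s-1.local|10\.20\.0\.0|10\.10\.0\.0' capi-quickstart.yaml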


# Apply the workload cluster
kubectl apply -f capi-quickstart.yaml
# clusterclass.cluster.x-k8s.io/quick-start created
# dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
# kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
# dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
# dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
# dockermachinepooltemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinepooltemplate created
# kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
# cluster.cluster.x-k8s.io/capi-quickstart created

# 생성 확인 & kubeconfig 자격 증명 & CNI 플러그인 설치 후 확인
# Accessing the workload cluster

## The cluster will now start provisioning. You can check status with
kubectl get cluster -o wide
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP CURRENT   CP READY   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W CURRENT   W READY   W AVAILABLE   W UP-TO-DATE   PAUSED   PHASE         AGE   VERSION
# capi-quickstart   quick-start    False       3            1            0          0              1               3           3           0         0             3              False    Provisioned   24s   v1.34.3

## You can also get an “at glance” view of the cluster and its resources by running:
clusterctl describe cluster capi-quickstart
# NAME                                                           REPLICAS AVAILABLE READY UP TO DATE STATUS REASON        SINCE MESSAGE                                                                                              
# Cluster/capi-quickstart                                        6/6      0         0     6          False  NotAvailable  91s   * WorkersAvailable:                                                                                  
# │                                                                                                                               * MachineDeployment capi-quickstart-md-0-n4lb4: 0 available replicas, at least 3 required          
# │                                                                                                                                 (spec.strategy.rollout.maxUnavailable is 0, spec.replicas is 3)                                  
# ├─ClusterInfrastructure - DockerCluster/capi-quickstart-8qq7g                                      True   Ready         87s                                                                                                        
# ├─ControlPlane - KubeadmControlPlane/capi-quickstart-v6vgk     3/3      0         0     3          True   Available     10s                                                                                                        
# │ └─3 Machines...                                                       0         0     3          False  NotReady      50s   See capi-quickstart-v6vgk-4v66p, capi-quickstart-v6vgk-5rv76, ...                                    
# └─Workers                                                                                                                                                                                                                          
#   └─MachineDeployment/capi-quickstart-md-0-n4lb4               3/3      0         0     3          False  NotAvailable  91s   0 available replicas, at least 3 required (spec.strategy.rollout.maxUnavailable is 0, spec.replicas  
#     │                                                                                                                         is 3)                                                                                                
#     └─3 Machines...                                                     0         0     3          False  NotReady      47s   See capi-quickstart-md-0-n4lb4-h9bj7-brv67, capi-quickstart-md-0-n4lb4-h9bj7-pzghs, ...              

# docker 프로바이더에 의해서, 워크로드 클러스터에 머신이 컨테이너로 기동
docker ps
# CONTAINER ID   IMAGE                                COMMAND                  CREATED              STATUS              PORTS                                                             NAMES
# 8651f06f7654   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   20 seconds ago       Up 19 seconds       127.0.0.1:55004->6443/tcp                                         capi-quickstart-v6vgk-5rv76
# 612b2bcac44f   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   56 seconds ago       Up 55 seconds       127.0.0.1:55003->6443/tcp                                         capi-quickstart-v6vgk-4v66p
# 0f1e7e8a1b7d   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     capi-quickstart-md-0-n4lb4-h9bj7-pzghs
# f15421747ab0   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     capi-quickstart-md-0-n4lb4-h9bj7-whlcx
# e286beee795a   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     capi-quickstart-md-0-n4lb4-h9bj7-brv67
# 0f98ac850dcf   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About a minute ago   Up About a minute   127.0.0.1:55002->6443/tcp                                         capi-quickstart-v6vgk-q5fxw
# f6aaa74afe25   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   About a minute ago   Up About a minute   0.0.0.0:55000->6443/tcp, 0.0.0.0:55001->8404/tcp                  capi-quickstart-lb
# 322c18c5580f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   17 minutes ago       Up 5 minutes        0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane

# 컨트롤 플레인 역할 컨테이너 로그 1대 확인 해보기
docker logs -f capi-quickstart-v6vgk-4v66p
# ...
# Welcome to Debian GNU/Linux 12 (bookworm)!

# Queued start job for default target graphical.target.
# [  OK  ] Created slice kubelet.slic… used to run Kubernetes / Kubelet.
# [  OK  ] Created slice system-modpr…lice - Slice /system/modprobe.
...

# 워크로드 클러스터 자격증명
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
kubectl --kubeconfig=capi-quickstart.kubeconfig get nodes -owide
# NAME                                     STATUS                        ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
# capi-quickstart-md-0-n4lb4-h9bj7-7pv7l   NotReady,SchedulingDisabled   <none>          5m1s    v1.34.3   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-h9bj7-9zsxs   NotReady,SchedulingDisabled   <none>          5m1s    v1.34.3   172.18.0.7    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-h9bj7-h6fjv   NotReady,SchedulingDisabled   <none>          5m1s    v1.34.3   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-689lg              NotReady                      control-plane   3m      v1.34.3   172.18.0.9    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-drxgh              NotReady                      control-plane   3m35s   v1.34.3   172.18.0.8    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-tgspb              NotReady                      control-plane   4m14s   v1.34.3   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# 노드 NotReady 상태 해결 : CNI 플러그인 설치 하자!
kubectl --kubeconfig=capi-quickstart.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# poddisruptionbudget.policy/calico-kube-controllers created
# serviceaccount/calico-kube-controllers created
# serviceaccount/calico-node created
# serviceaccount/calico-cni-plugin created
# configmap/calico-config created
# customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
# customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
# clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
# clusterrole.rbac.authorization.k8s.io/calico-node created
# clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
# clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
# clusterrolebinding.rbac.authorization.k8s.io/calico-node created
# clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
# daemonset.apps/calico-node created
# deployment.apps/calico-kube-controllers created
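# (optional) Before re-checking the nodes, wait for the Calico rollout to finish - a minimal check, assuming the default calico-node DaemonSet name from the manifest above
kubectl --kubeconfig=capi-quickstart.kubeconfig -n kube-system rollout status ds/calico-node --timeout=300s   # blocks until every node runs a Ready calico-node pod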
kubectl --kubeconfig=capi-quickstart.kubeconfig get nodes -owide
# NAME                                     STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
# capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   Ready    <none>          37s     v1.34.3   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   Ready    <none>          37s     v1.34.3   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-h9bj7-wkckm   Ready    <none>          37s     v1.34.3   172.18.0.7    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-689lg              Ready    control-plane   3m43s   v1.34.3   172.18.0.9    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-drxgh              Ready    control-plane   4m18s   v1.34.3   172.18.0.8    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-tgspb              Ready    control-plane   4m57s   v1.34.3   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0


# 워크로드 클러스터 상태 : 성공!
clusterctl describe cluster capi-quickstart
# NAME                                                           REPLICAS AVAILABLE READY UP TO DATE STATUS REASON     SINCE  MESSAGE                                                                                  
# Cluster/capi-quickstart                                        6/6      6         6     6          True   Available  3m12s                                                                                           
# ├─ClusterInfrastructure - DockerCluster/capi-quickstart-8qq7g                                      True   Ready      19m                                                                                             
# ├─ControlPlane - KubeadmControlPlane/capi-quickstart-v6vgk     3/3      3         3     3          True   Available  12m                                                                                             
# │ └─3 Machines...                                                       3         3     3          True   Ready      3m8s   See capi-quickstart-v6vgk-689lg, capi-quickstart-v6vgk-drxgh, ...                        
# └─Workers                                                                                                                                                                                                            
#   └─MachineDeployment/capi-quickstart-md-0-n4lb4               3/3      3         3     3          True   Available  3m12s                                                                                           
#     └─3 Machines...                                                     3         3     3          True   Ready      3m15s  See capi-quickstart-md-0-n4lb4-h9bj7-jp6mt, capi-quickstart-md-0-n4lb4-h9bj7-q7wlm, ...  
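
# (optional) When something stays stuck, the per-object conditions can be inspected as well - a quick sketch using clusterctl's --show-conditions flag
clusterctl describe cluster capi-quickstart --show-conditions all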

[05] 첫 번째 워크로드 클러스터 관련 Cluster API CRD 확인

# clusterclasses 확인 : 클러스터의 기본 템플릿 정보 제공, 예) 아래 처럼 etcd 컨테이너 이미지 tag  3.5.3-0
kubectl get clusterclasses quick-start -o yaml
# ...
#   patches:
#   - definitions:
#     - jsonPatches:
#    ...
#  - name: etcdImageTag
#     required: true
#     schema:
#       openAPIV3Schema:
#         default: ""
#         description: etcdImageTag sets the tag for the etcd image.
#         example: 3.5.3-0
#         type: string
# ...

kubectl get clusterclasses -owide
# NAME          PAUSED   VARIABLES READY   AGE
# quick-start   False    True              20m
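
# For reference, a ClusterClass variable such as etcdImageTag can be overridden per cluster from Cluster.spec.topology.variables.
# A sketch only - the tag value below is just the schema example shown earlier, not something this lab actually sets:
#   spec:
#     topology:
#       classRef:
#         name: quick-start
#       version: v1.34.3
#       variables:
#       - name: etcdImageTag
#         value: "3.5.3-0"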


# Check the cluster : when a Variable below has no Value set, the default defined in the clusterclasses template is applied
kubectl get cluster -owide
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP CURRENT   CP READY   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W CURRENT   W READY   W AVAILABLE   W UP-TO-DATE   PAUSED   PHASE         AGE   VERSION
# capi-quickstart   quick-start    True        3            3            3          3              3               3           3           3         3             3              False    Provisioned   20m   v1.34.3

kubectl describe cluster capi-quickstart
# ...
# Spec:
#   Cluster Network:
#     Pods:
#       Cidr Blocks:
#         10.10.0.0/16
#     Service Domain:  myk8s-1.local
#     Services:
#       Cidr Blocks:
#         10.20.0.0/16
#   Control Plane Endpoint:
#     Host:  192.168.97.3
#     Port:  6443
#   ...
#   Topology:
#     Class Ref:
#       Name:  quick-start
#     Control Plane:
#       Replicas:  3
#     Variables:
#       Name:   imageRepository
#       Value:
#       Name:   etcdImageTag
#       Value:
#       Name:   coreDNSImageTag
#       Value:
#       Name:   podSecurityStandard
#       Value:
#         Audit:    restricted
#         Enabled:  false
#         Enforce:  baseline
#         Warn:     restricted
#     Version:      v1.34.3
#     Workers:
#       Machine Deployments:
#         Class:     default-worker
#         Name:      md-0
#         Replicas:  3

# 워크로드 클러스터에서 etcd 이미지 태그 정보 확인
kubectl --kubeconfig=capi-quickstart.kubeconfig get pod -n kube-system -l component=etcd -o yaml | grep image: | uniq
    #   image: registry.k8s.io/etcd:3.6.5-0


# To verify the first control plane is up:
kubectl get kubeadmcontrolplane -owide
# NAME                    CLUSTER           AVAILABLE   DESIRED   CURRENT   READY   AVAILABLE   UP-TO-DATE   PAUSED   INITIALIZED   AGE   VERSION
# capi-quickstart-v6vgk   capi-quickstart   True        3         3         3       3           3            False    true          20m   v1.34.3
## kubeadm 에 의해 생성된 컨트롤 플레인 정보
kubectl get kubeadmcontrolplane -o yaml
# ...
#   spec:
#     kubeadmConfigSpec:
#       clusterConfiguration:
#         apiServer:
#           certSANs:
#           - localhost
#           - 127.0.0.1
#           - 0.0.0.0
#           - host.docker.internal
#       initConfiguration:
#         nodeRegistration:
#           kubeletExtraArgs:
#           - name: eviction-hard
#             value: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
#       joinConfiguration:
#         nodeRegistration:
#           kubeletExtraArgs:
#           - name: eviction-hard
#             value: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
#     machineTemplate:
#       metadata:
#         labels:
#           cluster.x-k8s.io/cluster-name: capi-quickstart
#           topology.cluster.x-k8s.io/owned: ""
#       spec:
#         infrastructureRef:
#           apiGroup: infrastructure.cluster.x-k8s.io
#           kind: DockerMachineTemplate
#           name: capi-quickstart-tdwjt
#     replicas: 3
#     rollout:
#       strategy:
#         rollingUpdate:
#           maxSurge: 1
#         type: RollingUpdate
#     version: v1.34.3
#   status:
#     availableReplicas: 3
#     ...


# kubeadmconfigtemplate 확인
kubectl get kubeadmconfigtemplate
NAME                                           CLUSTERCLASS   CLUSTER           AGE
capi-quickstart-md-0-f24ns                                    capi-quickstart   21m    # 실제 Cluster 생성 시 자동으로 만들어진 복사본
quick-start-default-worker-bootstraptemplate   quick-start                      21m    # ClusterClass에 속함, 실제 클러스터에 직접 사용되지 않음, '설계도용 템플릿'

## kubeadm 작업 시 템플릿 : 아래는 워커 노드 join 시, kubelet args 설정
kubectl get kubeadmconfigtemplate -l cluster.x-k8s.io/cluster-name=capi-quickstart     # 실제 Cluster 생성 시 자동으로 만들어진 복사본
# NAME                         CLUSTERCLASS   CLUSTER           AGE
# capi-quickstart-md-0-f24ns                  capi-quickstart   21m
kubectl get kubeadmconfigtemplate -l cluster.x-k8s.io/cluster-name=capi-quickstart -o yaml # 아래 spec 내용과 동일
# ...
#   spec:
#     template:
#       spec:
#         joinConfiguration:
#           nodeRegistration:
#             kubeletExtraArgs:
#             - name: eviction-hard
#               value: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%

kubectl get kubeadmconfigtemplate quick-start-default-worker-bootstraptemplate -o yaml # ClusterClass에 속함, 실제 클러스터에 직접 사용되지 않음, '설계도용 템플릿'
# ...
# spec:
#   template:
#     spec:
#       joinConfiguration:
#         nodeRegistration:
#           kubeletExtraArgs:
#           - name: eviction-hard
#             value: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
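
# A quick way to tell the ClusterClass 'blueprint' template apart from the per-cluster copy is the ownerReferences - e.g. (owner kinds hedged, adjust names to your run):
kubectl get kubeadmconfigtemplate quick-start-default-worker-bootstraptemplate -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'   # expected: ClusterClass
kubectl get kubeadmconfigtemplate -l cluster.x-k8s.io/cluster-name=capi-quickstart -o jsonpath='{.items[*].metadata.ownerReferences[*].kind}{"\n"}'   # the cluster-specific copy is expected to be owned by the Cluster topology instead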

# dockerclustertemplate 확인
kubectl get dockerclustertemplate
# NAME                  AGE
# quick-start-cluster   21m
kubectl describe dockerclustertemplate
# ...
# Spec:
#   Template:
#     Spec:
#       Control Plane Endpoint:
#         Port:  6443
#       Load Balancer:


# dockermachinetemplate 확인
kubectl get dockermachinetemplate
NAME                                         AGE
capi-quickstart-md-0-87q27                   22m   # Worker(MachineDeployment md-0) 전용 복사본
capi-quickstart-shm4t                        22m   # ControlPlane용 클러스터 전용 복사본
quick-start-control-plane                    22m   # ClusterClass에서 정의한 원본 템플릿 : 설계도, ClusterClass가 참조
quick-start-default-worker-machinetemplate   22m   # ClusterClass에 정의된 Worker용 원본 템플릿 : 설계도, 실제 사용 시 복제됨

## dockermachinetemplate : 도커 프로바이더에 의해 배포된 노드(실제로는 컨테이너) 템플릿 - Spec(마운트 정보 등) 등 설정
## quick-start-control-plane, quick-start-default-worker-machinetemplate 의 Spec, Capacity 동일
kubectl describe dockermachinetemplate quick-start-control-plane
# Name:         quick-start-control-plane
# Namespace:    default
# Labels:       <none>
# Annotations:  <none>
# API Version:  infrastructure.cluster.x-k8s.io/v1beta2
# Kind:         DockerMachineTemplate
# Metadata:
#   Creation Timestamp:  2026-02-20T19:54:14Z
#   Generation:          1
#   Owner References:
#     API Version:     cluster.x-k8s.io/v1beta2
#     Kind:            ClusterClass
#     Name:            quick-start
#     UID:             5f5b447a-20fa-408b-b916-2ae952202b88
#   Resource Version:  2788
#   UID:               7fe34ccb-0a36-4e53-a2e4-97d5de531639
# Spec:
#   Template:
#     Spec:
#       Extra Mounts:
#         Container Path:  /var/run/docker.sock
#         Host Path:       /var/run/docker.sock
# Status:
#   Capacity:
#     Cpu:     16
#     Memory:  66243528Ki
# Events:      <none>
kubectl describe dockermachinetemplate quick-start-default-worker-machinetemplate
# ...
# Spec:
#   Template:
#     Spec:
#       Extra Mounts:
#         Container Path:  /var/run/docker.sock
#         Host Path:       /var/run/docker.sock
# Status:
#   Capacity:
#     Cpu:     8
#     Memory:  11092404Ki

## (참고) 아래 2개는 cluster 에 의해서 생성
kubectl get dockermachinetemplate -l cluster.x-k8s.io/cluster-name=capi-quickstart
# NAME                         AGE
# capi-quickstart-md-0-87q27   22m
# capi-quickstart-shm4t        22m

kubectl describe cluster capi-quickstart | grep DockerMachineTemplate
#   Normal  TopologyCreate           23m                 topology/cluster-controller  Created DockerMachineTemplate "default/capi-quickstart-shm4t"
#   Normal  TopologyCreate           23m                 topology/cluster-controller  Created DockerMachineTemplate "default/capi-quickstart-md-0-87q27"

kubectl get dockermachinetemplate -l cluster.x-k8s.io/cluster-name=capi-quickstart -o yaml
# ...
#   spec:
#     template:
#       spec:
#         customImage: kindest/node:v1.34.3
#         extraMounts:
#         - containerPath: /var/run/docker.sock
#           hostPath: /var/run/docker.sock
#   status:
#     capacity:
#       cpu: "8"
#       memory: 11092404Ki


# (참고) dockermachinepooltemplate 확인 : 특별한 설정은 없음
kubectl get dockermachinepooltemplate
# NAME                                             AGE
# quick-start-default-worker-machinepooltemplate   23m
kubectl get dockermachinepooltemplate quick-start-default-worker-machinepooltemplate -o yaml
# apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
# kind: DockerMachinePoolTemplate
# metadata:
#   annotations:
#     kubectl.kubernetes.io/last-applied-configuration: |
#       {"apiVersion":"infrastructure.cluster.x-k8s.io/v1beta2","kind":"DockerMachinePoolTemplate","metadata":{"annotations":{},"name":"quick-start-default-worker-machinepooltemplate","namespace":"default"},"spec":{"template":{"spec":{"template":{}}}}}
#   creationTimestamp: "2026-02-20T19:54:14Z"
#   generation: 1
#   name: quick-start-default-worker-machinepooltemplate
#   namespace: default
#   ownerReferences:
#   - apiVersion: cluster.x-k8s.io/v1beta2
#     kind: ClusterClass
#     name: quick-start
#     uid: 5f5b447a-20fa-408b-b916-2ae952202b88
#   resourceVersion: "2793"
#   uid: 0e175db2-1104-4658-adad-6c5bf464428c
# spec:
#   template:
#     spec:
#       template: {}

# machinedeployments 확인 : 컨트롤플레인 노드(머신들)에 machinedeployments와 같은 동작은 KubeadmControlPlane 에서 처리함.
k get machinedeployments
# NAME                         CLUSTER           AVAILABLE   DESIRED   CURRENT   READY   AVAILABLE   UP-TO-DATE   PHASE     AGE   VERSION
# capi-quickstart-md-0-n4lb4   capi-quickstart   True        3         3         3       3           3            Running   23m   v1.34.3

k get machinedeployments -o yaml
# ...
#   spec:
#     clusterName: capi-quickstart
#     replicas: 3
#     rollout:
#       strategy:
#         rollingUpdate:
#           maxSurge: 1
#           maxUnavailable: 0
#         type: RollingUpdate
#     selector:
#       matchLabels:
#         cluster.x-k8s.io/cluster-name: capi-quickstart
#         topology.cluster.x-k8s.io/deployment-name: md-0
#         topology.cluster.x-k8s.io/owned: ""
#     template:
#       metadata:
#         labels:
#           cluster.x-k8s.io/cluster-name: capi-quickstart
#           topology.cluster.x-k8s.io/deployment-name: md-0
#           topology.cluster.x-k8s.io/owned: ""
#       spec:
#         bootstrap:
#           configRef:
#             apiGroup: bootstrap.cluster.x-k8s.io
#             kind: KubeadmConfigTemplate
#             name: capi-quickstart-md-0-25dll
#         clusterName: capi-quickstart
#         infrastructureRef:
#           apiGroup: infrastructure.cluster.x-k8s.io
#           kind: DockerMachineTemplate
#           name: capi-quickstart-md-0-ms2hr
#         version: v1.34.3
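
# Because this cluster is topology-managed, the worker count is changed on the Cluster object (scaling the MachineDeployment directly would be reverted by the topology controller).
# A sketch - the JSON path assumes the single md-0 entry shown above:
kubectl patch cluster capi-quickstart --type=json \
  -p '[{"op":"replace","path":"/spec/topology/workers/machineDeployments/0/replicas","value":4}]'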


# machinesets 확인
k get machinesets
# NAME                               CLUSTER           DESIRED   CURRENT   READY   AVAILABLE   UP-TO-DATE   AGE   VERSION
# capi-quickstart-md-0-n4lb4-h9bj7   capi-quickstart   3         3         3       3           3            24m   v1.34.3

k get machinesets -o yaml
# ...
#   spec:
#     clusterName: capi-quickstart
#     replicas: 3
#     selector:
#       matchLabels:
#         cluster.x-k8s.io/cluster-name: capi-quickstart
#         machine-template-hash: 1586322356-crp4c
#         topology.cluster.x-k8s.io/deployment-name: md-0
#         topology.cluster.x-k8s.io/owned: ""
#     template:
#       metadata:
#         labels:
#           cluster.x-k8s.io/cluster-name: capi-quickstart
#           machine-template-hash: 1586322356-crp4c
#           topology.cluster.x-k8s.io/deployment-name: md-0
#           topology.cluster.x-k8s.io/owned: ""
#       spec:
#         bootstrap:
#           configRef:
#             apiGroup: bootstrap.cluster.x-k8s.io
#             kind: KubeadmConfigTemplate
#             name: capi-quickstart-md-0-25dll
#         clusterName: capi-quickstart
#         infrastructureRef:
#           apiGroup: infrastructure.cluster.x-k8s.io
#           kind: DockerMachineTemplate
#           name: capi-quickstart-md-0-ms2hr
#         version: v1.34.3

# machines 확인 : 개별 머신에 대한 정보
k get machines  -o wide
# NAME                                     CLUSTER           NODE NAME                                PROVIDER ID                                         READY   AVAILABLE   UP-TO-DATE   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         PAUSED   PHASE     AGE     VERSION
# capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   capi-quickstart   capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   docker:////capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   True    True        True         172.18.0.5    172.18.0.5    Debian GNU/Linux 12 (bookworm)   False    Running   8m20s   v1.34.3
# capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   capi-quickstart   capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   docker:////capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   True    True        True         172.18.0.6    172.18.0.6    Debian GNU/Linux 12 (bookworm)   False    Running   8m20s   v1.34.3
# capi-quickstart-md-0-n4lb4-h9bj7-wkckm   capi-quickstart   capi-quickstart-md-0-n4lb4-h9bj7-wkckm   docker:////capi-quickstart-md-0-n4lb4-h9bj7-wkckm   True    True        True         172.18.0.7    172.18.0.7    Debian GNU/Linux 12 (bookworm)   False    Running   8m20s   v1.34.3
# capi-quickstart-v6vgk-689lg              capi-quickstart   capi-quickstart-v6vgk-689lg              docker:////capi-quickstart-v6vgk-689lg              True    True        True         172.18.0.9    172.18.0.9    Debian GNU/Linux 12 (bookworm)   False    Running   11m     v1.34.3
# capi-quickstart-v6vgk-drxgh              capi-quickstart   capi-quickstart-v6vgk-drxgh              docker:////capi-quickstart-v6vgk-drxgh              True    True        True         172.18.0.8    172.18.0.8    Debian GNU/Linux 12 (bookworm)   False    Running   12m     v1.34.3
# capi-quickstart-v6vgk-tgspb              capi-quickstart   capi-quickstart-v6vgk-tgspb              docker:////capi-quickstart-v6vgk-tgspb              True    True        True         172.18.0.4    172.18.0.4    Debian GNU/Linux 12 (bookworm)   False    Running   13m     v1.34.3
## controlplane 머신 정보
k get machines -l cluster.x-k8s.io/control-plane -o yaml
# ...
# spec:
#   bootstrap:
#     configRef:
#       apiGroup: bootstrap.cluster.x-k8s.io
#       kind: KubeadmConfig
#       name: capi-quickstart-z2v4j-f6btm
#     dataSecretName: capi-quickstart-z2v4j-f6btm
#   clusterName: capi-quickstart
#   deletion:
#     nodeDeletionTimeoutSeconds: 10
#   infrastructureRef:
#     apiGroup: infrastructure.cluster.x-k8s.io
#     kind: DockerMachine
#     name: capi-quickstart-z2v4j-f6btm
#   providerID: docker:////capi-quickstart-z2v4j-f6btm
#   readinessGates:
#   - conditionType: APIServerPodHealthy
#   - conditionType: ControllerManagerPodHealthy
#   - conditionType: SchedulerPodHealthy
#   - conditionType: EtcdPodHealthy
#   - conditionType: EtcdMemberHealthy
#   version: v1.34.3


# machinehealthchecks 확인 : 머신(집합) 수준 헬스체크 상태
k get machinehealthchecks -o yaml
k get machinehealthchecks -owide
NAME                         CLUSTER           REPLICAS   HEALTHY   PAUSED   AGE
# capi-quickstart-md-0-n4lb4   capi-quickstart   3          3         False    24m
# capi-quickstart-v6vgk        capi-quickstart   3          3         False    24m

[06] 첫 번째 워크로드 클러스터 노드 정보 확인 & alias 설정(k8s1) & kube-ops-view 설치 & 샘플 app 배포

# alias 설정 : k8s1
kubectl --kubeconfig=capi-quickstart.kubeconfig cluster-info
# Kubernetes control plane is running at https://127.0.0.1:55000
# CoreDNS is running at https://127.0.0.1:55000/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
alias k8s1='kubectl --kubeconfig=capi-quickstart.kubeconfig'
k8s1 cluster-info
# Kubernetes control plane is running at https://127.0.0.1:55000
# CoreDNS is running at https://127.0.0.1:55000/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# kube-ops-view 설치
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 \
  --set env.TZ="Asia/Seoul" --namespace kube-system --kubeconfig=capi-quickstart.kubeconfig

helm list --kubeconfig=capi-quickstart.kubeconfig -n kube-system
k8s1 get svc,ep -n kube-system kube-ops-view

## kube-ops-view 포트 포워딩 설정 
k8s1 -n kube-system port-forward svc/kube-ops-view 8080:8080 &
open "http://127.0.0.1:8080/#scale=1.5" # 웹 접속


# 호스트에서 컨테이너 정보 확인 : LB 컨테이너 1대, CT 컨테이너 3대, WK 컨테이너 3대, 마지막 1대는 최초 kind로 구성한 관리형 k8s
docker ps
# CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
# a4590585faea   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   9 minutes ago    Up 9 minutes                                                                      capi-quickstart-md-0-n4lb4-h9bj7-wkckm
# ecb9b00d0a15   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   9 minutes ago    Up 9 minutes                                                                      capi-quickstart-md-0-n4lb4-h9bj7-q7wlm
# 1878439017c3   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   9 minutes ago    Up 9 minutes                                                                      capi-quickstart-md-0-n4lb4-h9bj7-jp6mt
# 6af40757f171   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   127.0.0.1:55010->6443/tcp                                         capi-quickstart-v6vgk-689lg
# ce5409a45d19   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   127.0.0.1:55009->6443/tcp                                         capi-quickstart-v6vgk-drxgh
# 181452f134f4   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   14 minutes ago   Up 14 minutes   127.0.0.1:55008->6443/tcp                                         capi-quickstart-v6vgk-tgspb
# f6aaa74afe25   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   25 minutes ago   Up 25 minutes   0.0.0.0:55000->6443/tcp, 0.0.0.0:55001->8404/tcp                  capi-quickstart-lb
# 322c18c5580f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   41 minutes ago   Up 29 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane
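
# All of these "machines" are just containers attached to the same docker network as the kind management node - a quick look, assuming kind's default network name:
docker network inspect kind --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'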

# 샘플 애플리케이션 배포
cat << EOF | kubectl --kubeconfig=capi-quickstart.kubeconfig apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30003
  type: NodePort
EOF

# 확인 : alias k8s1='kubectl --kubeconfig=capi-quickstart.kubeconfig'
k8s1 get deploy,pod,svc,ep -owide
# Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
# NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
# deployment.apps/webpod   3/3     3            3           9s    webpod       traefik/whoami   app=webpod

# NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE                                     NOMINATED NODE   READINESS GATES
# pod/webpod-59589fb744-b57vz   1/1     Running   0          9s    10.10.62.2      capi-quickstart-md-0-n4lb4-h9bj7-wkckm   <none>           <none>
# pod/webpod-59589fb744-gf8h2   1/1     Running   0          9s    10.10.55.65     capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   <none>           <none>
# pod/webpod-59589fb744-sg87f   1/1     Running   0          9s    10.10.117.132   capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   <none>           <none>

# NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
# service/kubernetes   ClusterIP   10.20.0.1      <none>        443/TCP        25m   <none>
# service/webpod       NodePort    10.20.28.224   <none>        80:30003/TCP   9s    app=webpod

# NAME                   ENDPOINTS                                         AGE
# endpoints/kubernetes   172.18.0.4:6443,172.18.0.8:6443,172.18.0.9:6443   25m
# endpoints/webpod       10.10.117.132:80,10.10.55.65:80,10.10.62.2:80     9s
# Repeated calls: from the kind management node (myk8s-control-plane), hit the service NodePort on one of the workload cluster's control-plane node containers
docker ps
CT1=capi-quickstart-v6vgk-tgspb   # pick one of this run's control-plane node container names from docker ps

docker exec -it myk8s-control-plane curl -s $CT1:30003
while true; do docker exec -it myk8s-control-plane curl -s $CT1:30003 | grep Hostname; date; sleep 1; done

 

 

[07] 첫 번째 워크로드 클러스터에 컨트롤 플레인 노드 3대 k8s apiserver 에 대한 LB(HAProxy) 확인

# 호스트에서 컨테이너 정보 확인 : LB 컨테이너 1대, CT 컨테이너 3대, WK 컨테이너 3대, 마지막 1대는 최초 kind로 구성한 관리형 k8s
docker ps
#  CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
# a4590585faea   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-wkckm
# ecb9b00d0a15   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-q7wlm
# 1878439017c3   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-jp6mt
# 6af40757f171   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   14 minutes ago   Up 14 minutes   127.0.0.1:55010->6443/tcp                                         capi-quickstart-v6vgk-689lg
# ce5409a45d19   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   15 minutes ago   Up 15 minutes   127.0.0.1:55009->6443/tcp                                         capi-quickstart-v6vgk-drxgh
# 181452f134f4   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   15 minutes ago   Up 15 minutes   127.0.0.1:55008->6443/tcp                                         capi-quickstart-v6vgk-tgspb
# f6aaa74afe25   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   27 minutes ago   Up 27 minutes   0.0.0.0:55000->6443/tcp, 0.0.0.0:55001->8404/tcp                  capi-quickstart-lb
# 322c18c5580f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   42 minutes ago   Up 30 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane

# LB 컨테이너 
docker inspect capi-quickstart-lb | jq
...
      "Entrypoint": [
        "haproxy",
        "-W",
        "-db",
        "-f",
        "/usr/local/etc/haproxy/haproxy.cfg"

      "Ports": {
        "6443/tcp": [
          {
            "HostIp": "0.0.0.0",
            "HostPort": "55000"      # 호스트PC에서 curl -sk https://127.0.0.1:55000/version
          }
        ],
        "8404/tcp": [            
          {
            "HostIp": "0.0.0.0",
            "HostPort": "55000"      # 호스트PC 웹에서 open http://127.0.0.1:55000/stats
      ...
      "IPAddress": "192.168.97.3",   # 현재 k8s apiserver 엔드포인트는 192.168.97.3 IP이고 해당 IP는 HAProxy 컨테이너의 IP.

# LB에 APIserver 포트로 호출
curl -sk https://127.0.0.1:55000/version | jq
# {
#   "major": "1",
#   "minor": "34",
#   "emulationMajor": "1",
#   "emulationMinor": "34",
#   "minCompatibilityMajor": "1",
#   "minCompatibilityMinor": "33",
#   "gitVersion": "v1.34.3",
#   "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
#   "gitTreeState": "clean",
#   "buildDate": "2025-12-09T14:59:13Z",
#   "goVersion": "go1.24.11",
#   "compiler": "gc",
#   "platform": "linux/arm64"
# }
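
# The stats frontend on 8404 is published to host port 55001 in this run (see docker ps above), so backend apiserver health can also be checked from the host.
# A sketch, assuming HAProxy's standard CSV stats endpoint; column 18 of the CSV is the status field (UP/DOWN):
curl -s "http://127.0.0.1:55001/stats;csv" | awk -F, '$1=="kube-apiservers" {print $2, $18}'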

# 현재 k8s apiserver 엔드포인트는 192.168.97.3 IP이고 해당 IP는 HAProxy 컨테이너의 IP.
k8s1 cluster-info
# Kubernetes control plane is running at https://127.0.0.1:55000
# CoreDNS is running at https://127.0.0.1:55000/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


# 컨테이너 내부에 설정 파일을 호스트에 복사하여 가져오기
docker cp capi-quickstart-lb:/usr/local/etc/haproxy/haproxy.cfg .

# 설정 파일 확인
cat haproxy.cfg
# -------------------------------------------
# # generated by kind
# global
#   log /dev/log local0
#   log /dev/log local1 notice
#   daemon
#   # limit memory usage to approximately 18 MB
#   # (see https://github.com/kubernetes-sigs/kind/pull/3115)
#   maxconn 100000

# resolvers docker
#   nameserver dns 127.0.0.11:53

# defaults
#   log global
#   mode tcp
#   option dontlognull
#   # TODO: tune these
#   timeout connect 5000
#   timeout client 50000
#   timeout server 50000
#   # allow to boot despite dns don't resolve backends
#   default-server init-addr none

# frontend stats
#   mode http
#   bind *:8404
#   stats enable
#   stats uri /stats
#   stats refresh 1s
#   stats admin if TRUE

# frontend control-plane
#   bind *:6443

#   default_backend kube-apiservers

# backend kube-apiservers
#   option httpchk GET /healthz

#   server capi-quickstart-v6vgk-689lg 172.18.0.9:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
#   server capi-quickstart-v6vgk-drxgh 172.18.0.8:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
#   server capi-quickstart-v6vgk-tgspb 172.18.0.4:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
-------------------------------------------

# 로그 확인 
docker logs -f capi-quickstart-lb
[WARNING] 041/141425 (90) : Server kube-apiservers/capi-quickstart-j9fdm-6zg8v is UP, reason: Layer7 check passed, code: 200, check duration: 5ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
...

[08] 컨트롤 플레인 노드 정보 확인 : kubeadm-config configmap

# ct node 확인
k8s1 describe node -l node-role.kubernetes.io/control-plane
k8s1 get node -owide -l node-role.kubernetes.io/control-plane
# Non-terminated Pods:          (6 in total)
#   Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
#   ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
#   kube-system                 calico-node-896rz                                      250m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
#   kube-system                 etcd-capi-quickstart-v6vgk-tgspb                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         18m
#   kube-system                 kube-apiserver-capi-quickstart-v6vgk-tgspb             250m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
#   kube-system                 kube-controller-manager-capi-quickstart-v6vgk-tgspb    200m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
#   kube-system                 kube-proxy-wwtzd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
#   kube-system                 kube-scheduler-capi-quickstart-v6vgk-tgspb             100m (0%)     0 (0%)      0 (0%)           0 (0%)         18m
# Allocated resources:
#   (Total limits may be over 100 percent, i.e., overcommitted.)
#   Resource           Requests    Limits
#   --------           --------    ------
#   cpu                900m (5%)   0 (0%)
#   memory             100Mi (0%)  0 (0%)
#   ephemeral-storage  0 (0%)      0 (0%)
#   hugepages-1Gi      0 (0%)      0 (0%)
#   hugepages-2Mi      0 (0%)      0 (0%)
#   hugepages-32Mi     0 (0%)      0 (0%)
#   hugepages-64Ki     0 (0%)      0 (0%)
# Events:
#   Type    Reason                   Age                From             Message
#   ----    ------                   ----               ----             -------
#   Normal  Starting                 18m                kube-proxy       
#   Normal  Starting                 18m                kubelet          Starting kubelet.
#   Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node capi-quickstart-v6vgk-tgspb status is now: NodeHasSufficientMemory
#   Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node capi-quickstart-v6vgk-tgspb status is now: NodeHasNoDiskPressure
#   Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node capi-quickstart-v6vgk-tgspb status is now: NodeHasSufficientPID
#   Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
#   Normal  RegisteredNode           18m                node-controller  Node capi-quickstart-v6vgk-tgspb event: Registered Node capi-quickstart-v6vgk-tgspb in Controller
#   Normal  RegisteredNode           17m                node-controller  Node capi-quickstart-v6vgk-tgspb event: Registered Node capi-quickstart-v6vgk-tgspb in Controller
#   Normal  NodeReady                13m                kubelet          Node capi-quickstart-v6vgk-tgspb status is now: NodeReady


# kube-system 네임스페이스에 주요 파드 확인
k8s1 get pod -n kube-system
# NAME                                                  READY   STATUS    RESTARTS   AGE
# calico-kube-controllers-6fd9cc49d6-wfsjf              1/1     Running   0          14m
# calico-node-896rz                                     1/1     Running   0          14m
# calico-node-b4hk9                                     1/1     Running   0          14m
# calico-node-jx55b                                     1/1     Running   0          14m
# calico-node-llpzt                                     1/1     Running   0          14m
# calico-node-rqcwv                                     1/1     Running   0          14m
# calico-node-rs55l                                     1/1     Running   0          14m
# coredns-66bc5c9577-dvzn2                              1/1     Running   0          29m
# coredns-66bc5c9577-k8r7t                              1/1     Running   0          29m
# etcd-capi-quickstart-v6vgk-689lg                      1/1     Running   0          17m
# etcd-capi-quickstart-v6vgk-drxgh                      1/1     Running   0          17m
# etcd-capi-quickstart-v6vgk-tgspb                      1/1     Running   0          18m
# kube-apiserver-capi-quickstart-v6vgk-689lg            1/1     Running   0          17m
# kube-apiserver-capi-quickstart-v6vgk-drxgh            1/1     Running   0          17m
# kube-apiserver-capi-quickstart-v6vgk-tgspb            1/1     Running   0          18m
# kube-controller-manager-capi-quickstart-v6vgk-689lg   1/1     Running   0          17m
# kube-controller-manager-capi-quickstart-v6vgk-drxgh   1/1     Running   0          17m
# kube-controller-manager-capi-quickstart-v6vgk-tgspb   1/1     Running   0          18m
# kube-ops-view-97fd86569-2rcpk                         1/1     Running   0          5m
# kube-proxy-24dkz                                      1/1     Running   0          14m
# kube-proxy-mtnfl                                      1/1     Running   0          14m
# kube-proxy-nskxk                                      1/1     Running   0          14m
# kube-proxy-pwvjx                                      1/1     Running   0          18m
# kube-proxy-vl9hc                                      1/1     Running   0          17m
# kube-proxy-wwtzd                                      1/1     Running   0          18m
# kube-scheduler-capi-quickstart-v6vgk-689lg            1/1     Running   0          17m
# kube-scheduler-capi-quickstart-v6vgk-drxgh            1/1     Running   0          17m
# kube-scheduler-capi-quickstart-v6vgk-tgspb            1/1     Running   0          18m

#
k8s1 get cm,secret,csr -n kube-system
# NAME                                                      AGE   SIGNERNAME                                    REQUESTOR                                 REQUESTEDDURATION   CONDITION
# certificatesigningrequest.certificates.k8s.io/csr-24ph2   19m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:rendei                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-47fnb   29m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:lf5cyt                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-4hfxv   14m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:kuy73z                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-7kx5v   24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hq7thg                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-7w8q6   14m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:riaa4a                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-862k6   24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:61egis                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-9jwqb   29m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:rr5re5                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-9lmp9   14m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1gffms                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-9wfpc   18m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:05ze5k                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-bgph5   23m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:m9kaw0                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-dhq68   29m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1qvd93                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-dtbwt   30m   kubernetes.io/kube-apiserver-client-kubelet   system:node:capi-quickstart-v6vgk-q5fxw   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-htgkr   29m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:2bfqki                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-jcf6d   24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:lt4k83                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-jss6w   24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:sibjzs                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-lr7s4   17m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4mg65u                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-tdz7z   18m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:0c0isg                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-tvqp2   24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:pcqne4                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-v4pzs   19m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:sgaylo                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-wfcnn   19m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yguept                   <none>              Approved,Issued
# certificatesigningrequest.certificates.k8s.io/csr-zhnsh   29m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:ngxrin                   <none>              Approved,Issued

# kubeadm-config configmap 확인
k8s1 get cm -n kube-system kubeadm-config -o yaml
# data:
#   ClusterConfiguration: |
#     apiServer:
#       certSANs:
#       - localhost
#       - 127.0.0.1
#       - 0.0.0.0
#       - host.docker.internal
#     apiVersion: kubeadm.k8s.io/v1beta4
#     caCertificateValidityPeriod: 87600h0m0s
#     certificateValidityPeriod: 8760h0m0s
#     certificatesDir: /etc/kubernetes/pki
#     clusterName: capi-quickstart
#     controlPlaneEndpoint: 192.168.97.3:6443
#     controllerManager: {}
#     dns: {}
#     encryptionAlgorithm: RSA-2048
#     etcd:
#       local:
#         dataDir: /var/lib/etcd
#     featureGates:
#       ControlPlaneKubeletLocalMode: true
#     imageRepository: registry.k8s.io
#     kind: ClusterConfiguration
#     kubernetesVersion: v1.34.3
#     networking:
#       dnsDomain: myk8s-1.local
#       podSubnet: 10.10.0.0/16
#       serviceSubnet: 10.20.0.0/16
...

# Meaning of featureGates: ControlPlaneKubeletLocalMode: true
## Purpose : during control-plane init/join, ease the chicken-and-egg dependency where the kubelet on a new control-plane node must already reach a healthy (LB-fronted) API server
## Default kubeadm behavior : control-plane kubelet ──(talks to the cluster endpoint / LB)──> manages control plane pods
## With the feature gate    : control-plane kubelet ──(talks to its own local kube-apiserver)──> manages control plane pods
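
# A quick way to see the effect on a control-plane node (one of this run's control-plane containers is used here):
# the kubelet's API endpoint should point at the local apiserver rather than the LB-fronted cluster endpoint.
docker exec -i capi-quickstart-v6vgk-tgspb grep server: /etc/kubernetes/kubelet.conf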

# 컨트롤 플레인 역할 컨테이너 이름 변수 지정
docker ps
CTR1=capi-quickstart-v6vgk-tgspb

docker exec -i $CTR1 sh -c "apt update ; apt install tree psmisc -y"
docker exec -i $CTR1 pstree -a
# systemd
#   |-containerd
#   |   `-25*[{containerd}]
#   |-containerd-shim -namespace k8s.io -id 55d09b0ca92be3e1e95176ac565d8f97d64630f7fa33d7fd7cf7d4fc11b8a057-addre
#   |   |-kube-apiserver --advertise-address=172.18.0.4 --allow-privileged=true --authorization-mode=Node,RBAC...
#   |   |   `-20*[{kube-apiserver}]
#   |   |-pause
#   |   `-9*[{containerd-shim}]
#   |-containerd-shim -namespace k8s.io -id c606219ceb8ee4a939e47afaa54768f3634b0bb0ad6e21f4852cc07655af3a5c-addre
#   |   |-kube-controller --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf--authorizati
#   |   |   `-9*[{kube-controller}]
#   |   |-pause
#   |   `-10*[{containerd-shim}]
#   |-containerd-shim -namespace k8s.io -id 66fde1ee67cc9dd14e756adf7c81cd52c9e61cdcd46aaa5f0dae206709af863f-addre
#   |   |-kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf ...
#   |   |   `-16*[{kube-scheduler}]
#   |   |-pause
docker exec -i $CTR1 crictl ps
# CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                   NAMESPACE
# 2a08808e89054       c445639cb2880       15 minutes ago      Running             calico-node               0                   caf93a33f80e8       calico-node-896rz                                     kube-system
# d25552f3ab486       4461daf6b6af8       20 minutes ago      Running             kube-proxy                0                   568294912546f       kube-proxy-wwtzd                                      kube-system
# 03dad6750543c       2c5f0dedd21c2       20 minutes ago      Running             etcd                      0                   98cf9da90296a       etcd-capi-quickstart-v6vgk-tgspb                      kube-system
# 7a80e7925d46e       cf65ae6c8f700       20 minutes ago      Running             kube-apiserver            0                   55d09b0ca92be       kube-apiserver-capi-quickstart-v6vgk-tgspb            kube-system
# 1fe11d9075fab       7ada8ff13e54b       20 minutes ago      Running             kube-controller-manager   0                   c606219ceb8ee       kube-controller-manager-capi-quickstart-v6vgk-tgspb   kube-system
# ff16e1e241362       2f2aa21d34d2d       20 minutes ago      Running             kube-scheduler            0                   66fde1ee67cc9       kube-scheduler-capi-quickstart-v6vgk-tgspb            kube-system

docker exec -i $CTR1 kubeadm certs check-expiration
# [check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
# [check-expiration] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.

# CERTIFICATE                  EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
# admin.conf                   Feb 20, 2027 20:05 UTC   364d            ca                      no      
# apiserver                    Feb 20, 2027 20:05 UTC   364d            ca                      no      
# apiserver-etcd-client        Feb 20, 2027 20:05 UTC   364d            etcd-ca                 no      
# apiserver-kubelet-client     Feb 20, 2027 20:05 UTC   364d            ca                      no      
# controller-manager.conf      Feb 20, 2027 20:05 UTC   364d            ca                      no      
# etcd-healthcheck-client      Feb 20, 2027 20:05 UTC   364d            etcd-ca                 no      
# etcd-peer                    Feb 20, 2027 20:05 UTC   364d            etcd-ca                 no      
# etcd-server                  Feb 20, 2027 20:05 UTC   364d            etcd-ca                 no      
# front-proxy-client           Feb 20, 2027 20:05 UTC   364d            front-proxy-ca          no      
# scheduler.conf               Feb 20, 2027 20:05 UTC   364d            ca                      no      
# !MISSING! super-admin.conf                                                                    

# CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
# ca                      Feb 18, 2036 19:54 UTC   9y              no      
# etcd-ca                 Feb 18, 2036 19:54 UTC   9y              no      
# front-proxy-ca          Feb 18, 2036 19:54 UTC   9y              no      
docker exec -i $CTR1 tree /etc/kubernetes
docker exec -i $CTR1 tree /etc/kubernetes/pki
docker exec -i $CTR1 cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
docker exec -i $CTR1 tree /etc/kubernetes/manifests
docker exec -i $CTR1 tree /var/lib/etcd

# apiserver 인증서 SAN에 IP 0.0.0.0 등록 확인
docker exec -i $CTR1 cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
            # X509v3 Subject Alternative Name:
            #     DNS:capi-quickstart-v6vgk-tgspb, DNS:host.docker.internal, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.myk8s-1.local, DNS:localhost, IP Address:10.20.0.1, IP Address:192.168.97.8, IP Address:192.168.97.3, IP Address:127.0.0.1, IP Address:0.0.0.0
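
# The same SAN can also be checked from the host through the LB endpoint - a quick sketch (whichever backend apiserver HAProxy forwards to will answer):
echo | openssl s_client -connect 127.0.0.1:55000 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'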

# search 도메인 확인
k8s1 exec -it -n kube-system deploy/kube-ops-view -- cat /etc/resolv.conf
# search kube-system.svc.myk8s-1.local svc.myk8s-1.local myk8s-1.local
# nameserver 10.20.0.10
# options ndots:5

# kubelet-config
k8s1 get cm -n kube-system kubelet-config -o yaml
# ...
#     cgroupDriver: systemd
#     clusterDNS:
#     - 10.20.0.10
#     clusterDomain: myk8s-1.local
#     ...
#     staticPodPath: /etc/kubernetes/manifests

[09] 컨트롤 플레인 노드 apiserver, scheduler, kcm, etcd 정보 확인

# lease 확인
k8s1 get lease -n kube-system
# NAME                                   HOLDER                                                                      AGE
# apiserver-3dz3x5vr6zqwbljtoenzdsdoke   apiserver-3dz3x5vr6zqwbljtoenzdsdoke_b192fd7a-6d88-46cc-93f0-4b16b6c35229   20m
# apiserver-5eofigvrltaliqxpp3jivz5hjy   apiserver-5eofigvrltaliqxpp3jivz5hjy_f78c1117-9eba-4645-b6c8-dfd18839d635   32m
# apiserver-anfpb5iw2x2rpso2d66dcl3oxm   apiserver-anfpb5iw2x2rpso2d66dcl3oxm_bbe35a6a-679b-4c83-b1bc-3ca25c9cb4dc   27m
# apiserver-bjylywgvqwuzy5msqemjh24cba   apiserver-bjylywgvqwuzy5msqemjh24cba_87103ef3-5310-47d1-80c4-7eaed2e973dd   21m
# apiserver-floux4alsuelwgr27dqr3otiuu   apiserver-floux4alsuelwgr27dqr3otiuu_c265d1fb-e11e-4d54-a729-842c3933af13   32m
# apiserver-n2t3hbfuuj5d4cm2zailap2dse   apiserver-n2t3hbfuuj5d4cm2zailap2dse_80c2268b-efb9-431e-848c-40d60dfad458   26m
# apiserver-s7oneirbkfrsnsanazmefcne7q   apiserver-s7oneirbkfrsnsanazmefcne7q_79639217-6269-41e6-b999-17904fa140f7   21m
# apiserver-uuqwgmim5r6zhb6gk2ofo326zu   apiserver-uuqwgmim5r6zhb6gk2ofo326zu_01fe706e-7333-4908-a161-d26a8fa3e5c5   26m
# apiserver-xn5c5eyl2szitofwszq5fo374m   apiserver-xn5c5eyl2szitofwszq5fo374m_1c57ba89-1fca-4d9e-b85b-a25c62185716   31m
# kube-controller-manager                capi-quickstart-v6vgk-drxgh_5e0f2308-135b-483a-9614-90ae78290f14            32m
# kube-scheduler                         capi-quickstart-v6vgk-drxgh_f1e3ea21-a368-4088-8c16-5322ba7ee947            32m
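
# The kube-controller-manager / kube-scheduler leases show which control-plane node currently holds leadership, e.g.:
k8s1 get lease -n kube-system kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'
k8s1 get lease -n kube-system kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'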

# 컨트롤 플레인 역할 컨테이너 이름 변수 지정
docker ps
CTR1=capi-quickstart-v6vgk-tgspb

# kube-apiserver 확인
k8s1 get pod -n kube-system -l component=kube-apiserver
# NAME                                         READY   STATUS    RESTARTS   AGE
# kube-apiserver-capi-quickstart-v6vgk-689lg   1/1     Running   0          20m
# kube-apiserver-capi-quickstart-v6vgk-drxgh   1/1     Running   0          21m
# kube-apiserver-capi-quickstart-v6vgk-tgspb   1/1     Running   0          22m
k8s1 describe pod -n kube-system -l component=kube-apiserver
    #   --etcd-servers=https://127.0.0.1:2379
    #   --service-account-issuer=https://kubernetes.default.svc.myk8s-1.local
    #   --service-cluster-ip-range=10.20.0.0/16

# kube-scheduler 확인
k8s1 get pod -n kube-system -l component=kube-scheduler
# NAME                                         READY   STATUS    RESTARTS   AGE
# kube-scheduler-capi-quickstart-v6vgk-689lg   1/1     Running   0          21m
# kube-scheduler-capi-quickstart-v6vgk-drxgh   1/1     Running   0          21m
# kube-scheduler-capi-quickstart-v6vgk-tgspb   1/1     Running   0          22m
k8s1 describe pod -n kube-system -l component=kube-scheduler
docker exec -i $CTR1 cat /etc/kubernetes/scheduler.conf | grep server
# server: https://192.168.97.8:6443 # server 엔드포인트가 LB(HAProxy)가 아니고, 자신의 CT노드 IP

# kube-controller-manager 확인
k8s1 get pod -n kube-system -l component=kube-controller-manager
# NAME                                                  READY   STATUS    RESTARTS   AGE
# kube-controller-manager-capi-quickstart-v6vgk-689lg   1/1     Running   0          21m
# kube-controller-manager-capi-quickstart-v6vgk-drxgh   1/1     Running   0          22m
# kube-controller-manager-capi-quickstart-v6vgk-tgspb   1/1     Running   0          22m
k8s1 describe pod -n kube-system -l component=kube-controller-manager
    #   --bind-address=127.0.0.1
    #   --cluster-cidr=10.10.0.0/16
    #   --cluster-name=capi-quickstart
    #   --service-cluster-ip-range=10.20.0.0/16

# etcd 확인
k8s1 get pod -n kube-system -l component=etcd
# NAME                               READY   STATUS    RESTARTS   AGE
# etcd-capi-quickstart-v6vgk-689lg   1/1     Running   0          21m
# etcd-capi-quickstart-v6vgk-drxgh   1/1     Running   0          22m
# etcd-capi-quickstart-v6vgk-tgspb   1/1     Running   0          23m
k8s1 describe pod -n kube-system -l component=etcd
    #   --advertise-client-urls=https://192.168.97.4:2379
    #   --data-dir=/var/lib/etcd
    #   --feature-gates=InitialCorruptCheck=true
    #   --initial-advertise-peer-urls=https://192.168.97.4:2380
    #   --initial-cluster=capi-quickstart-j9fdm-ggm9z=https://192.168.97.4:2380
    #   --key-file=/etc/kubernetes/pki/etcd/server.key
    #   --listen-client-urls=https://127.0.0.1:2379,https://192.168.97.4:2379
    #   --listen-metrics-urls=http://127.0.0.1:2381
    #   --listen-peer-urls=https://192.168.97.4:2380
    #   --name=capi-quickstart-j9fdm-ggm9z


# 컨트롤 플레인 역할 컨테이너 Shell 진입 후 etcdctl 확인 
docker exec -it $CTR1 bash
-------------------------------------
#
find / -name etcdctl
# /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/63/fs/usr/local/bin/etcdctl
# /run/containerd/io.containerd.runtime.v2.task/k8s.io/03dad6750543c9dfce736d78c7f3358e3cf6cc753c4ab86aab7cef91c7965d81/rootfs/usr/local/bin/etcdctl

ln -s /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/63/fs/usr/local/bin/etcdctl /usr/local/bin/
etcdctl version
# etcdctl version: 3.6.5
# API version: 3.6

# etcdctl 환경변수 설정
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379  # listen-client-urls

# 확인
etcdctl endpoint status -w table
# +------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
# |        ENDPOINT        |        ID        | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
# +------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
# | https://127.0.0.1:2379 | 3c6942e2a6067b63 |   3.6.5 |           3.6.0 |   11 MB | 2.6 MB |                   76% |   0 B |      true |      false |         5 |       8472 |               8472 |        |                          |             false |
# +------------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+

etcdctl member list -w table
# +------------------+---------+-----------------------------+-------------------------+-------------------------+------------+
# |        ID        | STATUS  |            NAME             |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
# +------------------+---------+-----------------------------+-------------------------+-------------------------+------------+
# | 2d983dd9d761758f | started | capi-quickstart-v6vgk-689lg | https://172.18.0.9:2380 | https://172.18.0.9:2379 |      false |
# | 3c6942e2a6067b63 | started | capi-quickstart-v6vgk-tgspb | https://172.18.0.4:2380 | https://172.18.0.4:2379 |      false |
# | 4d71514f48125969 | started | capi-quickstart-v6vgk-drxgh | https://172.18.0.8:2380 | https://172.18.0.8:2379 |      false |
# +------------------+---------+-----------------------------+-------------------------+-------------------------+------------+
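
# (참고) --cluster 옵션을 쓰면 멤버 목록의 모든 엔드포인트를 대상으로 상태/헬스를 한 번에 조회해볼 수 있다
etcdctl endpoint status --cluster -w table
etcdctl endpoint health --cluster -w table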

# 빠져나오기
exit
-------------------------------------

[10] 인증서 정보 확인

# kind 관리클러스터에서 인증서 정보가 담긴 secret 확인
k get secret -l cluster.x-k8s.io/cluster-name=capi-quickstart
# NAME                                     TYPE                      DATA   AGE
# capi-quickstart-ca                       cluster.x-k8s.io/secret   2      37m
# capi-quickstart-etcd                     cluster.x-k8s.io/secret   2      37m
# capi-quickstart-kubeconfig               cluster.x-k8s.io/secret   1      37m
# capi-quickstart-md-0-n4lb4-h9bj7-jp6mt   cluster.x-k8s.io/secret   2      21m
# capi-quickstart-md-0-n4lb4-h9bj7-q7wlm   cluster.x-k8s.io/secret   2      21m
# capi-quickstart-md-0-n4lb4-h9bj7-wkckm   cluster.x-k8s.io/secret   2      21m
# capi-quickstart-proxy                    cluster.x-k8s.io/secret   2      37m
# capi-quickstart-sa                       cluster.x-k8s.io/secret   2      37m
# capi-quickstart-v6vgk-689lg              cluster.x-k8s.io/secret   2      25m
# capi-quickstart-v6vgk-drxgh              cluster.x-k8s.io/secret   2      25m
# capi-quickstart-v6vgk-tgspb              cluster.x-k8s.io/secret   2      26m

# 샘플로 시크릿 확인
k get secret capi-quickstart-kubeconfig -o jsonpath='{.data.value}' | base64 -d
# ...
# - context:
#     cluster: capi-quickstart
#     user: capi-quickstart-admin
#   name: capi-quickstart-admin@capi-quickstart
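
# (참고) secret 을 직접 base64 디코딩하는 대신, 관리 클러스터에서 clusterctl 로 워크로드 클러스터 kubeconfig 를 바로 추출할 수도 있다
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
kubectl --kubeconfig=capi-quickstart.kubeconfig get node
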
k get secret capi-quickstart-md-0-n4lb4-h9bj7-q7wlm -o jsonpath='{.data.value}' | base64 -d
# ## template: jinja
# #cloud-config

# write_files:
# -   path: /run/kubeadm/kubeadm-join-config.yaml
#     owner: root:root
#     permissions: '0640'
#     content: |
#       ---
#       apiVersion: kubeadm.k8s.io/v1beta4
#       discovery:
#         bootstrapToken:
#           apiServerEndpoint: 172.18.0.2:6443
#           caCertHashes:
#           - sha256:98eb5b61a058245bb8907e23efe643d742d4b833f4e5ef7565e50b2af9473097
#           token: 1gffms.mc43j2x40qsthgla
#       kind: JoinConfiguration
#       nodeRegistration:
#         kubeletExtraArgs:
#         - name: eviction-hard
#           value: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
#         taints:
#         - effect: NoSchedule
#           key: node.cluster.x-k8s.io/uninitialized

# -   path: /run/cluster-api/placeholder
#     owner: root:root
#     permissions: '0640'
#     content: "This placeholder file is used to create the /run/cluster-api sub directory in a way that is compatible with both Linux and Windows (mkdir -p /run/cluster-api does not work with Windows)"
# runcmd:
#   - kubeadm join --config /run/kubeadm/kubeadm-join-config.yaml  && echo success > /run/cluster-api/bootstrap-success.complete
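
# (참고) 위 runcmd 가 정상 수행되면 kubeadm join 후 /run/cluster-api/bootstrap-success.complete 에 success 가 기록된다
# 아래는 다음 단계에서 확인할 워커 컨테이너 이름을 기준으로 한 확인 예시
docker exec -i capi-quickstart-md-0-n4lb4-h9bj7-jp6mt cat /run/cluster-api/bootstrap-success.complete
# success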

[10] 첫 번째 워크로드 클러스터에 워커 노드 3대 확인

# 호스트에서 컨테이너 정보 확인 : LB 컨테이너 1대, CT 컨테이너 3대, WK 컨테이너 3대, 마지막 1대는 최초 kind로 구성한 관리형 k8s
docker ps
# CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
# a4590585faea   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 22 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-wkckm
# ecb9b00d0a15   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 22 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-q7wlm
# 1878439017c3   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 22 minutes                                                                     capi-quickstart-md-0-n4lb4-h9bj7-jp6mt
# 6af40757f171   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   26 minutes ago   Up 26 minutes   127.0.0.1:55010->6443/tcp                                         capi-quickstart-v6vgk-689lg
# ce5409a45d19   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   26 minutes ago   Up 26 minutes   127.0.0.1:55009->6443/tcp                                         capi-quickstart-v6vgk-drxgh
# 181452f134f4   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   27 minutes ago   Up 27 minutes   127.0.0.1:55008->6443/tcp                                         capi-quickstart-v6vgk-tgspb
# f6aaa74afe25   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   38 minutes ago   Up 38 minutes   0.0.0.0:55000->6443/tcp, 0.0.0.0:55001->8404/tcp                  capi-quickstart-lb
# 322c18c5580f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   54 minutes ago   Up 42 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane

# 워커 역할 컨테이너 이름 변수 지정
docker ps
WK1=capi-quickstart-md-0-n4lb4-h9bj7-jp6mt

docker exec -i $WK1 sh -c "apt update ; apt install tree psmisc -y"
docker exec -i $WK1 pstree -a
# systemd
#   |-containerd
#   |   `-19*[{containerd}]
#   |-containerd-shim -namespace k8s.io -id 5905e59222cda9db9978f77dc24d15aa7a52c10e9e54660008addd92136cf5c1-addre
#   |   |-kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=capi-quickstart-md-0-n4lb4-h9bj7-jp6mt
#   |   |   `-11*[{kube-proxy}]
#   |   |-pause
#   |   `-9*[{containerd-shim}]
#   |-containerd-shim -namespace k8s.io -id 434dead5ce1906c5ab24cd36ecf30dba66fb6a245bbb7b66dc5b70ded3bd2fae-addre
#   |   |-pause
#   |   |-runsvdir -P /etc/service/enabled
#   |   |   |-runsv confd
#   |   |   |   `-calico-node -confd
#   |   |   |       `-13*[{calico-node}]
#   |   |   |-runsv felix
#   |   |   |   `-calico-node -felix
#   |   |   |       `-21*[{calico-node}]
#   |   |   |-runsv allocate-tunnel-addrs
docker exec -i $WK1 crictl ps
# CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                       NAMESPACE
# 19f3c7f017a7b       ab541801c8cc5       13 minutes ago      Running             webpod              0                   92e94d25db368       webpod-59589fb744-gf8h2   default
# 6f867b462cb7e       c445639cb2880       23 minutes ago      Running             calico-node         0                   434dead5ce190       calico-node-rqcwv         kube-system
# 5f7c7d10d9d20       4461daf6b6af8       23 minutes ago      Running             kube-proxy          0                   5905e59222cda       kube-proxy-24dkz          kube-system
docker exec -i $WK1 tree /etc/kubernetes
# /etc/kubernetes
# |-- kubelet.conf
# |-- manifests
# `-- pki
#     `-- ca.crt
docker exec -i $WK1 tree /var/lib/kubelet
# /var/lib/kubelet
# |-- actuated_pods_state
# |-- allocated_pods_state
# |-- checkpoints
# |-- config.yaml
# |-- cpu_manager_state
# |-- device-plugins
# |   `-- kubelet.sock
# |-- dra_manager_state
# |-- instance-config.yaml
# |-- kubeadm-flags.env

# kubelet(client) -> [HAProxy] -> apiserver 파드들 호출 확인
docker exec -i $WK1 cat /etc/kubernetes/kubelet.conf | grep server
    # server: https://172.18.0.2:6443

# args 조사해두자
docker exec -i $WK1 cat /var/lib/kubelet/kubeadm-flags.env
# KUBELET_KUBEADM_ARGS="--cgroup-root=/kubelet --eviction-hard=nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0% --fail-swap-on=false --image-gc-high-threshold=100 --pod-infra-container-image=registry.k8s.io/pause:3.10.1 --register-with-taints=node.cluster.x-k8s.io/uninitialized:NoSchedule --runtime-cgroups=/system.slice/containerd.service"
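
# (참고) 위 register-with-taints 의 node.cluster.x-k8s.io/uninitialized taint 는 노드 초기화가 끝나면 CAPI 컨트롤러가 제거한다
k8s1 describe node capi-quickstart-md-0-n4lb4-h9bj7-jp6mt | grep -i taints
# Taints:             <none>   # (가정) 초기화 완료 후라면 taint 가 남아있지 않을 것으로 예상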

docker exec -i $WK1 cat /var/lib/kubelet/instance-config.yaml
# containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"

docker exec -i $WK1 systemctl status kubelet --no-pager
# ● kubelet.service - kubelet: The Kubernetes Node Agent
#      Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; preset: enabled)
#     Drop-In: /etc/systemd/system/kubelet.service.d
#              └─10-kubeadm.conf, 11-kind.conf
#      Active: active (running) since Fri 2026-02-20 20:10:20 UTC; 24min ago
#        Docs: http://kubernetes.io/docs/
docker exec -i $WK1 cat /var/lib/kubelet/config.yaml
# apiVersion: kubelet.config.k8s.io/v1beta1
# authentication:
#   anonymous:
#     enabled: false
#   webhook:
#     cacheTTL: 0s

docker exec -i $WK1 systemctl status containerd --no-pager
# Feb 20 20:35:11 capi-quickstart-md-0-n4lb4-h9bj7-jp6mt containerd[117]: time="2026-02-20T20:35:11.801333216Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-podecf9ce0c_5c0c_451e_be27_740a2c301c11.slice/cri-containerd-6f867b462cb7eb2d2bb2ac7262c7cbb33deb23556267aa80ba80cc5d837c0ef0.scope/hugetlb.1GB.events\""
docker exec -i $WK1 cat /etc/containerd/cri-base.json | jq

# {
#   "ociVersion": "1.2.1",
#   "process": {
#     "user": {
#       "uid": 0,
#       "gid": 0
#     },
#     "cwd": "/",
#     "capabilities": {
#       "bounding": [
#         "CAP_CHOWN",
#         "CAP_DAC_OVERRIDE",
#         "CAP_FSETID",

docker exec -i $WK1 containerd --version
# containerd github.com/containerd/containerd/v2 v2.2.0 

docker exec -i $WK1 cat /etc/containerd/config.toml
# # explicitly use v2 config format
# version = 2   # containerd 가 2.x 인데, config 는 아직 v2 사용 중

# [proxy_plugins]
# # fuse-overlayfs is used for rootless
# [proxy_plugins."fuse-overlayfs"]
#   type = "snapshot"
#   address = "/run/containerd-fuse-overlayfs.sock"

# [plugins."io.containerd.grpc.v1.cri".containerd]
#   # save disk space when using a single snapshotter
#   discard_unpacked_layers = true
#   # explicitly use default snapshotter so we can sed it in entrypoint
#   snapshotter = "overlayfs"
#   # explicit default here, as we're configuring it below
#   default_runtime_name = "runc"
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#   # set default runtime handler to v2, which has a per-pod shim
#   runtime_type = "io.containerd.runc.v2"
#   # Generated by "ctr oci spec" and modified at base container to mount poduct_uuid
#   base_runtime_spec = "/etc/containerd/cri-base.json"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     # use systemd cgroup by default
#     SystemdCgroup = true

# # Setup a runtime with the magic name ("test-handler") used for Kubernetes
# # runtime class tests ...
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
#   # same settings as runc
#   runtime_type = "io.containerd.runc.v2"
#   base_runtime_spec = "/etc/containerd/cri-base.json"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler.options]
#     SystemdCgroup = true

# [plugins."io.containerd.grpc.v1.cri"]
#   # use fixed sandbox image
#   sandbox_image = "registry.k8s.io/pause:3.10"
#   # allow hugepages controller to be missing
#   # see https://github.com/containerd/cri/pull/1501
#   tolerate_missing_hugepages_controller = true
#   # restrict_oom_score_adj needs to be true when running inside UserNS (rootless)
#   restrict_oom_score_adj = false
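
# (참고) config.toml 의 SystemdCgroup 설정이 실제 CRI 런타임에 반영됐는지 crictl info 로도 교차 확인해볼 수 있다
docker exec -i $WK1 crictl info | grep -i systemdcgroup
# (가정) "SystemdCgroup": true 가 출력될 것으로 예상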

[11] 워크로드 클러스터 업그레이드

# 업그레이드 실행 : ClusterClass / Topology 를 사용 중이므로 KubeadmControlPlane, MachineDeployment 를 직접 수정하지 않고, Cluster 의 spec.topology.version 만 변경한다.
## 컨트롤 플레인 노드가 1대씩 롤링 교체 완료된 뒤 -> 워커 노드가 순차 교체됨
kubectl get cluster
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W AVAILABLE   W UP-TO-DATE   PHASE         AGE   VERSION
# capi-quickstart   quick-start    True        3            3              3               3           3             3              Provisioned   41m   v1.34.3
kubectl patch cluster capi-quickstart --type merge -p '{"spec":{"topology":{"version":"v1.35.0"}}}'
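
# (참고) 업그레이드 진행 상황은 관리 클러스터에서 Machine 이 1대씩 교체되는 것으로 관찰할 수 있다
kubectl get machines -owide -w                 # 구버전 머신 삭제/신규 머신 생성 과정 관찰
clusterctl describe cluster capi-quickstart    # 클러스터 상태를 트리 형태로 확인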

## (참고) kube-ops-view 포트 포워딩 exit 될 경우 다시 실행
k8s1 -n kube-system port-forward svc/kube-ops-view 8080:8080 &


k8s1 get node
# NAME                                     STATUS   ROLES           AGE     VERSION
# capi-quickstart-md-0-n4lb4-6vkkj-b29h2   Ready    <none>          98s     v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-ldj57   Ready    <none>          64s     v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-nfc5s   Ready    <none>          32s     v1.35.0
# capi-quickstart-v6vgk-9v9nx              Ready    control-plane   2m58s   v1.35.0
# capi-quickstart-v6vgk-q6c2v              Ready    control-plane   3m36s   v1.35.0
# capi-quickstart-v6vgk-sg9qx              Ready    control-plane   2m13s   v1.35.0

# 신규 버전의 머신(컨테이너)이 생성됨을 확인
docker ps
# CONTAINER ID   IMAGE                                COMMAND                  CREATED              STATUS              PORTS                                                             NAMES
# 62cd36201b2b   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   53 seconds ago       Up 52 seconds                                                                         capi-quickstart-md-0-n4lb4-6vkkj-nfc5s
# 5fea398bfe9e   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   About a minute ago   Up About a minute                                                                     capi-quickstart-md-0-n4lb4-6vkkj-ldj57
# 7b7e4a7aae25   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   2 minutes ago        Up 2 minutes                                                                          capi-quickstart-md-0-n4lb4-6vkkj-b29h2
# 140b21eb3b7f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   2 minutes ago        Up 2 minutes        127.0.0.1:55013->6443/tcp                                         capi-quickstart-v6vgk-sg9qx
# d3a50a4cb4fd   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   3 minutes ago        Up 3 minutes        127.0.0.1:55012->6443/tcp                                         capi-quickstart-v6vgk-9v9nx
# 7b8dc0f5d141   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   4 minutes ago        Up 4 minutes        127.0.0.1:55011->6443/tcp                                         capi-quickstart-v6vgk-q6c2v
# f6aaa74afe25   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   46 minutes ago       Up 46 minutes       0.0.0.0:55000->6443/tcp, 0.0.0.0:55001->8404/tcp                  capi-quickstart-lb
# 322c18c5580f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   About an hour ago    Up 49 minutes       0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:62897->6443/tcp   myk8s-control-plane

# 컨테이너 내부에 설정 파일을 호스트에 복사하여 가져오기
docker cp capi-quickstart-lb:/usr/local/etc/haproxy/haproxy.cfg .

# 설정 파일 확인 : 아래 백엔드 서버 목록이 새 컨트롤 플레인 노드들로 업데이트되었는지 확인!
cat haproxy.cfg
# ...
# backend kube-apiservers
#   option httpchk GET /healthz

#   server capi-quickstart-v6vgk-9v9nx 172.18.0.4:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
#   server capi-quickstart-v6vgk-q6c2v 172.18.0.10:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
#   server capi-quickstart-v6vgk-sg9qx 172.18.0.8:6443 weight 100 check check-ssl verify none resolvers docker resolve-prefer ipv4
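
# (참고) docker cp 없이 LB 컨테이너에서 직접 조회해, 교체된 컨트롤 플레인 노드가 백엔드에 반영됐는지 볼 수도 있다
docker exec -i capi-quickstart-lb cat /usr/local/etc/haproxy/haproxy.cfg | grep server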

[12] 워크로드 클러스터 노드(워커) 확장 → 축소 : 컨트롤 플레인 노드 스케일은 [14]에서 별도로 확인

kubectl get cluster
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W AVAILABLE   W UP-TO-DATE   PHASE         AGE   VERSION
# capi-quickstart   quick-start    True        3            3              3               3           3             3              Provisioned   46m   v1.35.0

kubectl patch cluster capi-quickstart --type merge -p '{
  "spec": {
    "topology": {
      "workers": {
        "machineDeployments": [
          {
            "class": "default-worker",
            "name": "md-0",
            "replicas": 5
          }
        ]
      }
    }
  }
}'
# cluster.cluster.x-k8s.io/capi-quickstart patched

# 확인
kubectl get cluster
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W AVAILABLE   W UP-TO-DATE   PHASE         AGE   VERSION
# capi-quickstart   quick-start    False       3            3              3               5           3             5              Provisioned   47m   v1.35.0

k8s1 get node                                     
# NAME                                     STATUS   ROLES           AGE     VERSION
# capi-quickstart-md-0-n4lb4-6vkkj-9n2mr   Ready    <none>          18s     v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-b29h2   Ready    <none>          3m12s   v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-gvngj   Ready    <none>          18s     v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-ldj57   Ready    <none>          2m38s   v1.35.0
# capi-quickstart-md-0-n4lb4-6vkkj-nfc5s   Ready    <none>          2m6s    v1.35.0
# capi-quickstart-v6vgk-9v9nx              Ready    control-plane   4m32s   v1.35.0
# capi-quickstart-v6vgk-q6c2v              Ready    control-plane   5m10s   v1.35.0
# capi-quickstart-v6vgk-sg9qx              Ready    control-plane   3m47s   v1.35.0
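
# (참고) 관리 클러스터 쪽에서는 MachineDeployment / Machine 리소스로도 5대로 늘어난 상태를 확인할 수 있다
kubectl get machinedeployments    # md-0 의 replicas 가 5 로 반영됐는지 확인
kubectl get machines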

# 다시 3대로 축소
kubectl patch cluster capi-quickstart --type merge -p '{
  "spec": {
    "topology": {
      "workers": {
        "machineDeployments": [
          {
            "class": "default-worker",
            "name": "md-0",
            "replicas": 3
          }
        ]
      }
    }
  }
}'

kubectl get cluster
# NAME              CLUSTERCLASS   AVAILABLE   CP DESIRED   CP AVAILABLE   CP UP-TO-DATE   W DESIRED   W AVAILABLE   W UP-TO-DATE   PHASE         AGE   VERSION
# capi-quickstart   quick-start    True        3            3              3               3           3             3              Provisioned   48m   v1.35.0

[13] Add a MachineDeployment : 머신디플로이먼트 자체 추가

# 현재 워크로드 클러스터에 워커노드 머신디플로이먼트 확인
kubectl get cluster capi-quickstart -o jsonpath='{.spec.topology.workers}' | jq
# {
#   "machineDeployments": [
#     {
#       "class": "default-worker",
#       "name": "md-0",
#       "replicas": 3
#     }
#   ]
# }

# second-deployment 이름의 머신디플로이먼트 추가
kubectl patch cluster capi-quickstart --type json --patch '[{"op": "add", "path": "/spec/topology/workers/machineDeployments/-",  "value": {"name": "second-deployment", "replicas": 1, "class": "default-worker"} }]'
# cluster.cluster.x-k8s.io/capi-quickstart patched

# 확인
kubectl --kubeconfig=capi-quickstart.kubeconfig get nodes -owide
# NAME                                                  STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
# capi-quickstart-md-0-n4lb4-6vkkj-gvngj                Ready    <none>          115s    v1.35.0   172.18.0.7    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-6vkkj-ldj57                Ready    <none>          4m15s   v1.35.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-6vkkj-nfc5s                Ready    <none>          3m43s   v1.35.0   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-second-deployment-mcvvj-4cl5b-md4k5   Ready    <none>          16s     v1.35.0   172.18.0.9    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-9v9nx                           Ready    control-plane   6m9s    v1.35.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-q6c2v                           Ready    control-plane   6m47s   v1.35.0   172.18.0.10   <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-sg9qx                           Ready    control-plane   5m24s   v1.35.0   172.18.0.8    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
kubectl get cluster capi-quickstart -o jsonpath='{.spec.topology.workers}' | jq
# {
#   "machineDeployments": [
#     {
#       "class": "default-worker",
#       "name": "md-0",
#       "replicas": 3
#     },
#     {
#       "class": "default-worker",
#       "name": "second-deployment",
#       "replicas": 1
#     }
#   ]
# }
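
# (참고) 추가한 second-deployment 는 나중에 JSON patch 의 remove 연산으로 제거할 수 있다(인덱스 1 = 두 번째 항목 기준). 이번 실습에서는 그대로 두고 진행한다.
kubectl patch cluster capi-quickstart --type json --patch '[{"op": "remove", "path": "/spec/topology/workers/machineDeployments/1"}]'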

[14] 워크로드 클러스터 컨트롤 플레인 노드 Scale

# 현재 워크로드 클러스터에 컨트롤플레인 정보 확인
kubectl get cluster capi-quickstart -o jsonpath='{.spec.topology.controlPlane}' | jq
# {
#   "replicas": 3
# }

# 기존 3대에서 -> 5대로 업데이트
kubectl patch cluster capi-quickstart --type json --patch '[{"op": "replace", "path": "/spec/topology/controlPlane/replicas",  "value": 5}]'

# 확인
kubectl --kubeconfig=capi-quickstart.kubeconfig get nodes -owide
# NAME                                                  STATUS     ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
# capi-quickstart-md-0-n4lb4-6vkkj-gvngj                Ready      <none>          3m24s   v1.35.0   172.18.0.7    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-6vkkj-ldj57                Ready      <none>          5m44s   v1.35.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-md-0-n4lb4-6vkkj-nfc5s                Ready      <none>          5m12s   v1.35.0   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-second-deployment-mcvvj-4cl5b-md4k5   Ready      <none>          105s    v1.35.0   172.18.0.9    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-9v9nx                           Ready      control-plane   7m38s   v1.35.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-q6c2v                           Ready      control-plane   8m16s   v1.35.0   172.18.0.10   <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-sg9qx                           Ready      control-plane   6m53s   v1.35.0   172.18.0.8    <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-zgrrd                           NotReady   control-plane   2s      v1.35.0   172.18.0.12   <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
# capi-quickstart-v6vgk-zhxcg                           NotReady   control-plane   36s     v1.35.0   172.18.0.11   <none>        Debian GNU/Linux 12 (bookworm)   6.11.11-linuxkit   containerd://2.2.0
kubectl get cluster capi-quickstart -o jsonpath='{.spec.topology.controlPlane}' | jq
# {
#   "replicas": 5
# }
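
# (참고) 컨트롤 플레인을 다시 3대로 줄이려면 동일한 patch 에서 value 만 3 으로 되돌리면 된다. 여기서는 바로 다음 단계에서 클러스터를 삭제하므로 생략 가능.
kubectl patch cluster capi-quickstart --type json --patch '[{"op": "replace", "path": "/spec/topology/controlPlane/replicas",  "value": 3}]'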

실습 리소스 삭제

# 첫 번째 워크로드 클러스터(머신 포함) 삭제  
kubectl delete cluster capi-quickstart && docker ps
# kind 삭제  
kind delete cluster --name myk8s

마치며

이번 7주차에서는 RKE2의 보안 중심 아키텍처와 구성 요소, 부트스트랩 및 업그레이드 흐름을 정리하고, Cluster API를 통해 멀티클라우드 환경에서 클러스터 수명 주기를 선언적으로 관리하는 방법을 살펴봤습니다.

 

RKE2는 엔터프라이즈 환경에 적합한 안정성과 보안 기본값을 제공하지만, 추가적인 추상화 레이어가 있는 만큼 학습 곡선은 다소 가파를 것으로 보입니다.

 

Cluster API는 표준화된 리소스 모델로 운영 자동화를 가능하게 해줍니다. 실습 과정에서 확인한 것처럼 관리 클러스터와 워크로드 클러스터를 분리하면 운영 복잡성을 줄이고, 확장성 있는 멀티클러스터 운영 기반을 마련할 수 있을 것 같습니다.

 

7주간의 긴 스터디 과정이 끝났네요! 이번 스터디 과정을 통해 Kubernetes 설치·운영의 흐름을 단계적으로 정리하고, 보안과 자동화까지 한 번에 이어볼 수 있었습니다. 함께 달려주신 분들 모두 수고 많으셨고, 이후에도 실무에서 계속 응용해 나가면 좋겠습니다.

 

긴 글 읽어주셔서 감사합니다 :)

