Devlos Archive

[Week 6 - K8S Deploy] Kubespray-offline [Air-Gap Environment] (26.02.08)

2026. 2. 15. 00:52

Introduction

Hello! This is Devlos.

This post summarizes the Week 6 topic of the K8S Deploy study hosted by the CloudNet@ community: "Kubespray offline installation and building an Air-Gap environment."

In the lab environment (an Admin server plus two k8s nodes), we use the kubespray-offline tooling to pre-download the required binaries and container images, build an offline package repository (RPM/DEB), a PyPI mirror, and a private container registry, and then walk through the entire process of deploying a Kubernetes cluster in a fully isolated environment.

Starting with the core services a closed network needs (DNS, NTP, a package repository, a container registry, and so on), we cover offline deployment techniques you can apply directly in practice, including using a Helm OCI registry and troubleshooting.


Introduction to the Air-Gapped Environment

For this hands-on, we build the network needed to deploy a Kubernetes cluster in an Air-Gap environment.


  • Internal network: the closed network environment
  • Admin Server: a central management server with multiple roles
    • DNS server: internal domain name resolution
    • Internet Gateway: manages external connectivity
    • NAT Gateway: network address translation
    • Registry: container image store
  • k8s node1 (Master): Kubernetes master node
  • k8s node2 (Worker): Kubernetes worker node

The core services required to deploy a Kubernetes cluster successfully in a closed network are as follows.

  1. Network Gateway (GW, NATGW): network gateway and NAT service
  2. NTP Server: time synchronization service
  3. Helm Artifact Repository: Helm chart repository
  4. Private PyPI Mirror: Python package mirror
  5. DNS Server: domain name resolution service
  6. Local (Mirror) YUM/DNF Repository: OS package mirror
  7. Private Container (Image) Registry: container image registry

Deploying the Air-Gapped Lab Environment

In the Vagrantfile, the admin server's disk is expanded to 120GB, and two k8s-node VMs are provisioned to act as the Kubernetes controller and worker nodes.

Vagrantfile

# Base Image  https://portal.cloud.hashicorp.com/vagrant/discover/bento/rockylinux-10.0
BOX_IMAGE = "bento/rockylinux-10.0" # "bento/rockylinux-9"
BOX_VERSION = "202510.26.0"
N = 2 # max number of nodes

Vagrant.configure("2") do |config|

# Nodes 
  (1..N).each do |i|
    config.vm.define "k8s-node#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Kubespary-offline-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-node#{i}"
        vb.cpus = 4
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.1#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "init_cfg.sh" , args: [ N ]
    end
  end

# Admin Node
    config.vm.define "admin" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Kubespary-offline-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "admin"
        vb.cpus = 4
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "admin"
      subconfig.vm.network "private_network", ip: "192.168.10.10"
      subconfig.vm.network "forwarded_port", guest: 22, host: "60000", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.disk :disk, size: "120GB", primary: true # https://developer.hashicorp.com/vagrant/docs/disks/usage
      subconfig.vm.provision "shell", path: "admin.sh" , args: [ N ]
    end

end
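The forwarded_port lines above build each host port by interpolating the node index into the string "6000#{i}", so every VM gets a distinct SSH port on the host. A standalone sketch of the resulting mapping (node count hard-coded to 2, matching N above):

```shell
# reproduce the Vagrantfile's "6000#{i}" host-port pattern for the two nodes
for i in 1 2; do
  echo "k8s-node$i ssh -> host port 6000$i"
done
# the admin VM uses the fixed port 60000
```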

admin.sh is the initial-setup script that configures the Admin server for the closed network.
Its main functions are as follows.

  • Basic system settings: time zone, disabling firewalld/SELinux
  • Network configuration: local DNS (hosts file), enabling IP forwarding, routing
  • SSH environment: SSH key generation and distribution, enabling password authentication
  • Required packages: Python, Git, Helm, K9s, and other Kubernetes management tools
  • Disk expansion: automatically grows the system disk

admin.sh

#!/usr/bin/env bash

echo ">>>> Initial Config Start <<<<"


echo "[TASK 1] Change Timezone and Enable NTP"
timedatectl set-local-rtc 0                           # keep the hardware clock in UTC
timedatectl set-timezone Asia/Seoul                   # set the system time zone to Seoul


echo "[TASK 2] Disable firewalld and selinux"
systemctl disable --now firewalld >/dev/null 2>&1      # disable and stop firewalld
setenforce 0                                           # switch SELinux to Permissive (until reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config  # make the SELinux change permanent


echo "[TASK 3] Setting Local DNS Using Hosts file"
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts            # remove the default loopback entries from the hosts file
echo "192.168.10.10 admin" >> /etc/hosts               # register the admin server
for (( i=1; i<=$1; i++  )); do echo "192.168.10.1$i k8s-node$i" >> /etc/hosts; done  # register each k8s node


echo "[TASK 4] Delete default routing - enp0s9 NIC" # requires setenforce 0
nmcli connection modify enp0s9 ipv4.never-default yes # never use enp0s9 as the default route
nmcli connection up enp0s9 >/dev/null 2>&1           # re-activate enp0s9


echo "[TASK 5] Config net.ipv4.ip_forward"
cat << EOF > /etc/sysctl.d/99-ipforward.conf
# enable IP forwarding
net.ipv4.ip_forward = 1
EOF
sysctl --system  >/dev/null 2>&1    # re-apply all sysctl settings


echo "[TASK 6] Install packages"
dnf install -y python3-pip git sshpass cloud-utils-growpart >/dev/null 2>&1 # install required packages


echo "[TASK 7] Install Helm"
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | DESIRED_VERSION=v3.20.0 bash >/dev/null 2>&1 # install Helm


echo "[TASK 8] Increase Disk Size"
growpart /dev/sda 3 >/dev/null 2>&1 # grow the /dev/sda3 partition
xfs_growfs /dev/sda3 >/dev/null 2>&1 # grow the xfs filesystem


echo "[TASK 9] Setting SSHD"
echo "root:qwe123" | chpasswd                      # set the root password

cat << EOF >> /etc/ssh/sshd_config                 # sshd config change: allow root login and password auth
PermitRootLogin yes
PasswordAuthentication yes
EOF
systemctl restart sshd >/dev/null 2>&1            # restart the SSH server


echo "[TASK 10] Setting SSH Key"
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa >/dev/null 2>&1                           # generate an ssh key for root
sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@192.168.10.10  >/dev/null 2>&1  # register the key on itself
ssh -o StrictHostKeyChecking=no root@admin-lb hostname >/dev/null 2>&1                # (optional) check connectivity to admin-lb
for (( i=1; i<=$1; i++  )); do sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@192.168.10.1$i >/dev/null 2>&1 ; done  # copy the key to each k8s node
for (( i=1; i<=$1; i++  )); do sshpass -p 'qwe123' ssh -o StrictHostKeyChecking=no root@k8s-node$i hostname >/dev/null 2>&1 ; done    # test ssh to each k8s node


echo "[TASK 11] Install K9s"
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi         # pick the binary for the CPU architecture
wget -P /tmp https://github.com/derailed/k9s/releases/latest/download/k9s_linux_${CLI_ARCH}.tar.gz  >/dev/null 2>&1  # download the K9s binary
tar -xzf /tmp/k9s_linux_${CLI_ARCH}.tar.gz -C /tmp                # extract
chown root:root /tmp/k9s                                          # change owner
mv /tmp/k9s /usr/local/bin/                                       # move into the PATH
chmod +x /usr/local/bin/k9s                                       # make executable


echo "[TASK 12] ETC"
echo "sudo su -" >> /home/vagrant/.bashrc                         # auto-switch to root when the vagrant user logs in


echo ">>>> Initial Config End <<<<"

init_cfg.sh Script Features

init_cfg.sh is the initial-setup script for the Kubernetes nodes (Master/Worker).

  • Basic system settings: time zone, disabling firewalld/SELinux
  • Kubernetes prerequisites: disabling SWAP, loading kernel modules, network settings
  • Network configuration: local DNS (hosts file), removing the default route
  • SSH environment: SSH access settings, enabling password authentication
  • Required packages: Python, Git, and other basic tools

init_cfg.sh

#!/usr/bin/env bash

echo ">>>> Initial Config Start <<<<"


echo "[TASK 1] Change Timezone and Enable NTP"
timedatectl set-local-rtc 0                    # keep the hardware clock in UTC
timedatectl set-timezone Asia/Seoul            # set the time zone to Seoul


echo "[TASK 2] Disable firewalld and selinux"
systemctl disable --now firewalld >/dev/null 2>&1    # disable and stop the firewall
setenforce 0                                          # switch SELinux to Permissive (until reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config  # make the change permanent


echo "[TASK 3] Disable and turn off SWAP & Delete swap partitions"
swapoff -a                                     # disable all SWAP (Kubernetes requirement)
sed -i '/swap/d' /etc/fstab                    # remove SWAP entries from fstab
sfdisk --delete /dev/sda 2 >/dev/null 2>&1     # delete the SWAP partition
partprobe /dev/sda >/dev/null 2>&1             # re-read the partition table


echo "[TASK 4] Config kernel & module"
# kernel modules required by Kubernetes:
#   overlay      - container overlay filesystem
#   br_netfilter - bridge network filtering
#   vxlan        - VXLAN tunneling (for CNI plugins)
# NOTE: modules-load.d entries must be bare module names, so the comments stay out of the heredoc
cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
vxlan
EOF
modprobe overlay >/dev/null 2>&1               # load the overlay module now
modprobe br_netfilter >/dev/null 2>&1          # load the br_netfilter module now

# kernel parameters for Kubernetes networking:
#   bridge-nf-call-iptables  - pass bridged IPv4 traffic through iptables
#   bridge-nf-call-ip6tables - pass bridged IPv6 traffic through ip6tables
#   ip_forward               - enable IP forwarding
cat << EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system >/dev/null 2>&1                # apply the kernel parameters


echo "[TASK 5] Setting Local DNS Using Hosts file"
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts    # remove the default localhost entries
echo "192.168.10.10 admin" >> /etc/hosts       # add the admin server
# dynamically add the k8s-node hosts ($1 is the node count)
for (( i=1; i<=$1; i++  )); do echo "192.168.10.1$i k8s-node$i" >> /etc/hosts; done


echo "[TASK 6] Delete default routing - enp0s9 NIC" # requires setenforce 0
nmcli connection modify enp0s9 ipv4.never-default yes  # exclude enp0s9 from the default route
nmcli connection up enp0s9 >/dev/null 2>&1             # re-activate the connection


echo "[TASK 7] Setting SSHD"
echo "root:qwe123" | chpasswd                  # set the root password

# allow root login and password authentication
# NOTE: sshd_config does not allow trailing comments, so the options stay bare
cat << EOF >> /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
EOF
systemctl restart sshd >/dev/null 2>&1         # restart the SSH service


echo "[TASK 8] Install packages"
dnf install -y python3-pip git >/dev/null 2>&1  # install Python pip and Git


echo "[TASK 9] ETC"
echo "sudo su -" >> /home/vagrant/.bashrc      # auto-switch to root when the vagrant user logs in


echo ">>>> Initial Config End <<<<"

The setup commands are as follows.

mkdir k8s-offline
cd k8s-offline

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-kubespary-offline/Vagrantfile
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-kubespary-offline/admin.sh
curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-kubespary-offline/init_cfg.sh

vagrant up
vagrant status

# connect to admin, k8s-node1, and k8s-node2; if the host OS has no sshpass, ssh in as root and enter the password qwe123
sshpass -p 'qwe123' ssh root@192.168.10.10 # ssh root@192.168.10.10
sshpass -p 'qwe123' ssh root@192.168.10.11 # ssh root@192.168.10.11
sshpass -p 'qwe123' ssh root@192.168.10.12 # ssh root@192.168.10.12

Network Gateway (IGW, NATGW)

################################################################
# [k8s-node] Basic network setup: bring enp0s8 down, default route via enp0s9 → verify external connectivity
################################################################

nmcli connection down enp0s8
nmcli connection modify enp0s8 connection.autoconnect no
nmcli connection modify enp0s9 +ipv4.routes "0.0.0.0/0 192.168.10.10 200"
nmcli connection up enp0s9
# Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
ip route

# k8s-node1:~# ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.11 metric 100 

# k8s-node2:~# ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.12 metric 100 

################################################################
# [k8s-node] Cut off external connectivity
################################################################
# bring enp0s8 down: external traffic stops immediately from this point
nmcli connection down enp0s8

# check enp0s8: its assigned IP is removed and the external route is gone
cat /etc/NetworkManager/system-connections/enp0s8.nmconnection
ip addr show enp0s8
# 2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
#     link/ether 08:00:27:90:ea:eb brd ff:ff:ff:ff:ff:ff
#     altname enx08002790eaeb

ip route
# k8s-node1:~# ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.11 metric 100 

# k8s-node2:~# ip route
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.12 metric 100 
# 192.168.1

# to keep the connection down even after reboot
nmcli connection modify enp0s8 connection.autoconnect no
cat /etc/NetworkManager/system-connections/enp0s8.nmconnection
[connection]
id=enp0s8
uuid=7f94e839-e070-4bfe-9330-07090381d89f
type=ethernet
autoconnect=false
...

# add a default route via enp0s9 for external access, with metric 200
nmcli connection modify enp0s9 +ipv4.routes "0.0.0.0/0 192.168.10.10 200"
cat /etc/NetworkManager/system-connections/enp0s9.nmconnection
...
[ipv4]
address1=192.168.10.11/24
method=manual
never-default=true              # nmcli connection modify enp0s9 ipv4.never-default yes
route1=0.0.0.0/0,192.168.10.10
...

# apply the settings
nmcli connection up enp0s9
# Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)
ip route
# k8s-node1:~# ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.11 metric 100 
# k8s-node2:~# ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.12 metric 100 
# check external connectivity

ping -w 1 -W 1 8.8.8.8
# PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

# --- 8.8.8.8 ping statistics ---
# 1 packets transmitted, 0 received, 100% packet loss, time 0ms
curl www.google.com
# curl: (6) Could not resolve host: www.google.com

# check the DNS nameserver
cat /etc/resolv.conf
# Generated by NetworkManager
cat << EOF > /etc/resolv.conf
nameserver 168.126.63.1
nameserver 8.8.8.8
EOF
curl www.google.com
# even with nameservers configured explicitly, traffic still fails

# (reference) to remove the default route:
# nmcli connection modify enp0s9 -ipv4.routes "0.0.0.0/0 192.168.10.10 200"
# nmcli connection up enp0s9
# ip route


################################################################
# [admin] Configure the Admin server as a NAT GW
################################################################
# routing setup: already configured by admin.sh
sysctl -w net.ipv4.ip_forward=1  # sysctl net.ipv4.ip_forward
cat <<EOF | tee /etc/sysctl.d/99-ipforward.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system

# NAT rule
iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE
# after the NAT rule is added, connectivity works

iptables -t nat -S
# -P PREROUTING ACCEPT
# -P INPUT ACCEPT
# -P OUTPUT ACCEPT
# -P POSTROUTING ACCEPT
# -A POSTROUTING -o enp0s8 -j MASQUERADE

iptables -t nat -L -n -v
...
Chain POSTROUTING (policy ACCEPT 1 packets, 120 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   168 MASQUERADE  all  --  *      enp0s8  0.0.0.0/0            0.0.0.0/0 
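
One caveat: the MASQUERADE rule added above exists only in the running kernel and is lost on reboot. A hedged sketch of one common way to persist it on Rocky, assuming the iptables-services package is reachable from the mirror (the package name and save path are the usual EL defaults, not something exercised in this lab):

```shell
# persist the NAT rule across reboots (sketch; assumes iptables-services is installable)
dnf install -y iptables-services
iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE
service iptables save        # snapshots the current rules to /etc/sysconfig/iptables
systemctl enable iptables    # reloads the saved rules at boot
```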

################################
# [admin] ⇒ remove the iptables NAT rule again, cutting the internal k8s-nodes off from the internet
################################
iptables -t nat -D POSTROUTING -o enp0s8 -j MASQUERADE

################################################################
# [admin] ⇒ NTP server setup
################################################################
# check the current NTP sync configuration and status
systemctl status chronyd.service --no-pager
# ● chronyd.service - NTP client/server
#      Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
#      Active: active (running) since Wed 2026-02-11 22:03:12 KST; 31min ago
#  Invocation: d06309e9517b40fc92a5233b01e45663
#        Docs: man:chronyd(8)
grep "^[^#]" /etc/chrony.conf
pool 2.rocky.pool.ntp.org iburst # external time source; iburst sends 4-8 packets in quick succession at startup for fast initial sync
sourcedir /run/chrony-dhcp       # directory holding NTP servers learned from DHCP, so servers can be added dynamically per network
driftfile /var/lib/chrony/drift  # records how much the local clock (crystal oscillator) drifts from real time, so the error can be corrected even when the network is down
makestep 1.0 3                   # step the clock instead of slewing: if the offset exceeds 1.0s during the first 3 updates, jump the time immediately
rtcsync                          # periodically copy the synchronized system time to the hardware clock (RTC), so time survives reboots
ntsdumpdir /var/lib/chrony       # where NTS (Network Time Security) keys are stored (for secure connections)
logdir /var/log/chrony

# shows which NTP servers chrony knows about and which one it currently syncs from
chronyc sources -v
dig +short 2.rocky.pool.ntp.org

# chrony configuration
cp /etc/chrony.conf /etc/chrony.bak
cat << EOF > /etc/chrony.conf
# external public NTP servers (Korean pool)
server pool.ntp.org iburst
server kr.pool.ntp.org iburst

# allow the internal network (192.168.10.0/24) to sync time from this server
allow 192.168.10.0/24

# keep serving time from the local clock even when the uplink is down (optional)
local stratum 10

# logs
logdir /var/log/chrony
EOF
systemctl restart chronyd.service
systemctl status chronyd.service --no-pager

# check status
timedatectl status
#                Local time: Wed 2026-02-11 22:36:16 KST
#            Universal time: Wed 2026-02-11 13:36:16 UTC
#                  RTC time: Wed 2026-02-11 14:09:31
#                 Time zone: Asia/Seoul (KST, +0900)
# System clock synchronized: yes
#               NTP service: active
#           RTC in local TZ: no
chronyc sources -v


################################################################
# [k8s-node] NTP client setup
################################################################
# check status
timedatectl status
chronyc sources -v

# chrony configuration
cp /etc/chrony.conf /etc/chrony.bak
cat << EOF > /etc/chrony.conf
server 192.168.10.10 iburst
logdir /var/log/chrony
EOF
systemctl restart chronyd.service
systemctl status chronyd.service --no-pager
# ● chronyd.service - NTP client/server
#      Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
#      Active: active (running) since Wed 2026-02-11 22:37:57 KST; 45ms ago
#  Invocation: 39c962f585994a78b5e708a3461a917f
#        Docs: man:chronyd(8)
#              man:chrony.conf(5)
#     Process: 5754 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
#    Main PID: 5756 (chronyd)
#       Tasks: 1 (limit: 12336)
#      Memory: 992K (peak: 2.8M)
#         CPU: 23ms
#      CGroup: /system.slice/chronyd.service
#              └─5756 /usr/sbin/chronyd -F 2
# check status
timedatectl status
#                Local time: Wed 2026-02-11 22:38:19 KST
#            Universal time: Wed 2026-02-11 13:38:19 UTC
#                  RTC time: Wed 2026-02-11 14:11:34
#                 Time zone: Asia/Seoul (KST, +0900)
# System clock synchronized: no
#               NTP service: active
#           RTC in local TZ: no
chronyc sources -v
# MS Name/IP address         Stratum Poll Reach LastRx Last sample
# ===============================================================================
# ^* admin                         0   7     0     -     +0ns[   +0ns] +/-    0ns

# [admin-lb] check the clients using this NTP server
chronyc clients
# Hostname                      NTP   Drop Int IntL Last     Cmd   Drop Int  Last
# ===============================================================================
# k8s-node1                       3      0   1   -     1       0      0   -     -
# k8s-node2                       2      0   1   -     0       0      0   -     -


################################################################
# [admin] DNS server (bind) setup
################################################################
# install bind
dnf install -y bind bind-utils

# configure named.conf
cp /etc/named.conf /etc/named.bak
cat <<EOF > /etc/named.conf
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        secroots-file   "/var/named/data/named.secroots";
        recursing-file  "/var/named/data/named.recursing";
        allow-query     { 127.0.0.1; 192.168.10.0/24; };
        allow-recursion { 127.0.0.1; 192.168.10.0/24; };

        forwarders {
                168.126.63.1;
                8.8.8.8;
        };

        recursion yes;

        dnssec-validation auto;  # https://sirzzang.github.io/kubernetes/Kubernetes-Kubespray-08-01-06/#troubleshooting-dnssec-%EA%B2%80%EC%A6%9D-%EC%8B%A4%ED%8C%A8

        managed-keys-directory "/var/named/dynamic";
        geoip-directory "/usr/share/GeoIP";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";

        include "/etc/crypto-policies/back-ends/bind.config";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
EOF


# syntax check (no output means the config is valid)
named-checkconf /etc/named.conf

# enable and start the service
systemctl enable --now named

# set the DMZ (admin) server's own DNS to point at itself
cat /etc/resolv.conf
echo "nameserver 192.168.10.10" > /etc/resolv.conf
# search Davolink
# nameserver 203.248.252.2
# nameserver 164.124.101.2

# verify
dig +short google.com @192.168.10.10
dig +short google.com

################################
# [k8s-nodes] DNS client setup: stop NetworkManager from managing DNS
################################
# turn off DNS management in NetworkManager
cat /etc/NetworkManager/conf.d/99-dns-none.conf
cat << EOF > /etc/NetworkManager/conf.d/99-dns-none.conf
[main]
dns=none
EOF
systemctl restart NetworkManager

# set the DNS server
echo "nameserver 192.168.10.10" > /etc/resolv.conf

# verify
dig +short google.com @192.168.10.10 # even without internet access, domain queries work
# 142.250.183.110
dig +short google.com
# 142.250.183.110

################################################################
# Local (Mirror) YUM/DNF Repository - optional: kubespray's offline mode already covers this
################################################################

################################
# [admin] Linux package repository, ~12 minutes: reposync + createrepo + nginx
################################
# install packages
dnf install -y dnf-plugins-core createrepo nginx


# repository sync (reposync): pull packages from the upstream repos (BaseOS, AppStream, etc.) into a local directory
## create the mirror directory
mkdir -p /data/repos/rocky/10
cd /data/repos/rocky/10

## sync BaseOS, AppStream, Extras
# check the Rocky 10 repo ids
dnf repolist
# repo id                                              repo name
# appstream                                            Rocky Linux 10 - AppStream
# baseos                                               Rocky Linux 10 - BaseOS
# extras                                               Rocky Linux 10 - Extras

# sync specific repos (e.g. baseos, appstream)
# --download-metadata also fetches the upstream metadata: the index files that let dnf recognize the downloaded packages

## baseos: ~3 minutes, 4.8G
dnf reposync --repoid=baseos    --download-metadata -p /data/repos/rocky/10 # download locally
# ...  
# (1472/1474): zlib-ng-compat-2.2.3-2.el10.aarch64. 2.7 MB/s |  64 kB     00:00    
# (1473/1474): zsh-5.9-15.el10.aarch64.rpm           13 MB/s | 3.3 MB     00:00    
# (1474/1474): zstd-1.5.5-9.el10.aarch64.rpm        1.7 MB/s | 453 kB     00:00  
du -sh /data/repos/rocky/10/baseos/
# 6.2G    /data/repos/rocky/10/baseos/

## (reference) inspect the Metadata files that act as the brain of a YUM/DNF repository
ls -l /data/repos/rocky/10/baseos/repodata/
# total 30836
# -rw-r--r--. 1 root root    62360 Feb 11 22:50 0fb567b1f4c5f9bb81eed3bafc9f40b252558508fcd362a7fdd6b5e1f4c85e52-comps-BaseOS.aarch64.xml.xz
# -rw-r--r--. 1 root root 10561544 Feb 11 22:50 1b34bb7357df23636e2683908e763013b4918a245739769642ebf900f077201d-primary.sqlite.xz
# -rw-r--r--. 1 root root   343464 Feb 11 22:50 1de704afeee6d14017ee0a20de05455359ebcb291b70b5fd93020b1f1df76b23-other.sqlite.xz
# -rw-r--r--. 1 root root  1631654 Feb 11 22:50 3a1d90c1347a5d6d179d8a779da81b8326d1ca1a7111cfc4bf51d7cef9f831f5-filelists.xml.gz
# -rw-r--r--. 1 root root  1082008 Feb 11 22:50 62bf9ca6e90e9fbaf53c10eeb8b08b9d8d2cd0d2ebb04d6e488e9526a19c9982-filelists.sqlite.xz
# -rw-r--r--. 1 root root  1624517 Feb 11 22:50 9027e315287283bc4e95bd862e0e09012e4e4aa8e93916708e955731d5462f6d-other.xml.gz
# -rw-r--r--. 1 root root   308363 Feb 11 22:50 a55eb7089d714a338318e5acf8e9ff8a682433816e0608a3e0eeefc230153a0e-comps-BaseOS.aarch64.xml
# -rw-r--r--. 1 root root   111103 Feb 11 22:50 acf4170ee10547bb5dd491406d99b67c32bf7b3d5229928dc3c58aa03b13353a-updateinfo.xml.gz
# -rw-r--r--. 1 root root 15820685 Feb 11 22:50 f49f5be57837217b638f20dbb776b3b1dd5e92daaf3bb0e20c67abf38d2c5e40-primary.xml.gz
# -rw-r--r--. 1 root root     4449 Feb 11 22:50 repomd.xml
cat /data/repos/rocky/10/baseos/repodata/repomd.xml # the repository's master index: it lists the location and checksum (hash) of every other metadata file, so clients download it first
# <?xml version="1.0" encoding="UTF-8"?>
# <repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
#   <revision>10</revision>
#   <tags>
#     <distro cpeid="cpe:/o:rocky:rocky:10">Rocky Linux 10</distro>
#   </tags>
#   <data type="primary">
#     <checksum type="sha256">f49f5be57837217b638f20dbb776b3b1dd5e92daaf3bb0e20c67abf38d2c5e40</checksum>
#     <open-checksum type="sha256">82d38dc9ad7c0db87413a65a9a6f6c3ad8a8651cdefcb7828a26e4288af6637c</open-checksum>
#     <location href="repodata/f49f5be57837217b638f20dbb776b3b1dd5e92daaf3bb0e20c67abf38d2c5e40-primary.xml.gz"/>
#     <timestamp>1770488169</timestamp>
#     <size>15820685</size>
#     <open-size>115690744</open-size>
#   </data>
#   <data type="filelists">
#     <checksum type="sha256">3a1d90c1347a5d6d179d8a779da81b8326d1ca1a7111cfc4bf51d7cef9f831f5</checksum>
#     <open-checksum type="sha256">a67acaa86bad7a0dc7ddd5c80cae5cdf1242d0edfe78c17d7d4854820b961840</open-checksum>
#     <location href="repodata/3a1d90c1347a5d6d179d8a779da81b8326d1ca1a7111cfc4bf51d7cef9f831f5-filelists.xml.gz"/>
#     <timestamp>1770488169</timestamp>
#     <size>1631654</size>
#     <open-size>21511816</open-size>
#   </data>
#   <data type="other">
#     <checksum type="sha256">9027e315287283bc4e95bd862e0e09012e4e4aa8e93916708e955731d5462f6d</checksum>
#     <open-checksum type="sha256">d97a6faa184813af47dda95dd336b888e18d91b13951655b843b72b6c1149c26</open-checksum>
#     <location href="repodata/9027e315287283bc4e95bd862e0e09012e4e4aa8e93916708e955731d5462f6d-other.xml.gz"/>
#     <timestamp>1770488169</timestamp>
#     <size>1624517</size>
#     <open-size>14384037</open-size>
#   </data>
#   <data type="primary_db">
#     <checksum type="sha256">1b34bb7357df23636e2683908e763013b4918a245739769642ebf900f077201d</checksum>
#     <open-checksum type="sha256">8f9e7bedd4060846ba2884d6f9b6dc585d70cd1a5177956bef61cb5173376d36</open-checksum>
#     <location href="repodata/1b34bb7357df23636e2683908e763013b4918a245739769642ebf900f077201d-primary.sqlite.xz"/>
#     <timestamp>1770488221</timestamp>
#     <size>10561544</size>
#     <open-size>144027648</open-size>
#     <database_version>10</database_version>
#   </data>
#   <data type="filelists_db">
#     <checksum type="sha256">62bf9ca6e90e9fbaf53c10eeb8b08b9d8d2cd0d2ebb04d6e488e9526a19c9982</checksum>
#     <open-checksum type="sha256">028f71fa1ed528030d4fa4c8956ef60ea1b8a7dccc7ff1648c83d37690492734</open-checksum>
#     <location href="repodata/62bf9ca6e90e9fbaf53c10eeb8b08b9d8d2cd0d2ebb04d6e488e9526a19c9982-filelists.sqlite.xz"/>
#     <timestamp>1770488174</timestamp>
#     <size>1082008</size>
#     <open-size>11321344</open-size>
#     <database_version>10</database_version>
#   </data>
#   <data type="other_db">
#     <checksum type="sha256">1de704afeee6d14017ee0a20de05455359ebcb291b70b5fd93020b1f1df76b23</checksum>
#     <open-checksum type="sha256">f148df792c63a83051b341a259339e0c39b08559e76767ffe61fde958a44030c</open-checksum>
#     <location href="repodata/1de704afeee6d14017ee0a20de05455359ebcb291b70b5fd93020b1f1df76b23-other.sqlite.xz"/>
#     <timestamp>1770488174</timestamp>
#     <size>343464</size>
#     <open-size>15183872</open-size>
#     <database_version>10</database_version>
#   </data>
#   <data type="group">
#     <checksum type="sha256">a55eb7089d714a338318e5acf8e9ff8a682433816e0608a3e0eeefc230153a0e</checksum>
#     <location href="repodata/a55eb7089d714a338318e5acf8e9ff8a682433816e0608a3e0eeefc230153a0e-comps-BaseOS.aarch64.xml"/>
#     <timestamp>1770488147</timestamp>
#     <size>308363</size>
#   </data>
#   <data type="group_xz">
#     <checksum type="sha256">0fb567b1f4c5f9bb81eed3bafc9f40b252558508fcd362a7fdd6b5e1f4c85e52</checksum>
#     <open-checksum type="sha256">a55eb7089d714a338318e5acf8e9ff8a682433816e0608a3e0eeefc230153a0e</open-checksum>
#     <location href="repodata/0fb567b1f4c5f9bb81eed3bafc9f40b252558508fcd362a7fdd6b5e1f4c85e52-comps-BaseOS.aarch64.xml.xz"/>
#     <timestamp>1770488169</timestamp>
#     <size>62360</size>
#     <open-size>308363</open-size>
#   </data>
#   <data type="updateinfo">
#     <checksum type="sha256">acf4170ee10547bb5dd491406d99b67c32bf7b3d5229928dc3c58aa03b13353a</checksum>
#     <open-checksum type="sha256">6a4ae116304f66077264269fc41f3a68df34958eb19e50faed039fc6e5c630a4</open-checksum>
#     <location href="repodata/acf4170ee10547bb5dd491406d99b67c32bf7b3d5229928dc3c58aa03b13353a-updateinfo.xml.gz"/>
#     <timestamp>1770488945</timestamp>
#     <size>111103</size>
#     <open-size>812991</open-size>
#   </data>
# </repomd>

## appstream: ~9 minutes, 13G
dnf reposync --repoid=appstream --download-metadata -p /data/repos/rocky/10 # download locally
# ... 
# (5216/5219): zenity-4.0.1-5.el10.aarch64.rpm      6.6 MB/s | 3.3 MB     00:00    
# (5217/5219): zram-generator-1.1.2-14.el10.aarch64 4.5 MB/s | 399 kB     00:00    
# (5218/5219): zziplib-0.13.78-2.el10.aarch64.rpm   1.5 MB/s |  89 kB     00:00    
# (5219/5219): zziplib-utils-0.13.78-2.el10.aarch64 1.8 MB/s |  45 kB     00:00 
du -sh /data/repos/rocky/10/appstream/
# 14G     /data/repos/rocky/10/appstream/

## extras: finishes quickly, 67M
dnf reposync --repoid=extras    --download-metadata -p /data/repos/rocky/10 # download locally
# ...
# (24/26): update-motd-0.1-2.el10.noarch.rpm        197 kB/s |  10 kB     00:00    
# (25/26): rpmfusion-free-release-tainted-10-1.noar 136 kB/s | 7.3 kB     00:00    
# (26/26): rocky-backgrounds-extras-100.4-7.el10.no  12 MB/s |  63 MB     00:05  
du -sh /data/repos/rocky/10/extras/
# 67M     /data/repos/rocky/10/extras/
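
The createrepo package installed at the start of this section is only needed when a repo is synced without --download-metadata; in that case the dnf index has to be generated locally from the downloaded RPMs. A sketch (not required for the flow above, since the upstream metadata was mirrored):

```shell
# build repodata/ locally when upstream metadata was not mirrored
createrepo_c /data/repos/rocky/10/baseos   # binary name on Rocky; classic docs call it "createrepo"
```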


# internal web server for distribution (nginx)
cat <<EOF > /etc/nginx/conf.d/repos.conf
server {
    listen 80;
    server_name repo-server;

    location /rocky/10/ {
        autoindex on;                 # show directory listings
        autoindex_exact_size off;     # display file sizes in friendly KB/MB/GB units
        autoindex_localtime on;       # show timestamps in server local time
        root /data/repos;
    }
}
EOF
systemctl enable --now nginx
systemctl status nginx.service --no-pager
ss -tnlp | grep nginx
# LISTEN 0      511          0.0.0.0:80        0.0.0.0:*    users:(("nginx",pid=6010,fd=6),("nginx",pid=6009,fd=6),("nginx",pid=6008,fd=6),("nginx",pid=6007,fd=6),("nginx",pid=6006,fd=6))
# LISTEN 0      511             [::]:80           [::]:*    users:(("nginx",pid=6010,fd=7),("nginx",pid=6009,fd=7),("nginx",pid=6008,fd=7),("nginx",pid=6007,fd=7),("nginx",pid=6006,fd=7))

# 접속 테스트
curl http://192.168.10.10/rocky/10/
open http://192.168.10.10/rocky/10/baseos/
# <html>
# <head><title>Index of /rocky/10/baseos/</title></head>
# <body>
# <h1>Index of /rocky/10/baseos/</h1><hr><pre><a href="../">../</a>
# <a href="Packages/">Packages/</a>                                          11-Feb-2026 22:50       -
# <a href="repodata/">repodata/</a>                                          11-Feb-2026 22:50       -
# <a href="mirrorlist">mirrorlist</a>                                         11-Feb-2026 22:50    2601
# </pre><hr></body>
# </html>
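미러 디렉터리가 제대로 동기화됐는지는 각 repo의 repodata/repomd.xml 존재 여부로 간단히 점검할 수 있습니다. 아래는 그 점검을 함수로 정리한 스케치로, 경로와 repo 이름은 본 실습 환경 기준의 예시(가정)입니다.

```shell
#!/bin/bash
# 동기화 점검 스케치: repodata/repomd.xml 존재 여부로 각 repo 상태를 출력
check_repos() {
  local base="$1"; shift          # 미러 루트 디렉터리
  local missing=0
  local repo
  for repo in "$@"; do
    if [ -f "${base}/${repo}/repodata/repomd.xml" ]; then
      echo "OK: ${repo}"
    else
      echo "MISSING: ${repo}"
      missing=$((missing + 1))
    fi
  done
  return "$missing"               # 누락 개수를 종료 코드로 반환
}

check_repos /data/repos/rocky/10 baseos appstream extras \
  || echo "일부 repo에 repomd.xml 이 없습니다. reposync 결과를 다시 확인하세요."
```

reposync에 `--download-metadata` 옵션을 빠뜨리면 repodata가 없어 클라이언트 dnf가 실패하므로, 웹 서버에 올리기 전에 이런 식으로 한 번 확인해 두면 좋습니다.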

################################
# [k8s-node] 인터넷이 안 되는 내부 서버에서 admin Linux 패키지 저장소를 바라보게 설정
################################
# 기존 레포 설정 백업
tree /etc/yum.repos.d/
# /etc/yum.repos.d/
# ├── rocky-addons.repo
# ├── rocky-devel.repo
# ├── rocky-extras.repo
# └── rocky.repo
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/

# 로컬 레포 파일 생성: 서버 IP는 Repo 서버의 IP로 수정
cat <<EOF > /etc/yum.repos.d/internal-rocky.repo
[internal-baseos]
name=Internal Rocky 10 BaseOS
baseurl=http://192.168.10.10/rocky/10/baseos
enabled=1
gpgcheck=0

[internal-appstream]
name=Internal Rocky 10 AppStream
baseurl=http://192.168.10.10/rocky/10/appstream
enabled=1
gpgcheck=0

[internal-extras]
name=Internal Rocky 10 Extras
baseurl=http://192.168.10.10/rocky/10/extras
enabled=1
gpgcheck=0
EOF

# 내부 서버 repo 정상 동작 확인 : 클라이언트에서 캐시를 비우고 목록을 불러옵니다.
dnf clean all
# 18 files removed
dnf repolist
# repo id                              repo name
# internal-appstream                   Internal Rocky 10 AppStream
# internal-baseos                      Internal Rocky 10 BaseOS
# internal-extras                      Internal Rocky 10 Extras
dnf makecache
# Internal Rocky 10 BaseOS                          151 MB/s |  15 MB     00:00    
# Internal Rocky 10 AppStream                        40 MB/s | 2.1 MB     00:00    
# Internal Rocky 10 Extras                          254 kB/s | 6.2 kB     00:00    
# Metadata cache created.

# 패키지 인스톨 정상 실행 확인
dnf install -y nfs-utils
# Last metadata expiration check: 0:00:33 ago on Wed 11 Feb 2026 11:38:16 PM KST.
# Package nfs-utils-1:2.8.2-3.el10.aarch64 is already installed.
# Dependencies resolved.
# ==================================================================================
#  Package           Architecture  Version              Repository             Size
# ==================================================================================
# Upgrading:
#  libnfsidmap       aarch64       1:2.8.3-0.el10       internal-baseos        61 k
#  nfs-utils         aarch64       1:2.8.3-0.el10       internal-baseos       476 k

# Transaction Summary
# ==================================================================================
# Upgrade  2 Packages

# Total download size: 537 k
# Downloading Packages:
# (1/2): libnfsidmap-2.8.3-0.el10.aarch64.rpm        12 MB/s |  61 kB     00:00    
# (2/2): nfs-utils-2.8.3-0.el10.aarch64.rpm          67 MB/s | 476 kB     00:00    
# ----------------------------------------------------------------------------------
# Total                                              52 MB/s | 537 kB     00:00     
# Running transaction check
# Transaction check succeeded.
# Running transaction test
# Transaction test succeeded.
# Running transaction
#   Preparing        :                                                          1/1 
#   Upgrading        : libnfsidmap-1:2.8.3-0.el10.aarch64                       1/4 
#   Running scriptlet: nfs-utils-1:2.8.3-0.el10.aarch64                         2/4 
#   Upgrading        : nfs-utils-1:2.8.3-0.el10.aarch64                         2/4 
#   Running scriptlet: nfs-utils-1:2.8.3-0.el10.aarch64                         2/4 
#   Running scriptlet: nfs-utils-1:2.8.2-3.el10.aarch64                         3/4 
#   Cleanup          : nfs-utils-1:2.8.2-3.el10.aarch64                         3/4 
#   Running scriptlet: nfs-utils-1:2.8.2-3.el10.aarch64                         3/4 
#   Cleanup          : libnfsidmap-1:2.8.2-3.el10.aarch64                       4/4 
#   Running scriptlet: libnfsidmap-1:2.8.2-3.el10.aarch64                       4/4 

# Upgraded:
#   libnfsidmap-1:2.8.3-0.el10.aarch64       nfs-utils-1:2.8.3-0.el10.aarch64      

# Complete!
## 패키지 정보에 repo 확인
dnf info nfs-utils | grep -i repo
# Repository   : @System
# From repo    : internal-baseos

################################
# [admin] 다음 실습을 위해 삭제 
################################
systemctl disable --now nginx && dnf remove -y nginx

################################################################
# Private Container (Image) Registry - kubespray 오프라인 모드에서 자체 지원하므로 직접 구축하지 않아도 됨
################################################################

################################
# [admin] podman 으로 컨테이너 이미지 저장소 Docker Registry (registry) 기동
################################
# podman 설치 : 기본 설치 되어 있음
dnf install -y podman
dnf info podman | grep repo
# From repo    : appstream

# podman 확인
which podman
# /usr/bin/podman
podman --version
podman info
cat /etc/containers/registries.conf
# unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]
cat /etc/containers/registries.conf.d/000-shortnames.conf

# Registry 이미지 받기
podman pull docker.io/library/registry:latest
# Trying to pull docker.io/library/registry:latest...
# Getting image source signatures
# Copying blob 92c7580d074a done   | 
# Copying blob a447a5de8f4e done   | 
# Copying blob 50b5971fe294 done   | 
# Copying blob 1cc3d49277b7 done   | 
# Copying blob 0a52a06d47e0 done   | 
# Copying config 2f5ec5015b done   | 
# Writing manifest to image destination
# 2f5ec5015badd603680de78accbba6eb3e9146f4d642a7ccef64205e55ac518f
podman images
# REPOSITORY                  TAG         IMAGE ID      CREATED      SIZE
# docker.io/library/registry  latest      2f5ec5015bad  2 weeks ago  57.3 MB

# Registry 데이터 저장 디렉터리 준비
mkdir -p /data/registry
chmod 755 /data/registry

# Docker Registry 컨테이너 실행 (기본, 인증 없음)
podman run -d --name local-registry -p 5000:5000 -v /data/registry:/var/lib/registry --restart=always docker.io/library/registry:latest
# cb9a1b4c69541125c5725447a75a078233ae627e5dd9528380aed700c5cd57a9

# 확인
podman ps
# CONTAINER ID  IMAGE                              COMMAND               CREATED        STATUS        PORTS                   NAMES
# cb9a1b4c6954  docker.io/library/registry:latest  /etc/distribution...  8 seconds ago  Up 9 seconds  0.0.0.0:5000->5000/tcp  local-registry
ss -tnlp | grep 5000
# LISTEN 0      4096         0.0.0.0:5000      0.0.0.0:*    users:(("conmon",pid=6418,fd=5))          

pstree -a
# ...
#   ├─conmon --api-version 1 -ccb9a1b4c69541125c5725447a75a078233ae627e5dd9528
#   │   └─registry serve /etc/distribution/config.yml
#   │       └─6*[{registry}]
# ...

# Registry 정상 동작 확인
curl -s http://localhost:5000/v2/_catalog | jq
# {"repositories":[]}

################################
# [admin] 컨테이너 이미지 저장소 Docker Registry 에 이미지 push
################################
# 이미지 가져오기
podman pull alpine
# Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
# Trying to pull docker.io/library/alpine:latest...
# Writing manifest to image destination
# 1ab49c19c53ebca95c787b482aeda86d1d681f58cdf19278c476bcaf37d96de1

cat /etc/containers/registries.conf.d/000-shortnames.conf | grep alpine
  # "alpine" = "docker.io/library/alpine"

podman images
# REPOSITORY                  TAG         IMAGE ID      CREATED      SIZE
# docker.io/library/registry  latest      2f5ec5015bad  2 weeks ago  57.3 MB
# docker.io/library/alpine    latest      1ab49c19c53e  2 weeks ago  8.99 MB

# tag
podman tag alpine:latest 192.168.10.10:5000/alpine:1.0
podman images
# REPOSITORY                  TAG         IMAGE ID      CREATED      SIZE
# docker.io/library/registry  latest      2f5ec5015bad  2 weeks ago  57.3 MB
# 192.168.10.10:5000/alpine   1.0         1ab49c19c53e  2 weeks ago  8.99 MB
# docker.io/library/alpine    latest      1ab49c19c53e  2 weeks ago  8.99 MB

# 프라이빗 레지스트리에 업로드 : 실패!
podman push 192.168.10.10:5000/alpine:1.0
# Getting image source signatures
# Error: trying to reuse blob sha256:45f3ea5848e8a25ca27718b640a21ffd8c8745d342a24e1d4ddfc8c449b0a724 at destination: pinging container registry 192.168.10.10:5000: Get "https://192.168.10.10:5000/v2/": http: server gave HTTP response to HTTPS client

# 기본적으로 컨테이너 엔진들은 HTTPS를 요구합니다. 내부망에서 HTTP로 테스트하려면 Registry 주소를 '안전하지 않은 저장소'로 등록해야 합니다.
# (참고) registries.conf 는 containers-common 설정이라서, 'podman, skopeo, buildah' 등 전부 동일하게 적용됨.
cp /etc/containers/registries.conf /etc/containers/registries.bak
cat <<EOF >> /etc/containers/registries.conf
[[registry]]
location = "192.168.10.10:5000"
insecure = true
EOF
grep "^[^#]" /etc/containers/registries.conf
# unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]
# short-name-mode = "enforcing"
# [[registry]]
# location = "192.168.10.10:5000"
# insecure = true
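(참고) registries.conf 는 containers-common 계열(podman, skopeo, buildah) 설정입니다. 만약 Docker 엔진을 쓰는 노드라면 /etc/docker/daemon.json 의 insecure-registries 항목이 같은 역할을 합니다. 아래는 Docker 설치 환경을 가정한 설정 조각 예시입니다(적용 후 `systemctl restart docker` 필요).

```json
{
  "insecure-registries": ["192.168.10.10:5000"]
}
```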

# 프라이빗 레지스트리에 업로드 : 성공!
podman push 192.168.10.10:5000/alpine:1.0
# Getting image source signatures
# Copying blob 45f3ea5848e8 done   | 
# Copying config 1ab49c19c5 done   | 
# Writing manifest to image destination


# 업로드된 이미지와 태그 조회
curl -s 192.168.10.10:5000/v2/_catalog | jq
# {
#   "repositories": [
#     "alpine"
#   ]
# }

curl -s 192.168.10.10:5000/v2/alpine/tags/list | jq
# {
#   "name": "alpine",
#   "tags": [
#     "1.0"
#   ]
# }

################################
# [k8s-node] 컨테이너 이미지 pull
################################
# registries.conf 설정
cp /etc/containers/registries.conf /etc/containers/registries.bak
cat <<EOF >> /etc/containers/registries.conf
[[registry]]
location = "192.168.10.10:5000"
insecure = true
EOF
grep "^[^#]" /etc/containers/registries.conf
# unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]
# short-name-mode = "enforcing"
# [[registry]]
# location = "192.168.10.10:5000"
# insecure = true

# 이미지 가져오기
podman pull 192.168.10.10:5000/alpine:1.0
# 1ab49c19c53ebca95c787b482aeda86d1d681f58cdf19278c476bcaf37d96de1
podman images
# REPOSITORY                 TAG         IMAGE ID      CREATED      SIZE
# 192.168.10.10:5000/alpine  1.0         1ab49c19c53e  2 weeks ago  8.98 MB

################################
# [admin] 다음 실습을 위해 삭제 
################################
podman rm -f local-registry

################################
# [admin, k8s-node] 다음 실습을 위해 파일 원복
################################
mv /etc/containers/registries.bak /etc/containers/registries.conf

################################################################
# Private PyPI(Python Package Index) Mirror - kubespray 오프라인 모드에서 자체 지원하므로 직접 구축하지 않아도 됨
################################################################

################################
# [admin] Python 패키지 저장소 구성 : devpi-server 
################################
# devpi-server 설치
## devpi-server : PyPI 미러/사설 패키지 저장소 서버
## devpi-client : devpi 서버에 패키지 업로드/관리
## devpi-web    : 웹 UI (선택)
pip install devpi-server devpi-client devpi-web
# ...
# readme-renderer-44.0 repoze.lru-0.7 ruamel.yaml-0.19.1 soupsieve-2.8.3 strictyaml-1.7.3 translationstring-1.4 typing-extensions-4.15.0 venusian-3.1.1 waitress-3.0.2 webob-1.8.9 zope.deprecation-6.0 zope.interface-8.2
# WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
pip list | grep devpi
# devpi-client              7.2.0
# devpi-common              4.1.1
# devpi-server              6.19.1
# devpi-web                 5.0.1

# 서버 데이터 디렉토리 생성 및 초기화 : Initialize new devpi-server instance.
## --serverdir을 지정하지 않으면 기본값은 ~/.devpi/server 입니다.
devpi-init --serverdir /data/devpi_data
# 2026-02-11 23:46:03,323 INFO  NOCTX Loading node info from /data/devpi_data/.nodeinfo
# 2026-02-11 23:46:03,323 INFO  NOCTX generated uuid: 72aa3df59e644d1fb446f1f7ea57fff8
# 2026-02-11 23:46:03,323 INFO  NOCTX wrote nodeinfo to: /data/devpi_data/.nodeinfo
# 2026-02-11 23:46:03,325 INFO  NOCTX DB: Creating schema
# 2026-02-11 23:46:03,360 INFO  [Wtx-1] setting password for user 'root'
# 2026-02-11 23:46:03,360 INFO  [Wtx-1] created user 'root'
# 2026-02-11 23:46:03,360 INFO  [Wtx-1] created root user
# 2026-02-11 23:46:03,360 INFO  [Wtx-1] created root/pypi index
# 2026-02-11 23:46:03,362 INFO  [Wtx-1] fswriter0: committed at 0

ls -al /data/devpi_data/ #인덱스 관련된 패키지들 저장
# total 28
# drwxr-xr-x. 2 root root    60 Feb 11 23:46 .
# drwxr-xr-x. 5 root root    53 Feb 11 23:46 ..
# -rw-------. 1 root root    72 Feb 11 23:46 .nodeinfo
# -rw-r--r--. 1 root root     1 Feb 11 23:46 .serverversion
# -rw-r--r--. 1 root root 20480 Feb 11 23:46 .sqlite

# devpi 서버 기동 : --host 0.0.0.0은 외부(다른 PC) 접속을 허용하기 위함
## 백그라운드 상시 구동은 systemd 서비스로 등록 설정 할 것
nohup devpi-server --serverdir /data/devpi_data --host 0.0.0.0 --port 3141 > /var/log/devpi.log 2>&1 &

# 확인
ss -tnlp | grep devpi-server
# LISTEN 0      1024         0.0.0.0:3141      0.0.0.0:*    users:(("devpi-server",pid=6710,fd=12))         

tail -f /var/log/devpi.log
# 2026-02-11 23:46:25,948 INFO  NOCTX serving at url: http://0.0.0.0:3141 (might be http://[0.0.0.0]:3141 for IPv6)
# 2026-02-11 23:46:25,948 INFO  NOCTX using 50 threads
# 2026-02-11 23:46:25,948 INFO  NOCTX bug tracker: https://github.com/devpi/devpi/issues
# ...

# 웹 UI 접속
open http://192.168.10.10:3141
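위 주석대로 nohup 대신 systemd 서비스로 상시 구동하려면 아래와 같은 유닛 파일을 사용할 수 있습니다. 최소 구성의 스케치이며, ExecStart 의 바이너리 경로는 환경에 따라 다르므로 `which devpi-server` 결과로 바꿔야 하는 가정값입니다(적용은 `systemctl daemon-reload` 후 `systemctl enable --now devpi`).

```ini
# /etc/systemd/system/devpi.service (예시 스케치)
[Unit]
Description=devpi-server (private PyPI mirror)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/devpi-server --serverdir /data/devpi_data --host 0.0.0.0 --port 3141
Restart=on-failure

[Install]
WantedBy=multi-user.target
```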

################################
# [admin] devpi-server 필요한 패키지 업로드
################################
# 서버 연결
devpi use http://192.168.10.10:3141
# Warning: insecure http host, trusted-host will be set for pip
# using server: http://192.168.10.10:3141/ (not logged in)
# no current index: type 'devpi use -l' to discover indices
# /root/.config/pip/pip.conf: no config file exists
# /root/.config/uv/uv.toml: no config file exists
# /root/.pydistutils.cfg: no config file exists
# /root/.buildout/default.cfg: no config file exists
# always-set-cfg: no

# 로그인 (기본 root 비번은 없음)
## (skip) root 비밀번호 설정 devpi user -m root password=<신규비밀번호>
devpi login root --password ""
# logged in 'root', credentials valid for 10.00 hours


# 폐쇄망에서 쓸 패키지 미리 받아두기
pip download jmespath netaddr -d /tmp/pypi-packages
tree /tmp/pypi-packages/
# /tmp/pypi-packages/
# ├── jmespath-1.1.0-py3-none-any.whl
# └── netaddr-1.3.0-py3-none-any.whl


# 팀 또는 프로젝트용 인덱스 생성 (예: prod)
## 상속(Inheritance): bases=root/pypi 설정을 통해, 외부망이 연결된 환경에서는 자동으로 PyPI에서 캐싱해오고, 폐쇄망에서는 내부 업로드 파일을 우선 조회하도록 구성할 수 있습니다.
## 참고) PyPI 미러 인덱스 생성(캐시용) : devpi index -c pypi-mirror type=mirror mirror_url=https://pypi.org/simple
devpi index -c prod bases=root/pypi
# http://192.168.10.10:3141/root/prod?no_projects=:
#   type=stage
#   bases=root/pypi
#   volatile=True
#   acl_upload=root
#   acl_toxresult_upload=:ANONYMOUS:
#   mirror_whitelist=
#   mirror_whitelist_inheritance=intersection

# devpi 인덱스(저장소) 목록 확인
devpi index -l
# root/prod
# root/pypi

# 특정 인덱스(저장소)에 패키지 있는지 확인
devpi use root/pypi
# Warning: insecure http host, trusted-host will be set for pip
# current devpi index: http://192.168.10.10:3141/root/pypi (logged in as root)
# supported features: push-no-docs, push-only-docs, push-register-project, server-keyvalue-parsing
# /root/.config/pip/pip.conf: no config file exists
# /root/.config/uv/uv.toml: no config file exists
# /root/.pydistutils.cfg: no config file exists
# /root/.buildout/default.cfg: no config file exists
# always-set-cfg: no
devpi list

devpi use root/prod
# Warning: insecure http host, trusted-host will be set for pip
# current devpi index: http://192.168.10.10:3141/root/prod (logged in as root)
# supported features: push-no-docs, push-only-docs, push-register-project, server-keyvalue-parsing
# /root/.config/pip/pip.conf: no config file exists
# /root/.config/uv/uv.toml: no config file exists
# /root/.pydistutils.cfg: no config file exists
# /root/.buildout/default.cfg: no config file exists
devpi list

# devpi 서버 root/prod 인덱스(저장소)에 패키지 업로드
devpi upload /tmp/pypi-packages/*
# file_upload of jmespath-1.1.0-py3-none-any.whl to http://192.168.10.10:3141/root/prod/
# file_upload of netaddr-1.3.0-py3-none-any.whl to http://192.168.10.10:3141/root/prod/

# 업로드 한 패키지 실제 위치 확인
tree /data/devpi_data/+files/
# /data/devpi_data/+files/
# └── root
#     └── prod
#         └── +f
#             ├── a56
#             │   └── 63118de4908c9
#             │       └── jmespath-1.1.0-py3-none-any.whl
#             └── c2c
#                 └── 6a8ebe5554ce3
#                     └── netaddr-1.3.0-py3-none-any.whl

# 업로드 한 패키지 확인
devpi list
# jmespath
# netaddr

################################
# [k8s-node] pip 설정 및 사용
################################
# (방안1) 일회성 사용
pip list | grep -i jmespath
pip install jmespath --index-url http://192.168.10.10:3141/root/prod/+simple --trusted-host 192.168.10.10
# Looking in indexes: http://192.168.10.10:3141/root/prod/+simple
# Collecting jmespath
#   Downloading http://192.168.10.10:3141/root/prod/%2Bf/a56/63118de4908c9/jmespath-1.1.0-py3-none-any.whl (20 kB)
# Installing collected packages: jmespath
# Successfully installed jmespath-1.1.0
# WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
pip list | grep -i jmespath
# jmespath                  1.1.0

# (방안2) 전역 설정 : /root/prod (사람용 웹 UI) , /root/prod/+simple (pip 전용 API 엔드포인트)
## pip 설정에서 반드시 +simple 붙인 URL을 써야 정상 동작한다 <- pip 표준 인덱스 엔드포인트
## pip는 패키지 저장소를 PEP 503 “Simple API” 형식으로 접근함.
## +simple 의미 : pip 전용 인덱스 엔드포인트, 패키지 목록을 pip가 파싱할 수 있는 HTML 포맷으로 제공
cat <<EOF > /etc/pip.conf
[global]
index-url = http://192.168.10.10:3141/root/prod/+simple
trusted-host = 192.168.10.10
timeout = 60
EOF

pip list | grep -i netaddr
# 없음
pip install netaddr
# Looking in indexes: http://192.168.10.10:3141/root/prod/+simple
# Collecting netaddr
#   Downloading http://192.168.10.10:3141/root/prod/%2Bf/c2c/6a8ebe5554ce3/netaddr-1.3.0-py3-none-any.whl (2.3 MB)
#      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 152.5 MB/s eta 0:00:00
# Installing collected packages: netaddr
# Successfully installed netaddr-1.3.0
# WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
pip list | grep -i netaddr
# netaddr                   1.3.0


# 현재 devpi 서버에 업로드되지 않은 패키지 설치 시도 : 성공! (bases=root/pypi 상속 덕분에 외부망 연결 시 PyPI에서 캐싱해 옴)
pip install cryptography
# Looking in indexes: http://192.168.10.10:3141/root/prod/+simple
# Collecting cryptography
#   Downloading http://192.168.10.10:3141/root/pypi/%2Bf/e92/51e3be159d102/cryptography-46.0.5-cp311-abi3-manylinux_2_34_aarch64.whl (4.3 MB)
#      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3/4.3 MB 8.0 MB/s eta 0:00:00
# Collecting cffi>=2.0.0 (from cryptography)
#   Downloading http://192.168.10.10:3141/root/pypi/%2Bf/b21/e08af67b8a103/cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (220 kB)
#      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 220.1/220.1 kB 52.1 MB/s eta 0:00:00
# Collecting pycparser (from cffi>=2.0.0->cryptography)
#   Downloading http://192.168.10.10:3141/root/pypi/%2Bf/b72/7414169a36b7d/pycparser-3.0-py3-none-any.whl (48 kB)
#      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.2/48.2 kB 197.9 MB/s eta 0:00:00
# Installing collected packages: pycparser, cffi, cryptography
# Successfully installed cffi-2.0.0 cryptography-46.0.5 pycparser-3.0
# WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

################################
# [admin] 다음 실습을 위해 삭제
################################
pkill -f "devpi-server --serverdir /data/devpi_data"

################################
# [k8s-node] 다음 실습을 위해
################################
rm -rf /etc/pip.conf

Air-gap 환경에서 오프라인 Kubernetes 클러스터를 구축하기 위해서는 인터넷 연결 없이 모든 필요한 파일, 패키지, 그리고 컨테이너 이미지를 미리 준비해야 합니다. Kubespray-offline을 활용하면 아래와 같은 절차로 손쉽게 오프라인 설치 환경을 구성할 수 있습니다.

Kubespray-offline 설치

앞서 수동으로 구성해 본 각 구성 요소(OS 패키지 저장소, 컨테이너 레지스트리, PyPI 미러)를 kubespray-offline이 모두 지원합니다.


kubespray-offline은 오프라인 Kubernetes 클러스터 구축을 위한 파일 및 이미지를 자동으로 수집, 정리, 배포할 수 있도록 도와주는 도구입니다. 인터넷이 연결되지 않은 환경에서도 필요한 모든 구성요소(컨테이너 이미지, 패키지, 바이너리 등)를 사전에 다운로드해 레지스트리와 로컬 저장소에 준비할 수 있습니다. 이를 통해 Kubespray 기반의 Kubernetes 클러스터를 완전한 air-gap 환경에서 손쉽게 설치하고 유지관리할 수 있습니다.

 

kubespray 버전이 업데이트될 때마다 해당 프로젝트 개발자가 이에 맞춰 kubespray-offline을 함께 업데이트해 주므로, 버전 추종 걱정 없이 편리하게 사용할 수 있습니다.

 

 

Kubespray 오프라인 배포 아키텍처

이 다이어그램은 인터넷이 차단된 Air-Gap 환경에서 Kubernetes 클러스터를 배포하기 위한 전체적인 워크플로우를 보여줍니다.

구성 요소:

  • 인터넷 환경 (좌측): Kubespray를 사용하여 필요한 파일들과 컨테이너 이미지들을 준비 (~300MB)
  • Air-Gap 환경 (우측 점선 영역): 인터넷과 격리된 폐쇄망 환경
    • Install Cluster: Kubespray를 통한 클러스터 설치
    • Internal OS Repo: 내부 OS 패키지 저장소 (ISO)
    • Control Plane & Worker 노드들: Kubernetes 클러스터 구성 요소

오프라인 배포 프로세스:

  1. 다운로드될 파일 목록과 컨테이너 이미지 목록을 파일별로 생성
  2. 오프라인 배포를 위한 컨테이너 이미지 다운로드 및 이미지 레지스트리(저장소)에 등록(업로드)
  3. 파일(목록)을 다운로드 하고 Nginx 컨테이너를 실행하여, 파일 다운로드 기능 제공

레포지토리: https://github.com/kubespray-offline/kubespray-offline

스크립트 기능:

  • 다운로드 기능은 Kubespray의 편의성 스크립트를 활용합니다.
  1. 오프라인 파일 다운로드 (Download offline files)
    • OS 패키지 저장소: Yum/Deb 저장소 파일 다운로드
    • 컨테이너 이미지: Kubespray에서 사용하는 모든 컨테이너 이미지 다운로드
    • Python 패키지: Kubespray의 PyPI 미러 파일 다운로드
  2. 배포 서버(Admin) 지원 스크립트 (Support scripts for deployment server)
    • Container Runtime 설치: 로컬 파일에서 containerd 설치
    • 웹 서버 구동: Nginx 컨테이너를 통한 Yum/Deb 저장소 및 PyPI 미러 서비스 제공
    • 프라이빗 레지스트리: Docker 프라이빗 레지스트리 구동
    • 이미지 배포: 모든 컨테이너 이미지를 로드하고 프라이빗 레지스트리에 업로드
  3. 시스템 요구사항
    • RHEL 계열: RHEL / AlmaLinux / Rocky Linux 9.x (10.x)
    • Debian 계열: Ubuntu 22.04 / 24.04

kubespray-offline 설치 실습

기본 환경을 준비합니다.

[0] git clone 후 download-all.sh 로 설치에 필요한 파일들 다운로드 수행 (약 3.3GB, 17분 소요)

# Kubespray 오프라인 배포를 위한 저장소 클론
git clone https://github.com/kubespray-offline/kubespray-offline.git
cd kubespray-offline/


################################
# download-all.sh : 아래 후속 실행되는 스크립트들 확인 (최상위 스크립트)
# 오프라인 배포에 필요한 모든 파일들을 순차적으로 다운로드하는 마스터 스크립트
################################
#!/bin/bash

# 스크립트 실행 함수 (에러 발생 시 종료)
run() {
    echo "=> Running: $*"
    $* || {
        echo "Failed in : $*"
        exit 1
    }
}

# 실습을 위해서는 한줄씩 실행해보면 됨
source ./config.sh                          # 설정 파일 로드

#run ./install-docker.sh                    # Docker 설치 (선택사항)
#run ./install-nerdctl.sh                   # nerdctl 설치 (선택사항)
run ./precheck.sh                           # 사전 요구사항 검증
run ./prepare-pkgs.sh || exit 1             # 필수 패키지 설치
run ./prepare-py.sh                         # Python 환경 준비
run ./get-kubespray.sh                      # Kubespray 소스코드 다운로드
if $ansible_in_container; then
    run ./build-ansible-container.sh        # Ansible 컨테이너 빌드
else
    run ./pypi-mirror.sh                    # PyPI 미러 생성
fi
run ./download-kubespray-files.sh           # Kubespray 필수 파일들 다운로드
run ./download-additional-containers.sh     # 추가 컨테이너 이미지 다운로드
run ./create-repo.sh                        # RPM/DEB 저장소 생성
run ./copy-target-scripts.sh                # 타겟 스크립트들 복사

echo "Done."

################################
# config.sh → target-scripts/config.sh 
# 설치되는 버전 정보 변수 설정 : 버전 변경 시에는 이 단계에서 수정 필요!, 버전 변수에 파일 다운로드 됨
# 모든 컴포넌트의 버전을 중앙에서 관리하는 핵심 설정 파일
################################
#!/bin/bash

source ./target-scripts/config.sh           # 타겟 스크립트 설정 로드

# container runtime for preparation node (준비 노드용 컨테이너 런타임 선택)
docker=${docker:-podman}                    # 기본값: podman 사용
#docker=${docker:-docker}                   # 대안: Docker 사용
#docker=${docker:-/usr/local/bin/nerdctl}   # 대안: nerdctl 사용

# Run ansible in container? (Ansible을 컨테이너에서 실행할지 여부)
ansible_in_container=${ansible_in_container:-false}

cat ./target-scripts/config.sh

#!/bin/bash
# Kubespray version to download. Use "master" for latest master branch.
KUBESPRAY_VERSION=${KUBESPRAY_VERSION:-2.30.0}    # Kubespray 버전 (안정 버전)
#KUBESPRAY_VERSION=${KUBESPRAY_VERSION:-master}   # 최신 개발 버전 사용 시

# Versions of containerd related binaries used in `install-containerd.sh`
# These version must be same as kubespray.
# Refer `roles/kubespray_defaults/vars/main/checksums.yml` of kubespray.
RUNC_VERSION=1.3.4                         # OCI 런타임 버전
CONTAINERD_VERSION=2.2.1                   # containerd 버전
NERDCTL_VERSION=2.2.1                      # nerdctl (Docker CLI 대체) 버전
CNI_VERSION=1.8.0                          # CNI 플러그인 버전

# Some container versions, must be same as ../imagelists/images.txt
NGINX_VERSION=1.29.4                       # 웹서버용 Nginx 버전
REGISTRY_VERSION=3.0.0                     # 프라이빗 레지스트리 버전

# container registry port (컨테이너 레지스트리 포트)
REGISTRY_PORT=${REGISTRY_PORT:-35000}      # 기본 포트: 35000

# Additional container registry hosts (추가 컨테이너 레지스트리 호스트)
ADDITIONAL_CONTAINER_REGISTRY_LIST=${ADDITIONAL_CONTAINER_REGISTRY_LIST:-"myregistry.io"}

# Architecture of binary files (바이너리 파일 아키텍처 감지)
# Detect OS type and get architecture accordingly
map_arch() {
    case "$1" in
        x86_64)
            echo "amd64"                    # Intel/AMD 64비트
            ;;
        aarch64)
            echo "arm64"                    # ARM 64비트 (Apple Silicon 등)
            ;;
        *)
            echo "$1"                       # 기타 아키텍처
            ;;
    esac
}

# OS별 아키텍처 감지 로직
if [ -e /etc/redhat-release ]; then
    # RHEL/AlmaLinux/Rocky Linux
    ARCH=$(uname -m)
    IMAGE_ARCH=$(map_arch "$ARCH")
elif command -v dpkg >/dev/null 2>&1; then
    # Ubuntu/Debian
    IMAGE_ARCH=$(dpkg --print-architecture)
else
    # Fallback: use uname -m
    ARCH=$(uname -m)
    IMAGE_ARCH=$(map_arch "$ARCH")
fi
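위 map_arch 의 변환 결과는 함수만 따로 떼어 간단히 확인해 볼 수 있습니다. config.sh 의 함수를 그대로 옮긴 확인용 스케치입니다.

```shell
#!/bin/bash
# config.sh 의 map_arch: uname -m 결과를 바이너리 아키텍처 표기로 변환
map_arch() {
    case "$1" in
        x86_64)  echo "amd64" ;;   # Intel/AMD 64비트
        aarch64) echo "arm64" ;;   # ARM 64비트
        *)       echo "$1" ;;      # 매핑이 없으면 입력값 그대로 반환
    esac
}

map_arch x86_64    # amd64
map_arch aarch64   # arm64
map_arch riscv64   # riscv64 (매핑 없음 → 그대로)
```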

################################
# precheck.sh - 사전 요구사항 검증 스크립트
# 시스템 환경이 Kubespray 오프라인 설치에 적합한지 확인
################################
#!/bin/bash

source /etc/os-release                      # OS 정보 로드
source ./config.sh                          # 설정 파일 로드

# Docker/Podman 설치 여부 확인
if [ "$docker" != "podman" ]; then
    if ! command -v $docker >/dev/null 2>&1; then
        echo "No $docker installed"         # 지정된 컨테이너 런타임이 설치되지 않음
        exit 1
    fi
fi

# RHEL7/CentOS7에서 SELinux 확인 (더 이상 지원하지 않지만 호환성 체크)
if [ -e /etc/redhat-release ] && [[ "$VERSION_ID" =~ ^7.* ]]; then
    if [ "$(getenforce)" == "Enforcing" ]; then
        echo "You must disable SELinux for RHEL7/CentOS7"  # SELinux 비활성화 필요
        exit 1
    fi
fi

################################
# prepare-pkgs.sh : 필수 패키지 설치 스크립트
# OS별로 Kubespray 오프라인 배포에 필요한 패키지들을 설치
################################
#!/bin/bash

echo "==> prepare-pkgs.sh"

. /etc/os-release                           # OS 정보 로드
. ./scripts/common.sh                       # 공통 스크립트 로드

# Select python version (Python 버전 선택)
. ./target-scripts/pyver.sh

# Install required packages (필수 패키지 설치)
if [ -e /etc/redhat-release ]; then
    echo "==> Install required packages"
    $sudo dnf check-update                  # 패키지 업데이트 확인

    # 핵심 빌드 도구 및 컨테이너 런타임 설치
    $sudo dnf install -y rsync gcc libffi-devel createrepo git podman || exit 1

    case "$VERSION_ID" in
        7*)
            # RHEL/CentOS 7 (더 이상 지원하지 않음)
            echo "FATAL: RHEL/CentOS 7 is not supported anymore."
            exit 1
            ;;
        8*)
            # RHEL/CentOS 8 - 모듈 메타데이터 도구 설치
            if ! command -v repo2module >/dev/null; then
                echo "==> Install modulemd-tools"
                $sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
                $sudo dnf copr enable -y frostyx/modulemd-tools-epel
                $sudo dnf install -y modulemd-tools
            fi
            ;;
        9*)
            # RHEL 9 - 모듈 메타데이터 도구 설치
            if ! command -v repo2module >/dev/null; then
                $sudo dnf install -y modulemd-tools
            fi
            ;;
        10*)
            # RHEL 10 - createrepo_c 도구 설치
            if ! command -v repo2module >/dev/null; then
                $sudo dnf install -y createrepo_c
            fi
            ;;
        *)
            echo "Unknown version_id: $VERSION_ID"
            exit 1
            ;;
    esac

    # Python 및 관련 도구 설치
    $sudo dnf install -y python${PY} python${PY}-pip python${PY}-devel || exit 1
else
    # Ubuntu/Debian 계열 패키지 설치
    $sudo apt update
    if [ "$1" == "--upgrade" ]; then
        $sudo apt upgrade                   # 시스템 업그레이드 (선택사항)
    fi
    # 기본 개발 도구 설치
    $sudo apt -y install lsb-release curl gpg gcc libffi-dev rsync git software-properties-common || exit 1

    case "$VERSION_ID" in
        20.04)
            # Ubuntu 20.04 - Podman 저장소 추가
            echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | $sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
            curl -SL https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key | $sudo apt-key add -

            # 최신 Python3 저장소 추가
            sudo add-apt-repository ppa:deadsnakes/ppa -y || exit 1
            $sudo apt update
            ;;
    esac
    # Python 및 컨테이너 런타임 설치
    $sudo apt install -y python${PY} python${PY}-venv python${PY}-dev python3-pip python3-selinux podman || exit 1
fi

################################
# prepare-py.sh → target-scripts/venv.sh : Python 가상환경 설정 및 패키지 설치
# Ansible 실행을 위한 Python 환경을 준비
################################
#!/bin/bash

# Create python3 env (Python3 환경 생성)

echo "==> prepare-py.sh"

. /etc/os-release                           # OS 정보 로드

. ./target-scripts/venv.sh                  # 가상환경 설정 스크립트 실행

source ./scripts/set-locale.sh             # 로케일 설정

echo "==> Update pip, etc"
pip install -U pip setuptools              # pip 및 setuptools 업그레이드
#if [ "$(getenforce)" == "Enforcing" ]; then
#    pip install -U selinux                 # SELinux 활성화 시 selinux 패키지 설치
#fi

echo "==> Install python packages"
pip install -r requirements.txt            # Kubespray 필수 Python 패키지 설치



################################
# target-scripts/venv.sh - Python 가상환경 생성 및 활성화
################################

#!/bin/bash

source /etc/os-release                      # OS 정보 로드

# Select python version (Python 버전 선택)
source "$(dirname "${BASH_SOURCE[0]}")/pyver.sh"

python3=python${PY}                        # Python 실행 파일 경로
VENV_DIR=${VENV_DIR:-~/.venv/${PY}}         # 가상환경 디렉터리 (기본값: ~/.venv/3.x)

echo "python3 = $python3"
echo "VENV_DIR = ${VENV_DIR}"
if [ ! -e ${VENV_DIR} ]; then
    $python3 -m venv ${VENV_DIR}            # 가상환경 생성
fi
source ${VENV_DIR}/bin/activate             # 가상환경 활성화

################################
# get-kubespray.sh - download and prepare the Kubespray source
# Downloads the specified Kubespray version and applies any required patches
################################
#!/bin/bash

CURRENT_DIR=$(cd $(dirname $0); pwd)       # current directory path
source config.sh                           # load configuration

KUBESPRAY_TARBALL=kubespray-${KUBESPRAY_VERSION}.tar.gz  # tarball name

KUBESPRAY_DIR=./cache/kubespray-${KUBESPRAY_VERSION}     # cache directory

umask 022                                  # file permission mask

mkdir -p ./cache                           # create the cache directory
mkdir -p outputs/files/                    # create the output files directory

# remove any existing Kubespray cache directory
remove_kubespray_cache_dir() {
    if [ -e ${KUBESPRAY_DIR} ]; then
        /bin/rm -rf ${KUBESPRAY_DIR}
    fi
}

# If KUBESPRAY_VERSION looks like a git commit hash, check out that commit
if [[ $KUBESPRAY_VERSION =~ ^[0-9a-f]{7,40}$ ]]; then
    remove_kubespray_cache_dir
    echo "===> Checkout kubespray commit: $KUBESPRAY_VERSION"

    git clone https://github.com/kubernetes-sigs/kubespray.git ${KUBESPRAY_DIR}
    cd ${KUBESPRAY_DIR}
    git checkout $KUBESPRAY_VERSION || {
        echo "Error: commit $KUBESPRAY_VERSION not found"
        exit 1
    }
    cd - >/dev/null

    tar czf outputs/files/${KUBESPRAY_TARBALL} -C ./cache kubespray-${KUBESPRAY_VERSION}
    echo "Done (commit checkout)."
    exit 0
fi

# If specified as a branch name (master, release-x.x, etc.)
if [ $KUBESPRAY_VERSION == "master" ] || [[ $KUBESPRAY_VERSION =~ ^release- ]]; then
    remove_kubespray_cache_dir
    echo "===> Checkout kubespray branch : $KUBESPRAY_VERSION"
    if [ ! -e ${KUBESPRAY_DIR} ]; then
        git clone -b $KUBESPRAY_VERSION https://github.com/kubernetes-sigs/kubespray.git ${KUBESPRAY_DIR}
        tar czf outputs/files/${KUBESPRAY_TARBALL} -C ./cache kubespray-${KUBESPRAY_VERSION}
    fi
    exit 0
fi

# If specified as a release tag (the default)
if [ ! -e outputs/files/${KUBESPRAY_TARBALL} ]; then
    echo "===> Download ${KUBESPRAY_TARBALL}"
    # download the tarball from GitHub releases
    curl -SL https://github.com/kubernetes-sigs/kubespray/archive/refs/tags/v${KUBESPRAY_VERSION}.tar.gz >outputs/files/${KUBESPRAY_TARBALL} || exit 1

    remove_kubespray_cache_dir
fi

# extract the tarball and apply patches
if [ ! -e ${KUBESPRAY_DIR} ]; then
    echo "===> Extract ${KUBESPRAY_TARBALL}"
    tar xzf outputs/files/${KUBESPRAY_TARBALL}

    mv kubespray-${KUBESPRAY_VERSION} ${KUBESPRAY_DIR}

    sleep 1  # ad hoc, for vagrant shared directory

    # Apply per-version patches
    patch_dir=${CURRENT_DIR}/target-scripts/patches/${KUBESPRAY_VERSION}
    if [ -d $patch_dir ]; then
        for patch in ${patch_dir}/*.patch; do
            echo "===> Apply patch $patch"
            (cd $KUBESPRAY_DIR && patch -p1 < $patch) || exit 1
        done
    fi
fi

echo "Done."
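The branching above hinges on a single regex: anything that is a 7-to-40 character hex string is treated as a commit, everything else falls through to the branch/tag paths. A runnable sketch of just that check (`is_commit` is a hypothetical helper name, not part of the script):

```shell
#!/bin/bash
# Hypothetical helper mirroring get-kubespray.sh's version-type check
is_commit() {
    if [[ $1 =~ ^[0-9a-f]{7,40}$ ]]; then echo commit; else echo tag-or-branch; fi
}
is_commit d6d73eb8   # commit (8 hex chars)
is_commit 2.30.0     # tag-or-branch ("." is not a hex digit)
is_commit master     # tag-or-branch
```

Note the edge case this implies: a tag or branch whose name happens to be 7+ pure hex characters would be misrouted into the commit path.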

################################
# pypi-mirror.sh : build a PyPI mirror
# Creates a PyPI mirror repository so Python packages can be installed offline
################################
#!/bin/bash

source config.sh                               # load configuration

KUBESPRAY_DIR=./cache/kubespray-${KUBESPRAY_VERSION}
if [ ! -e $KUBESPRAY_DIR ]; then
    echo "No kubespray dir at $KUBESPRAY_DIR"
    exit 1
fi

source /etc/os-release                          # load OS info

source ./target-scripts/venv.sh                # activate the Python virtualenv

source ./scripts/set-locale.sh                 # set locale

umask 022                                       # file permission mask

echo "==> Create pypi mirror for kubespray"
#set -x
pip install -U pip python-pypi-mirror          # install the PyPI mirror tool

DEST="-d outputs/pypi/files"                   # download target directory
PLATFORM="--platform manylinux2014_$(uname -m)"  # PEP-599 (compatibility platform)
#PLATFORM="--platform manylinux_2_17_$(uname -m)"  # PEP-600 (newer platform)

# prepare the Kubespray requirements file
REQ=requirements.tmp
#sed "s/^ansible/#ansible/" ${KUBESPRAY_DIR}/requirements.txt > $REQ  # Ansible does not ship binary wheels
cp ${KUBESPRAY_DIR}/requirements.txt $REQ
echo "PyYAML" >> $REQ                          # Ansible dependency
echo "ruamel.yaml" >> $REQ                     # inventory builder dependency

# download binary wheels for each Python version
for pyver in 3.11 3.12; do
    echo "===> Download binary for python $pyver"
    pip download $DEST --only-binary :all: --python-version $pyver $PLATFORM -r $REQ || exit 1
done
/bin/rm $REQ                                   # clean up the temp file

echo "===> Download source packages"
# download source packages (fallback when no wheel exists)
pip download $DEST --no-binary :all: -r ${KUBESPRAY_DIR}/requirements.txt

echo "===> Download pip, setuptools, wheel, etc"
pip download $DEST pip setuptools wheel || exit 1
pip download $DEST pip setuptools==40.9.0 || exit 1  # older version for RHEL compatibility

echo "===> Download additional packages"
PKGS=selinux                                   # needed for SELinux support (#4)
PKGS="$PKGS flit_core"                         # build dependency of pyparsing (#6)
PKGS="$PKGS cython<3"                          # PyYAML needs Cython on Python 3.10 (Ubuntu 22.04)
pip download $DEST pip $PKGS || exit 1

# build the PyPI mirror index
pypi-mirror create $DEST -m outputs/pypi

echo "pypi-mirror.sh done"
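The mirror produced here is later served over HTTP to the cluster nodes. A hypothetical client-side pip configuration is sketched below — the host name and index path are placeholders; the actual values are wired up on each node by setup-py.sh in the outputs/ directory:

```ini
# /etc/pip.conf — hypothetical example; "admin-server" and the index path
# must match where the outputs/pypi directory is actually served
[global]
index-url = http://admin-server/pypi/simple/
trusted-host = admin-server
```

With this in place, a plain `pip install -r requirements.txt` on a node resolves entirely against the offline mirror.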

################################
# (most important) download-kubespray-files.sh : download Kubespray's required files and images
# 1. Generate the file and image lists (using generate_list.sh)
# 2. Download the binary files
# 3. Download the container images (runs download-images.sh)
################################
#!/bin/bash

umask 022                                       # file permission mask

source ./config.sh                             # load configuration
source scripts/common.sh                       # load common functions
source scripts/images.sh                       # load image helper functions

KUBESPRAY_DIR=./cache/kubespray-${KUBESPRAY_VERSION} # Kubespray source directory
if [ ! -e $KUBESPRAY_DIR ]; then
    echo "No kubespray dir at $KUBESPRAY_DIR"
    exit 1
fi

FILES_DIR=outputs/files                         # file output directory

# Decide the relative directory path from a URL
# Defines the per-tool, per-version directory layout
#
# kubernetes/vx.x.x        : kubeadm/kubectl/kubelet
# kubernetes/etcd          : etcd
# kubernetes/cni           : CNI plugins
# kubernetes/cri-tools     : crictl
# kubernetes/calico/vx.x.x : calico
# kubernetes/calico        : calicoctl
# runc/vx.x.x              : runc
# cilium-cli/vx.x.x        : cilium-cli
# gvisor/{ver}/{arch}      : gvisor (runsc, containerd-shim)
# skopeo/vx.x.x            : skopeo
# yq/vx.x.x                : yq
#
decide_relative_dir() {
    local url=$1
    local rdir
    rdir=$url
    # Kubernetes binaries (kubeadm, kubectl, kubelet)
    rdir=$(echo $rdir | sed "s@.*/\(v[0-9.]*\)/.*/kube\(adm\|ctl\|let\)@kubernetes/\1@g")
    # etcd binary
    rdir=$(echo $rdir | sed "s@.*/etcd-.*.tar.gz@kubernetes/etcd@")
    # CNI plugins
    rdir=$(echo $rdir | sed "s@.*/cni-plugins.*.tgz@kubernetes/cni@")
    # CRI tools (crictl)
    rdir=$(echo $rdir | sed "s@.*/crictl-.*.tar.gz@kubernetes/cri-tools@")
    # Calico tools
    rdir=$(echo $rdir | sed "s@.*/\(v.*\)/calicoctl-.*@kubernetes/calico/\1@")
    # runc runtime
    rdir=$(echo $rdir | sed "s@.*/\(v.*\)/runc.${IMAGE_ARCH}@runc/\1@")
    # Cilium CLI
    rdir=$(echo $rdir | sed "s@.*/\(v.*\)/cilium-linux-.*@cilium-cli/\1@")
    # gVisor runtime
    rdir=$(echo $rdir | sed "s@.*/\([^/]*\)/\([^/]*\)/runsc@gvisor/\1/\2@")
    rdir=$(echo $rdir | sed "s@.*/\([^/]*\)/\([^/]*\)/containerd-shim-runsc-v1@gvisor/\1/\2@")
    # Skopeo tool
    rdir=$(echo $rdir | sed "s@.*/\(v[^/]*\)/skopeo-linux-.*@skopeo/\1@")
    # yq tool
    rdir=$(echo $rdir | sed "s@.*/\(v[^/]*\)/yq_linux_*@yq/\1@")

    if [ "$url" != "$rdir" ]; then
        echo $rdir
        return
    fi

    # generic Calico files
    rdir=$(echo $rdir | sed "s@.*/calico/.*@kubernetes/calico@")
    if [ "$url" != "$rdir" ]; then
        echo $rdir
    else
        echo ""                                 # no match: fall back to the root directory
    fi
}

# Download a file from a URL (with retry logic)
get_url() {
    url=$1
    filename="${url##*/}"                       # extract the file name from the URL

    rdir=$(decide_relative_dir $url)            # decide where to store it

    if [ -n "$rdir" ]; then
        if [ ! -d $FILES_DIR/$rdir ]; then
            mkdir -p $FILES_DIR/$rdir           # create the directory
        fi
    else
        rdir="."                                # default directory
    fi

    if [ ! -e $FILES_DIR/$rdir/$filename ]; then
        echo "==> Download $url"
        # retry up to 3 times
        for i in {1..3}; do
            curl --location --show-error --fail --output $FILES_DIR/$rdir/$filename $url && return
            echo "curl failed. Attempt=$i"
        done
        echo "Download failed, exit : $url"
        exit 1
    else
        echo "==> Skip $url"                    # skip if it already exists
    fi
}

# Run Kubespray's offline list generation script
generate_list() {
    #if [ $KUBESPRAY_VERSION == "2.18.0" ]; then
    #    export containerd_version=${containerd_version:-1.5.8}
    #    export host_os=linux
    #    export image_arch=amd64
    #fi
    # generate the lists of required files and images with Kubespray's built-in script
    LANG=C /bin/bash ${KUBESPRAY_DIR}/contrib/offline/generate_list.sh || exit 1

    #if [ $KUBESPRAY_VERSION == "2.18.0" ]; then
    #    # check the version in roles/download/default/main.yml
    #    snapshot_controller_tag=${snapshot_controller_tag:-v4.2.1}
    #    sed -i "s@\(.*/snapshot-controller:\)@\1${snapshot_controller_tag}@" ${KUBESPRAY_DIR}/contrib/offline/temp/images.list || exit 1
    #fi
}

. ./target-scripts/venv.sh                     # activate the Python virtualenv

generate_list                                   # generate the lists

mkdir -p $FILES_DIR                            # create the file output directory

# copy the generated list files to the output directories
cp ${KUBESPRAY_DIR}/contrib/offline/temp/files.list $FILES_DIR/
cp ${KUBESPRAY_DIR}/contrib/offline/temp/images.list $IMAGES_DIR/

# download the binary files
files=$(cat ${FILES_DIR}/files.list)
for i in $files; do
    get_url $i                                 # download each file
done

# download the container images
./download-images.sh || exit 1
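decide_relative_dir is just a chain of sed rewrites: the first rule that matches collapses the URL into a short directory path. One rule in isolation, runnable as-is (the sample URL comes from the generated files.list):

```shell
#!/bin/bash
# One decide_relative_dir rule in isolation: kubeadm/kubectl/kubelet URLs
# collapse to kubernetes/<version> (GNU sed: \| alternation in BRE)
url="https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm"
rdir=$(echo "$url" | sed "s@.*/\(v[0-9.]*\)/.*/kube\(adm\|ctl\|let\)@kubernetes/\1@g")
echo "$rdir"   # kubernetes/v1.34.3
```

If no rule matches, the function returns an empty string and get_url falls back to the root of outputs/files.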


################################
# Contents of the file and image lists generated by the scripts above
################################
cat outputs/files/files.list
https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet
https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl
https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm
https://github.com/etcd-io/etcd/releases/download/v3.5.26/etcd-v3.5.26-linux-arm64.tar.gz
https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-arm64-v1.8.0.tgz
https://github.com/projectcalico/calico/releases/download/v3.30.6/calicoctl-linux-arm64
https://github.com/projectcalico/calico/archive/v3.30.6.tar.gz
https://github.com/cilium/cilium-cli/releases/download/v0.18.9/cilium-linux-arm64.tar.gz
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.34.0/crictl-v1.34.0-linux-arm64.tar.gz
https://storage.googleapis.com/cri-o/artifacts/cri-o.arm64.v1.34.4.tar.gz
https://get.helm.sh/helm-v3.18.4-linux-arm64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.4/runc.arm64
https://github.com/containers/crun/releases/download/1.17/crun-1.17-linux-arm64
https://github.com/youki-dev/youki/releases/download/v0.5.7/youki-0.5.7-aarch64-gnu.tar.gz
https://github.com/kata-containers/kata-containers/releases/download/3.7.0/kata-static-3.7.0-arm64.tar.xz
https://storage.googleapis.com/gvisor/releases/release/20260112.0/aarch64/runsc
https://storage.googleapis.com/gvisor/releases/release/20260112.0/aarch64/containerd-shim-runsc-v1
https://github.com/containerd/nerdctl/releases/download/v2.2.1/nerdctl-2.2.1-linux-arm64.tar.gz
https://github.com/containerd/containerd/releases/download/v2.2.1/containerd-2.2.1-linux-arm64.tar.gz
https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.23/cri-dockerd-0.3.23.arm64.tgz
https://github.com/lework/skopeo-binary/releases/download/v1.16.1/skopeo-linux-arm64
https://github.com/mikefarah/yq/releases/download/v4.42.1/yq_linux_arm64
https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/standard-install.yaml
https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.84.0/stripped-down-crds.yaml

cat outputs/images/images.list
docker.io/mirantis/k8s-netchecker-server:v1.2.2
docker.io/mirantis/k8s-netchecker-agent:v1.2.2
quay.io/coreos/etcd:v3.5.26
quay.io/cilium/cilium:v1.18.6
quay.io/cilium/operator:v1.18.6
quay.io/cilium/hubble-relay:v1.18.6
quay.io/cilium/certgen:v0.2.4
quay.io/cilium/hubble-ui:v0.13.3
quay.io/cilium/hubble-ui-backend:v0.13.3
quay.io/cilium/cilium-envoy:v1.34.10-1762597008-ff7ae7d623be00078865cff1b0672cc5d9bfc6d5
ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.2
docker.io/flannel/flannel:v0.27.3
docker.io/flannel/flannel-cni-plugin:v1.7.1-flannel1
quay.io/calico/node:v3.30.6
quay.io/calico/cni:v3.30.6
quay.io/calico/kube-controllers:v3.30.6
quay.io/calico/typha:v3.30.6
quay.io/calico/apiserver:v3.30.6
docker.io/kubeovn/kube-ovn:v1.12.21
docker.io/cloudnativelabs/kube-router:v2.1.1
registry.k8s.io/pause:3.10.1
ghcr.io/kube-vip/kube-vip:v1.0.3
docker.io/library/nginx:1.28.0-alpine
docker.io/library/haproxy:3.2.4-alpine
registry.k8s.io/coredns/coredns:v1.12.1
registry.k8s.io/dns/k8s-dns-node-cache:1.25.0
registry.k8s.io/cpa/cluster-proportional-autoscaler:v1.8.8
docker.io/library/registry:2.8.1
registry.k8s.io/metrics-server/metrics-server:v0.8.0
registry.k8s.io/sig-storage/local-volume-provisioner:v2.5.0
docker.io/rancher/local-path-provisioner:v0.0.32
registry.k8s.io/ingress-nginx/controller:v1.13.3
docker.io/amazon/aws-alb-ingress-controller:v1.1.9
quay.io/jetstack/cert-manager-controller:v1.15.3
quay.io/jetstack/cert-manager-cainjector:v1.15.3
quay.io/jetstack/cert-manager-webhook:v1.15.3
registry.k8s.io/sig-storage/csi-attacher:v4.4.2
registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
registry.k8s.io/sig-storage/snapshot-controller:v7.0.2
registry.k8s.io/sig-storage/csi-resizer:v1.9.2
registry.k8s.io/sig-storage/livenessprobe:v2.11.0
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.4.0
registry.k8s.io/provider-os/cinder-csi-plugin:v1.30.0
docker.io/amazon/aws-ebs-csi-driver:v0.5.0
docker.io/kubernetesui/dashboard:v2.7.0
docker.io/kubernetesui/metrics-scraper:v1.0.8
quay.io/metallb/speaker:v0.13.9
quay.io/metallb/controller:v0.13.9
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3

cat outputs/images/additional-images.list 
nginx:1.29.4
registry:3.0.0

################################
# download-images.sh : download Kubespray's default container images
# Downloads the container images based on the list generated by generate_list.sh
################################
#!/bin/bash

source ./config.sh                             # load configuration
source scripts/common.sh                       # load common functions
source scripts/images.sh                       # load image helper functions

# honor the image-download skip option
if [ "$SKIP_DOWNLOAD_IMAGES" = "true" ]; then
    exit 0
fi

# make sure the image list file exists
if [ ! -e "${IMAGES_DIR}/images.list" ]; then
    echo "${IMAGES_DIR}/images.list does not exist. Run download-kubespray-files.sh first."
    exit 1
fi

# download Kubespray's required container images
images=$(cat ${IMAGES_DIR}/images.list)
for i in $images; do
    get_image $i                               # get_image comes from scripts/images.sh
done

################################
# download-additional-containers.sh : download additional container images
# Downloads extra images (nginx, registry, etc.) needed to build the offline environment
################################
#!/bin/bash

# honor the image-download skip option
if [ "$SKIP_DOWNLOAD_IMAGES" = "true" ]; then
    exit 0
fi

echo "==> Pull additional container images"

umask 022                                       # file permission mask

source ./config.sh                             # load configuration
source scripts/common.sh                       # load common functions
source scripts/images.sh                       # load image helper functions

# collect the image lists from every .txt file in the imagelists directory,
# strip comments, de-duplicate, and write additional-images.list
cat imagelists/*.txt | sed "s/#.*$//g" | sort -u > $IMAGES_DIR/additional-images.list
cat $IMAGES_DIR/additional-images.list

IMAGES=$(cat $IMAGES_DIR/additional-images.list)

# download each additional image
for image in $IMAGES; do
    image=$(expand_image_repo $image)          # expand the image repository path
    get_image $image                           # download the image
done
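Adding your own images to the offline bundle therefore only takes a new entry under imagelists/; the merge-and-dedup step above picks it up on the next run. A sketch, run from the repo root (extra.txt is a hypothetical file name):

```shell
#!/bin/bash
# Sketch: register an extra image for download-additional-containers.sh
mkdir -p imagelists
echo "docker.io/library/busybox:1.36  # example extra image" > imagelists/extra.txt
# The same merge the script performs: strip comments, de-duplicate
cat imagelists/*.txt | sed "s/#.*$//g" | sort -u
```

Bare names like `busybox:1.36` also work, since expand_image_repo fills in the default registry path before the pull.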

################################
# create-repo.sh : run the OS-specific package repository creation script
# Distinguishes the RHEL family from the Ubuntu family and runs the matching script
################################
#!/bin/bash

# run the appropriate repo creation script for the OS type
if [ -e /etc/redhat-release ]; then
    ./scripts/create-repo-rhel.sh || exit 1    # for RHEL/CentOS/Rocky Linux
else
    ./scripts/create-repo-ubuntu.sh || exit 1  # for Ubuntu/Debian
fi

################################
# scripts/create-repo-rhel.sh : create an RPM repository for RHEL-family systems
# Downloads the RPM packages needed offline and generates the repository metadata
################################
#!/bin/bash

umask 022                                       # file permission mask

. /etc/os-release                               # load OS info

REQUIRE_MODULE=false                            # whether module metadata is required

# per-OS-version settings
VERSION_MAJOR=$VERSION_ID
case "${VERSION_MAJOR}" in
    8*)
        REQUIRE_MODULE=true                     # RHEL 8 needs module metadata
        VERSION_MAJOR=8
        ;;
    9*)
        REQUIRE_MODULE=true                     # RHEL 9 also needs module metadata
        VERSION_MAJOR=9
        ;;
    10*)
        VERSION_MAJOR=10                        # RHEL 10 does not need module metadata
        ;;
    *)
        echo "Unsupported version: $VERSION_MAJOR"
        ;;
esac

# collect the package list (strip comments, de-duplicate)
PKGS=$(cat pkglist/rhel/*.txt pkglist/rhel/${VERSION_MAJOR}/*.txt | grep -v "^#" | sort | uniq)

CACHEDIR=cache/cache-rpms                       # RPM cache directory
mkdir -p $CACHEDIR

RT="sudo dnf download --resolve --alldeps --downloaddir $CACHEDIR"

echo "==> Downloading: " $PKGS
# download every required package, dependencies included
$RT $PKGS || {
    echo "Download error"
    exit 1
}

# create the RPM repository directory
RPMDIR=outputs/rpms/local
if [ -e $RPMDIR ]; then
    /bin/rm -rf $RPMDIR || exit 1               # remove any existing directory
fi
mkdir -p $RPMDIR
/bin/cp $CACHEDIR/*.rpm $RPMDIR/                # copy the RPM files
/bin/rm $RPMDIR/*.i686.rpm                      # drop 32-bit packages (64-bit environment)

echo "==> createrepo"
createrepo $RPMDIR || exit 1                    # generate the repository metadata

#Wait a second to avoid error on Vagrant
sleep 1                                         # sync wait for Vagrant environments

# on RHEL 8/9, also generate module metadata
if $REQUIRE_MODULE; then
    cd $RPMDIR
    #createrepo_c . || exit 1
    echo "==> repo2module"
    # generate module metadata
    LANG=C repo2module -s stable . modules.yaml || exit 1
    echo "==> modifyrepo"
    # add the module metadata to the repository
    modifyrepo_c --mdtype=modules modules.yaml repodata/ || exit 1
fi

echo "create-repo done."
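On the cluster nodes, the repository built above is consumed through an ordinary dnf repo file pointing at wherever outputs/rpms is served. A hypothetical client-side fragment — host and path are placeholders, and setup-offline.sh in the outputs/ directory performs the equivalent configuration on each node:

```ini
# /etc/yum.repos.d/offline.repo — hypothetical; baseurl must match where
# outputs/rpms/local is exposed (e.g. via the nginx container on the admin host)
[offline-local]
name=Offline local packages
baseurl=http://admin-server/rpms/local/
enabled=1
gpgcheck=0
```

gpgcheck is disabled because the locally re-hosted packages are not re-signed; in a hardened environment you would import and verify the vendor GPG keys instead.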

################################
# copy-target-scripts.sh : cp -f -r target-scripts/* outputs/
################################
# confirm the root disk was grown from the default 60G to 120G
lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   120G  0 disk
├─sda1   8:1    0   600M  0 part /boot/efi
├─sda2   8:2    0   3.8G  0 part [SWAP]
└─sda3   8:3    0 115.6G  0 part /

df -hT /
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda3      xfs   116G  5.5G  111G   5% /


# git clone
git clone https://github.com/kubespray-offline/kubespray-offline
tree kubespray-offline/
cd kubespray-offline/

# Check the variables: nginx and registry are declared as 1.29.4 and 3.0.0 respectively <- to change versions, edit them at this step!
source ./config.sh
echo -e "kubespray $KUBESPRAY_VERSION"
# kubespray 2.30.0
echo -e "runc $RUNC_VERSION"
# runc 1.3.4
echo -e "containerd $CONTAINERD_VERSION"
# containerd 2.2.1
echo -e "nerdctl $NERDCTL_VERSION"
# nerdctl 2.2.1
echo -e "cni $CNI_VERSION"
# cni 1.8.0
echo -e "nginx $NGINX_VERSION"
# nginx 1.29.4
echo -e "registry $REGISTRY_VERSION"
# registry 3.0.0
echo -e "registry_port: $REGISTRY_PORT"
# registry_port: 35000
echo -e "Additional container registry hosts: $ADDITIONAL_CONTAINER_REGISTRY_LIST"
# Additional container registry hosts: myregistry.io
echo -e "cpu arch: $IMAGE_ARCH"
# cpu arch: arm64

# takes about 17 minutes
cat download-all.sh
# #!/bin/bash

# run() {
#     echo "=> Running: $*"
#     $* || {
#         echo "Failed in : $*"
#         exit 1
#     }
# }

# source ./config.sh

# #run ./install-docker.sh
# #run ./install-nerdctl.sh
# run ./precheck.sh
# run ./prepare-pkgs.sh || exit 1
# run ./prepare-py.sh
# run ./get-kubespray.sh
# if $ansible_in_container; then
#     run ./build-ansible-container.sh
# else
#     run ./pypi-mirror.sh
# fi
# run ./download-kubespray-files.sh
# run ./download-additional-containers.sh
# run ./create-repo.sh
# run ./copy-target-scripts.sh

# echo "Done."
./download-all.sh
# ...
# (230/230): container-selinux-2.240.0-1.el10.noarch.rpm                                    4.6 kB/s |  56 kB     00:12    
# /bin/rm: cannot remove 'outputs/rpms/local/*.i686.rpm': No such file or directory # there were no 32-bit RPMs to begin with, so nothing to delete; not an error, safe to ignore
# ==> createrepo
# Directory walk started
# Directory walk done - 230 packages
# Temporary output repo path: outputs/rpms/local/.repodata/
# Pool started (with 5 workers)
# Pool finished
# create-repo done.
# => Running: ./copy-target-scripts.sh
# ==> Copy target scripts
# Done.

# check the venv directory
du -sh ~/.venv
# 490M    /root/.venv

tree ~/.venv | more
# /root/.venv
# └── 3.12
#     ├── bin
#     │   ├── activate
#     │   ├── activate.csh
#     │   ├── activate.fish
#     │   ├── Activate.ps1
#     │   ├── ansible
#     │   ├── ansible-community
#     │   ├── ansible-config
#     │   ├── ansible-connection
#     │   ├── ansible-console
#     │   ├── ansible-doc
#     │   ├── ansible-galaxy
#     │   ├── ansible-inventory
#     │   ├── ansible-playbook
#     │   ├── ansible-pull
#     │   ├── ansible-test
#     │   ├── ansible-vault
#     │   ├── jp.py
#     │   ├── netaddr
#     │   ├── pip
#     │   ├── pip3
#     │   ├── pip3.12
#     │   ├── __pycache__
#     │   │   └── jp.cpython-312.pyc
#     │   ├── pypi-mirror

# check the /root/.cache directory
tree ~/.cache | more
# /root/.cache
# ├── pip
# │   ├── http-v2
# │   │   ├── 0
# │   │   │   ├── 0
# │   │   │   │   ├── 9
# │   │   │   │   │   └── 3
# │   │   │   │   │       └── 9
# │   │   │   │   │           ├── 00939978e0a8f8f7eab4897b254c02e418040597caf2fd468f
# fe420c
# │   │   │   │   │           └── 00939978e0a8f8f7eab4897b254c02e418040597caf2fd468f
# fe420c.body
# │   │   │   │   ├── c
# │   │   │   │   │   └── 5
# │   │   │   │   │       └── 4
# │   │   │   │   │           ├── 00c54da3561097bf55c70323ba8e2045674b5d7985daf39677
# 7a4605
# │   │   │   │   │           └── 00c54da3561097bf55c70323ba8e2045674b5d7985daf39677
# 7a4605.body
# │   │   │   │   └── d
# │   │   │   │       └── f
# │   │   │   │           └── d
# │   │   │   │               ├── 00dfda658f2fcde6926308004100798ad2247e2f45d1313ebf
# ef973d
# │   │   │   │               └── 00dfda658f2fcde6926308004100798ad2247e2f45d1313ebf
# ef973d.body
# │   │   │   ├── 2
du -sh ~/.cache
814M    /root/.cache

# The scripts that generate the file and image download lists live under contrib/offline in the kubespray repo
tree /root/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/
# cache/kubespray-2.30.0/contrib/offline/
# ├── docker-daemon.json
# ├── generate_list.sh # important
# ├── generate_list.yml # important
# ├── manage-offline-container-images.sh
# ├── manage-offline-files.sh
# ├── nginx.conf
# ├── README.md
# ├── registries.conf
# ├── temp
# │   ├── files.list
# │   ├── files.list.template
# │   ├── images.list
# │   └── images.list.template
# └── upload2artifactory.py


# check the size
du -sh /root/kubespray-offline/outputs/
3.3G    /root/kubespray-offline/outputs/

# check the directory/file layout
tree outputs/ | more

# tree outputs/ -L 1
# outputs/
# ├── config.sh
# ├── config.toml
# ├── containerd.service
# ├── extract-kubespray.sh
# ├── files                  # binaries such as kubectl/kubelet/kubeadm and containerd
# ├── images                 # container images as .tar.gz archives
# ├── install-containerd.sh
# ├── load-push-all-images.sh
# ├── nginx-default.conf
# ├── patches
# ├── playbook               # playbook/roles that configure the offline repos on the nodes
# ├── pypi                   # Python package files - index.html, *.whl, *.tar.gz
# ├── pyver.sh
# ├── rpms                   # RPM package files
# ├── setup-all.sh
# ├── setup-container.sh
# ├── setup-offline.sh
# ├── setup-py.sh
# ├── start-nginx.sh
# ├── start-registry.sh
# └── venv.sh

# 7 directories, 15 files

[1] Move into the outputs directory and run setup-container.sh : it additionally runs install-containerd.sh


################################
# setup-container.sh
################################
#!/bin/bash

# install containerd (also aligns the related version variables)
./install-containerd.sh
echo "==> Load registry, nginx images"
NERDCTL=/usr/local/bin/nerdctl
cd ./images

for f in docker.io_library_registry-*.tar.gz docker.io_library_nginx-*.tar.gz; do
    sudo $NERDCTL load -i $f || exit 1
done

if [ -f kubespray-offline-container.tar.gz ]; then
    sudo $NERDCTL load -i kubespray-offline-container.tar.gz || exit 1
fi

# ==> download https://github.com/opencontainers/runc/releases/download/v1.3.4/runc.arm64
#   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#                                  Dload  Upload   Total   Spent    Left  Speed
#   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# 100 11.6M  100 11.6M    0     0  10.5M      0  0:00:01  0:00:01 --:--:-- 27.1M
# ==> download https://github.com/containerd/containerd/releases/download/v2.2.1/containerd-2.2.1-linux-arm64.tar.gz
#   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#                                  Dload  Upload   Total   Spent    Left  Speed
#   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# 100 30.6M  100 30.6M    0     0  19.7M      0  0:00:01  0:00:01 --:--:-- 29.7M
# ==> download https://github.com/containerd/nerdctl/releases/download/v2.2.1/nerdctl-2.2.1-linux-arm64.tar.gz
#   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#                                  Dload  Upload   Total   Spent    Left  Speed
#   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# 100 9986k  100 9986k    0     0  12.1M      0 --:--:-- --:--:-- --:--:-- 12.1M
# ==> download https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-arm64-v1.8.0.tgz
#   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#                                  Dload  Upload   Total   Spent    Left  Speed
#   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# 100 48.8M  100 48.8M    0     0  15.8M      0  0:00:03  0:00:03 --:--:-- 17.5M
# ==> Install runc
# ==> Install nerdctl
# nerdctl
# containerd-rootless-setuptool.sh
# containerd-rootless.sh
# ==> Install containerd
# bin/containerd-stress
# bin/containerd
# bin/ctr
# bin/containerd-shim-runc-v2
# ==> Start containerd
# Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/etc/systemd/system/containerd.service'.
# ==> Install CNI plugins
# ./
# ./README.md
# ./static
# ./host-device
# ./ipvlan
# ./dhcp
# ./LICENSE
# ./portmap
# ./tap
# ./host-local
# ./vlan
# ./loopback
# ./sbr
# ./firewall
# ./bandwidth
# ./bridge
# ./vrf
# ./macvlan
# ./tuning
# ./dummy
# ./ptp
# Load images
################################
# install-containerd.sh
################################
#!/bin/bash

source ./config.sh

ENABLE_DOWNLOAD=${ENABLE_DOWNLOAD:-false}

if [ ! -e files ]; then
    mkdir -p files
fi

FILES_DIR=./files
if $ENABLE_DOWNLOAD; then
    FILES_DIR=./tmp/files
    mkdir -p $FILES_DIR
fi

# download files, if not found
download() {
    url=$1
    dir=$2

    filename=$(basename $1)
    mkdir -p ${FILES_DIR}/$dir

    if [ ! -e ${FILES_DIR}/$dir/$filename ]; then
        echo "==> download $url"
        (cd ${FILES_DIR}/$dir && curl -SLO $1)
    fi
}

if $ENABLE_DOWNLOAD; then
    download https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.${IMAGE_ARCH} runc/v${RUNC_VERSION}
    download https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-${IMAGE_ARCH}.tar.gz
    download https://github.com/containerd/nerdctl/releases/download/v${NERDCTL_VERSION}/nerdctl-${NERDCTL_VERSION}-linux-${IMAGE_ARCH}.tar.gz
    download https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-${IMAGE_ARCH}-v${CNI_VERSION}.tgz kubernetes/cni
else
    FILES_DIR=./files
fi

select_latest() {
    local latest=$(ls $* | tail -1)
    if [ -z "$latest" ]; then
        echo "No such file: $*"
        exit 1
    fi
    echo $latest
}

check_file() {
    if [ ! -e $1 ]; then
        echo "FATAL ERROR: No such file: $1"
        exit 1
    fi
}

# Install runc
echo "==> Install runc"
file="${FILES_DIR}/runc/v${RUNC_VERSION}/runc.${IMAGE_ARCH}"
check_file $file
sudo cp "$file" /usr/local/bin/runc
sudo chmod 755 /usr/local/bin/runc

# Install nerdctl
echo "==> Install nerdctl"
file="${FILES_DIR}/nerdctl-${NERDCTL_VERSION}-linux-${IMAGE_ARCH}.tar.gz"
check_file "$file"
tar xvf "$file" -C /tmp
sudo cp /tmp/nerdctl /usr/local/bin

# Install containerd
echo "==> Install containerd"
file="${FILES_DIR}/containerd-${CONTAINERD_VERSION}-linux-${IMAGE_ARCH}.tar.gz"
check_file "$file"
sudo tar xvf "$file" --strip-components=1 -C /usr/local/bin
sudo cp ./containerd.service /etc/systemd/system/

# create the required directories
sudo mkdir -p \
     /etc/systemd/system/containerd.service.d \
     /etc/containerd \
     /var/lib/containerd \
     /run/containerd

sudo cp config.toml /etc/containerd/

echo "==> Start containerd"
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

# Install cni plugins
echo "==> Install CNI plugins"
sudo mkdir -p /opt/cni/bin
file="${FILES_DIR}/kubernetes/cni/cni-plugins-linux-${IMAGE_ARCH}-v${CNI_VERSION}.tgz"
check_file "$file"
sudo tar xvzf "$file" -C /opt/cni/bin

################################
# Verification
################################
# check outputs
cd /root/kubespray-offline/outputs
ls -l *.sh
# -rw-r--r--. 1 root root 1371 Feb 12 01:58 config.sh
# -rwxr-xr-x. 1 root root  719 Feb 12 01:58 extract-kubespray.sh
# -rwxr-xr-x. 1 root root 2544 Feb 12 01:58 install-containerd.sh
# -rwxr-xr-x. 1 root root 1141 Feb 12 01:58 load-push-all-images.sh
# -rw-r--r--. 1 root root  607 Feb 12 01:58 pyver.sh
# -rwxr-xr-x. 1 root root  394 Feb 12 01:58 setup-all.sh
# -rwxr-xr-x. 1 root root  408 Feb 12 01:58 setup-container.sh
# -rwxr-xr-x. 1 root root 2106 Feb 12 01:58 setup-offline.sh
# -rwxr-xr-x. 1 root root 1213 Feb 12 01:58 setup-py.sh
# -rwxr-xr-x. 1 root root  654 Feb 12 01:58 start-nginx.sh
# -rwxr-xr-x. 1 root root  445 Feb 12 01:58 start-registry.sh
# -rw-r--r--. 1 root root  322 Feb 12 01:58 venv.sh


# Install containerd from local files. Load nginx and registry images to containerd.
./setup-container.sh
==> Install runc
==> Install nerdctl
==> Install containerd
==> Start containerd
==> Install CNI plugins
==> Load registry, nginx images

# check the installed binaries and their versions
which runc && runc --version
# /usr/local/bin/runc
# runc version 1.3.4
# commit: v1.3.4-0-gd6d73eb8
# spec: 1.2.1
# go: go1.24.10
# libseccomp: 2.5.6
which containerd && containerd --version
# /usr/local/bin/containerd
# containerd github.com/containerd/containerd/v2 v2.2.1 dea7da592f5d1d2b7755e3a161be07f43fad8f75
which nerdctl && nerdctl --version
# /usr/local/bin/nerdctl
# nerdctl version 2.2.1
tree -ug /opt/cni/bin/
# [root     root    ]  /opt/cni/bin/
# ├── [root     root    ]  bandwidth
# ├── [root     root    ]  bridge
# ├── [root     root    ]  dhcp
# ├── [root     root    ]  dummy
# ├── [root     root    ]  firewall
# ├── [root     root    ]  host-device
# ├── [root     root    ]  host-local
# ├── [root     root    ]  ipvlan
# ├── [root     root    ]  LICENSE
# ├── [root     root    ]  loopback
# ├── [root     root    ]  macvlan
# ├── [root     root    ]  portmap
# ├── [root     root    ]  ptp
# ├── [root     root    ]  README.md
# ├── [root     root    ]  sbr
# ├── [root     root    ]  static
# ├── [root     root    ]  tap
# ├── [root     root    ]  tuning
# ├── [root     root    ]  vlan
# └── [root     root    ]  vrf
# check the containerd config
cat /etc/containerd/config.toml
# version = 2
# root = "/var/lib/containerd"
# state = "/run/containerd"
# oom_score = 0

# [grpc]
#   address = "/run/containerd/containerd.sock"
#   uid = 0
#   gid = 0

# [debug]
#   address = "/run/containerd/debug.sock"
#   uid = 0
#   gid = 0
#   level = "info"

# [metrics]
#   address = ""
#   grpc_histogram = false

# [cgroup]
#   path = ""

# [plugins]
#   [plugins."io.containerd.grpc.v1.cri".containerd]
#     default_runtime_name = "runc"
#     snapshotter = "overlayfs"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     runtime_type = "io.containerd.runc.v2"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
cat /etc/systemd/system/containerd.service
# [Unit]
# Description=containerd container runtime
# Documentation=https://containerd.io
# After=network.target local-fs.target

# [Service]
# ExecStartPre=-/sbin/modprobe overlay
# ExecStart=/usr/local/bin/containerd

# Type=notify
# Delegate=yes
# KillMode=process
# Restart=always
# RestartSec=5
# # Having non-zero Limit*s causes performance problems due to accounting overhead
# # in the kernel. We recommend using cgroups to do container-local accounting.
# LimitNPROC=infinity
# LimitCORE=infinity
# LimitNOFILE=infinity
# # Comment TasksMax if your systemd version does not supports it.
# # Only systemd 226 and above support this version.
# TasksMax=infinity
# OOMScoreAdjust=-999

# [Install]
# WantedBy=multi-user.target
systemctl status containerd.service --no-pager
# ● containerd.service - containerd container runtime
#      Loaded: loaded (/etc/systemd/system/containerd.service; enabled; preset: disabled)
#      Active: active (running) since Thu 2026-02-12 02:00:42 KST; 3min 53s ago
#  Invocation: 8d6ceab1ea844520b5d8ebeb0a356197
#        Docs: https://containerd.io
#    Main PID: 21026 (containerd)
#       Tasks: 10
#      Memory: 491.1M (peak: 491.4M)
#         CPU: 1.786s
#      CGroup: /system.slice/containerd.service
#              └─21026 /usr/local/bin/containerd

# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537886605+…trpc
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537909814+…sock
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537886647+…ult"
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537983480+…ver"
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537988855+…NRI"
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537993105+…..."
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.537996272+…..."
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.538003814+…ate"
# Feb 12 02:00:42 admin containerd[21026]: time="2026-02-12T02:00:42.538213272+…05s"
# Feb 12 02:00:42 admin systemd[1]: Started containerd.service - containerd co…time.
# Hint: Some lines were ellipsized, use -l to show in full.

# Verify the downloaded images were extracted and loaded locally : make sure PLATFORM matches your CPU arch!
# These images were downloaded to run the internal registry containers
nerdctl images
# REPOSITORY    TAG              IMAGE ID        CREATED               PLATFORM       SIZE       BLOB SIZE
# nginx         1.29.4           4c333d291372    About a minute ago    linux/arm64    190.7MB    184MB
# nginx         1.28.0-alpine    bcb5257f77e1    About a minute ago    linux/arm64    52.73MB    51.23MB
# registry      3.0.0            496d3637ba81    About a minute ago    linux/arm64    57.52MB    57.34MB
# registry      2.8.1            b1524398e0af    About a minute ago    linux/arm64    25.85MB    25.68MB

[2] Run start-nginx.sh : serve files, images, pypi, and rpms over a web server

################################
# start-nginx.sh : start the nginx web server as a container
################################
#!/bin/bash

source ./config.sh

BASEDIR="."
if [ ! -d images ] && [ -d ../outputs ]; then
    BASEDIR="../outputs"  # for tests
fi
BASEDIR=$(cd $BASEDIR; pwd)
NERDCTL="sudo /usr/local/bin/nerdctl"

NGINX_IMAGE=nginx:${NGINX_VERSION}

echo "===> Stop nginx"
$NERDCTL container update --restart no nginx 2>/dev/null
$NERDCTL container stop nginx 2>/dev/null
$NERDCTL container rm nginx 2>/dev/null

echo "===> Start nginx"
$NERDCTL container run -d \
    --network host \
    --restart always \
    --name nginx \
    -v ${BASEDIR}:/usr/share/nginx/html \
    -v ${BASEDIR}/nginx-default.conf:/etc/nginx/conf.d/default.conf \
    ${NGINX_IMAGE} || exit 1


################################
# Verification
################################
# (Optional) Edit the nginx conf file so directory listings are shown
cp nginx-default.conf nginx-default.bak
cat << EOF > nginx-default.conf 
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        # index  index.html index.htm;

        # Added to enable detailed directory listings
        autoindex on;                 # show directory listings
        autoindex_exact_size off;     # display file sizes in human-readable KB/MB/GB
        autoindex_localtime on;       # show timestamps in server local time
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # Force sendfile to off
    sendfile off;     
}
EOF

# Start nginx container.
./start-nginx.sh
# ===> Stop nginx
# ===> Start nginx
# 21cb6e5f96efd138d639c602678c02defe3f946c66602a6b5a295a806832115e
# (Optional) The command it runs, for reference
echo "===> Start nginx"
$NERDCTL container run -d \
    --network host \
    --restart always \
    --name nginx \
    -v ${BASEDIR}:/usr/share/nginx/html \
    -v ${BASEDIR}/nginx-default.conf:/etc/nginx/conf.d/default.conf \
    ${NGINX_IMAGE} || exit 1

# Verify the nginx container
nerdctl ps
# CONTAINER ID    IMAGE                             COMMAND                   CREATED           STATUS    PORTS    NAMES
# 21cb6e5f96ef    docker.io/library/nginx:1.29.4    "/docker-entrypoint.…"    22 seconds ago    Up                 nginx

ss -tnlp | grep nginx
# LISTEN 0      511          0.0.0.0:80         0.0.0.0:*    users:(("nginx",pid=21649,fd=6),("nginx",pid=21648,fd=6),("nginx",pid=21647,fd=6),("nginx",pid=21646,fd=6),("nginx",pid=21612,fd=6))
# LISTEN 0      511             [::]:80            [::]:*    users:(("nginx",pid=21649,fd=7),("nginx",pid=21648,fd=7),("nginx",pid=21647,fd=7),("nginx",pid=21646,fd=7),("nginx",pid=21612,fd=7))

# Browse the nginx server to verify the rpms
curl http://192.168.10.10
# <html>
# <head><title>Index of /</title></head>
# <body>
# <h1>Index of /</h1><hr><pre><a href="../">../</a>
# <a href="files/">files/</a>                                             11-Feb-2026 16:46       -
# <a href="images/">images/</a>                                            11-Feb-2026 16:58       -
# <a href="patches/">patches/</a>                                           11-Feb-2026 16:58       -
# <a href="playbook/">playbook/</a>                                          11-Feb-2026 16:58       -
# <a href="pypi/">pypi/</a>                                              11-Feb-2026 16:45       -
# <a href="rpms/">rpms/</a>                                              11-Feb-2026 16:58       -
# <a href="config.sh">config.sh</a>                                          11-Feb-2026 16:58    1371
# <a href="config.toml">config.toml</a>                                        11-Feb-2026 16:58     647
# <a href="containerd.service">containerd.service</a>                                 11-Feb-2026 16:58    1270
# <a href="extract-kubespray.sh">extract-kubespray.sh</a>                               11-Feb-2026 16:58     719
# <a href="install-containerd.sh">install-containerd.sh</a>                              11-Feb-2026 16:58    2544
# <a href="load-push-all-images.sh">load-push-all-images.sh</a>                            11-Feb-2026 16:58    1141
# <a href="nginx-default.bak">nginx-default.bak</a>                                  11-Feb-2026 17:05     349
# <a href="nginx-default.conf">nginx-default.conf</a>                                 11-Feb-2026 17:05     650
# <a href="pyver.sh">pyver.sh</a>                                           11-Feb-2026 16:58     607
# <a href="setup-all.sh">setup-all.sh</a>                                       11-Feb-2026 16:58     394
# <a href="setup-container.sh">setup-container.sh</a>                                 11-Feb-2026 16:58     408
# <a href="setup-offline.sh">setup-offline.sh</a>                                   11-Feb-2026 16:58    2106
# <a href="setup-py.sh">setup-py.sh</a>                                        11-Feb-2026 16:58    1213
# <a href="start-nginx.sh">start-nginx.sh</a>                                     11-Feb-2026 16:58     654
# <a href="start-registry.sh">start-registry.sh</a>                                  11-Feb-2026 16:58     445
# <a href="venv.sh">venv.sh</a>                                            11-Feb-2026 16:58     322
# </pre><hr></body>
# </html>

[3] Run setup-offline.sh : configure the offline repos and the global PyPI mirror

################################
# setup-offline.sh : configure the offline repositories
################################
#!/bin/bash

# Setup offline repo for ansible node.

source ./config.sh
source /etc/os-release

# setup yum/deb repository
setup_yum_repos() {
    sudo /bin/rm /etc/yum.repos.d/offline.repo

    echo "===> Disable all yumrepositories"
    for repo in /etc/yum.repos.d/*.repo; do
        #sudo sed -i "s/^enabled=.*/enabled=0/" $repo
        sudo mv "${repo}" "${repo}.original"
    done

    echo "===> Setup local yum repository"
    cat <<EOF | sudo tee /etc/yum.repos.d/offline.repo
[offline-repo]
name=Offline repo
baseurl=http://localhost/rpms/local/
enabled=1
gpgcheck=0
EOF
}

# setup yum/deb repository
setup_deb_repos() {
    echo "===> Setup deb offline repository"
    cat <<EOF | sudo tee /etc/apt/apt.conf.d/99offline
APT::Get::AllowUnauthenticated "true";
Acquire::AllowInsecureRepositories "true";
Acquire::AllowDowngradeToInsecureRepositories "true";
EOF

    cat <<EOF | sudo tee /etc/apt/sources.list.d/offline.list
deb [trusted=yes] http://localhost/debs/local/ ./
EOF

    case "$VERSION_ID" in
        "20.04"|"22.04")
            echo "===> Disable default repositories"
            if [ ! -e /etc/apt/sources.list.original ]; then
                sudo cp /etc/apt/sources.list /etc/apt/sources.list.original
            fi
            sudo sed -i "s/^deb /# deb /" /etc/apt/sources.list
            ;;

        *)
            echo "===> Disable default repositories"
            if [ ! -e /etc/apt/sources.list.d/ubuntu.sources.original ]; then
                sudo cp /etc/apt/sources.list.d/ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources.original
            fi
            sudo /bin/rm /etc/apt/sources.list.d/ubuntu.sources
            sudo touch /etc/apt/sources.list.d/ubuntu.sources
            ;;
    esac

}


setup_pypi_mirror() {
    # PyPI mirror
    echo "===> Setup PyPI mirror"
    mkdir -p ~/.config/pip/
    cat <<EOF >~/.config/pip/pip.conf
[global]
index = http://localhost/pypi/
index-url = http://localhost/pypi/
trusted-host = localhost
EOF
}

if [ -e /etc/redhat-release ]; then
    setup_yum_repos
else
    setup_deb_repos
fi
setup_pypi_mirror

################################
# Verification
################################
# Check the baseline state before running the script
dnf repolist
# repo id                                              repo name
# appstream                                            Rocky Linux 10 - AppStream
# baseos                                               Rocky Linux 10 - BaseOS
# extras                                               Rocky Linux 10 - Extras

cat /etc/redhat-release
# Rocky Linux release 10.0 (Red Quartz)


# Run the script : Setup yum/deb repo config and PyPI mirror config to use local nginx server.
./setup-offline.sh
# /bin/rm: cannot remove '/etc/yum.repos.d/offline.repo': No such file or directory
# ===> Disable all yumrepositories
# ===> Setup local yum repository
# [offline-repo]
# name=Offline repo
# baseurl=http://localhost/rpms/local/
# enabled=1
# gpgcheck=0
# ===> Setup PyPI mirror
# Confirm the existing repos were renamed to .original and offline.repo was added
tree /etc/yum.repos.d/
# /etc/yum.repos.d/
# ├── offline.repo
# ├── rocky-addons.repo.original
# ├── rocky-devel.repo.original
# ├── rocky-extras.repo.original
# └── rocky.repo.original

# Check the offline.repo file
cat /etc/yum.repos.d/offline.repo
# [offline-repo]
# name=Offline repo
# baseurl=http://localhost/rpms/local/
# enabled=1
# gpgcheck=0

# Verify the offline repo
dnf clean all # clear cached repo metadata so the new config takes effect
dnf repolist
# repo id                                                      repo name
# offline-repo                                                 Offline repo

# pip global config : verify the PyPI mirror settings
cat ~/.config/pip/pip.conf
# [global]
# index = http://localhost/pypi/
# index-url = http://localhost/pypi/
# trusted-host = localhost

[4] Run setup-py.sh : tries to install python${PY} from the offline repo → confirms the offline repo is working

################################
# setup-py.sh : Python setup
################################
#!/bin/bash

. /etc/os-release

IS_OFFLINE=${IS_OFFLINE:-true}

# Select python version
source "$(dirname "${BASH_SOURCE[0]}")/pyver.sh"

# Install python and dependencies
echo "===> Install python, venv, etc"
if [ -e /etc/redhat-release ]; then
    # RHEL
    DNF_OPTS=
    if [[ $IS_OFFLINE = "true" ]]; then
        DNF_OPTS="--disablerepo=* --enablerepo=offline-repo"
    fi
    #sudo dnf install -y $DNF_OPTS gcc libffi-devel openssl-devel || exit 1

    if [[ "$VERSION_ID" =~ ^7.* ]]; then
        echo "FATAL: RHEL/CentOS 7 is not supported anymore."
        exit 1
    fi

    sudo dnf install -y $DNF_OPTS python${PY} || exit 1
    #sudo dnf install -y $DNF_OPTS python${PY}-devel || exit 1
else
    # Ubuntu
    sudo apt update
    case "$VERSION_ID" in
        20.04)
            if [ "${IS_OFFLINE}" = "false" ]; then
                # Prepare for latest python3
                sudo apt install -y software-properties-common
                sudo add-apt-repository ppa:deadsnakes/ppa -y || exit 1
                sudo apt update
            fi
            ;;
    esac
    #sudo apt install -y python${PY}-dev gcc libffi-dev libssl-dev || exit 1
    sudo apt install -y python${PY}-venv || exit 1
fi

################################
# pyver.sh
################################
#!/bin/bash
# Select python version

. /etc/os-release

# Python version
PY=3.11

if [ -e /etc/redhat-release ]; then
    case "$VERSION_ID" in
        7*)
            # RHEL/CentOS 7
            echo "FATAL: RHEL/CentOS 7 is not supported anymore."
            exit 1
            ;;
        8*|9*)
            ;;
        10*)
            PY=3.12            
            ;;
        *)
            echo "Unknown version_id: $VERSION_ID"
            exit 1
            ;;
    esac
else
    case "$VERSION_ID" in
        20.04|22.04)
            ;;

        24.04)
           PY=3.12
           ;;
    esac
fi

################################
# Verification
################################
# Install python3 and venv from local repo.
## sudo dnf install -y --disablerepo=* --enablerepo=offline-repo python${PY}
./setup-py.sh
# ===> Install python, venv, etc
# Last metadata expiration check: 0:06:40 ago on Wed 04 Feb 2026 12:23:11 AM KST.
# Package python3-3.12.12-3.el10_1.aarch64 is already installed.
# Dependencies resolved.
# Nothing to do.
# Complete!

# Check the selected Python version
source pyver.sh
echo -e "python_version ${PY}"
# python_version 3.12

# Check the package info from the offline-repo
dnf info python3
# Last metadata expiration check: 0:00:18 ago on Thu 12 Feb 2026 02:09:14 AM KST.
# Installed Packages
# Name         : python3
# Version      : 3.12.12
# Release      : 3.el10_1
# Architecture : aarch64
# Size         : 83 k
# Source       : python3.12-3.12.12-3.el10_1.src.rpm
# Repository   : @System
# From repo    : baseos
# Summary      : Python 3.12 interpreter
# URL          : https://www.python.org/
# License      : Python-2.0.1
# Description  : Python 3.12 is an accessible, high-level, dynamically typed,
#              : interpreted programming language, designed with an emphasis on code
#              : readability. It includes an extensive standard library, and has a
#              : vast ecosystem of third-party libraries.
#              : 
#              : The python3 package provides the "python3" executable: the
#              : reference interpreter for the Python language, version 3.
#              : The majority of its standard library is provided in the
#              : python3-libs package, which should be installed automatically along
#              : with python3. The remaining parts of the Python standard library
#              : are broken out into the python3-tkinter and python3-test packages,
#              : which may need to be installed separately.
#              : 
#              : Documentation for Python is provided in the python3-docs package.
#              : 
#              : Packages containing additional libraries for Python are generally
#              : named with the "python3-" prefix.

[5] Run start-registry.sh : start the (container) image registry as a container

################################
# start-registry.sh : start the (container) image registry as a container
################################
#!/bin/bash

source ./config.sh

REGISTRY_IMAGE=registry:${REGISTRY_VERSION}
REGISTRY_DIR=${REGISTRY_DIR:-/var/lib/registry}

if [ ! -e $REGISTRY_DIR ]; then
    sudo mkdir $REGISTRY_DIR
fi

echo "===> Start registry"
sudo /usr/local/bin/nerdctl run -d \
    --network host \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:${REGISTRY_PORT} \
    --restart always \
    --name registry \
    -v $REGISTRY_DIR:/var/lib/registry \
    $REGISTRY_IMAGE || exit 1

################################
# Verification
################################
# Start docker private registry container.
./start-registry.sh
# ===> Start registry
# e6f30fbf5159f51df58338d21458a79490394965bf6b2fdb927351d72cea499b
# (Optional) The command it runs, for reference
echo "===> Start registry"
sudo /usr/local/bin/nerdctl run -d \
    --network host \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:${REGISTRY_PORT} \
    --restart always \
    --name registry \
    -v $REGISTRY_DIR:/var/lib/registry \
    $REGISTRY_IMAGE || exit 1

# Check the related variables
source config.sh
echo -e "registry_port: $REGISTRY_PORT"
# registry_port: 35000

# Verify the containers are running
nerdctl ps
# CONTAINER ID    IMAGE                               COMMAND                   CREATED           STATUS    PORTS    NAMES
# e6f30fbf5159    docker.io/library/registry:3.0.0    "/entrypoint.sh /etc…"    22 seconds ago    Up                 registry
# 21cb6e5f96ef    docker.io/library/nginx:1.29.4      "/docker-entrypoint.…"    5 minutes ago     Up                 nginx
ss -tnlp | grep registry
# LISTEN 0      4096               *:35000            *:*    users:(("registry",pid=20021,fd=7))                                                                                                 
# LISTEN 0      4096               *:5001             *:*    users:(("registry",pid=20021,fd=3))  

# The registry currently has no (container) images stored : (note) REGISTRY_DIR=${REGISTRY_DIR:-/var/lib/registry}
tree /var/lib/registry/
# /var/lib/registry/

# tcp port 5001 : debug, metrics
curl 192.168.10.10:5001/metrics # Prometheus metrics
# ...
# registry_storage_action_seconds_sum{action="Stat",driver="filesystem"} 0.000181542
# registry_storage_action_seconds_count{action="Stat",driver="filesystem"} 3
# # HELP registry_storage_cache_errors_total The number of cache request errors
# # TYPE registry_storage_cache_errors_total counter
# registry_storage_cache_errors_total 0
# # HELP registry_storage_cache_hits_total The number of cache request received
# # TYPE registry_storage_cache_hits_total counter
# registry_storage_cache_hits_total 0
# # HELP registry_storage_cache_requests_total The number of cache request received
# # TYPE registry_storage_cache_requests_t
curl 192.168.10.10:5001/debug/pprof/ # Go pprof debug endpoints
# <li><div class=profile-name>trace: </div> A trace of execution of the current program. You can specify the duration in the seconds GET parameter. After you get the trace file, use the go tool trace command to investigate the trace.</li>
# </ul>
# </p>
# </body>

[6] Run load-push-all-images.sh : push images to the (container) image registry

################################
# load-push-all-images.sh
################################
#!/bin/bash

source ./config.sh

LOCAL_REGISTRY=${LOCAL_REGISTRY:-"localhost:${REGISTRY_PORT}"}
NERDCTL=/usr/local/bin/nerdctl

BASEDIR="."
if [ ! -d images ] && [ -d ../outputs ]; then
    BASEDIR="../outputs"  # for tests
fi

load_images() {
    for image in $BASEDIR/images/*.tar.gz; do
        echo "===> Loading $image"
        sudo $NERDCTL load -i $image || exit 1
    done
}

push_images() {
    images=$(cat $BASEDIR/images/*.list)
    for image in $images; do
        case "$image" in
            */*) ;;
            *) image=docker.io/library/$image ;;
        esac

        # Removes specific repo parts from each image for kubespray
        newImage=$image
        for repo in registry.k8s.io k8s.gcr.io gcr.io ghcr.io docker.io quay.io $ADDITIONAL_CONTAINER_REGISTRY_LIST; do
            newImage=$(echo ${newImage} | sed s@^${repo}/@@)
        done

        newImage=${LOCAL_REGISTRY}/${newImage}

        echo "===> Tag ${image} -> ${newImage}"
        sudo $NERDCTL tag ${image} ${newImage} || exit 1

        echo "===> Push ${newImage}"
        sudo $NERDCTL push ${newImage} || exit 1
    done
}

load_images
push_images
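The interesting part of `push_images` is how it rewrites image names: bare names get the implicit `docker.io/library/` prefix, then any well-known registry host is stripped before the local registry is prepended. A minimal self-contained sketch of that rewrite (sample image names and `localhost:35000` assumed for illustration):

```shell
# Sketch of the prefix rewrite performed by push_images (sample images assumed)
LOCAL_REGISTRY=localhost:35000
for image in registry.k8s.io/kube-proxy:v1.34.3 quay.io/jetstack/cert-manager-controller:v1.15.3 nginx:1.29.4; do
    case "$image" in
        */*) ;;                                  # already has a repo part
        *) image=docker.io/library/$image ;;     # bare names get docker.io/library/
    esac
    newImage=$image
    for repo in registry.k8s.io k8s.gcr.io gcr.io ghcr.io docker.io quay.io; do
        newImage=$(echo ${newImage} | sed s@^${repo}/@@)   # drop the well-known host
    done
    echo "${image} -> ${LOCAL_REGISTRY}/${newImage}"
done
# last line printed: docker.io/library/nginx:1.29.4 -> localhost:35000/library/nginx:1.29.4
```

This is why `nginx` ends up under the `library` repository in the private registry, while `quay.io/jetstack/...` keeps its `jetstack/` path with only the host removed.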

################################
# Verification
################################
# Check the related variables before running the script
echo -e "cpu arch: $IMAGE_ARCH"
# cpu arch: arm64 (for Mac / Apple Silicon)

## (Optional) If you need registries beyond 'registry.k8s.io k8s.gcr.io gcr.io ghcr.io docker.io quay.io', add them here
echo -e "Additional container registry hosts: $ADDITIONAL_CONTAINER_REGISTRY_LIST"
# Additional container registry hosts: myregistry.io

# The .tar.gz files that nerdctl load will import
ls -l images/*.tar.gz
# -rw-r--r--. 1 root root  10068768 Feb  3 19:42 images/docker.io_amazon_aws-alb-ingress-controller-v1.1.9.tar.gz
# -rw-r--r--. 1 root root 175407403 Feb  3 19:44 images/docker.io_amazon_aws-ebs-csi-driver-v0.5.0.tar.gz
# -rw-r--r--. 1 root root  95707259 Feb  3 19:40 images/docker.io_cloudnativelabs_kube-router-v2.1.1.tar.gz
# ...


# Load all container images to containerd. Tag and push them to the private registry.
./load-push-all-images.sh
# FATA[0003] image might be filtered out (Hint: set `--platform=PLATFORM` or `--all-platforms`) 
# aarch64) PLATFORM="linux/arm64" vs x86_64) PLATFORM="linux/amd64"

# (Troubleshooting) Mac users: apply the change below, then rerun the script
vi load-push-all-images.sh
# Add the following -----------------------
...
load_images() {
    for image in $BASEDIR/images/*.tar.gz; do
        echo "===> Loading $image"
        sudo $NERDCTL load --all-platforms -i $image || exit 1
    done
...
# push_images() {
#     ...
#         echo "===> Push ${newImage}"
#         sudo $NERDCTL push --platform=linux/arm64 ${newImage} || exit 1 # adding --platform here is optional
#     done
# -------------------------------

# Run it again and it succeeds! Takes about 2 minutes
./load-push-all-images.sh

# Verify the images were loaded locally
nerdctl images
# REPOSITORY                                               TAG                                                             IMAGE ID        CREATED               PLATFORM       SIZE       BLOB SIZE
# localhost:35000/kube-proxy                               v1.34.3                                                         fa5ed2c96dd3    46 seconds ago        linux/arm64    78.05MB    75.94MB
# localhost:35000/kube-scheduler                           v1.34.3                                                         985575f183de    46 seconds ago        linux/arm64    53.34MB    51.59MB
# localhost:35000/kube-controller-manager                  v1.34.3                                                         354700b61969    47 seconds ago        linux/arm64    74.38MB    72.62MB
# localhost:35000/kube-apiserver                           v1.34.3                                                         dece5cf2dd3b    47 seconds ago        linux/arm64    86.56MB    84.81MB
# ...
# localhost:35000/flannel/flannel-cni-plugin               v1.7.1-flannel1                                                 332db17b4c4a    About a minute ago    linux/arm64    11.39MB    11.37MB
# localhost:35000/flannel/flannel                          v0.27.3                                                         3b36a8d4db19    About a minute ago    linux/arm64    102.6MB    101.5MB
# ...
# registry.k8s.io/pause                                    3.10.1                                                          3f85f9d8a6bc    About a minute ago    linux/arm64    516.1kB    516.9kB
# registry.k8s.io/metrics-server/metrics-server            v0.8.0                                                          87ccea7af925    About a minute ago    linux/arm64    82.58MB    80.84MB
# registry.k8s.io/kube-scheduler                           v1.34.3                                                         985575f183de    About a minute ago    linux/arm64    53.34MB    51.59MB
# registry.k8s.io/kube-proxy                               v1.34.3                                                         fa5ed2c96dd3    About a minute ago    linux/arm64    78.05MB    75.94MB
# registry.k8s.io/kube-controller-manager                  v1.34.3                                                         354700b61969    About a minute ago    linux/arm64    74.38MB    72.62MB
# registry.k8s.io/kube-apiserver                           v1.34.3                                                         dece5cf2dd3b    About a minute ago    linux/arm64    86.56MB    84.81MB
# registry.k8s.io/ingress-nginx/controller                 v1.13.3                                                         68a587e5104f    About a minute ago    linux/arm64    336.3MB    334.2MB
# registry.k8s.io/dns/k8s-dns-node-cache                   1.25.0                                                          7071feee8b70    About a minute ago    linux/arm64    90.54MB    88.43MB
# registry.k8s.io/cpa/cluster-proportional-autoscaler      v1.8.8                                                          4146047e636f    About a minute ago    linux/arm64    39.98MB    37.86MB
# registry.k8s.io/coredns/coredns                          v1.12.1                                                         e674cf21adf3    About a minute ago    linux/arm64    74.94MB    73.19MB
# ...
# quay.io/metallb/speaker                                  v0.13.9                                                         51f18d4f5d4d    About a minute ago    linux/arm64    111.1MB    111.1MB
# quay.io/metallb/controller                               v0.13.9                                                         b724b69a4c9b    About a minute ago    linux/arm64    63.12MB    63.11MB
# quay.io/jetstack/cert-manager-webhook                    v1.15.3                                                         2d91656807bb    About a minute ago    linux/arm64    58.15MB    56.39MB
# quay.io/jetstack/cert-manager-controller                 v1.15.3                                                         5114bfbeac23    About a minute ago    linux/arm64    67.13MB    65.37MB
# quay.io/jetstack/cert-manager-cainjector                 v1.15.3                                                         a13418dc926e    About a minute ago    linux/arm64    44.65MB    42.89MB
# quay.io/coreos/etcd                                      v3.5.26                                                         4b003fe9069c    About a minute ago    linux/arm64    66.06MB    63.34MB
# ...
# rancher/local-path-provisioner                           v0.0.32                                                         4a3d51575c84    2 minutes ago         linux/arm64    61.37MB    61.35MB
# mirantis/k8s-netchecker-server                           v1.2.2                                                          8e0ef348cf54    2 minutes ago         linux/amd64    125.8MB    123.7MB
# mirantis/k8s-netchecker-agent                            v1.2.2                                                          e07c83f8f083    2 minutes ago         linux/amd64    5.681MB    5.856MB
# flannel/flannel                                          v0.27.3                                                         3b36a8d4db19    2 minutes ago         linux/arm64    102.6MB    101.5MB
# flannel/flannel-cni-plugin                               v1.7.1-flannel1                                                 332db17b4c4a    2 minutes ago         linux/arm64    11.39MB    11.37MB
# ...

# (Note) the kube-proxy image now exists under both the localhost:35000 and registry.k8s.io names
nerdctl images | grep -i kube-proxy
# localhost:35000/kube-proxy                               v1.34.3                                                         fa5ed2c96dd3    26 seconds ago        linux/arm64    78.05MB    75.94MB
# registry.k8s.io/kube-proxy                               v1.34.3                                                         fa5ed2c96dd3    About a minute ago    linux/arm64    78.05MB    75.94MB

# Each localhost:35000 image corresponds to an original from registry.k8s.io, quay.io, rancher, flannel, etc. that was pushed
nerdctl images | grep localhost
nerdctl images | grep localhost | wc -l
# 55

nerdctl images | grep -v localhost
nerdctl images | grep -v localhost | wc -l
# 56


# Check the registry catalog
curl -s http://localhost:35000/v2/_catalog | jq
# {
#   "repositories": [
#     "amazon/aws-alb-ingress-controller",
#     "amazon/aws-ebs-csi-driver",
#     ...

# Check the tag info for kube-apiserver
curl -s http://localhost:35000/v2/kube-apiserver/tags/list | jq
# {
#   "name": "kube-apiserver",
#   "tags": [
#     "v1.34.3"
#   ]
# }

## Image Manifest
curl -s http://localhost:35000/v2/kube-apiserver/manifests/v1.34.3 | jq
# {
#   "schemaVersion": 2,
#   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
#   "config": {
#     "mediaType": "application/vnd.docker.container.image.v1+json",
#     "digest": "sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896",
#     "size": 2906
#   },
#   "layers": [
#     {
#       "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
#       "digest": "sha256:378b3db0974f7a5a8767b6329ad310983bc712d0e400ff5faa294f95f869cc8c",
#       "size": 327680
#     },
#   ...(omitted)

# Inspect the registry's storage directory
tree /var/lib/registry/ -L 5
# /var/lib/registry/
# └── docker
#     └── registry
#         └── v2
#             ├── blobs
#             │   └── sha256
#             └── repositories
#                 ├── amazon
#                 ├── calico
#                 ├── cilium
#                 ├── cloudnativelabs
#                 ├── coredns
#                 ├── coreos
#                 ├── cpa
#                 ├── dns
#                 ├── flannel
#                 ├── ingress-nginx
#                 ├── jetstack
#                 ├── k8snetworkplumbingwg
#                 ├── kube-apiserver
#                 ├── kube-controller-manager
#                 ├── kubeovn
#                 ├── kube-proxy
#                 ├── kubernetesui
#                 ├── kube-scheduler
#                 ├── kube-vip
#                 ├── library
#                 ├── metallb
#                 ├── metrics-server
#                 ├── mirantis
#                 ├── pause
#                 ├── provider-os
#                 ├── rancher
#                 └── sig-storage


[7] Run extract-kubespray.sh : extract the kubespray repository tarball

################################
# extract-kubespray.sh
################################
#!/bin/bash

cd $(dirname $0)
CURRENT_DIR=$(pwd)
source ./config.sh

KUBESPRAY_TARBALL=files/kubespray-${KUBESPRAY_VERSION}.tar.gz
DIR=kubespray-${KUBESPRAY_VERSION}

if [ -d $DIR ]; then
    echo "${DIR} already exists."
    exit 0
fi

if [ ! -e $KUBESPRAY_TARBALL ]; then
    echo "$KUBESPRAY_TARBALL does not exist."
    exit 1
fi

tar xvzf $KUBESPRAY_TARBALL || exit 1

# apply patches
sleep 1 # avoid annoying patch error in shared folders.
if [ -d $CURRENT_DIR/patches/${KUBESPRAY_VERSION} ]; then
    for patch in $CURRENT_DIR/patches/${KUBESPRAY_VERSION}/*.patch; do
        if [[ -f "${patch}" ]]; then
          echo "===> Apply patch: $patch"
          (cd $DIR && patch -p1 < $patch)
        fi
    done
fi
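The patch loop above runs `patch -p1` from inside the extracted directory, so the leading `a/`/`b/` path component of each patch is stripped. A toy end-to-end sketch of that mechanism (hypothetical file and patch, created on the fly):

```shell
# Self-contained sketch of the "extract + patch -p1" flow (toy file and patch)
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p a b demo
echo "replicas: 1" > a/config.yml
echo "replicas: 3" > b/config.yml
diff -u a/config.yml b/config.yml > fix.patch || true   # diff exits 1 when files differ
cp a/config.yml demo/                                   # "demo" plays the extracted tree
(cd demo && patch -p1 < ../fix.patch)                   # -p1 strips the leading a/ or b/
cat demo/config.yml                                     # now contains "replicas: 3"
```

The `sleep 1` in the real script is only there to avoid a timestamp-related patch warning on shared folders; the patching itself works exactly like this sketch.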
################################
# Verification
################################
# Check the related file before running : the compressed kubespray repo tarball
ls -lh files/kubespray-*
# -rw-r--r--. 1 root root 2.5M Feb 12 01:41 files/kubespray-2.30.0.tar.gz

# patches files : these patches target kubespray-2.18.0 and do not apply to kubespray-2.30.0
tree patches/
# patches/
# └── 2.18.0
#     ├── 0001-nerdctl-insecure-registry-config-8339.patch
#     ├── 0002-Update-config.toml.j2-8340.patch
#     └── 0003-generate-list-8537.patch


# Extract kubespray tarball and apply all patches.
./extract-kubespray.sh

# Check the extracted kubespray repository files
tree kubespray-2.30.0/ -L 1
# kubespray-2.30.0/
# ├── ansible.cfg
# ├── CHANGELOG.md
# ├── cluster.yml
# ├── CNAME
# ├── code-of-conduct.md
# ├── _config.yml
# ├── contrib
# ├── CONTRIBUTING.md
# ├── Dockerfile
# ├── docs
# ├── extra_playbooks
# ├── galaxy.yml
# ├── index.html
# ├── inventory
# ├── library
# ├── LICENSE
# ├── logo
# ├── meta
# ├── OWNERS
# ├── OWNERS_ALIASES
# ├── pipeline.Dockerfile
# ├── playbooks
# ├── plugins
# ├── README.md
# ├── recover-control-plane.yml
# ├── RELEASE.md
# ├── remove-node.yml
# ├── remove_node.yml
# ├── requirements.txt
# ├── reset.yml
# ├── roles
# ├── scale.yml
# ├── scripts
# ├── SECURITY_CONTACTS
# ├── test-infra
# ├── tests
# ├── upgrade-cluster.yml
# ├── upgrade_cluster.yml
# └── Vagrantfile

# 14 directories, 26 files
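The patch loop in extract-kubespray.sh applies unified diffs with `patch -p1`, which strips one leading path component (the conventional `a/` and `b/` prefixes produced by `diff`). As a toy round-trip illustrating that convention (a sketch, assuming `diff` and `patch` are available; all file names here are made up):

```shell
# Toy demo of the patch -p1 convention used by extract-kubespray.sh.
work=$(mktemp -d)
cd "$work"
mkdir -p a b tree
echo "old line" > a/file.txt
echo "new line" > b/file.txt
diff -u a/file.txt b/file.txt > change.patch || true  # diff exits 1 when files differ

cp a/file.txt tree/file.txt
(cd tree && patch -p1 < ../change.patch)  # -p1: a/file.txt -> file.txt
cat tree/file.txt
```

After applying, `tree/file.txt` contains the patched content, which is exactly how the script rewrites files inside the extracted kubespray directory.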

Install kubespray (takes about 3 minutes)

offline.yml : change the etcd CPU arch to a variable: {{ etcd_version }}-linux-amd64 → {{ etcd_version }}-linux-{{ image_arch }}

#
# offline.yml sample
#

http_server: "http://YOUR_HOST"
registry_host: "YOUR_HOST:35000"

# Insecure registries for containerd
containerd_registries_mirrors:
  - prefix: "{{ registry_host }}"
    mirrors:
      - host: "http://{{ registry_host }}"
        capabilities: ["pull", "resolve"]
        skip_verify: true

files_repo: "{{ http_server }}/files"
yum_repo: "{{ http_server }}/rpms"
ubuntu_repo: "{{ http_server }}/debs"

# Registry overrides
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"
github_image_repo: "{{ registry_host }}"

local_path_provisioner_helper_image_repo: "{{ registry_host }}/busybox"

# Download URLs: See roles/download/defaults/main.yml of kubespray.
kubeadm_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/v{{ kube_version }}/kubelet"

# etcd is optional if you **DON'T** use etcd_deployment=host
# etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-v{{ etcd_version }}-linux-amd64.tar.gz"
# Changed so the lab also works on Macs (arm64)
etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-v{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"

# CNI plugins
cni_download_url: "{{ files_repo }}/kubernetes/cni/cni-plugins-linux-{{ image_arch }}-v{{ cni_version }}.tgz"
# cri-tools
crictl_download_url: "{{ files_repo }}/kubernetes/cri-tools/crictl-v{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"

# If using Calico
calicoctl_download_url: "{{ files_repo }}/kubernetes/calico/v{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# If using Calico with kdd
calico_crds_download_url: "{{ files_repo }}/kubernetes/calico/v{{ calico_version }}.tar.gz"

# If using cilium
ciliumcli_download_url: "{{ files_repo }}/cilium-cli/v{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"

# helm
helm_download_url: "{{ files_repo }}/helm-v{{ helm_version }}-linux-{{ image_arch }}.tar.gz"

# crun
crun_download_url: "{{ files_repo }}/crun-{{ crun_version }}-linux-{{ image_arch }}"

# kata
kata_containers_download_url: "{{ files_repo }}/kata-static-{{ kata_containers_version }}-{{ image_arch }}.tar.xz"

# Containerd
runc_download_url: "{{ files_repo }}/runc/v{{ runc_version }}/runc.{{ image_arch }}"
nerdctl_download_url: "{{ files_repo }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
containerd_download_url: "{{ files_repo }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"

# cri-o
crio_download_url: "{{ files_repo }}/cri-o.{{ image_arch }}.v{{ crio_version }}.tar.gz"
skopeo_download_url: "{{ files_repo }}/skopeo/v{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"

# gvisor
gvisor_runsc_download_url: "{{ files_repo }}/gvisor/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/gvisor/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"

# others
youki_download_url: "{{ files_repo }}/youki-{{ youki_version }}-{{ ansible_architecture }}-musl.tar.gz"
yq_download_url: "{{ files_repo }}/yq/v{{ yq_version }}/yq_linux_{{ image_arch }}"
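To see concretely why the amd64 → {{ image_arch }} change matters, here is a minimal manual expansion of the etcd URL template above. The values are this lab's (`files_repo`, `etcd_version`); Ansible performs the same substitution per host, so a hardcoded amd64 breaks downloads on arm64 nodes:

```shell
# Hypothetical manual expansion of etcd_download_url from offline.yml.
files_repo="http://192.168.10.10/files"
etcd_version="3.5.26"

for image_arch in amd64 arm64; do
  echo "${files_repo}/kubernetes/etcd/etcd-v${etcd_version}-linux-${image_arch}.tar.gz"
done
```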

Installation

# check python version
python --version
# Python 3.12.12

# Create and activate the venv
python3.12 -m venv ~/.venv/3.12
source ~/.venv/3.12/bin/activate
# ((3.12) )  ...
which ansible
# /root/.venv/3.12/bin/ansible
tree ~/.venv/3.12/ -L 4
# /root/.venv/3.12/
# ├── bin
# │   ├── activate
# │   ├── activate.csh
# │   ├── activate.fish
# │   ├── Activate.ps1
# │   ├── ansible
# ...

# Move to the kubespray directory
cd /root/kubespray-offline/outputs/kubespray-2.30.0

# Install ansible : already installed at this point
pip install -U pip                # update pip
# Looking in indexes: http://localhost/pypi/
# Requirement already satisfied: pip in /root/.venv/3.12/lib64/python3.12/site-packages (26.0.1)
pip install -r requirements.txt   # Install ansible


# Copy offline.yml, then copy the sample inventory
cp ../../offline.yml .
cp -r inventory/sample inventory/mycluster
tree inventory/mycluster/
# inventory/mycluster/
# ├── group_vars
# │   ├── all
# │   │   ├── all.yml
# │   │   ├── aws.yml
# │   │   ├── azure.yml
# │   │   ├── containerd.yml
# │   │   ├── coreos.yml
# │   │   ├── cri-o.yml
# │   │   ├── docker.yml
# │   │   ├── etcd.yml
# │   │   ├── gcp.yml
# │   │   ├── hcloud.yml
# │   │   ├── huaweicloud.yml
# │   │   ├── oci.yml
# │   │   ├── offline.yml
# │   │   ├── openstack.yml
# │   │   ├── upcloud.yml
# │   │   └── vsphere.yml
# │   └── k8s_cluster
# │       ├── addons.yml
# │       ├── k8s-cluster.yml
# │       ├── k8s-net-calico.yml
# │       ├── k8s-net-cilium.yml
# │       ├── k8s-net-custom-cni.yml
# │       ├── k8s-net-flannel.yml
# │       ├── k8s-net-kube-ovn.yml
# │       ├── k8s-net-kube-router.yml
# │       ├── k8s-net-macvlan.yml
# │       └── kube_control_plane.yml
# └── inventory.ini

# 4 directories, 27 files

# Update the web server and image registry info : http_server, registry_host
cat offline.yml
sed -i "s/YOUR_HOST/192.168.10.10/g" offline.yml
cat offline.yml | grep 192.168.10.10
# http_server: "http://192.168.10.10"
# registry_host: "192.168.10.10:35000"

# Copy the updated offline.yml into the inventory directory
\cp -f offline.yml inventory/mycluster/group_vars/all/offline.yml
cat inventory/mycluster/group_vars/all/offline.yml

# Write the inventory file
cat <<EOF > inventory/mycluster/inventory.ini
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1

[etcd:children]
kube_control_plane

[kube_node]
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12
EOF
cat inventory/mycluster/inventory.ini
# [kube_control_plane]
# k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1

# [etcd:children]
# kube_control_plane

# [kube_node]
# k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12
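Before running Ansible, a quick structural check on the inventory can catch typos in group headers. A small sketch, using a temp copy with the same content as inventory/mycluster/inventory.ini:

```shell
# Sketch: verify the expected group headers exist in an inventory file.
inv=$(mktemp)
cat > "$inv" <<'EOF'
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1

[etcd:children]
kube_control_plane

[kube_node]
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12
EOF

for group in kube_control_plane etcd:children kube_node; do
  grep -q "^\[${group}\]" "$inv" && echo "OK: [${group}]"
done
```

In the real setup you would point `grep` at inventory/mycluster/inventory.ini instead of the temp copy.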

# Verify Ansible connectivity
ansible -i inventory/mycluster/inventory.ini all -m ping
# [WARNING]: Platform linux on host k8s-node2 is using the discovered Python
# interpreter at /usr/bin/python3.12, but future installation of another Python
# interpreter could change the meaning of that path. See
# https://docs.ansible.com/ansible-
# core/2.17/reference_appendices/interpreter_discovery.html for more information.
# k8s-node2 | SUCCESS => {
#     "ansible_facts": {
#         "discovered_interpreter_python": "/usr/bin/python3.12"
#     },
#     "changed": false,
#     "ping": "pong"
# }
# [WARNING]: Platform linux on host k8s-node1 is using the discovered Python
# interpreter at /usr/bin/python3.12, but future installation of another Python
# interpreter could change the meaning of that path. See
# https://docs.ansible.com/ansible-
# core/2.17/reference_appendices/interpreter_discovery.html for more information.
# k8s-node1 | SUCCESS => {
#     "ansible_facts": {
#         "discovered_interpreter_python": "/usr/bin/python3.12"
#     },
#     "changed": false,
#     "ping": "pong"
# }

# Configure the offline repo on each node
tree ../playbook/
# ├── offline-repo.yml
# └── roles
#     └── offline-repo
#         ├── defaults
#         │   └── main.yml
#         ├── files
#         │   └── 99offline
#         └── tasks
#             ├── Debian.yml
#             ├── main.yml
#             └── RedHat.yml

mkdir offline-repo
cp -r ../playbook/ offline-repo/
tree offline-repo/
# offline-repo/
# └── playbook
#     ├── offline-repo.yml
#     └── roles
#         └── offline-repo
#             ├── defaults
#             │   └── main.yml
#             ├── files
#             │   └── 99offline
#             └── tasks
#                 ├── Debian.yml
#                 ├── main.yml
#                 └── RedHat.yml
ansible-playbook -i inventory/mycluster/inventory.ini offline-repo/playbook/offline-repo.yml
# The play finishes remarkably fast..
# PLAY [all] ***********************************************************************
# Thursday 12 February 2026  02:25:27 +0900 (0:00:00.018)       0:00:00.018 ***** 

# TASK [Gathering Facts] ***********************************************************
# ok: [k8s-node1]
# ok: [k8s-node2]
# Thursday 12 February 2026  02:25:28 +0900 (0:00:00.998)       0:00:01.016 ***** 

# TASK [offline-repo : include_tasks] **********************************************
# included: /root/kubespray-offline/outputs/kubespray-2.30.0/offline-repo/playbook/roles/offline-repo/tasks/RedHat.yml for k8s-node2, k8s-node1
# Thursday 12 February 2026  02:25:28 +0900 (0:00:00.025)       0:00:01.042 ***** 

# TASK [offline-repo : Install offline yum repo] ***********************************
# changed: [k8s-node2]
# changed: [k8s-node1]

# PLAY RECAP ***********************************************************************
# k8s-node1                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
# k8s-node2                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

# Thursday 12 February 2026  02:25:28 +0900 (0:00:00.188)       0:00:01.231 ***** 
# =============================================================================== 
# Gathering Facts ----------------------------------------------------------- 1.00s
# offline-repo : Install offline yum repo ----------------------------------- 0.19s
# offline-repo : include_tasks ---------------------------------------------- 0.03s

# Verify on the k8s nodes

ssh k8s-node1 tree /etc/yum.repos.d/
# /etc/yum.repos.d/
# ├── backup
# │   ├── rocky-addons.repo
# │   ├── rocky-devel.repo
# │   ├── rocky-extras.repo
# │   └── rocky.repo
# ├── internal-rocky.repo
# └── offline.repo

# 2 directories, 6 files
ssh k8s-node1 dnf repolist
# repo id                          repo name
# appstream                        Rocky Linux 10 - AppStream
# baseos                           Rocky Linux 10 - BaseOS
# extras                           Rocky Linux 10 - Extras
# offline-repo                     Offline repo for kubespray

ssh k8s-node1 cat /etc/yum.repos.d/offline.repo
# [offline-repo]
# baseurl = http://192.168.10.10/rpms/local
# enabled = 1
# gpgcheck = 0
# name = Offline repo for kubespray

## Also remove the stock repos before the install : if skipped, the kubespray run fails
for i in rocky-addons rocky-devel rocky-extras rocky; do
  ssh k8s-node1 "mv /etc/yum.repos.d/$i.repo /etc/yum.repos.d/$i.repo.original"
  ssh k8s-node2 "mv /etc/yum.repos.d/$i.repo /etc/yum.repos.d/$i.repo.original"
done

ssh k8s-node1 tree /etc/yum.repos.d/
ssh k8s-node1 dnf repolist
# repo id                           repo name
# offline-repo                      Offline repo for kubespray

ssh k8s-node2 tree /etc/yum.repos.d/
ssh k8s-node2 dnf repolist
# repo id                           repo name
# offline-repo                      Offline repo for kubespray


# Confirm kubectl is not present on admin-lb
which kubectl
# /usr/bin/which: no kubectl in (/root/.venv/3.12/bin:/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

# Set group vars to match the lab environment
echo "kubectl_localhost: true" >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml # also download the kubectl binary into the bin directory of the local machine running the deploy
sed -i 's|kube_owner: kube|kube_owner: root|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
sed -i 's|kube_network_plugin: calico|kube_network_plugin: flannel|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
sed -i 's|kube_proxy_mode: ipvs|kube_proxy_mode: iptables|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
sed -i 's|enable_nodelocaldns: true|enable_nodelocaldns: false|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
grep -iE 'kube_owner|kube_network_plugin:|kube_proxy_mode|enable_nodelocaldns:' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
echo "enable_dns_autoscaler: false" >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
echo "flannel_interface: enp0s9" >> inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
grep "^[^#]" inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
sed -i 's|helm_enabled: false|helm_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml
sed -i 's|metrics_server_enabled: false|metrics_server_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml
grep -iE 'metrics_server_enabled:' inventory/mycluster/group_vars/k8s_cluster/addons.yml
echo "metrics_server_requests_cpu: 25m"     >> inventory/mycluster/group_vars/k8s_cluster/addons.yml
echo "metrics_server_requests_memory: 16Mi" >> inventory/mycluster/group_vars/k8s_cluster/addons.yml
# Note: cilium needs to set kube_owner to root https://kubespray.io/#/docs/CNI/cilium?id=unprivileged-agent-configuration
# kube_owner: root
# kube_network_plugin: flannel
# kube_proxy_mode: iptables
# enable_nodelocaldns: false

# Check the supported version info
cat roles/kubespray_defaults/vars/main/checksums.yml | grep -i kube -A40


# [macOS users] Troubleshooting
# -----------------------------------------------
# TASK [download : Download_file | Download item] **************************************************************************
# fatal: [k8s-node1]: FAILED! => {"attempts": 4, "changed": false, "dest": "/tmp/releases/etcd-3.5.26-linux-arm64.tar.gz", "elapsed": 0, "msg": "Request failed", "response": "HTTP Error 404: Not Found", "status_code": 404, "url": "http://192.168.10.10/files/kubernetes/etcd/etcd-v3.5.26-linux-amd64.tar.gz"}
# http://192.168.10.10/files/kubernetes/etcd/etcd-v3.5.26-linux-amd64.tar.gz
# # vi roles/download/tasks/download_file.yml >> no_log: false # (Note) set this to print detailed logs for the failing task and identify the cause

cat inventory/mycluster/group_vars/all/offline.yml | grep amd64
# etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-v{{ etcd_version }}-linux-amd64.tar.gz"
sed -i 's/amd64/arm64/g' inventory/mycluster/group_vars/all/offline.yml
# -----------------------------------------------


# Deploy : finishes in about 3 minutes!
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.34.3"


# After the install, confirm the DNS config file added under NetworkManager
ssh k8s-node2 cat /etc/NetworkManager/conf.d/dns.conf
# [global-dns-domain-*]
# servers = 10.233.0.3,192.168.10.10
# [global-dns]
# searches = default.svc.cluster.local,svc.cluster.local
# options = ndots:2,timeout:2,attempts:2
# However, because of the '/etc/NetworkManager/conf.d/99-dns-none.conf' file, the settings above are not reflected in resolv.conf, i.e. the nodes themselves cannot resolve Service names.
ssh k8s-node2 cat /etc/resolv.conf
# nameserver 192.168.10.10

# After the install, confirm NetworkManager is configured to leave certain NICs unmanaged
ssh k8s-node2 cat /etc/NetworkManager/conf.d/k8s.conf
# [keyfile]
# unmanaged-devices+=interface-name:kube-ipvs0;interface-name:nodelocaldns


# Confirm the kubectl binary was also downloaded onto the server that ran ansible-playbook
# file inventory/mycluster/artifacts/kubectl
ls -l inventory/mycluster/artifacts/kubectl
tree inventory/mycluster/
# inventory/mycluster/
# ├── artifacts
# │   └── kubectl
# ├── credentials
# │   └── kubeadm_certificate_key.creds
# ├── group_vars
# ...

cp inventory/mycluster/artifacts/kubectl /usr/local/bin/
kubectl version --client=true
# Client Version: v1.34.3
# Kustomize Version: v5.7.1

# Check the k8s admin credentials
mkdir /root/.kube
scp k8s-node1:/root/.kube/config /root/.kube/
sed -i 's/127.0.0.1/192.168.10.11/g' /root/.kube/config
k9s

# Set up shell completion and the k alias
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile

# Confirm the image registry is 192.168.10.10:35000
kubectl get deploy,sts,ds -n kube-system -owide
# NAME                             READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS       IMAGES                                                     SELECTOR
# deployment.apps/coredns          2/2     2            2           4m9s   coredns          192.168.10.10:35000/coredns/coredns:v1.12.1                k8s-app=kube-dns
# deployment.apps/metrics-server   1/1     1            1           4m5s   metrics-server   192.168.10.10:35000/metrics-server/metrics-server:v0.8.0   app.kubernetes.io/name=metrics-server,version=0.8.0

# NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS     IMAGES                                        SELECTOR
# daemonset.apps/kube-flannel              0         0         0       0            0           <none>                   4m25s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
# ...
# daemonset.apps/kube-proxy                1         1         1       1            1           kubernetes.io/os=linux   4m43s   kube-proxy     192.168.10.10:35000/kube-proxy:v1.34.3        k8s-app=kube-proxy

Apply a changed kube_version to kubespray-offline and download the related files

# Move to the directory and back up the existing lists
cd /root/kubespray-offline
# tree cache/kubespray-2.30.0/contrib/offline/temp/
# ├── files.list
# ├── files.list.template
# ├── images.list
# └── images.list.template
mv cache/kubespray-2.30.0/contrib/offline/temp/files.list cache/kubespray-2.30.0/contrib/offline/temp/files-2.list
mv cache/kubespray-2.30.0/contrib/offline/temp/images.list cache/kubespray-2.30.0/contrib/offline/temp/images-2.list

# (Optional) when stripping the parts of the download script that perform the actual downloads
# -----------------------------------------------
cat download-kubespray-files.sh
cp download-kubespray-files.sh download-kubespray-files.bak
sed -i '/generate_list$/,$ { /generate_list/!d }' download-kubespray-files.sh
diff download-kubespray-files.sh download-kubespray-files.bak
# 104a105,118
# > 
# > mkdir -p $FILES_DIR
# > 
# > cp ${KUBESPRAY_DIR}/contrib/offline/temp/files.list $FILES_DIR/
# > cp ${KUBESPRAY_DIR}/contrib/offline/temp/images.list $IMAGES_DIR/
# > 
# > # download files
# > files=$(cat ${FILES_DIR}/files.list)
# > for i in $files; do
# >     get_url $i
# > done
# > 
# > # download images
# > ./download-images.sh || exit 1

# Add kube_version=1.33.7 to the list-generation script (run after the ansible-playbook step)
sed -i 's|offline/generate_list.sh|offline/generate_list.sh -e kube_version=1.33.7|g' download-kubespray-files.sh
cat download-kubespray-files.sh | grep kube_version
    # LANG=C /bin/bash ${KUBESPRAY_DIR}/contrib/offline/generate_list.sh -e kube_version=1.33.7 || exit 1
# -----------------------------------------------
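The range expression in that sed call is dense: from the first line matching `generate_list$` through end-of-file, it deletes every line that does not itself contain `generate_list`. A self-contained demonstration on a toy file (contents invented for illustration):

```shell
# Demonstrate the sed range-delete used on download-kubespray-files.sh.
f=$(mktemp)
printf '%s\n' 'step one' 'generate_list' 'download files' 'more downloads' > "$f"
sed -i '/generate_list$/,$ { /generate_list/!d }' "$f"
cat "$f"   # only lines before the range, plus 'generate_list' itself, remain
```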


# Run the script, then check
./download-kubespray-files.sh
cd cache/kubespray-2.30.0/contrib/offline/temp

# Compare against the initially installed version
diff files-2.list files.list
# 1,3c1,3
# < https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet
# < https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl
# < https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm
# ---
# > https://dl.k8s.io/release/v1.33.7/bin/linux/arm64/kubelet
# > https://dl.k8s.io/release/v1.33.7/bin/linux/arm64/kubectl
# > https://dl.k8s.io/release/v1.33.7/bin/linux/arm64/kubeadm
# 9,10c9,10
# < https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.34.0/crictl-v1.34.0-linux-arm64.tar.gz
# < https://storage.googleapis.com/cri-o/artifacts/cri-o.arm64.v1.34.4.tar.gz
# ---
# > https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.33.0/crictl-v1.33.0-linux-arm64.tar.gz
# > https://storage.googleapis.com/cri-o/artifacts/cri-o.arm64.v1.33.8.tar.gz
vi -d files-2.list files.list

diff images-2.list images.list
# 21c21
# < registry.k8s.io/pause:3.10.1
# ---
# > registry.k8s.io/pause:3.10
# 25c25
# < registry.k8s.io/coredns/coredns:v1.12.1
# ---
# > registry.k8s.io/coredns/coredns:v1.12.0
# 50,53c50,53
# < registry.k8s.io/kube-apiserver:v1.34.3
# < registry.k8s.io/kube-controller-manager:v1.34.3
# < registry.k8s.io/kube-scheduler:v1.34.3
# < registry.k8s.io/kube-proxy:v1.34.3
# ---
# > registry.k8s.io/kube-apiserver:v1.33.7
# > registry.k8s.io/kube-controller-manager:v1.33.7
# > registry.k8s.io/kube-scheduler:v1.33.7
# > registry.k8s.io/kube-proxy:v1.33.7
vi -d images-2.list images.list


# (Optional) to actually download the files and images for kube_version 1.33.7
cp download-kubespray-files.bak download-kubespray-files.sh
sed -i 's|offline/generate_list.sh|offline/generate_list.sh -e kube_version=1.33.7|g' download-kubespray-files.sh
./download-kubespray-files.sh

Hands-on with K8S-related air-gapped services

![alt text](<images/폐쇠망 필요 서비스.png>)

Private Container (Image) Registry setup


################################
# Deploy a sample app : nginx:alpine
################################
# [k8s-node] no external connectivity available
ping -c 1 -w 1 -W 1 8.8.8.8
ip route
# default via 192.168.10.10 dev enp0s9 proto static metric 200 
# 192.168.10.0/24 dev enp0s9 proto kernel scope link src 192.168.10.11 metric 100 
crictl images
# IMAGE                                               TAG                 IMAGE ID            SIZE
# 192.168.10.10:35000/coredns/coredns                 v1.12.1             138784d87c9c5       73.2MB
# 192.168.10.10:35000/flannel/flannel-cni-plugin      v1.7.1-flannel1     e5bf9679ea8c3       11.4MB
# 192.168.10.10:35000/flannel/flannel                 v0.27.3             cadcae92e6360       102MB
# 192.168.10.10:35000/kube-apiserver                  v1.34.3             cf65ae6c8f700       84.8MB
# 192.168.10.10:35000/kube-controller-manager         v1.34.3             7ada8ff13e54b       72.6MB
# 192.168.10.10:35000/kube-proxy                      v1.34.3             4461daf6b6af8       75.9MB
# 192.168.10.10:35000/kube-scheduler                  v1.34.3             2f2aa21d34d2d       51.6MB
# 192.168.10.10:35000/metrics-server/metrics-server   v0.8.0              bc6c1e09a843d       80.8MB
# 192.168.10.10:35000/pause                           3.10.1              d7b100cd9a77b       517kB
tree /etc/containerd/certs.d/
# /etc/containerd/certs.d/
# └── 192.168.10.10:35000
#     └── hosts.toml


# [admin-lb] try deploying an nginx Deployment
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine   # docker.io/library/nginx:alpine
          ports:
            - containerPort: 80
EOF

# Check : pulling the docker.io/library/nginx:alpine image fails!
kubectl describe pod
...
  # Warning  FailedScheduling        24s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  # Normal   Scheduled               10s   default-scheduler  Successfully assigned default/nginx-54fc99c8d-hlzql to k8s-node2
  # Warning  FailedCreatePodSandBox  10s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "04b1e8fe34c38b8535d7e09e33b54d1a0dde95f2030e511eb88447bd18e066e2": plugin type="flannel" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  # Normal   Pulling                 3s    kubelet            Pulling image "nginx:alpine"
...

################################
# Push the image to the (container) image registry
# With this approach, every image has to be pushed manually
################################
# [admin] push the image to the (container) image registry
podman images
# REPOSITORY                 TAG         IMAGE ID      CREATED      SIZE
# 192.168.10.10:5000/alpine  1.0         1ab49c19c53e  2 weeks ago  8.98 MB
# Pull nginx:alpine locally
podman pull nginx:alpine
# select docker.io/library/nginx:alpine at the prompt
# ✔ docker.io/library/nginx:alpine
# Trying to pull docker.io/library/nginx:alpine...
# Getting image source signatures
# Copying blob 7833e4e4252c done   | 
# Copying blob d8ad8cd72600 skipped: already exists  
# Copying blob 31d394b0c9ed done   | 
# Copying blob 9084d2ffc283 done   | 
# Copying blob c5ad07fbd6e6 done   | 
# Copying blob 821790ca706f done   | 
# Copying blob 88799a707571 done   | 
# Copying blob da8475fa07c7 done   | 
# Copying config 128568fed7 done   | 
# Writing manifest to image destination
# 128568fed7ff6f758ccfd95b4d4491a53d765e5553c46f44889c6c5f136c8c5b
podman images | grep nginx
# docker.io/library/nginx                                alpine                                                        128568fed7ff  6 days ago     62.9 MB
# docker.io/library/nginx                                1.29.4                                                        85e894eaa91f  8 days ago     184 MB
# registry.k8s.io/ingress-nginx/controller               v1.13.3                                                       21bfedf4686d  4 months ago   334 MB
# docker.io/library/nginx                                1.28.0-alpine                                                 5a91d90f47dd  9 months ago   51.2 MB
# Push the image to the (container) image registry
podman tag nginx:alpine 192.168.10.10:35000/library/nginx:alpine
podman images | grep nginx
# docker.io/library/nginx                                alpine                                                        128568fed7ff  6 days ago     62.9 MB
# 192.168.10.10:35000/library/nginx                      alpine                                                        128568fed7ff  6 days ago     62.9 MB
# docker.io/library/nginx                                1.29.4                                                        85e894eaa91f  8 days ago     184 MB
# registry.k8s.io/ingress-nginx/controller               v1.13.3                                                       21bfedf4686d  4 months ago   334 MB
# docker.io/library/nginx                                1.28.0-alpine                                                 5a91d90f47dd  9 months ago   51.2 MB
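Retagging by hand scales poorly when mirroring many images. The renaming rule used above (swap the upstream registry host for the private one, keep the repository path) can be sketched as a small helper; `map_image` is a hypothetical name, not part of any tool:

```shell
# Sketch: derive a private-registry tag from an upstream image reference.
registry="192.168.10.10:35000"

map_image() {
  src=$1
  first=${src%%/*}          # leading path component
  case $first in
    *.*|*:*) echo "${registry}/${src#*/}" ;;  # component is a registry host: drop it
    *)       echo "${registry}/${src}" ;;     # no registry host: keep the whole path
  esac
}

map_image docker.io/library/nginx:alpine   # -> 192.168.10.10:35000/library/nginx:alpine
map_image quay.io/coreos/etcd:v3.5.26      # -> 192.168.10.10:35000/coreos/etcd:v3.5.26
map_image registry.k8s.io/pause:3.10.1     # -> 192.168.10.10:35000/pause:3.10.1
```

Feeding an images.list through a helper like this yields the `podman tag` targets in one pass instead of one manual command per image.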

# Container engines require HTTPS by default. To test over HTTP on the internal network, register the registry address as an insecure registry.
# (Note) registries.conf belongs to containers-common, so it applies equally to podman, skopeo, buildah, etc.
cat <<EOF >> /etc/containers/registries.conf
[[registry]]
location = "192.168.10.10:35000"
insecure = true
EOF
grep "^[^#]" /etc/containers/registries.conf

# Push to the private registry : success!
podman push 192.168.10.10:35000/library/nginx:alpine
# Getting image source signatures
# Copying blob e7a74677e0bc done   | 
# Copying blob b55a1191629e done   | 
# Copying blob ce442f0e19c5 done   | 
# Copying blob 45f3ea5848e8 done   | 
# Copying blob ca1e0477244e done   | 
# Copying blob cfbf14f957f1 done   | 
# Copying blob a5aee8cb909c done   | 
# Copying blob 5196ca0ce600 done   | 
# Copying config 128568fed7 done   | 
# Writing manifest to image destination
# List the uploaded images and tags
curl -s 192.168.10.10:35000/v2/_catalog | jq
# {
#   "repositories": [
#      ...
#     "library/nginx",

curl -s 192.168.10.10:35000/v2/library/nginx/tags/list | jq
# {
#   "name": "library/nginx",
#   "tags": [
#     "1.28.0-alpine",
#     "1.29.4",
#     "alpine"
#   ]
# }

################################
# Update the sample app image and verify the deployment
################################
# Current pod status
kubectl get pod
# NAME                    READY   STATUS             RESTARTS   AGE
# nginx-54fc99c8d-m6fl5   0/1     ImagePullBackOff   0          16m

# Image info
kubectl get deploy -owide
# NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
# nginx   0/1     1            0           17m   nginx        nginx:alpine   app=nginx

# Update the image on the Deployment
kubectl set image deployment/nginx nginx=192.168.10.10:35000/library/nginx:alpine
kubectl get deploy -owide
# NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS IMAGES                                 SELECTOR
# nginx   1/1     1            1           19m   nginx        192.168.10.10:35000/library/nginx:alpine   app=nginx

# Current pod status
kubectl get pod
# NAME                    READY   STATUS    RESTARTS   AGE
# nginx-5ff7dd7b8-t6b2n   1/1     Running   0          22s

################################
# Configure a registry mirror on the k8s nodes, then verify the deployment (so the original image tag can be used as-is)
# Pods can keep their upstream image references, but every node must be configured individually
################################
# Delete, then deploy the Deployment again
kubectl delete deployments.apps nginx

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine   # docker.io/library/nginx:alpine
          ports:
            - containerPort: 80
EOF

# Current pod status
kubectl get pod
# NAME                    READY   STATUS         RESTARTS   AGE
# nginx-54fc99c8d-xbbb2   0/1     ErrImagePull   0          36s

[k8s-node1, k8s-node2]
# Point docker.io at the internal image registry instead
mkdir -p /etc/containerd/certs.d/docker.io # create a docker.io directory so pulls for docker.io are redirected internally
cat <<EOF > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"         # [HTTPS] upstream registry address; this file is consulted for docker.io image references

[host."http://192.168.10.10:35000"]  # [HTTP] internal registry used as the mirror; images for docker.io are actually pulled from here
  capabilities = ["pull", "resolve"] # "pull": allow image download, "resolve": tag-to-digest resolution
  skip_verify = true                 # skip TLS certificate verification (likely moot over plain HTTP; worth testing)
EOF
systemctl restart containerd

# Pull the image and check : the k8s nodes have no external connectivity, yet the pull works thanks to the docker.io mirror setting!
nerdctl pull docker.io/library/nginx:alpine
crictl images | grep nginx
# 192.168.10.10:35000/library/nginx                   alpine              aea88c29b151e       25.7MB
# docker.io/library/nginx                             alpine              aea88c29b151e       25.7MB


# Current pod status
kubectl get pod
# NAME                    READY   STATUS    RESTARTS   AGE
# nginx-54fc99c8d-8jpqt   1/1     Running   0          12m

# But this way the setting must be applied on every node, which is inconvenient
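Until kubespray automates this, stamping out the same hosts.toml per upstream registry can at least be scripted. A sketch that writes into a temp directory standing in for /etc/containerd/certs.d (the file layout mirrors the one above; the upstream list is this lab's):

```shell
# Sketch: generate containerd mirror configs for several upstream registries.
mirror="http://192.168.10.10:35000"
base=$(mktemp -d)   # stand-in for /etc/containerd/certs.d

for upstream in docker.io registry-1.docker.io quay.io; do
  mkdir -p "${base}/${upstream}"
  cat > "${base}/${upstream}/hosts.toml" <<EOF
server = "https://${upstream}"

[host."${mirror}"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
EOF
done

find "${base}" -name hosts.toml | sort
```

On a real node you would set `base=/etc/containerd/certs.d` and then `systemctl restart containerd`, but kubespray's containerd_registries_mirrors (next section) renders the same files for you.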

################################
# Set containerd_registries_mirrors values in kubespray, then apply them to the nodes with --tags containerd
# The mirror settings make all nodes use the mirror
################################
[admin]

# Check the containerd registry info
cat inventory/mycluster/group_vars/all/offline.yml | head -n 15
# ...
# http_server: "http://192.168.10.10"
# registry_host: "192.168.10.10:35000"

# # Insecure registries for containerd
# containerd_registries_mirrors:
#   - prefix: "{{ registry_host }}"
#     mirrors:
#       - host: "http://{{ registry_host }}"
#         capabilities: ["pull", "resolve"]
#         skip_verify: true

# Edit
vim inventory/mycluster/group_vars/all/offline.yml
-------------------------------------------------
# Insecure registries for containerd
containerd_registries_mirrors:
  - prefix: "{{ registry_host }}"
    mirrors:
      - host: "http://{{ registry_host }}"
        capabilities: ["pull", "resolve"]
        skip_verify: true
  - prefix: "docker.io"
    mirrors:
      - host: "http://192.168.10.10:35000"
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: "registry-1.docker.io"
    mirrors:
      - host: "http://192.168.10.10:35000"
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: "quay.io"
    mirrors:
      - host: "http://192.168.10.10:35000"
        capabilities: ["pull", "resolve"]
        skip_verify: false
-------------------------------------------------
cat inventory/mycluster/group_vars/all/offline.yml | head -n 30

# Apply the updated settings
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.34.3" --tags containerd

# Verify
ssh k8s-node2 tree /etc/containerd
# /etc/containerd
# ├── certs.d
# │   ├── 192.168.10.10:35000
# │   │   └── hosts.toml
# │   ├── docker.io
# │   │   └── hosts.toml
# │   ├── quay.io
# │   │   └── hosts.toml
# │   └── registry-1.docker.io
# │       └── hosts.toml
# ├── config.toml
# └── cri-base.json

ssh k8s-node2 cat /etc/containerd/certs.d/quay.io/hosts.toml
# server = "https://quay.io"
# [host."http://192.168.10.10:35000"]
#   capabilities = ["pull","resolve"]
#   skip_verify = false
#   override_path = false

################################
# 참고) quay.io 이미지 사용하는 디플로이먼트 배포 테스트
################################
# quay.io nginx 이미지 가져오기
podman pull quay.io/nginx/nginx-unprivileged
podman images | grep quay
# quay.io/nginx/nginx-unprivileged                       latest                                                        2855ab84d81d  2 days ago     176 MB
# quay.io/calico/typha                                   v3.30.6                                                       22b487c8d463  2 weeks ago    82.4 MB
# quay.io/calico/node                                    v3.30.6                                                       da2d3185a92c  2 weeks ago    400 MB
# quay.io/calico/kube-controllers                        v3.30.6                                                       aa683ae5ed4f  2 weeks ago    117 MB
# quay.io/calico/cni                                     v3.30.6                                                       ea5b530b72ac  2 weeks ago    157 MB
# quay.io/calico/apiserver                               v3.30.6                                                       5c3f658d92c1  2 weeks ago    114 MB
# quay.io/cilium/operator                                v1.18.6                                                       240efd941dbb  4 weeks ago    258 MB
# quay.io/cilium/cilium                                  v1.18.6                                                       6652ae6ea5d1  4 weeks ago    725 MB
# quay.io/cilium/hubble-relay                            v1.18.6                                                       a861ba8e34e2  4 weeks ago    90.9 MB
# quay.io/coreos/etcd                                    v3.5.26                                                       b0892fff86d3  7 weeks ago    63.3 MB
# quay.io/cilium/cilium-envoy                            v1.34.10-1762597008-ff7ae7d623be00078865cff1b0672cc5d9bfc6d5  f6a19e0d7ae7  3 months ago   202 MB
# quay.io/cilium/hubble-ui-backend                       v0.13.3                                                       01633e2d07a4  5 months ago   67.1 MB
# quay.io/cilium/hubble-ui                               v0.13.3                                                       f84e0b3555ca  5 months ago   31.9 MB
# quay.io/cilium/certgen                                 v0.2.4                                                        6f358e8c8dc2  9 months ago   58.7 MB
# quay.io/jetstack/cert-manager-controller               v1.15.3                                                       4f3dc4247923  18 months ago  65.4 MB
# quay.io/jetstack/cert-manager-webhook                  v1.15.3                                                       af719c1e1656  18 months ago  56.4 MB
# quay.io/jetstack/cert-manager-cainjector               v1.15.3                                                       c8e016f32d8d  18 months ago  42.9 MB
# quay.io/metallb/controller                             v0.13.9                                                       b6a9700b077b  2 years ago    63.1 MB
# quay.io/metallb/speaker                                v0.13.9                                                       e39bc116ce9a  2 years ago    111 MB
podman tag quay.io/nginx/nginx-unprivileged 192.168.10.10:35000/nginx/nginx-unprivileged
podman images | grep nginx

# 프라이빗 레지스트리에 업로드
podman push 192.168.10.10:35000/nginx/nginx-unprivileged

# 업로드된 이미지와 태그 조회
curl -s 192.168.10.10:35000/v2/_catalog | jq
curl -s 192.168.10.10:35000/v2/nginx/nginx-unprivileged/tags/list | jq


# quay.io nginx 디플로이먼트 배포 시도
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quay
  labels:
    app: nginx-quay
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-quay
  template:
    metadata:
      labels:
        app: nginx-quay
    spec:
      containers:
        - name: nginx-quay
          image: quay.io/nginx/nginx-unprivileged
          ports:
            - containerPort: 80
EOF

# 확인
kubectl get pod -l app=nginx-quay -owide
# NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
# nginx-quay-8998ffd9b-s7879   1/1     Running   0          11s   10.233.65.7   k8s-node2   <none>           <none>
kubectl get deploy nginx-quay -owide
# NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                             SELECTOR
# nginx-quay   1/1     1            1           27s   nginx-quay   quay.io/nginx/nginx-unprivileged   app=nginx-quay

ssh k8s-node2 crictl images | grep quay
# quay.io/nginx/nginx-unprivileged                    latest              2855ab84d81d9       61MB
# 삭제
kubectl delete deploy nginx-quay

Helm Artifact Repository: a repository for Helm charts, e.g. ChartMuseum

OCI (Open Container Initiative) Registry support in Helm is a standardized way to store and distribute Helm charts using a Docker/OCI container registry.

💡 References: Helm official documentation | Harbor OCI guide

The paradigm shift

Originally, OCI simply defined the standard format for container images. The standard has since been extended so that Helm charts, too, can be stored in a container image registry.

Before 🔴 → After 🟢

  • Before: a dedicated Helm repo server had to be operated
  • After: Helm charts can be stored in any existing OCI-compatible container registry

Classic Helm repo vs OCI comparison

| Aspect | Helm repo (classic) | OCI (new) |
|---|---|---|
| Storage | Dedicated Helm repo (e.g. `https://charts.bitnami.com/bitnami`) | OCI container registry (e.g. `registry-1.docker.io`) |
| Install commands | `helm repo add` → `helm repo update` → `helm install` | `helm install oci://...` |
| Authentication | Separate credentials per repo server | Reuses Docker registry authentication |
| Security management | Separate security policy to maintain | Reuses the registry's existing security policy |
| Pros | Familiar, stable, broadly compatible | CI/CD friendly, no separate repo server, standardized access |
| Cons | Separate repo server to operate, extra infrastructure cost | Requires Helm 3.8+, relatively new approach |

################################################################
# [Case0] nginx helm 차트 작성, 배포 및 tgz 패키징
################################################################

################################
# 이미지 다운로드 후 로컬 저장소에 push
################################
# [admin-lb] (컨테이너) 이미지 저장소에 이미지 push
podman images

# 로컬에 nginx:alpine 다운로드
podman pull nginx:1.28.0-alpine
podman images | grep nginx
# quay.io/nginx/nginx-unprivileged                       latest                                                        2855ab84d81d  2 days ago     176 MB
# 192.168.10.10:35000/nginx/nginx-unprivileged           latest                                                        2855ab84d81d  2 days ago     176 MB
# 192.168.10.10:35000/library/nginx                      alpine                                                        128568fed7ff  6 days ago     62.9 MB
# docker.io/library/nginx                                alpine                                                        128568fed7ff  6 days ago     62.9 MB
# docker.io/library/nginx                                1.29.4                                                        85e894eaa91f  8 days ago     184 MB
# registry.k8s.io/ingress-nginx/controller               v1.13.3                                                       21bfedf4686d  4 months ago   334 MB
# docker.io/library/nginx                                1.28.0-alpine                                                 5a91d90f47dd  9 months ago   51.2 MB
# (컨테이너) 이미지 저장소에 이미지 push
podman tag nginx:1.28.0-alpine 192.168.10.10:35000/library/nginx:1.28.0-alpine
podman images | grep nginx
# 192.168.10.10:35000/nginx/nginx-unprivileged           latest                                                        2855ab84d81d  2 days ago     176 MB
# quay.io/nginx/nginx-unprivileged                       latest                                                        2855ab84d81d  2 days ago     176 MB
# docker.io/library/nginx                                alpine                                                        128568fed7ff  6 days ago     62.9 MB
# 192.168.10.10:35000/library/nginx                      alpine                                                        128568fed7ff  6 days ago     62.9 MB
# docker.io/library/nginx                                1.29.4                                                        85e894eaa91f  8 days ago     184 MB
# registry.k8s.io/ingress-nginx/controller               v1.13.3                                                       21bfedf4686d  4 months ago   334 MB
# docker.io/library/nginx                                1.28.0-alpine                                                 5a91d90f47dd  9 months ago   51.2 MB
# 192.168.10.10:35000/library/nginx                      1.28.0-alpine                                                 5a91d90f47dd  9 months ago   51.2 MB
# 프라이빗 레지스트리에 업로드 : 성공!
podman push 192.168.10.10:35000/library/nginx:1.28.0-alpine

# 업로드된 이미지 태그 조회
curl -s 192.168.10.10:35000/v2/library/nginx/tags/list | jq
{
  "name": "library/nginx",
  "tags": [
    "1.28.0-alpine",
    "1.29.4",
    "alpine"
  ]
}
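The pull → tag → push sequence repeats for every image that has to be mirrored into the private registry, so it can be wrapped in a small helper. The function name and argument order here are my own convention, not part of kubespray-offline:

```shell
# Helper sketch: wrap the repeated pull/tag/push mirroring steps.
# Usage: mirror_image <source-image-without-tag> <tag> <private-registry>
mirror_image() {
  local src="$1" tag="$2" reg="$3"
  # Strip the source registry part: docker.io/library/nginx -> library/nginx
  local name="${src#*/}"
  podman pull "${src}:${tag}"
  podman tag  "${src}:${tag}" "${reg}/${name}:${tag}"
  podman push "${reg}/${name}:${tag}"
}

# In this lab:
#   mirror_image docker.io/library/nginx 1.28.0-alpine 192.168.10.10:35000
```

Keeping the repository path (`library/nginx`, `nginx/nginx-unprivileged`, ...) identical to the upstream one makes the containerd mirror configuration above work without per-image rewrites.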
################################
# nginx helm 차트 작성
################################
# 디렉터리 생성
cd
mkdir nginx-chart
cd nginx-chart

mkdir templates

cat > templates/configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  index.html: |
{{ .Values.indexHtml | indent 4 }}
EOF

cat > templates/deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: nginx
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        ports:
        - containerPort: 80
        volumeMounts:
        - name: index-html
          mountPath: /usr/share/nginx/html/index.html
          subPath: index.html
      volumes:
      - name: index-html
        configMap:
          name: {{ .Release.Name }}
EOF

cat > templates/service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
  type: NodePort
EOF

cat > values.yaml <<EOF
indexHtml: |
  <!DOCTYPE html>
  <html>
  <head>
    <title>Welcome to Nginx!</title>
  </head>
  <body>
    <h1>Hello, Kubernetes!</h1>
    <p>Nginx version 1.28.0 - alpine</p>
  </body>
  </html>

image:
  repository: nginx
  tag: 1.28.0-alpine

replicaCount: 1
EOF

cat > Chart.yaml <<EOF
apiVersion: v2
name: nginx-chart
description: A Helm chart for deploying Nginx with custom index.html
type: application
version: 1.0.0
appVersion: "1.28.0-alpine"
EOF

tree
├── Chart.yaml
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml

################################
# 배포 및 tgz 패키징
################################
# 적용 전 렌더링 확인 Render chart templates locally and display the output.
helm template dev-nginx . -f values.yaml
# ---
# # Source: nginx-chart/templates/configmap.yaml
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: dev-nginx
# data:
#   index.html: |
#     <!DOCTYPE html>
#     <html>
#     <head>
#       <title>Welcome to Nginx!</title>
#     </head>
#     <body>
#       <h1>Hello, Kubernetes!</h1>
#       <p>Nginx version 1.28.0 - alpine</p>
#     </body>
#     </html>
# ---
# # Source: nginx-chart/templates/service.yaml
# apiVersion: v1
# kind: Service
# metadata:
#   name: dev-nginx
# spec:
#   selector:
#     app: dev-nginx
#   ports:
#   - protocol: TCP
#     port: 80
#     targetPort: 80
#     nodePort: 30000
#   type: NodePort
# ---
# # Source: nginx-chart/templates/deployment.yaml
# apiVersion: apps/v1
# kind: Deployment
# metadata:
#   name: dev-nginx
# spec:
#   replicas: 1
#   selector:
#     matchLabels:
#       app: dev-nginx
#   template:
#     metadata:
#       labels:
#         app: dev-nginx
#     spec:
#       containers:
#       - name: nginx
#         image: nginx:1.28.0-alpine
#         ports:
#         - containerPort: 80
#         volumeMounts:
#         - name: index-html
#           mountPath: /usr/share/nginx/html/index.html
#           subPath: index.html
#       volumes:
#       - name: index-html
#         configMap:
#           name: dev-nginx
# 직접 배포 해보기
helm install dev-nginx . -f values.yaml
helm list
# NAME: dev-nginx
# LAST DEPLOYED: Thu Feb 12 03:09:16 2026
# NAMESPACE: default
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# NAME            NAMESPACE       REVISION        UPDATED                                       STATUS          CHART                 APP VERSION  
# dev-nginx       default         1               2026-02-12 03:09:16.066740757 +0900 KST       deployed        nginx-chart-1.0.0     1.28.0-alpine
kubectl get deploy,svc,ep,cm dev-nginx -owide
# Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
# NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                SELECTOR
# deployment.apps/dev-nginx   1/1     1            1           12s   nginx        nginx:1.28.0-alpine   app=dev-nginx

# NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
# service/dev-nginx   NodePort   10.233.31.130   <none>        80:30000/TCP   12s   app=dev-nginx

# NAME                  ENDPOINTS        AGE
# endpoints/dev-nginx   10.233.65.8:80   12s

# NAME                  DATA   AGE
# configmap/dev-nginx   1      12s
# 호출 확인
curl http://192.168.10.11:30000
# <!DOCTYPE html>
# <html>
# <head>
#   <title>Welcome to Nginx!</title>
# </head>
# <body>
#   <h1>Hello, Kubernetes!</h1>
#   <p>Nginx version 1.28.0 - alpine</p>
# </body>
# </html>
curl -s http://192.168.10.11:30000 | grep version
  # <p>Nginx version 1.28.0 - alpine</p>
open http://192.168.10.11:30000
# Hello, Kubernetes!
# Nginx version 1.28.0 - alpine

# 차트를 .tgz 파일로 패키징
helm package .
# Successfully packaged chart and saved it to: /root/nginx-chart/nginx-chart-1.0.0.tgz

tar -tzf nginx-chart-1.0.0.tgz
# nginx-chart/Chart.yaml
# nginx-chart/values.yaml
# nginx-chart/templates/configmap.yaml
# nginx-chart/templates/deployment.yaml
# nginx-chart/templates/service.yaml

zcat nginx-chart-1.0.0.tgz | tar -xOf - nginx-chart/Chart.yaml
# apiVersion: v2
# appVersion: 1.28.0-alpine
# description: A Helm chart for deploying Nginx with custom index.html
# name: nginx-chart
# type: application
# version: 1.0.0
zcat nginx-chart-1.0.0.tgz | tar -xOf - nginx-chart/values.yaml
# indexHtml: |
#   <!DOCTYPE html>
#   <html>
#   <head>
#     <title>Welcome to Nginx!</title>
#   </head>
#   <body>
#     <h1>Hello, Kubernetes!</h1>
#     <p>Nginx version 1.28.0 - alpine</p>
#   </body>
#   </html>

# image:
#   repository: nginx
#   tag: 1.28.0-alpine

# replicaCount: 1

# 다음 실습을 위해 삭제
helm uninstall dev-nginx
helm list
kubectl get deploy,svc,ep,cm dev-nginx -owide

################################
# [Case1] 외부 공용 차트 tgz 패키징 다운로드 후 배포 서버에 복사 후 사용
################################


################################
# OCI Registry 사용 : Bitnami nginx chart - [Artifacthub](https://artifacthub.io/packages/helm/bitnami/nginx) , [Github](https://github.com/bitnami/charts/tree/main/bitnami/nginx)
################################
# 사전 준비 : 로컬에 nginx:alpine 다운로드 후 로컬 저장소에 push
podman pull docker.io/bitnami/nginx:latest
podman tag bitnami/nginx:latest 192.168.10.10:35000/bitnami/nginx:latest
podman push 192.168.10.10:35000/bitnami/nginx:latest

# (참고) helm repo 확인 : 이번 실습에서는 해당 방식 미사용
helm repo list


# Bitnami nginx chart OCI Registry 주소  oci://registry-1.docker.io/bitnamicharts/nginx

# (참고) helm show 명령
helm show readme oci://registry-1.docker.io/bitnamicharts/nginx
helm show values oci://registry-1.docker.io/bitnamicharts/nginx
helm show chart oci://registry-1.docker.io/bitnamicharts/nginx

# helm chart 가져오기 : OCI Registry 사용
cd
mkdir nginx-oci-reg && cd nginx-oci-reg
helm pull oci://registry-1.docker.io/bitnamicharts/nginx --version 22.4.7
# Pulled: registry-1.docker.io/bitnamicharts/nginx:22.4.7
# Digest: sha256:242404d094afc82aebcd4fcc649e415db3724774d7a72fad25fa679c9647333b

# 파일 목록 확인
tar -tzf nginx-22.4.7.tgz
# nginx/Chart.yaml
# nginx/README.md
# nginx/values.schema.json
# nginx/values.yaml
...

zcat nginx-22.4.7.tgz | tar -xOf - nginx/Chart.yaml
# annotations:
#   fips: "true"
#   images: |
#     - name: git
#       version: 2.53.0
#       image: registry-1.docker.io/bitnami/git:latest
#     - name: nginx
#       version: 1.29.5
#       image: registry-1.docker.io/bitnami/nginx:latest
#     - name: nginx-exporter
#       version: 1.5.1
#       image: registry-1.docker.io/bitnami/nginx-exporter:latest
#   licenses: Apache-2.0
#   tanzuCategory: clusterUtility
# apiVersion: v2
# appVersion: 1.29.5
# dependencies:
# - name: common
#   repository: oci://registry-1.docker.io/bitnamicharts
#   tags:
#   - bitnami-common
#   version: 2.34.0
# description: "NGINX Open Source is a web server that can be also used as a reverse proxy, load balancer, and HTTP cache. Recommended for high-demanding sites due to its ability to provide faster content."
# home: https://bitnami.com
# icon: https://dyltqmyl993wv.cloudfront.net/assets/stacks/nginx/img/nginx-stack-220x234.png
# keywords:
# - nginx
# - http
# - web
# - www
# - reverse proxy
# maintainers:
# - name: "Broadcom, Inc. All Rights Reserved."
#   url: https://github.com/bitnami/charts
# name: nginx
# sources:
# - https://github.com/bitnami/charts/tree/main/bitnami/nginx
# version: 22.4.7
zcat nginx-22.4.7.tgz | tar -xOf - nginx/values.yaml
zcat nginx-22.4.7.tgz | tar -xOf - nginx/values.schema.json


# helm chart 설치
helm install my-nginx ./nginx-22.4.7.tgz --set service.type=NodePort
helm repo list # 미사용으로 없음!

# helm 확인
helm list
helm get metadata my-nginx
# NAME: my-nginx
# CHART: nginx
# VERSION: 22.4.7
# APP_VERSION: 1.29.5
# ANNOTATIONS: fips=true,images=- name: git
#   version: 2.53.0
#   image: registry-1.docker.io/bitnami/git:latest
# - name: nginx
#   version: 1.29.5
#   image: registry-1.docker.io/bitnami/nginx:latest
# - name: nginx-exporter
#   version: 1.5.1
#   image: registry-1.docker.io/bitnami/nginx-exporter:latest
# ,licenses=Apache-2.0,tanzuCategory=clusterUtility
# DEPENDENCIES: common
# NAMESPACE: default
# REVISION: 1
# STATUS: deployed
# DEPLOYED_AT: 2026-02-12T03:16:23+09:00

# 디플로이먼트 확인 : IMAGES tags 확인
kubectl get deploy -owide
# NAME       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
# my-nginx   1/1     1            1           32s   nginx        registry-1.docker.io/bitnami/nginx:latest   app.kubernetes.io/instance=my-nginx,app.kubernetes.io/name=nginx
helm get manifest my-nginx | grep 'image:'
#           image: registry-1.docker.io/bitnami/nginx:latest

# 삭제
helm uninstall my-nginx

################################
# [Case2-A] 내부망에 Helm Chart 저장소(ChartMuseum) 구축 후 nginx 차트 업로드 후 사용
################################

################################
# Helm Chart 저장소(ChartMuseum) 구축
################################
# ChartMuseum 컨테이너 실행 (Podman)

## 저장 디렉터리 준비
mkdir -p /data/chartmuseum/charts
chmod 777 /data/chartmuseum/charts # (테스트용) 컨테이너 전용 디렉터리 권한 조정

## ChartMuseum 컨테이너 실행
podman run -d \
  --name chartmuseum \
  -p 8080:8080 \
  -v /data/chartmuseum/charts:/charts \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -e DEBUG=true \
  ghcr.io/helm/chartmuseum:v0.16.4

podman ps
# CONTAINER ID  IMAGE                             COMMAND     CREATED       STATUS       PORTS                   NAMES
# 38a201cc08e0  ghcr.io/helm/chartmuseum:v0.16.4              1 second ago  Up 1 second  0.0.0.0:8080->8080/tcp  chartmuseum

## 정상 동작 확인
curl -s http://192.168.10.10:8080/health | jq
# {
#   "healthy": true
# }
curl -s http://192.168.10.10:8080/api/charts | jq
# {}
################################
# 내부 차트 저장소를 Helm repo 등록 및 차트 업로드
################################
# Helm 클라이언트에 Repo 등록 : 이름 internal
helm repo add internal http://192.168.10.10:8080
helm repo update
helm repo list
# NAME            URL
# internal        http://192.168.10.10:8080


# 차트 업로드 (Push)
## 방법 A. helm-push 플러그인 vs 방법 B. curl api엔드포인트 직접 요청
helm plugin install https://github.com/chartmuseum/helm-push.git
helm plugin list
# NAME    VERSION DESCRIPTION                      
# cm-push 0.10.4  Push chart package to ChartMuseum

ls -l /root/nginx-chart/*.tgz
# -rw-r--r--. 1 root root 855 Feb 12 03:25 /root/nginx-chart/nginx-chart-1.0.0.tgz

## 내부 차트 저장소 ChartMuseum에 업로드 : 성공!
helm cm-push /root/nginx-chart/nginx-chart-1.0.0.tgz internal
# Pushing nginx-chart-1.0.0.tgz to internal...
# Done.

# 확인
helm repo update
curl -s http://192.168.10.10:8080/api/charts | jq
# {
#   "nginx-chart": [
#     {
#       "name": "nginx-chart",
#       "version": "1.0.0",
#       "description": "A Helm chart for deploying Nginx with custom index.html",
# ...

################################
# 내부 차트 저장소의 Helm Chart 사용
################################
# helm install
helm repo update
helm install my-nginx internal/nginx-chart
helm list
# NAME        NAMESPACE    REVISION    UPDATED                                    STATUS      CHART                 APP VERSION
# my-nginx    default      1           2026-02-12 03:25:04.078247928 +0900 KST    deployed    nginx-chart-1.0.0     1.28.0-alpine

kubectl get deploy,svc,ep,cm my-nginx -owide
curl http://192.168.10.11:30000
curl -s http://192.168.10.11:30000 | grep version

# 다음 실습을 위해 삭제
helm uninstall my-nginx
helm list
helm repo list

################################
# [Case2-B] 내부망에 사내 OCI Registry 에 nginx 차트 업로드 후 내부망에서 사용
################################
# 사내 OCI Registry로 Helm Chart Push
helm push /root/nginx-chart/nginx-chart-1.0.0.tgz oci://192.168.10.10:35000/helm-charts
# Pushed: 192.168.10.10:35000/helm-charts/nginx-chart:1.0.0
# Digest: sha256:3bf4f6d919299aefbdea64e689bf0219d6e3416ed660fc8f00b243be0516f2b2

# 사내 OCI Registry 에서 확인
curl -s 192.168.10.10:35000/v2/_catalog | jq | grep helm
#     "helm-charts/nginx-chart",

curl -s 192.168.10.10:35000/v2/helm-charts/nginx-chart/tags/list | jq
# {
#   "name": "helm-charts/nginx-chart",
#   "tags": [
#     "1.0.0"
#   ]
# }    

# 사내 OCI Registry 에서 바로 설치
helm install my-nginx oci://192.168.10.10:35000/helm-charts/nginx-chart --version 1.0.0

# 확인
helm list
# NAME        NAMESPACE    REVISION    UPDATED                                    STATUS      CHART                 APP VERSION
# my-nginx    default      1           2026-02-12 03:35:42.428298781 +0900 KST    deployed    nginx-chart-1.0.0     1.28.0-alpine

kubectl get deploy,svc,ep,cm my-nginx -owide
curl http://192.168.10.11:30000
curl -s http://192.168.10.11:30000 | grep version

# 다음 실습을 위해 삭제
helm uninstall my-nginx
helm list

Local (Mirror) YUM/DNF Repository


################################
# [admin] 패키지 Repo 재설정 및 내부 Linux 패키지 저장소 설정 (12분 소요)
################################
[admin]

# 패키지(저장소) 동기화 (reposync) : 외부 저장소(BaseOS, AppStream 등)의 패키지를 로컬 디렉토리로 가져옵니다.
## 미러 저장 디렉터리 생성
mkdir -p /root/kubespray-offline/outputs/rpms/rocky/10
cd /root/kubespray-offline/outputs/rpms/rocky/10

# 기본 repo 다시 활성화
tree /etc/yum.repos.d/
# ├── backup
# │   ├── rocky-addons.repo
# │   ├── rocky-devel.repo
# │   ├── rocky-extras.repo
# │   └── rocky.repo
# ├── offline.repo
# └── rocky.repo.original
for i in rocky-addons rocky-devel rocky-extras rocky; do
  cp /etc/yum.repos.d/backup/$i.repo /etc/yum.repos.d/   # backup/ 에 보관해 둔 repo 파일 복원
done
dnf repolist
# repo id                 repo name
# appstream               Rocky Linux 10 - AppStream
# baseos                  Rocky Linux 10 - BaseOS
# extras                  Rocky Linux 10 - Extras
# offline-repo            Offline repo

# 특정 레포 동기화 (예: baseos, extras, appstream)
# --download-metadata 옵션을 쓰면 원본 repodata(메타데이터)까지 함께 받아오므로, 별도의 createrepo 인덱싱 없이도 dnf가 바로 인식할 수 있습니다.
## extras : 금방 끝남, 67M
dnf reposync --repoid=extras --download-metadata -p /root/kubespray-offline/outputs/rpms/rocky/10
# Rocky Linux 10 - Extras       3.9 kB/s | 6.2 kB     00:01    
# Rocky Linux 10 - Extras        29 kB/s |  40 kB     00:01    
# (1/26): centos-release-ceph-s 127 kB/s | 7.5 kB     00:00    
# (2/26): centos-release-kmods- 159 kB/s | 9.7 kB     00:00  
# ...

## baseos : 3분 소요, 4.8G
dnf reposync --repoid=baseos --download-metadata -p /root/kubespray-offline/outputs/rpms/rocky/10
# ...
# (1472/1474): zlib-ng-compat-2 2.8 MB/s |  63 kB     00:00    
# (1473/1474): zsh-5.9-15.el10.  22 MB/s | 3.3 MB     00:00    
# (1474/1474): zstd-1.5.5-9.el1 1.8 MB/s | 453 kB     00:00  
## appstream : 9분 소요, 13G
dnf reposync --repoid=appstream --download-metadata -p /root/kubespray-offline/outputs/rpms/rocky/10
# ...
# (5217/5219): zziplib-0.13.78- 4.6 MB/s |  89 kB     00:00    
# (5218/5219): zziplib-utils-0. 1.8 MB/s |  45 kB     00:00    
# (5219/5219): zram-generator-1  13 MB/s | 399 kB     00:00  
# admin-lb 가상머신에 잔여 Disk 용량 확인
df -hT /
# Filesystem     Type  Size  Used Avail Use% Mounted on
# /dev/sda3      xfs   116G   76G   40G  66% /
free -h
#                total        used        free      shared  buff/cache   available
# Mem:           1.8Gi       570Mi        68Mi       6.2Mi       1.3Gi       1.2Gi
# Swap:          3.8Gi        50Mi       3.8Gi
# 접속 테스트
curl http://192.168.10.10/rpms/rocky/10/
# <html>
# <head><title>Index of /rpms/rocky/10/</title></head>
# <body>
# <h1>Index of /rpms/rocky/10/</h1><hr><pre><a href="../">../</a>
# <a href="appstream/">appstream/</a>                                         11-Feb-2026 18:27       -
# <a href="baseos/">baseos/</a>                                            11-Feb-2026 18:22       -
# <a href="extras/">extras/</a>                                            11-Feb-2026 18:22       -
# </pre><hr></body>
# </html>
curl http://192.168.10.10/rpms/rocky/10/baseos/
# <html>
# <head><title>Index of /rpms/rocky/10/baseos/</title></head>
# <body>
# <h1>Index of /rpms/rocky/10/baseos/</h1><hr><pre><a href="../">../</a>
# <a href="Packages/">Packages/</a>                                          11-Feb-2026 18:22       -
# <a href="repodata/">repodata/</a>                                          11-Feb-2026 18:22       -
# <a href="mirrorlist">mirrorlist</a>                                         11-Feb-2026 18:22    2659
# </pre><hr></body>
# </html>

################################
# [k8s-node] rocky repo 활성화 및 내부 Linux 패키지 저장소를 admin 로 설정 후 사용
################################
[k8s-node1, k8s-node2]

# 로컬 레포 파일 생성: 서버 IP는 Repo 서버의 IP로 수정
tree /etc/yum.repos.d
# /etc/yum.repos.d
# ├── backup
# │   ├── rocky-addons.repo
# │   ├── rocky-devel.repo
# │   ├── rocky-extras.repo
# │   └── rocky.repo
# ├── internal-rocky.repo
# └── offline.repo
cat <<EOF > /etc/yum.repos.d/internal-rocky.repo
[internal-baseos]
name=Internal Rocky 10 BaseOS
baseurl=http://192.168.10.10/rpms/rocky/10/baseos
enabled=1
gpgcheck=0

[internal-appstream]
name=Internal Rocky 10 AppStream
baseurl=http://192.168.10.10/rpms/rocky/10/appstream
enabled=1
gpgcheck=0

[internal-extras]
name=Internal Rocky 10 Extras
baseurl=http://192.168.10.10/rpms/rocky/10/extras
enabled=1
gpgcheck=0
EOF
tree /etc/yum.repos.d
# /etc/yum.repos.d
# ├── backup
# │   ├── rocky-addons.repo
# │   ├── rocky-devel.repo
# │   ├── rocky-extras.repo
# │   └── rocky.repo
# ├── internal-rocky.repo
# └── offline.repo

# 내부 서버 repo 정상 동작 확인 : 클라이언트에서 캐시를 비우고 목록을 불러옵니다.
dnf clean all
dnf repolist
# repo id                    repo name
# internal-appstream         Internal Rocky 10 AppStream
# internal-baseos            Internal Rocky 10 BaseOS
# internal-extras            Internal Rocky 10 Extras
# offline-repo               Offline repo for kubespray
dnf makecache
# Internal Rocky 10 BaseOS      161 MB/s |  15 MB     00:00    
# Internal Rocky 10 AppStream    92 MB/s | 2.1 MB     00:00    
# Internal Rocky 10 Extras      1.0 MB/s | 6.2 kB     00:00    
# Offline repo for kubespray    493 kB/s |  85 kB     00:00  
# 패키지 인스톨 정상 실행 확인
dnf install -y nfs-utils vim

## 패키지 정보에 repo 확인
dnf info nfs-utils | grep -i repo
# Repository   : @System
# From repo    : internal-baseos
# Total                         174 MB/s |  10 MB     00:00     
# Running transaction check
# Transaction check succeeded.
# Running transaction test
# Transaction test succeeded.
# Running transaction
#   Preparing        :                                      1/1 
#   Upgrading        : vim-data-2:9.1.083-6.el10.noarch     1/8 
#   Upgrading        : vim-common-2:9.1.083-6.el10.aarch6   2/8 
#   Upgrading        : vim-enhanced-2:9.1.083-6.el10.aarc   3/8 
#   Upgrading        : vim-minimal-2:9.1.083-6.el10.aarch   4/8 
#   Cleanup          : vim-enhanced-2:9.1.083-5.el10_0.1.   5/8 
#   Cleanup          : vim-common-2:9.1.083-5.el10_0.1.aa   6/8 
#   Cleanup          : vim-minimal-2:9.1.083-5.el10_0.1.a   7/8 
#   Cleanup          : vim-data-2:9.1.083-5.el10_0.1.noar   8/8 
#   Running scriptlet: vim-data-2:9.1.083-5.el10_0.1.noar   8/8 

# Upgraded:
#   vim-common-2:9.1.083-6.el10.aarch64                         
#   vim-data-2:9.1.083-6.el10.noarch                            
#   vim-enhanced-2:9.1.083-6.el10.aarch64                       
#   vim-minimal-2:9.1.083-6.el10.aarch64                        

# Complete!

Private PyPI(Python Package Index) Mirror


################################
# [k8s-node] pip 설정 및 사용
# node1, node2
################################
# pypi index url 확인
curl http://192.168.10.10/pypi/
  #   <title>Simple index</title>
  # </head>
  # <body>
  #   <a href="ansible/index.html">ansible</a>
  #   <a href="ansible-core/index.html">ansible-core</a>
  #   <a href="cffi/index.html">cffi</a>
  #   ...

# pip 설정 확인
cat <<EOF > /etc/pip.conf
[global]
index-url = http://192.168.10.10/pypi
trusted-host = 192.168.10.10
timeout = 60
EOF

pip install netaddr
pip list | grep -i netaddr
# netaddr                   1.3.0


# 현재 admin pypi mirror에 없는 패키지 설치 시도 : 실패!
pip install httpx
# ERROR: Could not find a version that satisfies the requirement httpx (from versions: none)
# ERROR: No matching distribution found for httpx

################################
# pypi 에 추가 패키지 설치 후 사용
################################
[admin]

# 현재 pip.conf 확인
cat /root/.config/pip/pip.conf
# [global]
# index = http://localhost/pypi/
# index-url = http://localhost/pypi/
# trusted-host = localhost

#
mv /root/.config/pip/pip.conf /root/.config/pip/pip.bak
pip install httpx
pip list | grep httpx

#
find / -name "*.whl" | tee whl.list
cat whl.list | grep -i http
# /root/.cache/pip/wheels/c6/69/46/5e87f24c4c35735a0015d9b6c234048dd71c273d789dffa96f/httpx-0.28.1-py3-none-any.whl

#
tree /root/kubespray-offline/outputs/pypi/files/
cp /root/.cache/pip/wheels/c6/69/46/5e87f24c4c35735a0015d9b6c234048dd71c273d789dffa96f/httpx-0.28.1-py3-none-any.whl /root/kubespray-offline/outputs/pypi/files/
tree /root/kubespray-offline/outputs/pypi/files/
# /root/kubespray-offline/outputs/pypi/files/
# ├── ansible-10.7.0-py3-none-any.whl
# ├── ansible-10.7.0.tar.gz
# ├── ansible_core-2.17.14-py3-none-any.whl
# ├── ansible_core-2.17.14.tar.gz
# ├── cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
# ├── cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
# ├── cffi-2.0.0.tar.gz
# ├── cryptography-46.0.3-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
# ├── cryptography-46.0.3.tar.gz
# ├── Cython-0.29.37-py2.py3-none-any.whl
# ├── distro-1.9.0-py3-none-any.whl
# ├── flit_core-3.12.0-py3-none-any.whl
# ├── httpx-0.28.1-py3-none-any.whl
# ├── jinja2-3.1.6-py3-none-any.whl
# ├── jinja2-3.1.6.tar.gz
# ├── jmespath-1.1.0-py3-none-any.whl
# ├── jmespath-1.1.0.tar.gz
# ├── markupsafe-3.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
# ├── markupsafe-3.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
# ├── markupsafe-3.0.3.tar.gz
# ├── netaddr-1.3.0-py3-none-any.whl
# ├── netaddr-1.3.0.tar.gz
# ├── packaging-26.0-py3-none-any.whl
# ├── packaging-26.0.tar.gz
# ├── pip-26.0.1-py3-none-any.whl
# ├── pycparser-3.0-py3-none-any.whl
# ├── pycparser-3.0.tar.gz
# ├── pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
# ├── pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
# ├── pyyaml-6.0.3.tar.gz
# ├── resolvelib-1.0.1-py2.py3-none-any.whl
# ├── resolvelib-1.0.1.tar.gz
# ├── ruamel_yaml-0.19.1-py3-none-any.whl
# ├── selinux-0.3.0-py2.py3-none-any.whl
# ├── setuptools-40.9.0-py2.py3-none-any.whl
# ├── setuptools-82.0.0-py3-none-any.whl
# └── wheel-0.46.3-py3-none-any.whl
# Regenerate the PyPI mirror index so it includes httpx
cd /root/kubespray-offline/
./pypi-mirror.sh

# Verify the mirror index now lists httpx
curl http://192.168.10.10/pypi/
    <a href="httpx/index.html">httpx</a>

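The manual steps above (move the offline pip config aside, fetch the wheel, place it in the mirror directory, re-run `pypi-mirror.sh`) can be wrapped in a small helper. This is only a sketch under this lab's layout: `add_to_mirror` is a made-up function name, and it uses `pip download` to pull the wheel and its dependencies straight into the mirror directory instead of fishing the file out of pip's wheel cache by hand.

```shell
#!/usr/bin/env bash
# add_to_mirror (hypothetical helper): add one package to the
# kubespray-offline PyPI mirror and rebuild its simple index.
MIRROR_DIR=${MIRROR_DIR:-/root/kubespray-offline/outputs/pypi/files}
PIP_CONF=${PIP_CONF:-/root/.config/pip/pip.conf}
OFFLINE_DIR=${OFFLINE_DIR:-/root/kubespray-offline}

add_to_mirror() {
    local pkg="${1:?usage: add_to_mirror <package>}"

    # Move the offline pip config aside so pip can reach the real PyPI
    if [ -f "$PIP_CONF" ]; then mv "$PIP_CONF" "${PIP_CONF}.bak"; fi

    # Download the wheel (and its dependencies) directly into the mirror
    pip download "$pkg" -d "$MIRROR_DIR"

    # Restore the offline config and regenerate the simple index
    if [ -f "${PIP_CONF}.bak" ]; then mv "${PIP_CONF}.bak" "$PIP_CONF"; fi
    (cd "$OFFLINE_DIR" && ./pypi-mirror.sh)
}
```

Usage would be `add_to_mirror httpx` on the Admin server, followed by `curl http://192.168.10.10/pypi/` to confirm the new index entry.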

[k8s-node]
# Try installing httpx from the offline mirror
pip install httpx
#   Downloading http://192.168.10.10/pypi/httpx/httpx-0.28.1-py3-none-any.whl (73 kB)
#      ━━━━━━━━━━━━━━━━━━━━━━━ 73.3/73.3 kB 9.2 MB/s eta 0:00:00
# INFO: pip is looking at multiple versions of httpx to determine which version is compatible with other requirements. This could take a while.
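For reference, the plain `pip install httpx` above resolves against the mirror because the nodes carry a pip configuration pointing at the Admin server. A minimal sketch (the exact file path and option names here are assumptions; the mirror address 192.168.10.10 is from this lab):

```ini
# /etc/pip.conf (sketch): route every pip install through the offline mirror
[global]
index-url = http://192.168.10.10/pypi/
trusted-host = 192.168.10.10
```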

Wrap-up

This week we practiced building a Kubernetes cluster with Kubespray in a fully air-gapped network with no internet access. Using the kubespray-offline tooling, we pre-downloaded the required binaries and container images, then set up an offline package repository (RPM), a PyPI mirror, and a private container registry on the Admin server.

 

We worked through the roles of the core services an air-gapped network needs (DNS, NTP, a package repository, a container registry, and so on), served files through an Nginx web server, stood up an image store with a Docker Registry, and completed a Kubernetes cluster deployment in a fully isolated environment. We also tried the next-generation chart management approach using a Helm OCI Registry and handled a variety of troubleshooting situations along the way.

 

Week 6 covered practical deployment methods for security-sensitive enterprise environments where internet access is restricted. Since security policies often make offline installation mandatory in real enterprise settings, the kubespray-offline tooling and air-gap build know-how covered here should prove genuinely useful in the field.

 

Having done plenty of tedious manual work building air-gapped environments in the past, I expect kubespray-offline to make the next project considerably easier.

 

See you next week. Thanks for reading this long post :)

