[Week 1 - Cilium Study] What is Cilium? (25.07.13)

2025. 7. 18. 01:21

 

Introduction

Hello everyone, this is Devlos. Lately it feels like I only post on this blog when a study is running. :(

Reflecting on my past self... let's kick off another fun study!

This post summarizes "Cilium Study", the week 1 topic of the Cilium study hosted by the CloudNet@ community.


Introduction to the Cilium CNI

Source - Migrating from MetalLB to Cilium (https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/)

 


 

Every Pod in a Kubernetes cluster has its own unique IP and must be able to communicate with every other Pod directly.

 

One of the core principles of the Kubernetes network model is that every Pod in the cluster is assigned its own IP address. Each Pod therefore becomes an independent network endpoint, much like a physical server, and can talk to other Pods in the same cluster directly, without NAT (Network Address Translation) or port forwarding. Compared with traditional VM-based infrastructure, this makes for a far simpler and more intuitive network environment.

 

In other words, Pod-to-Pod communication in Kubernetes is designed to behave as if it happens on a single flat network. Developers can reach the services each Pod exposes without worrying about complex network-layer configuration, and features such as service discovery, load balancing, and security policies can all operate consistently on top of this single network space.

 

This network model is what allows Kubernetes to run large distributed systems efficiently, and it plays a key role in letting each service in a microservices architecture be deployed and scaled independently. The fact that every Pod has its own IP and can communicate directly is therefore the foundation of Kubernetes networking, underpinning connectivity, scalability, and manageability across the cluster.
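
To make this concrete, here is a quick check (a sketch; the curl-pod and webpod names are borrowed from the sample application deployed later in this post, and any cluster with a working CNI behaves the same way) that lists Pod IPs and then reaches one Pod directly from another, with no NAT or port forwarding in between.

# Every Pod gets its own cluster-unique IP
kubectl get pod -A -owide

# From one Pod, curl another Pod's IP directly
WEBPOD_IP=$(kubectl get pod -l app=webpod -o jsonpath='{.items[0].status.podIP}')
kubectl exec -it curl-pod -- curl -s $WEBPOD_IP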

 

In a microservices environment, as more and more events are relayed through an event broker, the network paths between services and Pods grow more complex, and the size and management burden of the routing rules grow exponentially. With traditional iptables-based routing, the rules balloon as these paths multiply, driving up packet-processing latency and CPU overhead and eventually limiting real-time event processing and large-scale traffic distribution.

 

In particular, when many services exchange messages through an event broker (e.g. Kafka or RabbitMQ), network scalability and performance directly affect the reliability and efficiency of the whole system. Traditional approaches such as iptables hit limits in rule-update speed and processing performance, and network management only gets harder as the number of services grows.

 

To address these problems, eBPF-based network plugins such as Cilium have been drawing attention. Cilium processes packets efficiently at the kernel level and applies routing and policy in real time, providing high performance and scalability even in large microservices environments. Moving beyond iptables to a next-generation solution like Cilium is therefore becoming increasingly important for overcoming the complexity and performance limits of microservices networking.
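
To make the iptables concern tangible, the commands below (a minimal sketch, run on any node where kube-proxy operates in its default iptables mode, such as the lab built later in this post) count the rules kube-proxy maintains; the number grows with every Service and Endpoint, which is precisely the per-packet overhead that an eBPF datapath avoids.

# Rough gauge of iptables growth on a kube-proxy (iptables mode) node
iptables-save | wc -l                  # total number of rules currently installed
iptables -t nat -S | grep -c 'KUBE-'   # NAT rules created for Kubernetes Services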

 


Source - eBPF: The Future of Networking & Security (https://cilium.io/blog/2020/11/10/ebpf-future-of-networking/)

 


 

The figure above shows how eBPF (extended Berkeley Packet Filter) programs are "injected" into the Linux kernel. Traditional packet processing follows fixed code paths inside the kernel, but eBPF lets user-written programs be attached dynamically at specific hook points. As packets traverse the kernel, the injected eBPF code can analyze, process, and filter them directly, instead of relying only on predefined rules.

 

Cilium makes heavy use of this kernel-injection capability to handle routing, policy enforcement, and observability at high performance. Because traffic is processed in kernel space rather than being handed up to user space, the network can be controlled quickly and efficiently without unnecessary overhead, which in turn enables scalability, performance, and real-time policy enforcement even in large microservices environments.
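
As a small illustration of what "injected into the kernel" means in practice: on a node running an eBPF datapath (Cilium is only installed in a later session of this study, so this does not apply to this week's lab yet), the loaded programs and their attachment points can be listed from the host with bpftool, which ships in the linux-tools packages.

# List eBPF programs currently loaded in the kernel
bpftool prog show | head

# Show programs attached to network interfaces (tc/XDP hooks)
bpftool net show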

 


Source - CNI Benchmark: Understanding Cilium Network Performance (https://cilium.io/blog/2021/05/11/cni-benchmark/)

 


 

The figure above shows the overall structure of how Cilium builds the network and handles traffic in a Kubernetes environment. Using eBPF, Cilium processes packets at the kernel level on each node and provides Pod-to-Pod communication, service routing, network policy, load balancing, and observability.

 

Existing CNI plugins (e.g. Flannel or Calico) mostly rely on traditional kernel facilities such as iptables or IPVS to route traffic and enforce policy. Cilium instead injects eBPF programs into the kernel and processes packets in kernel space as soon as they arrive at the node, which speeds up packet handling, cuts latency significantly, and lets the cluster scale without degrading performance.

 

Cilium is also aware of traffic not only at L3/L4 (network/transport layers) but also at L7 (application layer), enabling fine-grained policy and observability for protocols such as HTTP, gRPC, and Kafka. Thanks to this architecture, Cilium is gaining recognition as a next-generation networking solution that can manage security, visibility, performance, and scalability in microservices environments.
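
To illustrate that L7 awareness, below is a sketch of a CiliumNetworkPolicy that would allow only HTTP GET requests to / from Pods labeled app=curl to Pods labeled app=webpod (the labels are borrowed from the sample application deployed later in this post). It is shown for reference only; Cilium itself is not installed in this week's lab, so the manifest is not applied here.

cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: webpod-allow-get-only
spec:
  endpointSelector:
    matchLabels:
      app: webpod
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: curl
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/"
EOF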


Lab Environment Setup

This week's lab was run on a Mac (M3 Pro Max), with the environment built using VirtualBox + Vagrant.

 

Installing VirtualBox and Vagrant

# Install VirtualBox
brew install --cask virtualbox
# Install Vagrant
brew install --cask vagrant

VBoxManage --version
7.1.10r169112
vagrant version    
Installed Version: 2.4.7

 

Source - CloudNet@ study

 

 

The lab environment, as shown above, consists of Kubernetes nodes (192.168.10.100-102) attached to the 192.168.10.0/24 network, while the Pods inside the cluster communicate over a separate 10.244.0.0/16 range.

To build this, we set up the project folder and add the files below.

 

Project Folder

Organize the project folder as follows for the lab. Each file's role is:

  • Vagrantfile :
    The Vagrant configuration file that automatically creates and configures the lab VMs. It defines the number of control plane and worker nodes, networking, and resource allocation.
  • init_cfg.sh :
    An initial setup script run on every node. It configures the timezone, packages, and kernel parameters, and installs Kubernetes and containerd.
  • k8s-ctr.sh :
    A script run only on the control plane node. It initializes the Kubernetes cluster, sets up kubeconfig, installs convenience tools, and configures the shell prompt.
  • k8s-w.sh :
    A script run on the worker nodes that joins them to the control plane.

With these files in place, Vagrant can spin up the Kubernetes lab environment with little effort.
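
For reference, the resulting layout is a single flat folder (the folder name itself is arbitrary) containing the four files:

cilium-lab/
├── Vagrantfile
├── init_cfg.sh
├── k8s-ctr.sh
└── k8s-w.sh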

 

Vagrantfile

# Variables
K8SV = '1.33.2-1.1' # Kubernetes Version : apt list -a kubelet , ex) 1.32.5-1.1
CONTAINERDV = '1.7.27-1' # Containerd Version : apt list -a containerd.io , ex) 1.6.33-1
N = 2 # max number of worker nodes

# Base Image  https://portal.cloud.hashicorp.com/vagrant/discover/bento/ubuntu-24.04
## Rocky linux Image https://portal.cloud.hashicorp.com/vagrant/discover/rockylinux
BOX_IMAGE = "bento/ubuntu-24.04"
BOX_VERSION = "202502.21.0"

Vagrant.configure("2") do |config|
#-ControlPlane Node
    config.vm.define "k8s-ctr" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Cilium-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-ctr"
        vb.cpus = 2
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-ctr"
      subconfig.vm.network "private_network", ip: "192.168.10.100"
      subconfig.vm.network "forwarded_port", guest: 22, host: 60000, auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "init_cfg.sh", args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision "shell", path: "k8s-ctr.sh", args: [ N ]
    end

#-Worker Nodes Subnet1
  (1..N).each do |i|
    config.vm.define "k8s-w#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Cilium-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-w#{i}"
        vb.cpus = 2
        vb.memory = 1536
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-w#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.10#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "init_cfg.sh", args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision "shell", path: "k8s-w.sh"
    end
  end

end

 

init_cfg.sh

#!/usr/bin/env bash

echo ">>>> Initial Config Start <<<<"

echo "[TASK 1] Setting Profile & Change Timezone"
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/vagrant/.bashrc
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime


echo "[TASK 2] Disable AppArmor"
systemctl stop ufw && systemctl disable ufw >/dev/null 2>&1
systemctl stop apparmor && systemctl disable apparmor >/dev/null 2>&1

echo "[TASK 3] Disable and turn off SWAP"
swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

echo "[TASK 4] Install Packages"
apt update -qq >/dev/null 2>&1
apt-get install apt-transport-https ca-certificates curl gpg -y -qq >/dev/null 2>&1

# Download the public signing key for the Kubernetes package repositories.
mkdir -p -m 755 /etc/apt/keyrings
K8SMMV=$(echo $1 | sed -En 's/^([0-9]+\.[0-9]+)\..*/\1/p')
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$K8SMMV/deb/ /" >> /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# packets traversing the bridge are processed by iptables for filtering
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf

# enable br_netfilter for iptables 
modprobe br_netfilter
modprobe overlay
echo "br_netfilter" >> /etc/modules-load.d/k8s.conf
echo "overlay" >> /etc/modules-load.d/k8s.conf

echo "[TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)"
# Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version
apt update >/dev/null 2>&1

# apt list -a kubelet ; apt list -a containerd.io
apt-get install -y kubelet=$1 kubectl=$1 kubeadm=$1 containerd.io=$2 >/dev/null 2>&1
apt-mark hold kubelet kubeadm kubectl >/dev/null 2>&1

# containerd configure to default and cgroup managed by systemd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# avoid WARN&ERRO(default endpoints) when crictl run  
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

# ready to install for k8s 
systemctl restart containerd && systemctl enable containerd
systemctl enable --now kubelet

echo "[TASK 6] Install Packages & Helm"
apt-get install -y bridge-utils sshpass net-tools conntrack ngrep tcpdump ipset arping wireguard jq tree bash-completion unzip kubecolor >/dev/null 2>&1
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash >/dev/null 2>&1


echo ">>>> Initial Config End <<<<"

 

k8s-ctr.sh

#!/usr/bin/env bash

echo ">>>> K8S Controlplane config Start <<<<"

echo "[TASK 1] Initial Kubernetes"
kubeadm init --token 123456.1234567890123456 --token-ttl 0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --apiserver-advertise-address=192.168.10.100 --cri-socket=unix:///run/containerd/containerd.sock >/dev/null 2>&1

echo "[TASK 2] Setting kube config file"
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config

echo "[TASK 3] Source the completion"
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'source <(kubeadm completion bash)' >> /etc/profile

echo "[TASK 4] Alias kubectl to k"
echo 'alias k=kubectl' >> /etc/profile
echo 'alias kc=kubecolor' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile

echo "[TASK 5] Install Kubectx & Kubens"
git clone https://github.com/ahmetb/kubectx /opt/kubectx >/dev/null 2>&1
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx

echo "[TASK 6] Install Kubeps & Setting PS1"
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1 >/dev/null 2>&1
cat <<"EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
  echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
kubectl config rename-context "kubernetes-admin@kubernetes" "HomeLab" >/dev/null 2>&1

echo "[TASK 6] Install Kubeps & Setting PS1"
echo "192.168.10.100 k8s-ctr" >> /etc/hosts
for (( i=1; i<=$1; i++  )); do echo "192.168.10.10$i k8s-w$i" >> /etc/hosts; done

echo ">>>> K8S Controlplane Config End <<<<"

 

k8s-w.sh

#!/usr/bin/env bash

echo ">>>> K8S Node config Start <<<<"

echo "[TASK 1] K8S Controlplane Join" 
kubeadm join --token 123456.1234567890123456 --discovery-token-unsafe-skip-ca-verification 192.168.10.100:6443  >/dev/null 2>&1


echo ">>>> K8S Node config End <<<<"

 

Running Vagrant

vagrant up

 

✅ Result (summary)

# Control Plane VM creation starts
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
# Worker1 VM creation starts
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
# Worker2 VM creation starts
Bringing machine 'k8s-w2' up with 'virtualbox' provider...
...
# Control Plane VM boots and network is configured
==> k8s-ctr: Booting VM...
==> k8s-ctr: Waiting for machine to boot. This may take a few minutes...
...
# Control Plane initial configuration script runs
    k8s-ctr: >>>> Initial Config Start <<<<
    k8s-ctr: [TASK 1] Setting Profile & Change Timezone
    k8s-ctr: [TASK 2] Disable AppArmor
    k8s-ctr: [TASK 3] Disable and turn off SWAP
    k8s-ctr: [TASK 4] Install Packages
    k8s-ctr: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-ctr: [TASK 6] Install Packages & Helm
    k8s-ctr: >>>> Initial Config End <<<<
...
# Control Plane Kubernetes cluster initialization
    k8s-ctr: >>>> K8S Controlplane config Start <<<<
    k8s-ctr: [TASK 1] Initial Kubernetes
    k8s-ctr: [TASK 2] Setting kube config file
    k8s-ctr: [TASK 3] Source the completion
    k8s-ctr: [TASK 4] Alias kubectl to k
    k8s-ctr: [TASK 5] Install Kubectx & Kubens
    k8s-ctr: [TASK 6] Install Kubeps & Setting PS1
    k8s-ctr: >>>> K8S Controlplane Config End <<<<
...
# Worker1 VM boots and initial configuration runs
    k8s-w1: >>>> Initial Config Start <<<<
    k8s-w1: [TASK 1] Setting Profile & Change Timezone
    k8s-w1: [TASK 2] Disable AppArmor
    k8s-w1: [TASK 3] Disable and turn off SWAP
    k8s-w1: [TASK 4] Install Packages
    k8s-w1: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-w1: [TASK 6] Install Packages & Helm
    k8s-w1: >>>> Initial Config End <<<<
...
# Worker1 node joins the cluster
    k8s-w1: >>>> K8S Node config Start <<<<
    k8s-w1: [TASK 1] K8S Controlplane Join
    k8s-w1: >>>> K8S Node config End <<<<
...
# Worker2 VM boots and initial configuration runs
    k8s-w2: >>>> Initial Config Start <<<<
    k8s-w2: [TASK 1] Setting Profile & Change Timezone
    k8s-w2: [TASK 2] Disable AppArmor
    k8s-w2: [TASK 3] Disable and turn off SWAP
    k8s-w2: [TASK 4] Install Packages
    k8s-w2: [TASK 5] Install Kubernetes components (kubeadm, kubelet and kubectl)
    k8s-w2: [TASK 6] Install Packages & Helm
    k8s-w2: >>>> Initial Config End <<<<
...
# Worker2 node joins the cluster
    k8s-w2: >>>> K8S Node config Start <<<<
    k8s-w2: [TASK 1] K8S Controlplane Join
    k8s-w2: >>>> K8S Node config End <<<<

 

Check each node's eth0 IP after deployment, before SSH-ing in

for i in ctr w1 w2 ; do echo ">> node : k8s-$i <<"; vagrant ssh k8s-$i -c 'ip -c -4 addr show dev eth0'; echo; done #

 

✅ Result

>> node : k8s-ctr <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s8
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85722sec preferred_lft 85722sec

>> node : k8s-w1 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s8
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85972sec preferred_lft 85972sec

>> node : k8s-w2 <<
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    altname enp0s8
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86055sec preferred_lft 86055sec

 

[k8s-ctr] Connect and check basic information

vagrant ssh k8s-ctr   # connect to k8s-ctr
whoami                # check the current user
pwd                   # check the current working directory
hostnamectl           # check host information (hostname, OS, etc.)
htop                  # monitor system resources in real time
cat /etc/hosts        # check the hosts file (per-node hostname mappings)
ping -c 1 k8s-w1      # check network connectivity to worker 1
ping -c 1 k8s-w2      # check network connectivity to worker 2
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname   # SSH into worker 1 and print its hostname
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w2 hostname   # SSH into worker 2 and print its hostname
ss -tnp |grep sshd    # check SSH daemon ports/connections
ip -c addr            # check network interfaces and IP addresses
ip -c route           # check the routing table
resolvectl            # check DNS and name resolution settings

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# whoami
root

(⎈|HomeLab:N/A) root@k8s-ctr:~# pwd 
/root

(⎈|HomeLab:N/A) root@k8s-ctr:~# hostnamectl
 Static hostname: k8s-ctr
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: f8ef1e172a214e15b6a93da1cd332fe8
         Boot ID: e6fe99c436984a13b9d48fce6e1a1a70
  Virtualization: qemu
Operating System: Ubuntu 24.04.2 LTS              
          Kernel: Linux 6.8.0-53-generic
    Architecture: arm64

(⎈|HomeLab:N/A) root@k8s-ctr:~# htop
    0[||||||                                                          2.8%] Tasks: 43, 146 thr, 71 kthr; 2 running
    1[|||||                                                           2.3%] Load average: 0.00 0.05 0.06 
  Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||520M/1.79G] Uptime: 00:17:47
  Swp[                                                               0K/0K]

  [Main] [I/O]
    PID USER       PRI  NI  VIRT   RES   SHR S  CPU%▽MEM%   TIME+  Command
   4493 root        20   0 1926M 96700 62080 S   0.0  5.2  0:01.23 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=
   4343 root        20   0 1282M  105M 65664 S   0.0  5.8  0:02.25 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt
   4435 root        20   0 1420M  249M 73088 S   0.5 13.6  0:05.24 kube-apiserver --advertise-address=192.168.10.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernete
   4437 root        20   0 1420M  249M 73088 S   0.0 13.6  0:02.16 kube-apiserver --advertise-address=192.168.10.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernete
   4443 root        20   0 1420M  249M 73088 S   0.5 13.6  0:04.08 kube-apiserver --advertise-address=192.168.10.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernete
   4445 root        20   0 11.1G 48896 22400 S   0.9  2.6  0:02.02 etcd --advertise-client-urls=https://192.168.10.100:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-
   4514 root        20   0 1926M 96700 62080 S   0.0  5.2  0:02.00 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=
   4672 root        20   0 1420M  249M 73088 S   0.9 13.6  0:03.75 kube-apiserver --advertise-address=192.168.10.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernete
   5219 root        20   0  9580  5120  3328 R   0.5  0.3  0:00.04 htop
      1 root        20   0 22532 12616  8520 S   0.0  0.7  0:01.92 /sbin/init autoinstall
    280 root        19  -1 50360 16128 14976 S   0.0  0.9  0:00.17 /usr/lib/systemd/systemd-journald
    336 root        RT   0  347M 26496  7424 S   0.0  1.4  0:00.04 /sbin/multipathd -d -s
 ...
    717 root        20   0  461M 12416 10368 S   0.0  0.7  0:00.00 /usr/libexec/udisks2/udisksd
    720 root        20   0  461M 12416 10368 S   0.0  0.7  0:00.00 /usr/libexec/udisks2/udisksd
    743 syslog      20   0  217M  4864  3712 S   0.0  0.3  0:00.00 /usr/sbin/rsyslogd -n -iNONE
    752 polkitd     20   0  375M  8980  6784 S   0.0  0.5  0:00.01 /usr/lib/polkit-1/polkitd --no-debug
    753 polkitd     20   0  375M  8980  6784 S   0.0  0.5  0:00.00 /usr/lib/polkit-1/polkitd --no-debug
F1Help  F2Setup F3SearchF4FilterF5Tree  F6SortByF7Nice -F8Nice +F9Kill  F10Quit  

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 vagrant

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.2.1 k8s-ctr k8s-ctr
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w1
PING k8s-w1 (192.168.10.101) 56(84) bytes of data.
64 bytes from k8s-w1 (192.168.10.101): icmp_seq=1 ttl=64 time=0.452 ms

--- k8s-w1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 k8s-w2
PING k8s-w2 (192.168.10.102) 56(84) bytes of data.
64 bytes from k8s-w2 (192.168.10.102): icmp_seq=1 ttl=64 time=0.432 ms

--- k8s-w2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w2 hostname
Warning: Permanently added 'k8s-w2' (ED25519) to the list of known hosts.
k8s-w2

(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnp |grep sshd
ESTAB 0      0           [::ffff:10.0.2.15]:22          [::ffff:10.0.2.2]:59130 users:(("sshd",pid=4982,fd=4),("sshd",pid=4938,fd=4))

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85113sec preferred_lft 85113sec
    inet6 fd17:625c:f037:2:a00:27ff:fe71:19d8/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 86234sec preferred_lft 14234sec
    inet6 fe80::a00:27ff:fe71:19d8/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe36:808d/64 scope link 
       valid_lft forever preferred_lft forever

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 

(⎈|HomeLab:N/A) root@k8s-ctr:~# resolvectl
Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (eth0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
       DNS Servers: 10.0.2.3

Link 3 (eth1)
    Current Scopes: none
         Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

 

[k8s-ctr] Check Kubernetes information

kubectl cluster-info      # check the cluster API server and key service endpoints
kubectl get node -owide   # list nodes with details (internal/external IPs, OS, roles, etc.)
kubectl get pod -A -owide # list Pods in all namespaces with details
k  describe pod -n kube-system -l k8s-app=kube-dns   # describe the kube-dns Pods (kubectl alias)
kc describe pod -n kube-system -l k8s-app=kube-dns   # describe the kube-dns Pods (kubecolor alias)

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   25m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          24m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          22m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP          NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-674b8bbfcf-b7drj          0/1     Pending   0          25m   <none>      <none>    <none>           <none>
kube-system   coredns-674b8bbfcf-jhpj4          0/1     Pending   0          25m   <none>      <none>    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          25m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          25m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          25m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-proxy-5zz6r                  1/1     Running   0          22m   10.0.2.15   k8s-w2    <none>           <none>
kube-system   kube-proxy-fszcp                  1/1     Running   0          24m   10.0.2.15   k8s-w1    <none>           <none>
kube-system   kube-proxy-kxbld                  1/1     Running   0          25m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          25m   10.0.2.15   k8s-ctr   <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# k  describe pod -n kube-system -l k8s-app=kube-dns
Name:                 coredns-674b8bbfcf-b7drj
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=674b8bbfcf
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-674b8bbfcf
Containers:
  coredns:
    Image:       registry.k8s.io/coredns/coredns:v1.12.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   29m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          27m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          26m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP          NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-674b8bbfcf-b7drj          0/1     Pending   0          29m   <none>      <none>    <none>           <none>
kube-system   coredns-674b8bbfcf-jhpj4          0/1     Pending   0          29m   <none>      <none>    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          29m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          29m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          29m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-proxy-5zz6r                  1/1     Running   0          26m   10.0.2.15   k8s-w2    <none>           <none>
kube-system   kube-proxy-fszcp                  1/1     Running   0          27m   10.0.2.15   k8s-w1    <none>           <none>
kube-system   kube-proxy-kxbld                  1/1     Running   0          29m   10.0.2.15   k8s-ctr   <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          29m   10.0.2.15   k8s-ctr   <none>           <none>

 

[k8s-ctr] Change the INTERNAL-IP

cat /var/lib/kubelet/kubeadm-flags.env   # check the current kubelet flags

# Extract the eth1 IPv4 address into the NODEIP variable
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo $NODEIP

# Add the --node-ip flag to the kubelet config file to change the INTERNAL-IP
sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env

# Re-exec systemd and restart kubelet
systemctl daemon-reexec && systemctl restart kubelet

cat /var/lib/kubelet/kubeadm-flags.env   # confirm the updated kubelet flags
kubectl get node -owide                  # confirm the node's INTERNAL-IP has changed
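
The same --node-ip change is also needed on the worker nodes. The results below apply it by logging in to each worker, but as a sketch it can equally be pushed from k8s-ctr using the sshpass loop style used elsewhere in this post (the worker IPs 192.168.10.101/102 come from the Vagrantfile):

# Apply the same --node-ip change on both workers from k8s-ctr (sketch)
for i in 1 2 ; do
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w$i \
    "sudo sed -i 's/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=192.168.10.10$i /' /var/lib/kubelet/kubeadm-flags.env && \
     sudo systemctl daemon-reexec && sudo systemctl restart kubelet"
done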

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(⎈|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')

(⎈|HomeLab:N/A) root@k8s-ctr:~# echo $NODEIP
192.168.10.100

(⎈|HomeLab:N/A) root@k8s-ctr:~# sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env

(⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl daemon-reexec && systemctl restart kubelet

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.100 --container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   32m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          31m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          29m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

### Run on k8s-w1 (vagrant ssh k8s-w1)

root@k8s-w1:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

root@k8s-w1:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')

root@k8s-w1:~# echo $NODEIP
192.168.10.101

root@k8s-w1:~# sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env

root@k8s-w1:~# systemctl daemon-reexec && systemctl restart kubelet

root@k8s-w1:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.101 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   36m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          34m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          33m   v1.33.2   10.0.2.15        <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

### Run on k8s-w2 (vagrant ssh k8s-w2)

root@k8s-w2:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

root@k8s-w2:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')

root@k8s-w2:~# echo $NODEIP
192.168.10.102

root@k8s-w2:~# sed -i "s/^\(KUBELET_KUBEADM_ARGS=\"\)/\1--node-ip=${NODEIP} /" /var/lib/kubelet/kubeadm-flags.env

root@k8s-w2:~# systemctl daemon-reexec && systemctl restart kubelet

root@k8s-w2:~# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--node-ip=192.168.10.102 --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   38m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   <none>          37m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   <none>          35m   v1.33.2   192.168.10.102   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

 

[k8s-ctr] Static Pod IP change

tree /etc/kubernetes/manifests              # check the static Pod manifest directory layout
cat /etc/kubernetes/manifests/etcd.yaml     # check the etcd static Pod manifest
tree /var/lib/etcd/                         # check the etcd data directory layout

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/kubernetes/manifests
/etc/kubernetes/manifests
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
└── kube-scheduler.yaml

1 directory, 4 files
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /var/lib/etcd/
/var/lib/etcd/
└── member
    ├── snap
    │   └── db
    └── wal
        ├── 0000000000000000-0000000000000000.wal
        └── 0.tmp

4 directories, 3 files

 

Check again after reconnecting with vagrant ssh k8s-ctr

kubectl get pod -n kube-system -owide

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-b7drj          0/1     Pending   0          42m   <none>           <none>    <none>           <none>
coredns-674b8bbfcf-jhpj4          0/1     Pending   0          42m   <none>           <none>    <none>           <none>
etcd-k8s-ctr                      1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
kube-apiserver-k8s-ctr            1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
kube-controller-manager-k8s-ctr   1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
kube-proxy-5zz6r                  1/1     Running   0          39m   192.168.10.102   k8s-w2    <none>           <none>
kube-proxy-fszcp                  1/1     Running   0          40m   192.168.10.101   k8s-w1    <none>           <none>
kube-proxy-kxbld                  1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
kube-scheduler-k8s-ctr            1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>

 

After reconnecting with vagrant ssh k8s-ctr and running kubectl get pod -n kube-system -owide, you can see that the Pod IPs have moved from the old range (10.0.2.x) to the 192.168.10.x range.


This is because the static Pods (e.g. etcd, kube-apiserver) use hostNetwork, so their addresses follow the node's network interface configuration, which we changed above.


In other words, the Kubernetes control plane components now communicate over the node's new network (192.168.10.0/24). Because static Pod IPs change automatically with the node's network settings, this should always be verified after changing the network layout.

 


 

What is Flannel?

Flannel is one of the most widely used CNI (Container Network Interface) plugins for Pod networking in Kubernetes clusters. By default, it creates an overlay network on each node so that Pods on different nodes can communicate as if they were on the same network.

 

Key Features of Flannel

  • Simple architecture: easy to install and operate without complex network configuration.
  • Overlay network: supports multiple backends such as VXLAN and host-gw to form a virtual network between nodes.
  • Guaranteed Pod-to-Pod communication: assigns every Pod a unique IP and handles inter-node routing automatically.
  • Lightweight: relatively small with few external dependencies, well suited to small or test environments.
  • Scalability: can be used across a variety of cloud and on-premises environments.

With Flannel, the network configuration of a Kubernetes cluster stays simple and Pod-to-Pod communication just works. Flannel is normally used together with the iptables-based kube-proxy: Flannel itself provides the overlay network (e.g. VXLAN or host-gw), while kube-proxy handles load balancing and routing for Service traffic (ClusterIP, NodePort, and so on).


Because kube-proxy defaults to iptables mode, most Flannel environments run with an iptables-based kube-proxy alongside it.


In short, Flannel handles Pod-to-Pod communication, and the iptables-based kube-proxy handles Service traffic distribution.
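
A few commands to see this division of labor on the lab cluster (a sketch, assuming the Flannel Helm install performed below and the default kube-proxy ConfigMap created by kubeadm):

kubectl -n kube-flannel get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'     # Flannel backend (vxlan)
kubectl -n kube-system get cm kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode  # kube-proxy mode ("" means the iptables default)
ip -d link show flannel.1                                                                # VXLAN details of the flannel.1 interface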

Hands-on: kube-proxy + Flannel CNI

At this point in the lab, no CNI plugin (e.g. Flannel, Calico, Cilium) has been installed yet.

 

Looking at the kubectl get pod -A -owide output, the CoreDNS Pods are Pending with no IP assigned. This is because the cluster has no network plugin (CNI) installed, so Pod network addresses cannot be allocated. The control plane components (etcd, kube-apiserver, etc.) are static Pods that use hostNetwork and therefore carry the node IPs (192.168.10.x), but regular Pods such as CoreDNS only get an IP and reach the Running state once a CNI is installed.

 

Next, we install a CNI plugin (Flannel) and confirm that Pods such as CoreDNS transition to Running.
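
One convenient way to watch that transition while the CNI comes up (run alongside the installation below):

kubectl get pod -n kube-system -l k8s-app=kube-dns -owide -w   # watch CoreDNS go from Pending to Running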

 

Pre-installation check

kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"  # check the cluster Pod/Service network CIDRs
kubectl get pod -n kube-system -l k8s-app=kube-dns -owide                         # check CoreDNS Pod status and IPs
ip -c link                                                                        # check network interfaces
ip -c route                                                                       # check the routing table
brctl show                                                                        # check bridge networks
iptables-save                                                                     # dump all iptables rules
iptables -t nat -S                                                                # check NAT table rules
iptables -t filter -S                                                             # check filter table rules
iptables -t mangle -S                                                             # check mangle table rules
tree /etc/cni/net.d/                                                              # check the CNI network config directory

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"
                            "--service-cluster-ip-range=10.96.0.0/16",
                            "--cluster-cidr=10.244.0.0/16",

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-b7drj   0/1     Pending   0          52m   <none>   <none>   <none>           <none>
coredns-674b8bbfcf-jhpj4   0/1     Pending   0          52m   <none>   <none>   <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 02:13:20 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
...
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 02:13:20 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 02:13:20 2025
*filter
:INPUT ACCEPT [561038:116307578]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [553911:105491006]
...
COMMIT
# Completed on Wed Jul 16 02:13:20 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 02:13:20 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
...
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
...
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Wed Jul 16 02:13:20 2025

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
0 directories, 0 files

 

Installing Flannel

# Create the Flannel namespace
kubectl create ns kube-flannel

# Add the Pod Security policy label (privileged) to the Flannel namespace
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

# Add the Flannel Helm repository
helm repo add flannel https://flannel-io.github.io/flannel/

# List Helm repositories
helm repo list

# Search for the Flannel chart
helm search repo flannel

# Show the chart's default values.yaml
helm show values flannel/flannel

# Write a values file for the Flannel install (Pod CIDR, interface, etc.)
cat << EOF > flannel-values.yaml
podCidr: "10.244.0.0/16"

flannel:
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  - "--iface=eth1"  
EOF

# Install the Flannel Helm chart (kube-flannel namespace, custom values applied)
helm install flannel --namespace kube-flannel flannel/flannel -f flannel-values.yaml

# List Helm releases in all namespaces
helm list -A

# Check Flannel Pod status (kc is the kubecolor alias)
kc describe pod -n kube-flannel -l app=flannel

# Check the CNI binary install path (flannel)
tree /opt/cni/bin/ # flannel

# Check the CNI network config directory
tree /etc/cni/net.d/

# Inspect the CNI network config file created by Flannel (JSON format)
cat /etc/cni/net.d/10-flannel.conflist | jq

# Inspect the Flannel config ConfigMap
kc describe cm -n kube-flannel kube-flannel-cfg

 

✅ Result

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/

0 directories, 0 files
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl create ns kube-flannel
namespace/kube-flannel created
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
namespace/kube-flannel labeled
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm repo add flannel https://flannel-io.github.io/flannel/
"flannel" has been added to your repositories
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm repo list
NAME    URL                                  
flannel https://flannel-io.github.io/flannel/
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm search repo flannel
NAME            CHART VERSION   APP VERSION     DESCRIPTION                    
flannel/flannel v0.27.1         v0.27.1         Install Flannel Network Plugin.
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm show values flannel/flannel
---
global:
  imagePullSecrets:
# - name: "a-secret-name"

# The IPv4 cidr pool to create on startup if none exists. Pod IPs will be
# chosen from this range.
podCidr: "10.244.0.0/16"
podCidrv6: ""

flannel:
  # kube-flannel image
  image:
    repository: ghcr.io/flannel-io/flannel
    tag: v0.27.1
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > flannel-values.yaml
podCidr: "10.244.0.0/16"

flannel:
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  - "--iface=eth1"  
EOF

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install flannel --namespace kube-flannel flannel/flannel -f flannel-values.yaml
NAME: flannel
LAST DEPLOYED: Wed Jul 16 02:21:45 2025
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm list -A
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART             APP VERSION
flannel kube-flannel    1               2025-07-16 02:21:45.505905595 +0900 KST deployed        flannel-v0.27.1   v0.27.1   

(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-flannel -l app=flannel
Name:                 kube-flannel-ds-5vszz
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 k8s-w2/192.168.10.102
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /opt/cni/bin/ # flannel
/opt/cni/bin/
├── bandwidth
├── bridge
├── dhcp
├── dummy
├── firewall
├── flannel
...
└── vrf
1 directory, 21 files

(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
└── 10-flannel.conflist
1 directory, 1 file

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/cni/net.d/10-flannel.conflist | jq
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe cm -n kube-flannel kube-flannel-cfg
Name:         kube-flannel-cfg
...
net-conf.json:
----
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
...

 

Comparison with before the installation

k get nodes -owide    # check node details (internal/external IPs, etc.)
ip -c link            # check network interface status
ip -c route | grep 10.244.   # check Pod network (Flannel) routes

ping -c 1 10.244.1.0   # ICMP test toward a Pod network range (example)
ping -c 1 10.244.2.0   # ICMP test toward a Pod network range (example)

brctl show             # check bridge networks
iptables-save          # dump all iptables rules
iptables -t nat -S     # check NAT table rules
iptables -t filter -S  # check filter table rules

# Check network interfaces on the worker nodes (w1, w2)
for i in w1 w2 ; do 
  echo ">> node : k8s-$i <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link
  echo
done

# Check routing tables on the worker nodes (w1, w2)
for i in w1 w2 ; do 
  echo ">> node : k8s-$i <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route
  echo
done

# Check bridge info on the worker nodes (w1, w2)
for i in w1 w2 ; do 
  echo ">> node : k8s-$i <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show
  echo
done

# Check NAT table iptables rules on the worker nodes (w1, w2)
for i in w1 w2 ; do 
  echo ">> node : k8s-$i <<"
  sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S
  echo
done

 

✅ Result (summary)

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get nodes -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   65m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    <none>          63m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    Ready    <none>          62m   v1.33.2   192.168.10.102   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 9a:cb:2f:7b:35:5b brd ff:ff:ff:ff:ff:ff

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 10.244.
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.1.0
PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.
64 bytes from 10.244.1.0: icmp_seq=1 ttl=64 time=0.519 ms

--- 10.244.1.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms

(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.244.2.0
PING 10.244.2.0 (10.244.2.0) 56(84) bytes of data.
64 bytes from 10.244.2.0: icmp_seq=1 ttl=64 time=0.476 ms

--- 10.244.2.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms
(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show  

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 02:28:42 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
... (a very large number of rules omitted)

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:b7:94 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 7a:3f:88:1b:b5:2f brd ff:ff:ff:ff:ff:ff
>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:84:e2:70 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether d6:5f:ea:13:4c:98 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:b6:28:95:a9:1b brd ff:ff:ff:ff:ff:ff
6: vethbf6a9dbf@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 66:1e:47:68:fa:2f brd ff:ff:ff:ff:ff:ff link-netns cni-f619bbeb-22e5-3ae6-5eaf-09ee3e4f8dbb
7: vethec7cd1d3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether d6:ed:e9:4d:6b:44 brd ff:ff:ff:ff:ff:ff link-netns cni-60fd232f-98cb-f1a9-cd8d-4d7d2916725b

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 
>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102 


(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done
>> node : k8s-w1 <<

>> node : k8s-w2 <<
bridge name     bridge id               STP enabled     interfaces
cni0            8000.76b62895a91b       no              vethbf6a9dbf
                                                        vethec7cd1d3

root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S ; echo; done
>> node : k8s-w1 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
... rules omitted

 

Deploying and Verifying a Sample Application

# Deploy the sample application
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# Deploy the curl-pod Pod on the k8s-ctr node
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
    - name: curl
      image: alpine/curl
      command: ["sleep", "36000"]
EOF

#
crictl ps
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo crictl ps ; echo; done

kubectl get deploy,svc,ep webpod -owide         # show the webpod Deployment, Service, and Endpoints in wide format
kubectl api-resources | grep -i endpoint        # check which Endpoints-related resource kinds exist
kubectl get endpointslices -l app=webpod        # list the EndpointSlices labeled app=webpod
# compare with the state before the deployment
ip -c link                                      # check network interfaces
brctl show                                      # check bridge information
iptables-save                                   # dump all iptables rules
iptables -t nat -S                              # print only the NAT table rules

 

✅ Results (summary)

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   2/2     2            2           62s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webpod   ClusterIP   10.96.255.154   <none>        80/TCP    62s   app=webpod

NAME               ENDPOINTS                     AGE
endpoints/webpod   10.244.1.2:80,10.244.2.4:80   62s

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl api-resources | grep -i endpoint
endpoints                           ep           v1                                true         Endpoints
endpointslices                                   discovery.k8s.io/v1               true         EndpointSlice

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod
NAME           ADDRESSTYPE   PORTS   ENDPOINTS               AGE
webpod-8mgnh   IPv4          80      10.244.1.2,10.244.2.4   88s

### Compare with the pre-deployment state
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 9a:cb:2f:7b:35:5b brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:50:9b:28:ae:5a brd ff:ff:ff:ff:ff:ff
6: vetha8c014b3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 86:93:b5:ef:5f:5f brd ff:ff:ff:ff:ff:ff link-netns cni-45dc7758-6fd9-7524-c5af-7c10701684bf

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.5a509b28ae5a       no              vetha8c014b3

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 02:36:07 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
...

 

Verifying sample application connectivity

kubectl get pod -l app=webpod -owide
POD1IP=10.244.1.2
kubectl exec -it curl-pod -- curl $POD1IP

#
kubectl get svc,ep webpod
kubectl exec -it curl-pod -- curl webpod
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

# Confirm that Service handling relies on iptables rules >> what happens when Services grow to 100, 1,000, 10,000?
kubectl get svc webpod -o jsonpath="{.spec.clusterIP}"
SVCIP=$(kubectl get svc webpod -o jsonpath="{.spec.clusterIP}")
iptables -t nat -S | grep $SVCIP
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S | grep $SVCIP ; echo; done
-A KUBE-SERVICES -d 10.96.255.104/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.255.104/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

 

✅ Results (summary)

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=webpod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
webpod-697b545f57-8dkv7   1/1     Running   0          10m   10.244.1.2   k8s-w1   <none>           <none>
webpod-697b545f57-rxtvt   1/1     Running   0          10m   10.244.2.4   k8s-w2   <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# POD1IP=10.244.1.2

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $POD1IP
Hostname: webpod-697b545f57-8dkv7
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.2
IP: fe80::fcb7:3eff:feed:cd36
RemoteAddr: 10.244.0.2:56156
GET / HTTP/1.1
Host: 10.244.1.2
User-Agent: curl/8.14.1
Accept: */*

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.255.154   <none>        80/TCP    11m

NAME               ENDPOINTS                     AGE
endpoints/webpod   10.244.1.2:80,10.244.2.4:80   11m

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-8dkv7

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
Hostname: webpod-697b545f57-rxtvt
Hostname: webpod-697b545f57-rxtvt
^Ccommand terminated with exit code 130

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath="{.spec.clusterIP}"
10.96.255.154

(⎈|HomeLab:N/A) root@k8s-ctr:~# SVCIP=$(kubectl get svc webpod -o jsonpath="{.spec.clusterIP}")

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S | grep $SVCIP
-A KUBE-SERVICES -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo iptables -t nat -S | grep $SVCIP ; echo; done
>> node : k8s-w1 <<
-A KUBE-SERVICES -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
>> node : k8s-w2 <<
-A KUBE-SERVICES -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ

 

We confirmed that curl-pod can reach the webpod Pod directly at its IP (10.244.1.2), and that the Pods are also reachable normally through the ClusterIP Service (webpod).

 

We also used kubectl get svc,ep webpod to check the Service's ClusterIP and endpoints (Pod IP:port), and confirmed that the Pod IPs (10.244.1.2, 10.244.2.4) backing the ClusterIP (10.96.255.154) are registered correctly.


We also confirmed that iptables NAT rules for the Service's ClusterIP are created on every node, and that the same ClusterIP rules exist on each of k8s-ctr, k8s-w1, and k8s-w2.

 

This exercise shows that kube-proxy (iptables mode) adds rules to every node's iptables whenever a Service is created, routing traffic that arrives at the ClusterIP to the actual Pods; as Pods and Services multiply, the iptables rules grow explosively and hit limits in both manageability and performance.
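
To make the rule growth concrete, the snippet below simply counts the kube-proxy-generated NAT rules on each node, reusing the sshpass loop from this lab. It is a rough sketch rather than a benchmark; the counts will differ per environment and grow with every additional Service.

# Count kube-proxy NAT rules on the control-plane node
iptables-save -t nat | grep -c '^-A KUBE-'

# Same count on the worker nodes
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i "sudo iptables-save -t nat | grep -c '^-A KUBE-'" ; echo; done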

 

Drawbacks of iptables at scale

In large-scale Kubernetes environments, iptables-based kube-proxy has the following serious limitations and problems.

  • Increased network latency
    • As nodes and Pods grow, the iptables rules grow explosively, adding roughly 1.2 ms of extra latency to connection setup.
    • For ultra-low-latency workloads such as cloud gaming, this overhead is critical.
  • Very long rule-update times
    • With thousands of nodes and tens of thousands of Pods, each node ends up with tens of thousands of iptables rules.
    • Refreshing the cluster's iptables rules can take more than 5 minutes, which badly hurts responsiveness in environments with frequent Service additions/deletions and scaling.
  • High CPU overhead
    • The more iptables rules there are, the more CPU is spent on packet processing and rule refreshes (53% or more overhead has been reported), degrading overall system performance.
  • Scaling limits
    • iptables performance degrades linearly as the rule count grows, making very large clusters (thousands of nodes, tens of thousands of Pods) practically unmanageable.

For these reasons, large-scale environments increasingly favor alternatives such as Cilium (eBPF-based) over iptables-based kube-proxy.

This lab environment also uses iptables-based kube-proxy by default, and you can observe the following firsthand:

  • As Services and Pods increase, the iptables rules multiply and the output of iptables-save and iptables -t nat -S grows longer.
  • With many Services, you can start to feel higher network latency, slower rule application, and increased CPU usage.
  • Deploying and deleting Services/Pods repeatedly in the lab makes rule refreshes noticeably slower (see the timing sketch below).
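
As a rough way to feel the rule-refresh cost, you can time a full reload of the current NAT table. On this three-node lab it finishes almost instantly, but the same operation scales with the number of Services and Endpoints. It is effectively a no-op here (it rewrites the exact same rules, only resetting packet counters), so treat it as an illustration rather than a benchmark.

# Time a full reload of the current NAT ruleset
time sh -c 'iptables-save -t nat | iptables-restore'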

 

In the rest of the lab, we will install the Cilium (eBPF-based) network plugin to overcome these limits and see hands-on how network performance and operational efficiency change compared to iptables.

 


 

Installing Cilium CNI

 

System requirements

  • Hosts with an AMD64 or AArch64 CPU architecture
  • Linux kernel 5.4 or newer, or an equivalent version (e.g., 4.18 on RHEL 8.6)
    (I installed it on Rocky 9.6 without any problems)

Check the OS

arch
uname -r

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# arch
aarch64

(⎈|HomeLab:N/A) root@k8s-ctr:~# uname -r
6.8.0-53-generic

 

Enabling kernel configuration options

# [Kernel config options] Base requirements
grep -E 'CONFIG_BPF|CONFIG_BPF_SYSCALL|CONFIG_NET_CLS_BPF|CONFIG_BPF_JIT|CONFIG_NET_CLS_ACT|CONFIG_NET_SCH_INGRESS|CONFIG_CRYPTO_SHA1|CONFIG_CRYPTO_USER_API_HASH|CONFIG_CGROUPS|CONFIG_CGROUP_BPF|CONFIG_PERF_EVENTS|CONFIG_SCHEDSTATS' /boot/config-$(uname -r)
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_SCH_INGRESS=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CGROUPS=y
CONFIG_CGROUP_BPF=y
CONFIG_PERF_EVENTS=y
CONFIG_SCHEDSTATS=y


# [Kernel config options] Requirements for Tunneling and Routing
grep -E 'CONFIG_VXLAN=y|CONFIG_VXLAN=m|CONFIG_GENEVE=y|CONFIG_GENEVE=m|CONFIG_FIB_RULES=y' /boot/config-$(uname -r)
CONFIG_FIB_RULES=y # built into the kernel
CONFIG_VXLAN=m # compiled as a module → load it into the kernel before use
CONFIG_GENEVE=m # compiled as a module → load it into the kernel before use

## (Reference) load the kernel modules
lsmod | grep -E 'vxlan|geneve'
modprobe geneve
lsmod | grep -E 'vxlan|geneve'


# [Kernel config options] Requirements for L7 and FQDN Policies
grep -E 'CONFIG_NETFILTER_XT_TARGET_TPROXY|CONFIG_NETFILTER_XT_TARGET_MARK|CONFIG_NETFILTER_XT_TARGET_CT|CONFIG_NETFILTER_XT_MATCH_MARK|CONFIG_NETFILTER_XT_MATCH_SOCKET' /boot/config-$(uname -r)
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m

...

# [Kernel config options] Requirements for Netkit Device Mode
grep -E 'CONFIG_NETKIT=y|CONFIG_NETKIT=m' /boot/config-$(uname -r)
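
For convenience, the module checks above can be collapsed into a small loop that loads vxlan and geneve only when they are not already present. This is a minimal sketch; it assumes the modules were built for the running kernel, which the CONFIG_VXLAN=m / CONFIG_GENEVE=m output above already indicates.

# Load the tunneling modules only if they are missing
for m in vxlan geneve; do lsmod | grep -qw "$m" || modprobe "$m"; done
lsmod | grep -E 'vxlan|geneve'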

 

Removing the existing Flannel

helm uninstall -n kube-flannel flannel
helm list -A

kubectl get all -n kube-flannel
kubectl delete ns kube-flannel

kubectl get pod -A -owide

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n kube-flannel
No resources found in kube-flannel namespace.

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ns kube-flannel
namespace "kube-flannel" deleted

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   0          20h   10.244.0.2       k8s-ctr   <none>           <none>
default       webpod-697b545f57-8dkv7           1/1     Running   0          20h   10.244.1.2       k8s-w1    <none>           <none>
default       webpod-697b545f57-rxtvt           1/1     Running   0          20h   10.244.2.4       k8s-w2    <none>           <none>
kube-system   coredns-674b8bbfcf-b7drj          1/1     Running   0          21h   10.244.2.3       k8s-w2    <none>           <none>
kube-system   coredns-674b8bbfcf-jhpj4          1/1     Running   0          21h   10.244.2.2       k8s-w2    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0          21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0          21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0          21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-proxy-5zz6r                  1/1     Running   0          21h   192.168.10.102   k8s-w2    <none>           <none>
kube-system   kube-proxy-fszcp                  1/1     Running   0          21h   192.168.10.101   k8s-w1    <none>           <none>
kube-system   kube-proxy-kxbld                  1/1     Running   0          21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0          21h   192.168.10.100   k8s-ctr   <none>           <none>

 

Run the following on every node (k8s-ctr, k8s-w1, k8s-w2) to check the state before removal

ip -c link
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

brctl show
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done

ip -c route
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 9a:cb:2f:7b:35:5b brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:50:9b:28:ae:5a brd ff:ff:ff:ff:ff:ff
6: vetha8c014b3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 86:93:b5:ef:5f:5f brd ff:ff:ff:ff:ff:ff link-netns cni-45dc7758-6fd9-7524-c5af-7c10701684bf
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:b7:94 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 7a:3f:88:1b:b5:2f brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:4a:bb:62:66:99 brd ff:ff:ff:ff:ff:ff
6: veth1e22d9ff@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 1e:da:33:ae:b3:8c brd ff:ff:ff:ff:ff:ff link-netns cni-68cfaa26-70d6-042e-1bb7-e26424f7d874

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:84:e2:70 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether d6:5f:ea:13:4c:98 brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:b6:28:95:a9:1b brd ff:ff:ff:ff:ff:ff
6: vethbf6a9dbf@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 66:1e:47:68:fa:2f brd ff:ff:ff:ff:ff:ff link-netns cni-f619bbeb-22e5-3ae6-5eaf-09ee3e4f8dbb
7: vethec7cd1d3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether d6:ed:e9:4d:6b:44 brd ff:ff:ff:ff:ff:ff link-netns cni-60fd232f-98cb-f1a9-cd8d-4d7d2916725b
8: veth17fbaf96@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 82:43:bd:6e:e8:8f brd ff:ff:ff:ff:ff:ff link-netns cni-a74ade2e-3c98-64ac-543d-7fa0626f0652

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done
bridge name     bridge id               STP enabled     interfaces
cni0            8000.5a509b28ae5a       no              vetha8c014b3
>> node : k8s-w1 <<
bridge name     bridge id               STP enabled     interfaces
cni0            8000.1a4abb626699       no              veth1e22d9ff

>> node : k8s-w2 <<
bridge name     bridge id               STP enabled     interfaces
cni0            8000.76b62895a91b       no              veth17fbaf96
                                                        vethbf6a9dbf
                                                        vethec7cd1d3

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102 

# the other nodes also have the same flannel-related entries

 

Removing the vNICs

# remove the vNICs
ip link del flannel.1
ip link del cni0

for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del flannel.1 ; echo; done
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del cni0 ; echo; done

# verify the removal
ip -c link
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

brctl show
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done

ip -c route
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del flannel.1 ; echo; done
>> node : k8s-w1 <<

>> node : k8s-w2 <<

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del cni0 ; echo; done
>> node : k8s-w1 <<

>> node : k8s-w2 <<

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:36:80:8d brd ff:ff:ff:ff:ff:ff
    altname enp0s9
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 9a:cb:2f:7b:35:5b brd ff:ff:ff:ff:ff:ff
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:50:9b:28:ae:5a brd ff:ff:ff:ff:ff:ff
6: vetha8c014b3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether 86:93:b5:ef:5f:5f brd ff:ff:ff:ff:ff:ff link-netns cni-45dc7758-6fd9-7524-c5af-7c10701684bf
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done
>> node : k8s-w1 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3a:b7:94 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
6: veth1e22d9ff@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:da:33:ae:b3:8c brd ff:ff:ff:ff:ff:ff link-netns cni-68cfaa26-70d6-042e-1bb7-e26424f7d874

>> node : k8s-w2 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:71:19:d8 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:84:e2:70 brd ff:ff:ff:ff:ff:ff
    altname enp0s9
6: vethbf6a9dbf@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 66:1e:47:68:fa:2f brd ff:ff:ff:ff:ff:ff link-netns cni-f619bbeb-22e5-3ae6-5eaf-09ee3e4f8dbb
7: vethec7cd1d3@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:ed:e9:4d:6b:44 brd ff:ff:ff:ff:ff:ff link-netns cni-60fd232f-98cb-f1a9-cd8d-4d7d2916725b
8: veth17fbaf96@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 82:43:bd:6e:e8:8f brd ff:ff:ff:ff:ff:ff link-netns cni-a74ade2e-3c98-64ac-543d-7fa0626f0652

(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.5a509b28ae5a       no              vetha8c014b3
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done
>> node : k8s-w1 <<

>> node : k8s-w2 <<

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100 
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
>> node : k8s-w1 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101 

>> node : k8s-w2 <<
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.102 

 

Removing the existing kube-proxy

kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy
kubectl get pod -A -owide # the IPs of already-deployed Pods are retained
kubectl exec -it curl-pod -- curl webpod
iptables-save

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system delete ds kube-proxy
daemonset.apps "kube-proxy" deleted

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system delete cm kube-proxy
configmap "kube-proxy" deleted

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS       AGE   IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   0              20h   10.244.0.2       k8s-ctr   <none>           <none>
default       webpod-697b545f57-8dkv7           1/1     Running   0              20h   10.244.1.2       k8s-w1    <none>           <none>
default       webpod-697b545f57-rxtvt           1/1     Running   0              20h   10.244.2.4       k8s-w2    <none>           <none>
kube-system   coredns-674b8bbfcf-b7drj          0/1     Running   1 (110s ago)   21h   10.244.2.3       k8s-w2    <none>           <none>
kube-system   coredns-674b8bbfcf-jhpj4          0/1     Running   1 (110s ago)   21h   10.244.2.2       k8s-w2    <none>           <none>
kube-system   etcd-k8s-ctr                      1/1     Running   0              21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0              21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   0              21h   192.168.10.100   k8s-ctr   <none>           <none>
kube-system   kube-scheduler-k8s-ctr            1/1     Running   0              21h   192.168.10.100   k8s-ctr   <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
curl: (6) Could not resolve host: webpod
command terminated with exit code 6

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:15:12 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 23:15:12 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:15:12 2025
*filter
:INPUT ACCEPT [2900242:582487179]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2875184:536467080]
:FLANNEL-FWD - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "flanneld forward" -j FLANNEL-FWD
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
-A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment "flanneld forward" -j ACCEPT
....
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -s 10.244.1.2/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-PQBQBGZJJ5FKN3TB -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-R5LRHDMUTGTM635J -s 10.244.2.4/32 -m comment --comment "default/webpod" -j KUBE-MARK-MASQ
-A KUBE-SEP-R5LRHDMUTGTM635J -p tcp -m comment --comment "default/webpod" -m tcp -j DNAT --to-destination 10.244.2.4:80
-A KUBE-SERVICES -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.255.154/32 -p tcp -m comment --comment "default/webpod cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.1.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-PQBQBGZJJ5FKN3TB
-A KUBE-SVC-CNZCPOCNCNOROALA -m comment --comment "default/webpod -> 10.244.2.4:80" -j KUBE-SEP-R5LRHDMUTGTM635J
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.10.100:6443" -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Wed Jul 16 23:15:12 2025

 

Run the following on every node (k8s-ctr, k8s-w1, k8s-w2)

# Run on each node with root permissions:
iptables-save | grep -v KUBE | grep -v FLANNEL | iptables-restore
iptables-save

sshpass -p 'vagrant' ssh vagrant@k8s-w1 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo iptables-save

sshpass -p 'vagrant' ssh vagrant@k8s-w2 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"
sshpass -p 'vagrant' ssh vagrant@k8s-w2 sudo iptables-save

#
kubectl get pod -owide

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

kc describe pod -n kube-system kube-controller-manager-k8s-ctr | grep Command: -A5
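
For reference, the per-node iptables cleanup at the top of this block can also be run as a single loop over the workers, in the same sshpass style used throughout this lab (same commands, just iterated):

# Flush the leftover KUBE-* / FLANNEL-* rules on both workers in one pass
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore" ; echo; done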

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save | grep -v KUBE | grep -v FLANNEL | iptables-restore

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:30 2025
*mangle
:PREROUTING ACCEPT [1483:283019]
:INPUT ACCEPT [1483:283019]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1399:279976]
:POSTROUTING ACCEPT [1399:279976]
COMMIT
# Completed on Wed Jul 16 23:16:30 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:30 2025
*filter
:INPUT ACCEPT [1483:283019]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1399:279976]
COMMIT
# Completed on Wed Jul 16 23:16:30 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:30 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 23:16:30 2025

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:54 2025
*mangle
:PREROUTING ACCEPT [22:4841]
:INPUT ACCEPT [22:4841]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18:4894]
:POSTROUTING ACCEPT [18:4894]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 23:16:54 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:54 2025
*filter
:INPUT ACCEPT [22:4841]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18:4894]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -j KUBE-FIREWALL
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:16:54 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:16:54 2025
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 23:16:54 2025

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 "sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore"

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 sudo iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:17:04 2025
*mangle
:PREROUTING ACCEPT [27:4802]
:INPUT ACCEPT [27:4802]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35:5910]
:POSTROUTING ACCEPT [35:5910]
COMMIT
# Completed on Wed Jul 16 23:17:04 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:17:04 2025
*filter
:INPUT ACCEPT [27:4802]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35:5910]
COMMIT
# Completed on Wed Jul 16 23:17:04 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:17:04 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Wed Jul 16 23:17:04 2025

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          20h   10.244.0.2   k8s-ctr   <none>           <none>
webpod-697b545f57-8dkv7   1/1     Running   0          20h   10.244.1.2   k8s-w1    <none>           <none>
webpod-697b545f57-rxtvt   1/1     Running   0          20h   10.244.2.4   k8s-w2    <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24
k8s-w2  10.244.2.0/24

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          20h   10.244.0.2   k8s-ctr   <none>           <none>
webpod-697b545f57-8dkv7   1/1     Running   0          20h   10.244.1.2   k8s-w1    <none>           <none>
webpod-697b545f57-rxtvt   1/1     Running   0          20h   10.244.2.4   k8s-w2    <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr | grep Command: -A5
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1

 

Installing Cilium 1.17.5

# Install Cilium with Helm
helm repo add cilium https://helm.cilium.io/

# All NICs + bpf.masq=true + NoIptablesRules
helm install cilium cilium/cilium --version 1.17.5 --namespace kube-system \
--set k8sServiceHost=192.168.10.100 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false

# Verify
helm get values cilium -n kube-system
helm list -A
kubectl get crd
watch -d kubectl get pod -A

kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose
KubeProxyReplacement:   True   [eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe71:19d8 fe80::a00:27ff:fe71:19d8, eth1   192.168.10.102 fe80::a00:27ff:fed3:64b (Direct Routing)]
Routing:                Network: Native   Host: BPF
Masquerading:           BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
...

# Check iptables on each node
iptables -t nat -S
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables -t nat -S ; echo; done

iptables-save
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables-save ; echo; done
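
Beyond the checks above, a quick health check is to wait for the Cilium DaemonSet and operator to roll out and then spot-check the key datapath settings reported by the agent. This is a sketch that sticks to kubectl and the in-agent cilium-dbg binary already used above:

# Wait for the Cilium agent DaemonSet and the operator Deployment to become ready
kubectl -n kube-system rollout status ds/cilium --timeout=120s
kubectl -n kube-system rollout status deploy/cilium-operator --timeout=120s

# Spot-check kube-proxy replacement, routing mode and BPF masquerading
kubectl exec -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status | grep -E 'KubeProxyReplacement|Routing|Masquerading'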

 

✅ Results

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install cilium cilium/cilium --version 1.17.5 --namespace kube-system \
--set k8sServiceHost=192.168.10.100 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode="cluster-pool" \
--set ipam.operator.clusterPoolIPv4PodCIDRList={"172.20.0.0/16"} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false
NAME: cilium
LAST DEPLOYED: Wed Jul 16 23:21:46 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.17.5.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system
USER-SUPPLIED VALUES:
autoDirectNodeRoutes: true
bpf:
  masquerade: true
endpointRoutes:
  enabled: true
installNoConntrackIptablesRules: true
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
    - 172.20.0.0/16
ipv4NativeRoutingCIDR: 172.20.0.0/16
ipv6:
  enabled: false
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm list -A
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
cilium  kube-system     1               2025-07-16 23:21:46.681432666 +0900 KST deployed        cilium-1.17.5   1.17.5 

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2025-07-16T14:21:55Z
ciliumclusterwidenetworkpolicies.cilium.io   2025-07-16T14:21:55Z
ciliumendpoints.cilium.io                    2025-07-16T14:21:55Z
ciliumexternalworkloads.cilium.io            2025-07-16T14:21:55Z
ciliumidentities.cilium.io                   2025-07-16T14:21:55Z
ciliuml2announcementpolicies.cilium.io       2025-07-16T14:21:55Z
ciliumloadbalancerippools.cilium.io          2025-07-16T14:21:55Z
ciliumnetworkpolicies.cilium.io              2025-07-16T14:21:55Z
ciliumnodeconfigs.cilium.io                  2025-07-16T14:21:55Z
ciliumnodes.cilium.io                        2025-07-16T14:21:55Z
ciliumpodippools.cilium.io                   2025-07-16T14:21:55Z

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables -t nat -S ; echo; done
>> node : k8s-w1 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat

>> node : k8s-w2 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:50 2025
*mangle
:PREROUTING ACCEPT [117930:471500319]
:INPUT ACCEPT [117930:471500319]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [111818:44325438]
:POSTROUTING ACCEPT [111818:44325438]
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0xbb900200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 37051 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0xbb900200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 37051 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
COMMIT
# Completed on Wed Jul 16 23:23:50 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:50 2025
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:CILIUM_OUTPUT_raw - [0:0]
:CILIUM_PRE_raw - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
COMMIT
# Completed on Wed Jul 16 23:23:50 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:50 2025
*filter
:INPUT ACCEPT [117930:471500319]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [111818:44325438]
:CILIUM_FORWARD - [0:0]
:CILIUM_INPUT - [0:0]
:CILIUM_OUTPUT - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:23:50 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:50 2025
*nat
:PREROUTING ACCEPT [17:1040]
:INPUT ACCEPT [17:1040]
:OUTPUT ACCEPT [1480:89082]
:POSTROUTING ACCEPT [1480:89082]
:CILIUM_OUTPUT_nat - [0:0]
:CILIUM_POST_nat - [0:0]
:CILIUM_PRE_nat - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
COMMIT
# Completed on Wed Jul 16 23:23:50 2025

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables-save ; echo; done
>> node : k8s-w1 <<
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*mangle
:PREROUTING ACCEPT [26268:463447743]
:INPUT ACCEPT [26268:463447743]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [23886:1176414]
:POSTROUTING ACCEPT [23886:1176414]
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0xf9810200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 33273 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0xf9810200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 33273 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:CILIUM_OUTPUT_raw - [0:0]
:CILIUM_PRE_raw - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*filter
:INPUT ACCEPT [26268:463447743]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [23886:1176414]
:CILIUM_FORWARD - [0:0]
:CILIUM_INPUT - [0:0]
:CILIUM_OUTPUT - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*nat
:PREROUTING ACCEPT [7:448]
:INPUT ACCEPT [7:448]
:OUTPUT ACCEPT [119:7432]
:POSTROUTING ACCEPT [119:7432]
:CILIUM_OUTPUT_nat - [0:0]
:CILIUM_POST_nat - [0:0]
:CILIUM_PRE_nat - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
COMMIT
# Completed on Wed Jul 16 23:23:56 2025

>> node : k8s-w2 <<
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*mangle
:PREROUTING ACCEPT [20554:465979604]
:INPUT ACCEPT [20554:465979604]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18562:1631771]
:POSTROUTING ACCEPT [18562:1631771]
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0x800/0xf00 -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0xb39c0200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 40115 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0xb39c0200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 40115 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:CILIUM_OUTPUT_raw - [0:0]
:CILIUM_PRE_raw - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
-A CILIUM_PRE_raw -d 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 172.20.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j CT --notrack
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*filter
:INPUT ACCEPT [20554:465979604]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [18562:1631771]
:CILIUM_FORWARD - [0:0]
:CILIUM_INPUT - [0:0]
:CILIUM_OUTPUT - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xe00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0x400/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
COMMIT
# Completed on Wed Jul 16 23:23:56 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 23:23:56 2025
*nat
:PREROUTING ACCEPT [6:388]
:INPUT ACCEPT [6:388]
:OUTPUT ACCEPT [258:15788]
:POSTROUTING ACCEPT [258:15788]
:CILIUM_OUTPUT_nat - [0:0]
:CILIUM_POST_nat - [0:0]
:CILIUM_PRE_nat - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
COMMIT
# Completed on Wed Jul 16 23:23:56 2025

 

Checking PodCIDR IPAM

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
kubectl get pod -owide
kubectl get ciliumnodes -o json | grep podCIDRs -A2
kubectl rollout restart deployment webpod
kubectl delete pod curl-pod --grace-period=0

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

kubectl get pod -owide
kubectl get ciliumendpoints
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list

# verify connectivity
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- curl webpod | grep Hostname

 

✅ Execution results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24
k8s-w2  10.244.2.0/24

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          20h   10.244.0.2   k8s-ctr   <none>           <none>
webpod-697b545f57-8dkv7   1/1     Running   0          20h   10.244.1.2   k8s-w1    <none>           <none>
webpod-697b545f57-rxtvt   1/1     Running   0          20h   10.244.2.4   k8s-w2    <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes -o json | grep podCIDRs -A2
                    "podCIDRs": [
                        "172.20.0.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.2.0/24"
                    ],
--
                    "podCIDRs": [
                        "172.20.1.0/24"
                    ],

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deployment webpod
deployment.apps/webpod restarted

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete pod curl-pod --grace-period=0
pod "curl-pod" deleted

(⎈|HomeLab:N/A) root@k8s-ctr:~# 
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
pod/curl-pod created

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          33s   172.20.0.45    k8s-ctr   <none>           <none>
webpod-777d7b6d96-f92pm   1/1     Running   0          46s   172.20.1.2     k8s-w2    <none>           <none>
webpod-777d7b6d96-zlxmh   1/1     Running   0          44s   172.20.2.249   k8s-w1    <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
NAME                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
curl-pod                  22275               ready            172.20.0.45    
webpod-777d7b6d96-f92pm   4394                ready            172.20.1.2     
webpod-777d7b6d96-zlxmh   4394                ready            172.20.2.249   

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS   
           ENFORCEMENT        ENFORCEMENT                                                                                                                     
399        Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                          ready   
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                
                                                           reserved:host                                                                                              
977        Disabled           Disabled          22275      k8s:app=curl                                                                        172.20.0.45    ready   
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                     
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                            
                                                           k8s:io.kubernetes.pod.namespace=default                                                                    
1046       Disabled           Disabled          27005      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.140   ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                            
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                
                                                           k8s:k8s-app=kube-dns                                                                                       
1810       Disabled           Disabled          27005      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.33    ready   
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                   
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                            
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                
                                                           k8s:k8s-app=kube-dns                                                                                       
2491       Disabled           Disabled          4          reserved:health                                                                     172.20.0.103   ready   

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-777d7b6d96-zlxmh
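
Note that node.spec.podCIDR still shows the 10.244.x.0/24 ranges handed out by kube-controller-manager, while the CiliumNode resources report 172.20.x.0/24: with cluster-pool IPAM, Cilium allocates per-node pod CIDRs from its own pool and ignores the Kubernetes-assigned ranges. A minimal sketch to confirm this from the cilium-config ConfigMap (key names assume the upstream Helm chart's cluster-pool defaults):

# IPAM mode and the pool Cilium carves per-node CIDRs from
kubectl get cm -n kube-system cilium-config -o jsonpath='{.data.ipam}{"\n"}'
kubectl get cm -n kube-system cilium-config -o jsonpath='{.data.cluster-pool-ipv4-cidr}{"\n"}{.data.cluster-pool-ipv4-mask-size}{"\n"}'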

 

Checking the Cilium Installation

# install the cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz >/dev/null 2>&1
tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz

# check cilium status
which cilium
cilium status
cilium config view
kubectl get cm -n kube-system cilium-config -o json | jq

# enable debug mode and watch the agent pods restart
cilium config set debug true && watch kubectl get pod -A
cilium config view | grep -i debug


# cilium daemon = cilium-dbg
kubectl exec -n kube-system -c cilium-agent -it ds/cilium -- cilium-dbg config
kubectl exec -n kube-system -c cilium-agent -it ds/cilium -- cilium-dbg status --verbose

 

✅ Execution results
(The output looked nicer as a screenshot, so it's captured as an image.)
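
Beyond cilium status and the config views above, the CLI also ships an end-to-end connectivity test. A sketch, assuming the cluster can pull the test images; it deploys its own workloads into a dedicated test namespace and takes a few minutes:

# optional end-to-end verification (deploys test pods; takes several minutes)
cilium connectivity test

# remove the test workloads afterwards (the namespace name can vary by CLI version, e.g. cilium-test)
kubectl delete ns cilium-test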

 

Checking Tables After Installing Cilium

for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_net  ; echo; done
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_host ; echo; done
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show lxc_health  ; echo; done
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose | grep Name -A20
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep health
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --all-addresses
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf ct list global | grep ICMP |head -n4
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf nat list | grep ICMP |head -n4

 

✅ Execution results

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_net  ; echo; done
>> node : k8s-w1 <<
7: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:c7:01:22:08:e1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d0c7:1ff:fe22:8e1/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
9: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:04:a2:74:bf:9c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::404:a2ff:fe74:bf9c/64 scope link 
       valid_lft forever preferred_lft forever

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show cilium_host ; echo; done
>> node : k8s-w1 <<
8: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:08:0b:83:4a:5e brd ff:ff:ff:ff:ff:ff
    inet 172.20.2.232/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::5c08:bff:fe83:4a5e/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
10: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1e:a1:1a:ea:8d:32 brd ff:ff:ff:ff:ff:ff
    inet 172.20.1.78/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::1ca1:1aff:feea:8d32/64 scope link 
       valid_lft forever preferred_lft forever

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show lxc_health  ; echo; done
>> node : k8s-w1 <<
14: lxc_health@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 22:b5:82:4a:b3:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::20b5:82ff:fe4a:b3df/64 scope link 
       valid_lft forever preferred_lft forever

>> node : k8s-w2 <<
16: lxc_health@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:b3:ae:24:20:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::2cb3:aeff:fe24:20f8/64 scope link 
       valid_lft forever preferred_lft forever

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose | grep Name -A20
...
  Name              IP              Node   Endpoints
  k8s-w2 (localhost):
    Host connectivity to 192.168.10.102: # <-- NodeIP
      ICMP to stack:   OK, RTT=107.459µs
      HTTP to agent:   OK, RTT=292.625µs
    Endpoint connectivity to 172.20.1.37:# <-- HealthIP
      ICMP to stack:   OK, RTT=804.542µs
      HTTP to agent:   OK, RTT=1.147083ms
  k8s-ctr:
    Host connectivity to 192.168.10.100: # <-- NodeIP
      ICMP to stack:   OK, RTT=722.625µs
      HTTP to agent:   OK, RTT=1.005542ms
    Endpoint connectivity to 172.20.0.103:# <-- HealthIP
      ICMP to stack:   OK, RTT=516.584µs
      HTTP to agent:   OK, RTT=1.108667ms
  k8s-w1:
    Host connectivity to 192.168.10.101: # <-- NodeIP
      ICMP to stack:   OK, RTT=860.041µs
      HTTP to agent:   OK, RTT=1.288292ms
    Endpoint connectivity to 172.20.2.139:# <-- HealthIP
      ICMP to stack:   OK, RTT=870.333µs

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep health
681        Disabled           Disabled          4          reserved:health                                                                 172.20.1.37   ready   

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --all-addresses | grep Allocated -A10
Allocated addresses:
  172.20.1.2 (default/webpod-777d7b6d96-f92pm [restored])
  172.20.1.37 (health)
  172.20.1.78 (router)
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Routing:                 Network: Native   Host: BPF
Attach Mode:             TCX
Device Mode:             veth
Masquerading:            BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf ct list global | grep ICMP |head -n4
ICMP IN 192.168.10.101:58082 -> 172.20.1.37:0 expires=18435 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=18375 TxFlagsSeen=0x00 LastTxReport=18375 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=6 IfIndex=0 BackendID=0 
ICMP IN 10.0.2.15:53625 -> 172.20.1.37:0 expires=18435 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=18375 TxFlagsSeen=0x00 LastTxReport=18375 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=1 IfIndex=0 BackendID=0 
ICMP OUT 192.168.10.102:51498 -> 172.20.2.139:0 expires=18455 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=18395 TxFlagsSeen=0x00 LastTxReport=18395 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0 
ICMP IN 192.168.10.100:0 -> 172.20.1.37:0 related expires=18436 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=18376 TxFlagsSeen=0x00 LastTxReport=0 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=7 IfIndex=0 BackendID=0 

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium bpf nat list | grep ICMP |head -n4
ICMP OUT 192.168.10.102:50039 -> 192.168.10.100:0 XLATE_SRC 192.168.10.102:50039 Created=76sec ago NeedsCT=1
ICMP OUT 192.168.10.102:51498 -> 172.20.2.139:0 XLATE_SRC 192.168.10.102:51498 Created=26sec ago NeedsCT=1
ICMP IN 192.168.10.100:0 -> 192.168.10.102:50039 XLATE_DST 192.168.10.102:50039 Created=76sec ago NeedsCT=1
ICMP OUT 192.168.10.102:55474 -> 172.20.0.103:0 XLATE_SRC 192.168.10.102:55474 Created=6sec ago NeedsCT=1
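
The health endpoints probed above are served by the cilium-health component inside each agent. The same node-to-node and endpoint probe view can be queried from it directly; a sketch using the exec pattern already used in this post (binary name as shipped in the cilium-agent image):

# connectivity-probe view straight from the health component
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-health status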

 

Checking Routing

for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep 172.20 | grep eth1 ; echo; done
kubectl get ciliumendpoints -A
for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep lxc ; echo; done

 

✅ Execution results

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep 172.20 | grep eth1 ; echo; done
>> node : k8s-w1 <<
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel 
172.20.1.0/24 via 192.168.10.102 dev eth1 proto kernel 

>> node : k8s-w2 <<
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel 
172.20.2.0/24 via 192.168.10.101 dev eth1 proto kernel 

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A
NAMESPACE     NAME                       SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
default       curl-pod                   22275               ready            172.20.0.45    
default       webpod-777d7b6d96-f92pm    4394                ready            172.20.1.2     
default       webpod-777d7b6d96-zlxmh    4394                ready            172.20.2.249   
kube-system   coredns-674b8bbfcf-vtdl6   27005               ready            172.20.0.140   
kube-system   coredns-674b8bbfcf-z4lcw   27005               ready            172.20.0.33

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in w1 w2 ; do echo ">> node : k8s-$i <<"; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c route | grep lxc ; echo; done
>> node : k8s-w1 <<
172.20.2.139 dev lxc_health proto kernel scope link 
172.20.2.249 dev lxceeb1d65fbb08 proto kernel scope link 

>> node : k8s-w2 <<
172.20.1.2 dev lxce5a4f801cf89 proto kernel scope link 
172.20.1.37 dev lxc_health proto kernel scope link 
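
Since routing is native, each node simply holds a plain kernel route for every other node's PodCIDR via eth1. ip route get shows which next hop the kernel actually picks for a concrete pod IP; a quick sketch reusing the webpod IP 172.20.1.2 (which lives on k8s-w2), run from k8s-w1:

# from k8s-w1: next hop chosen for a pod running on k8s-w2
sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route get 172.20.1.2
# expected: via 192.168.10.102 dev eth1 (the k8s-w2 node IP), matching the route table above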

Verifying Cilium Communication

Checking Cilium Information

# cilium pod names
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

# set up aliases
alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"

alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"

# check endpoint info
kubectl get pod -owide
kubectl get svc,ep webpod
WEBPOD1IP=172.20.1.2   # set to one of the webpod endpoint IPs shown above

# BPF maps: show where traffic destined for a target pod should be sent
c0 map get cilium_ipcache
c0 map get cilium_ipcache | grep $WEBPOD1IP

# set the LXC variable to curl-pod's interface
LXC=<name of the most recently created lxc interface on k8s-ctr>
LXC=lxce2b0659eacfa


# Node’s eBPF programs
## list of eBPF programs
c0bpf net show
c0bpf net show | grep $LXC 
c0bpf prog show id <prog id from the output above>
c0bpf map list

 

✅ Execution results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          21m   172.20.0.45    k8s-ctr   <none>           <none>
webpod-777d7b6d96-f92pm   1/1     Running   0          21m   172.20.1.2     k8s-w2    <none>           <none>
webpod-777d7b6d96-zlxmh   1/1     Running   0          21m   172.20.2.249   k8s-w1    <none>           <none>

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.255.154   <none>        80/TCP    21h

NAME               ENDPOINTS                       AGE
endpoints/webpod   172.20.1.2:80,172.20.2.249:80   21h
(⎈|HomeLab:N/A) root@k8s-ctr:~# WEBPOD1IP=172.20.1.2

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache
Key                 Value                                                                   State   Error
172.20.0.140/32     identity=27005 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>         sync    
172.20.1.2/32       identity=4394 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>   sync    
172.20.2.249/32     identity=4394 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>   sync    
172.20.0.103/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
172.20.0.160/32     identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
192.168.10.101/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
172.20.2.139/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>      sync    
192.168.10.102/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
10.0.2.15/32        identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
172.20.2.232/32     identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=<none>      sync    
172.20.0.33/32      identity=27005 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>         sync    
172.20.1.78/32      identity=6 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>      sync    
192.168.10.100/32   identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
10.244.0.0/32       identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
172.20.1.37/32      identity=4 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>      sync    
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>             sync    
172.20.0.45/32      identity=22275 encryptkey=0 tunnelendpoint=0.0.0.0 flags=<none>         sync    

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache | grep $WEBPOD1IP
172.20.1.2/32       identity=4394 encryptkey=0 tunnelendpoint=192.168.10.102 flags=<none>   sync   

(⎈|HomeLab:N/A) root@k8s-ctr:~# LXC=lxce2b0659eacfa

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show
xdp:

tc:
eth0(2) tcx/ingress cil_from_netdev prog_id 1532 link_id 16 
eth0(2) tcx/egress cil_to_netdev prog_id 1530 link_id 17 
eth1(3) tcx/ingress cil_from_netdev prog_id 1541 link_id 18 
eth1(3) tcx/egress cil_to_netdev prog_id 1538 link_id 19 
flannel.1(4) tcx/ingress cil_from_netdev prog_id 1551 link_id 20 
flannel.1(4) tcx/egress cil_to_netdev prog_id 1549 link_id 21 
cilium_net(7) tcx/ingress cil_to_host prog_id 1522 link_id 15 
cilium_host(8) tcx/ingress cil_to_host prog_id 1512 link_id 13 
cilium_host(8) tcx/egress cil_from_host prog_id 1509 link_id 14 
lxce2b0659eacfa(12) tcx/ingress cil_from_container prog_id 1487 link_id 26 
lxce2b0659eacfa(12) tcx/egress cil_to_container prog_id 1489 link_id 27 
lxc2be384ce1188(14) tcx/ingress cil_from_container prog_id 1468 link_id 24 
lxc2be384ce1188(14) tcx/egress cil_to_container prog_id 1465 link_id 25 
lxcdf2d2ff64754(16) tcx/ingress cil_from_container prog_id 1499 link_id 28 
lxcdf2d2ff64754(16) tcx/egress cil_to_container prog_id 1491 link_id 29 
lxc_health(18) tcx/ingress cil_from_container prog_id 1471 link_id 31 
lxc_health(18) tcx/egress cil_to_container prog_id 1474 link_id 32 

flow_dissector:

netfilter:

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show | grep $LXC 
lxce2b0659eacfa(12) tcx/ingress cil_from_container prog_id 1487 link_id 26 
lxce2b0659eacfa(12) tcx/egress cil_to_container prog_id 1489 link_id 27 

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1489
1489: sched_cls  name cil_to_container  tag 0b3125767ba1861c  gpl
        loaded_at 2025-07-16T14:36:31+0000  uid 0
        xlated 1448B  jited 1144B  memlock 4096B  map_ids 235,40,234
        btf_id 487

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list | grep 235: -A3
235: array  name .rodata.config  flags 0x480
        key 4B  value 64B  max_entries 1  memlock 8192B
        btf_id 478  frozen

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list | grep 40: -A3
40: percpu_hash  name cilium_metrics  flags 0x1
        key 8B  value 16B  max_entries 1024  memlock 19144B

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list | grep 234: -A3
234: prog_array  name cilium_calls_01  flags 0x0
        key 4B  value 4B  max_entries 50  memlock 720B
        owner_prog_type sched_cls  owner jited
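
The map IDs referenced by the program (235, 40, 234) are ordinary BPF maps, and Cilium pins its global maps on the BPF filesystem. So the same ipcache that was read earlier with "c0 map get cilium_ipcache" can also be dumped straight from bpftool; a sketch assuming Cilium's default bpffs pin path:

# dump the pinned ipcache map directly (pin path assumes the default bpffs layout)
c0bpf map list | grep -i ipcache
c0bpf map dump pinned /sys/fs/bpf/tc/globals/cilium_ipcache | head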

 

Pod -> Pod Communication Across Nodes

# open a terminal on each worker (vagrant ssh k8s-w1 / vagrant ssh k8s-w2) and run:
ngrep -tW byline -d eth1 '' 'tcp port 80'


# [k8s-ctr] send a curl request from curl-pod
kubectl exec -it curl-pod -- curl $WEBPOD1IP
### ctr
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $WEBPOD1IP
Hostname: webpod-777d7b6d96-f92pm
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.2
IP: fe80::708c:1fff:fe63:a63b
RemoteAddr: 172.20.0.45:46690
GET / HTTP/1.1
Host: 172.20.1.2
User-Agent: curl/8.14.1
Accept: */*

### W1
root@k8s-w1:~# ngrep -tW byline -d eth1 '' 'tcp port 80'
interface: eth1 (192.168.10.0/255.255.255.0)
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))

### W2
root@k8s-w2:~# ngrep -tW byline -d eth1 '' 'tcp port 80'
interface: eth1 (192.168.10.0/255.255.255.0)
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
####
T 2025/07/17 00:00:52.012216 172.20.0.45:58406 -> 172.20.1.2:80 [AP] #4
GET / HTTP/1.1.
Host: 172.20.1.2.
User-Agent: curl/8.14.1.
Accept: */*.
.

##
T 2025/07/17 00:00:52.013230 172.20.1.2:80 -> 172.20.0.45:58406 [AP] #6
HTTP/1.1 200 OK.
Date: Wed, 16 Jul 2025 15:00:52 GMT.
Content-Length: 205.
Content-Type: text/plain; charset=utf-8.
.
Hostname: webpod-777d7b6d96-f92pm
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.2
IP: fe80::708c:1fff:fe63:a63b
RemoteAddr: 172.20.0.45:58406
GET / HTTP/1.1.
Host: 172.20.1.2.
User-Agent: curl/8.14.1.
Accept: */*.
.
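
Because routing is native, the capture on eth1 already shows the original pod IPs (172.20.0.45 -> 172.20.1.2) with no encapsulation header. The same flow can also be confirmed right at the destination pod's veth on k8s-w2; a sketch reusing the interface name that the route table above maps to 172.20.1.2:

# on k8s-w2: capture on the host-side veth of webpod-777d7b6d96-f92pm
tcpdump -i lxce5a4f801cf89 -nn tcp port 80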

 

Verifying 'Pod' -> 'Service (ClusterIP)' Communication Across Nodes


The figure above visualizes the difference between traditional network-based load balancing and socket-based load balancing.

Network-based load balancing typically uses iptables, IPVS, or external load balancer appliances (L4/L7 switches and the like) to spread traffic across multiple backend servers (Pods) at the network layer. Every packet has to traverse the full network stack, passing through chain after chain of rules (and, with userspace proxies, bouncing between kernel and user space), and each step adds overhead. On top of that, the more iptables rules there are, the worse the performance and the harder the rule set is to manage.

Socket-based load balancing instead makes the load-balancing decision in the kernel at the socket layer, before a packet is ever built, so the application's connection is steered straight to a backend. This cuts out unnecessary trips through the network stack, minimizes overhead, and enables much faster and more efficient traffic distribution.

Cilium in particular implements socket-based load balancing with eBPF, so even in large environments it keeps network latency low, applies policies in real time, and scales with far better performance.
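
In Cilium this mapping lives in eBPF: the agent programs each ClusterIP frontend with its backend pod IPs, and the socket hook picks a backend at connect() time. A quick sketch to see the programmed mapping, reusing the webpod ClusterIP 10.96.255.154 shown earlier:

# service frontend -> backend mapping as seen by the agent
c0 service list | grep -A2 10.96.255.154

# the same table in the BPF load-balancer maps
c0 bpf lb list | grep -A2 10.96.255.154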

 

Checking Captured Packets with tcpdump

kubectl exec -it curl-pod -- curl webpod
kubectl exec curl-pod -- tcpdump -enni any -q
c0 status --verbose | grep KubeProxyReplacement -A20
kubectl exec curl-pod -- strace -c curl -s webpod
kubectl exec curl-pod -- strace -s 65535 -f -tt curl -s webpod
kubectl exec curl-pod -- strace -e trace=connect     curl -s webpod
kubectl exec curl-pod -- strace -e trace=getsockname curl -s webpod
kubectl exec curl-pod -- strace -e trace=getsockopt curl -s webpod 

 

✅ Execution results

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-777d7b6d96-zlxmh
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.249
IP: fe80::b4d3:feff:feb2:ebb0
RemoteAddr: 172.20.0.45:53690
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

root@k8s-w1:~# ngrep -tW byline -d eth1 '' 'tcp port 80'
interface: eth1 (192.168.10.0/255.255.255.0)
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
####
T 2025/07/17 00:09:20.262766 172.20.0.45:53690 -> 172.20.2.249:80 [AP] #4
GET / HTTP/1.1.
Host: webpod.
User-Agent: curl/8.14.1.
Accept: */*.
.

##
T 2025/07/17 00:09:20.263598 172.20.2.249:80 -> 172.20.0.45:53690 [AP] #6
HTTP/1.1 200 OK.
Date: Wed, 16 Jul 2025 15:09:20 GMT.
Content-Length: 203.
Content-Type: text/plain; charset=utf-8.
.
Hostname: webpod-777d7b6d96-zlxmh
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.249
IP: fe80::b4d3:feff:feb2:ebb0
RemoteAddr: 172.20.0.45:53690
GET / HTTP/1.1.
Host: webpod.
User-Agent: curl/8.14.1.
Accept: */*.
.

####

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- tcpdump -enni any -q
tcpdump: WARNING: any: That device doesn't support promiscuous mode
(Promiscuous mode not supported on the "any" device)
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
15:39:59.544308 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.45810 > 172.20.0.140.53: UDP, length 73
15:39:59.544652 eth0  In  ifindex 15 ce:7b:b0:61:79:96 172.20.0.140.53 > 172.20.0.45.45810: UDP, length 121
15:39:59.544786 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.45810 > 172.20.0.140.53: UDP, length 73
15:39:59.545005 eth0  In  ifindex 15 ce:7b:b0:61:79:96 172.20.0.140.53 > 172.20.0.45.45810: UDP, length 166
15:39:59.545669 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.59926 > 172.20.1.2.80: tcp 0
15:39:59.546394 eth0  In  ifindex 15 ce:7b:b0:61:79:96 172.20.1.2.80 > 172.20.0.45.59926: tcp 0
15:39:59.546485 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.59926 > 172.20.1.2.80: tcp 0
15:39:59.546760 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.59926 > 172.20.1.2.80: tcp 70
15:39:59.547483 eth0  In  ifindex 15 ce:7b:b0:61:79:96 172.20.1.2.80 > 172.20.0.45.59926: tcp 0
15:39:59.548564 eth0  In  ifindex 15 ce:7b:b0:61:79:96 172.20.1.2.80 > 172.20.0.45.59926: tcp 319
15:39:59.548652 eth0  Out ifindex 15 26:ba:34:bb:9f:fc 172.20.0.45.59926 > 172.20.1.2.80: tcp 0
15:39:59.549038 eth0  Out ifindex 

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 status --verbose | grep KubeProxyReplacement -A20
KubeProxyReplacement Details:
  Status:                 True
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe71:19d8 fe80::a00:27ff:fe71:19d8, eth1   192.168.10.100 fe80::a00:27ff:fe36:808d (Direct Routing), flannel.1   10.244.0.0 fe80::98cb:2fff:fe7b:355b
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767) 
  - LoadBalancer:   Enabled 
  - externalIPs:    Enabled 
  - HostPort:       Enabled

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -c curl -s webpod
Hostname: webpod-777d7b6d96-f92pm
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.2
IP: fe80::708c:1fff:fe63:a63b
RemoteAddr: 172.20.0.45:47474
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 23.67    0.000698         232         3         1 connect
 19.94    0.000588          16        35           munmap
 14.58    0.000430          71         6         3 recvfrom
 10.38    0.000306         102         3           sendto
 10.00    0.000295          21        14           rt_sigprocmask
  6.51    0.000192           8        22           close
  4.82    0.000142           5        28           rt_sigaction
  3.80    0.000112          37         3         3 ioctl
  1.29    0.000038           4         9           ppoll
  1.25    0.000037          37         1           writev
  1.19    0.000035           2        14           mprotect
  0.85    0.000025           0        63           mmap
  0.31    0.000009           0        24           fcntl
  0.27    0.000008           0        27           read
  0.24    0.000007           1         5           getsockname
  0.24    0.000007           0        47        30 openat
  0.20    0.000006           1         5           setsockopt
  0.20    0.000006           1         4           socket
  0.07    0.000002           2         1           getsockopt
  0.03    0.000001           0         4           brk
  0.03    0.000001           1         1           getgid
  0.03    0.000001           1         1           getegid
  0.03    0.000001           0        12           fstat
  0.03    0.000001           1         1           getuid
  0.03    0.000001           0         2           geteuid
  0.00    0.000000           0        10           lseek
  0.00    0.000000           0         3           readv
  0.00    0.000000           0         1           eventfd2
  0.00    0.000000           0         1           newfstatat
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         1           getrandom
------ ----------- ----------- --------- --------- ----------------
100.00    0.002949           8       353        37 total

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=connect     curl -s webpod
connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.96.0.10")}, 16) = 0
connect(5, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.255.154")}, 16) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.96.255.154")}, 16) = -1 EINPROGRESS (Operation in progress)
Hostname: webpod-777d7b6d96-f92pm
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.2
IP: fe80::708c:1fff:fe63:a63b
RemoteAddr: 172.20.0.45:50086
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

+++ exited with 0 +++

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- strace -e trace=getsockname curl -s webpod
getsockname(4, {sa_family=AF_INET, sin_port=htons(42111), sin_addr=inet_addr("172.20.0.45")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(55799), sin_addr=inet_addr("172.20.0.45")}, [16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(44772), sin_addr=inet_addr("172.20.0.45")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(44772), sin_addr=inet_addr("172.20.0.45")}, [128 => 16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(44772), sin_addr=inet_addr("172.20.0.45")}, [128 => 16]) = 0
Hostname: webpod-777d7b6d96-f92pm
IP: 127.0.0.1
IP: ::1
IP: 172.20.1.2
IP: fe80::708c:1fff:fe63:a63b
RemoteAddr: 172.20.0.45:44772
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

+++ exited with 0 +++

kubectl exec curl-pod -- strace -e trace=getsockopt curl -s webpod 
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 # socket connection succeeded
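
Putting the pieces together: curl connect()s to the ClusterIP 10.96.255.154:80, yet the in-pod tcpdump earlier only ever showed the backend pod IPs, because the eBPF program attached at connect() rewrites the destination before any packet is built. Cilium also keeps a reverse-NAT table alongside the BPF service map so that service translations can be undone where needed; a hedged peek at it (flag name as exposed by the cilium CLI inside the agent):

# reverse-NAT table maintained alongside the BPF service map
c0 bpf lb list --revnat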

 

 

 


Wrapping Up

This week I went over Cilium's basic theory and how it forwards packets.

 

As it happens, there was a project at work to build a Kubernetes cluster on GPU instances, so I was able to apply what I learned in this study to real work right away. The NVIDIA GPU Operator installation is still left to do, but I'm glad I was able to get a Cilium environment up and running through this.

 

Putting what I studied straight to work let me really feel how the theory connects to practice, and it made me want to keep building up experiences like this.

 

Once things are organized a bit more, I'll share the details on the blog. Thank you for reading this long post :)
