YJWANG


[Kubernetes] Installing with kubeadm

왕영주 2021. 2. 9. 16:09

Having used only Kubespray for installs for a long time, I wanted to try installing version 1.20 with kubeadm for once, so I'm writing this post. For steps that have to run on every node, I used Ansible.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

IPTables Requirements


[root@kubeadm_master_0 ~]# cat inventory 
master01 ansible_host="10.97.90.10"
master02 ansible_host="10.97.90.11"
worker01 ansible_host="10.97.90.20"
worker02 ansible_host="10.97.90.21"
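If you want to script around this inventory (for example, to loop over the raw IPs outside of Ansible), the host/IP pairs are easy to extract. A small illustration, using a local copy of the inventory shown above:

```shell
# Recreate the inventory used above (illustrative local copy).
cat > /tmp/inventory.demo <<'EOF'
master01 ansible_host="10.97.90.10"
master02 ansible_host="10.97.90.11"
worker01 ansible_host="10.97.90.20"
worker02 ansible_host="10.97.90.21"
EOF

# Strip the ansible_host="..." wrapper to get plain "name ip" lines.
awk -F'"' '{split($1, a, " "); print a[1], $2}' /tmp/inventory.demo
```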

[root@kubeadm_master_0 ~]# ansible -m shell -a 'modprobe br_netfilter' -k -i inventory all
SSH password: 
worker01 | CHANGED | rc=0 >>

master01 | CHANGED | rc=0 >>

worker02 | CHANGED | rc=0 >>

master02 | CHANGED | rc=0 >>

[root@kubeadm_master_0 ~]# ansible -m shell -a 'lsmod | grep br_netfilter' -k -i inventory all
SSH password: 
worker01 | CHANGED | rc=0 >>
br_netfilter           24576  0
bridge                192512  1 br_netfilter
master01 | CHANGED | rc=0 >>
br_netfilter           24576  0
bridge                192512  1 br_netfilter
worker02 | CHANGED | rc=0 >>
br_netfilter           24576  0
bridge                192512  1 br_netfilter
master02 | CHANGED | rc=0 >>
br_netfilter           24576  0
bridge                192512  1 br_netfilter

[root@kubeadm_master_0 ~]# ansible -m shell -k -i inventory all -a '\
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
'
SSH password:
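These two sysctls make bridged pod traffic visible to iptables, which kube-proxy and most CNI plugins rely on. A quick way to verify the settings took effect on a node is sketched below; it falls back to a warning when the keys are absent, which happens when the br_netfilter module is not loaded on that machine:

```shell
# Verify the bridge-netfilter sysctls; warn if a key is absent
# (i.e. the br_netfilter module is not loaded on this machine).
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  value=$(sysctl -n "$key" 2>/dev/null) \
    && echo "$key = $value" \
    || echo "$key: missing (load br_netfilter first)"
done
```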

Install Docker


[root@kubeadm_master_0 ~]# ansible -m shell -a 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo' -k -i inventory all
SSH password:
worker02 | CHANGED | rc=0 >>
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
worker01 | CHANGED | rc=0 >>
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
master02 | CHANGED | rc=0 >>
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
master01 | CHANGED | rc=0 >>
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo

[root@kubeadm_master_0 ~]# ansible -m yum -a 'name=docker-ce state=installed' -k -i inventory all
SSH password:
worker02 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": true,
    "msg": "",
    "rc": 0,
    "results": [
        "Installed: docker-ce-3:20.10.3-3.el8.x86_64",
        "Installed: docker-ce-cli-1:20.10.3-3.el8.x86_64",
        "Installed: libslirp-4.3.1-1.module_el8.3.0+475+c50ce30b.x86_64",
        "Installed: fuse-overlayfs-1.1.2-3.module_el8.3.0+507+aa0970ae.x86_64",
        "Installed: docker-ce-rootless-extras-20.10.3-3.el8.x86_64",
        "Installed: fuse3-libs-3.2.1-12.el8.x86_64",
        "Installed: container-selinux-2:2.144.0-1.module_el8.3.0+475+c50ce30b.noarch",
        "Installed: slirp4netns-1.1.4-2.module_el8.3.0+475+c50ce30b.x86_64",
        "Installed: libnftnl-1.1.5-4.el8.x86_64",
        "Installed: libcgroup-0.41-19.el8.x86_64",
        "Installed: containerd.io-1.4.3-3.1.el8.x86_64",
        "Installed: nftables-1:0.9.3-16.el8.x86_64"
    ]
}

...

[root@kubeadm_master_0 ~]# ansible -m shell -k -i inventory all -a 'systemctl enable docker --now'
SSH password:
worker02 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
master02 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
master01 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
worker01 | CHANGED | rc=0 >>
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
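One caveat the kubeadm documentation calls out at this point: kubelet and Docker should agree on the cgroup driver, and the systemd driver is recommended on systemd-based distros such as CentOS 8 (Docker defaults to cgroupfs). A sketch of the usual fix, written to a temp path here for illustration — on a real node the file is /etc/docker/daemon.json, followed by a Docker restart:

```shell
# Illustrative: on a real node, write this to /etc/docker/daemon.json
# and then run: systemctl restart docker
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep -q 'native.cgroupdriver=systemd' /tmp/daemon.json.demo && echo "cgroup driver set to systemd"
```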

Installing kubeadm, kubelet and kubectl


  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

[root@kubeadm_master_0 ~]# ansible -m shell -k -i inventory all -a '\
> cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kubelet kubeadm kubectl
> EOF
>
> setenforce 0
> sed -i "s/^SELINUX=enforcing$/SELINUX=permissive/" /etc/selinux/config
> yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
> systemctl enable --now kubelet
> '
SSH password:

Note that kubelet does not become active yet even after it is enabled; it stays in an "activating" state for now (it keeps restarting until kubeadm init generates its configuration), so don't be alarmed.

If you only want to install kubectl by itself, do the following. It only needs to be installed on the node(s) where you will actually run kubectl.


[root@kubeadm_master_0 ~]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

% Total % Received % Xferd Average Speed Time Time Time Current  
Dload Upload Total Spent Left Speed  
100 161 100 161 0 0 465 0 --:--:-- --:--:-- --:--:-- 466  
100 38.3M 100 38.3M 0 0 8608k 0 0:00:04 0:00:04 --:--:-- 9727k

[root@kubeadm_master_0 ~]# install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
[root@kubeadm_master_0 ~]# ls -l /usr/local/bin/kubectl
-rwxr-xr-x. 1 root root 40230912 Feb  9 06:03 /usr/local/bin/kubectl
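The kubectl release also publishes a .sha256 checksum file alongside the binary, and it's good practice to validate the download before installing it. The validation pattern looks like the sketch below, demonstrated on a locally generated dummy file since the actual download is environment-specific:

```shell
# Demonstrate the checksum-validation pattern on a dummy file.
# For a real install, the checksum comes from the matching
# https://dl.k8s.io/release/<version>/bin/linux/amd64/kubectl.sha256 URL.
cd "$(mktemp -d)"
printf 'fake kubectl binary' > kubectl
sha256sum kubectl | awk '{print $1}' > kubectl.sha256

# The actual validation step: compare the binary against the published hash.
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```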

Init Cluster


Initialize the cluster with the pod network CIDR specified.
Note: if your hostname contains an underscore (_), as mine did, kubeadm init fails with an error; I renamed the hosts to use - before proceeding.


[root@kubeadm_master_0 ~]# kubeadm init --pod-network-cidr=10.234.0.0/16
[init] Using Kubernetes version: v1.20.2
(output omitted)

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube  
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.  
Run "kubectl apply -f \[podnetwork\].yaml" with one of the options listed at:  
[https://kubernetes.io/docs/concepts/cluster-administration/addons/](https://kubernetes.io/docs/concepts/cluster-administration/addons/)

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.97.90.10:6443 --token lvol38.dmx2n3pdclbe1pus \
    --discovery-token-ca-cert-hash sha256:deb9e1700b16618829b07e007c0d8e265b8500d20c059218731dc65b7eeb96a7
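The sha256 value in the join command is a hash of the cluster CA's public key, so it can be recomputed at any time from /etc/kubernetes/pki/ca.crt on the control plane. The pipeline below is the one the kubeadm docs describe; here it runs against a freshly generated throwaway CA so it is runnable anywhere:

```shell
cd "$(mktemp -d)"
# Throwaway self-signed CA standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout ca.key -out ca.crt 2>/dev/null

# Hash of the CA public key, i.e. the --discovery-token-ca-cert-hash value.
openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```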

To use kubectl, copy admin.conf as the instructions above describe:


[root@kubeadm_master_0 ~]# mkdir -p $HOME/.kube
[root@kubeadm_master_0 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubeadm_master_0 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubeadm_master_0 ~]# kubectl config view
apiVersion: v1  
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.97.90.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Install Pod network add-on


At this point there is no pod network inside the cluster yet, so pods cannot communicate. We will set one up using a CNI project; I will use Calico.

First, download the Calico project's manifests.

[root@kubeadm_master_0 ~]# curl -o 01.tigera-operator.yaml https://docs.projectcalico.org/manifests/tigera-operator.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  235k  100  235k    0     0   122k      0  0:00:01  0:00:01 --:--:--  122k

[root@kubeadm_master_0 ~]# curl -o 02.custom-resources.yaml https://docs.projectcalico.org/manifests/custom-resources.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   545  100   545    0     0    696      0 --:--:-- --:--:-- --:--:--   696

Then edit the YAML to use the pod CIDR configured above.

# kubeadm init --pod-network-cidr=10.234.0.0/16

[root@kubeadm_master_0 ~]# cat 02.custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.17/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.234.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
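If you'd rather script the edit than do it by hand, a sed substitution works; the downloaded manifest ships with a default pool CIDR (192.168.0.0/16 at the time of writing — verify against your copy). A sketch against a minimal stand-in file:

```shell
cd "$(mktemp -d)"
# Minimal stand-in for the ipPools fragment of 02.custom-resources.yaml,
# with the manifest's assumed default CIDR.
cat > custom-resources.yaml <<'EOF'
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
EOF

# Swap the default pool CIDR for the one given to kubeadm init.
sed -i 's|cidr: 192.168.0.0/16|cidr: 10.234.0.0/16|' custom-resources.yaml
grep cidr custom-resources.yaml
```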

Deploy

[root@kubeadm_master_0 ~]# kubectl apply -f 01.tigera-operator.yaml 
[root@kubeadm_master_0 ~]# kubectl apply -f 02.custom-resources.yaml

Wait until all the pods come up as shown below; this takes about two to three minutes.

[root@kubeadm_master_0 ~]# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-56689cf96-25dcw   1/1     Running   0          117s
calico-node-zkkgg                         1/1     Running   0          117s
calico-typha-868f57cd84-knf9n             1/1     Running   0          118s

Then check that the one master node has joined.

[root@kubeadm_master_0 ~]# kubectl get nodes -o wide
NAME               STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kubeadm-master-0   Ready    control-plane,master   15m   v1.20.2   10.97.90.10   <none>        CentOS Linux 8 (Core)   4.18.0-193.6.3.el8_2.x86_64   docker://20.10.3

Adding Worker Nodes


Now add the worker nodes using the join command printed by kubeadm init above.

[root@kubeadm-worker-0 ~]# kubeadm join 10.97.90.10:6443 --token lvol38.dmx2n3pdclbe1pus \
>     --discovery-token-ca-cert-hash sha256:deb9e1700b16618829b07e007c0d8e265b8500d20c059218731dc65b7eeb96a7

After a minute or two, the node shows up:

[root@kubeadm-master-0 ~]# kubectl get nodes 
NAME               STATUS   ROLES                  AGE   VERSION
kubeadm-master-0   Ready    control-plane,master   21m   v1.20.2
kubeadm-worker-0   Ready    <none>                 81s   v1.20.2