
Deploying Kubernetes on OpenStack (with a bastion host)

왕영주 2021. 7. 6. 13:50

This post covers how to deploy Kubernetes on OpenStack.

The version information is listed below; the deployment is driven by Terraform and Kubespray.

 

This guide assumes basic familiarity with Terraform, Ansible, and Kubespray.

 

Version information

OpenStack - Wallaby
Kubernetes - 1.21.x
Terraform - 0.14.11 (the CLI options differ from v0.15 onward)
Kubespray - master (as of July 2021)

 

Known limitations

- If you assign a floating IP to the master and deploy without a bastion host, etcd member registration fails, because the floating IP is not visible from inside the instance.
- Joining the etcd cluster requires etcd_member_name in host_vars, but the dynamic inventory (hosts) does not provide it; the workaround is shown in the Kubespray (Ansible) configuration section below.

 

Prerequisite


https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack

 

Prepare the image

 

Prepare the OS image that the Kubernetes nodes will use.

We will use Ubuntu 20.04.

# wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
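
Optionally, you can inspect the downloaded image before uploading it (qemu-img typically ships in the qemu-utils or qemu-img package, depending on the distribution):

# qemu-img info focal-server-cloudimg-amd64.img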

 

Create the image

# openstack image create --file focal-server-cloudimg-amd64.img --disk-format qcow2 ubuntu-2004

 

Verify

# openstack image list --name ubuntu-2004
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| b75fa6ea-1d80-4da7-b8c3-b10860044b09 | ubuntu-2004 | active |
+--------------------------------------+-------------+--------+

 

Preparing Kubespray

 

Download

# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray/

 

Set the cluster name

# CLUSTER=test-k8s

 

Prepare the inventory files

# cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
# cd inventory/$CLUSTER
# ln -s ../../contrib
# ln -s ../../contrib/terraform/terraform.py hosts
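
If the control host does not yet have Ansible and the other Python dependencies Kubespray expects, they can be installed from the requirements file at the repository root (path shown relative to inventory/$CLUSTER):

# pip3 install -r ../../requirements.txt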

 

Verify

# pwd
/root/kubespray/inventory/test-k8s

# ls -l
total 8
-rw-r--r-- 1 root root 1288 Jul  6 10:35 cluster.tfvars
lrwxrwxrwx 1 root root   13 Jul  6 10:43 contrib -> ../../contrib
drwxr-xr-x 4 root root 4096 Jul  6 10:35 group_vars
lrwxrwxrwx 1 root root   36 Jul  6 10:43 hosts -> ../../contrib/terraform/terraform.py

 

Configuring the Terraform vars

 

The cluster.tfvars file specifies the resources to create, such as the instances and networks.

This configuration assumes a bastion host is used.

# cat cluster.tfvars 
# your Kubernetes cluster name here
cluster_name = "test-k8s"

# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "ubuntu-2004"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "ubuntu"

# 0|1 bastion nodes
number_of_bastions = 1

flavor_bastion = "2" # enter the flavor ID

# standalone etcds
number_of_etcd = 0 # set this only if you run dedicated etcd servers

# masters
number_of_k8s_masters = 0 

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 1

number_of_k8s_masters_no_floating_ip_no_etcd = 0

flavor_k8s_master = "3"

master_volume_type = "__DEFAULT__"

master_root_volume_size_in_gb = "10"

# nodes
number_of_k8s_nodes = 0

number_of_k8s_nodes_no_floating_ip = 2

flavor_k8s_node = "3"

node_root_volume_size_in_gb = "10"

# GlusterFS
# either 0 or more than one
#number_of_gfs_nodes_no_floating_ip = 0
#gfs_volume_size_in_gb = 150
# Container Linux does not support GlusterFS
#image_gfs = "<image name>"
# May be different from other nodes
#ssh_user_gfs = "ubuntu"
#flavor_gfs_node = "<UUID>"

# networking
network_name = "k8s-net" # name of the k8s network to create

external_net = "f51bc120-fe3b-4db2-afe0-69dbdd1a1652" # network that floating IPs are allocated from

subnet_cidr = "10.123.123.0/24" # CIDR of the k8s network to create

floatingip_pool = "public1" # floating IP pool

bastion_allowed_remote_ips = ["0.0.0.0/0"]
k8s_allowed_remote_ips = ["0.0.0.0/0"]
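
The numeric flavor IDs ("2" and "3") are specific to this environment; as the server list below shows, they map to m1.small and m1.medium here. You can look up the IDs in your own cloud with:

# openstack flavor list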

 

Initialize the Terraform provider:

# terraform init ../../contrib/terraform/openstack
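
Optionally, review the execution plan before applying:

# terraform plan -var-file=cluster.tfvars ../../contrib/terraform/openstack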

 

Then deploy the instances with Terraform:

# terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack

 

When the deployment completes, the outputs are printed as shown below.

Apply complete! Resources: 21 added, 0 changed, 0 destroyed.

Outputs:

bastion_fips = [
  "10.99.99.107",
]
floating_network_id = "f51bc120-fe3b-4db2-afe0-69dbdd1a1652"
k8s_master_fips = tolist([])
k8s_node_fips = []
private_subnet_id = "26095bbd-86e7-442d-97c0-c80b731f0430"
router_id = "1c96b542-0c09-45d0-8ca2-3fa38c187e63"

 

During deployment, the file below is generated automatically so that Ansible commands can run through the bastion host.

In other words, no additional work is needed to set up the ProxyCommand.

# cat group_vars/no_floating.yml 
ansible_ssh_common_args: "-o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ubuntu@10.99.99.107 {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}'"
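
Before moving on, you can sanity-check direct SSH access to the bastion (the address comes from the bastion_fips output above):

# ssh ubuntu@10.99.99.107 hostname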

 

Verify the created servers

# openstack server list --fit-width
+--------------------------------------+--------------------------+--------+--------------------------------------+-------------+-----------+
| ID                                   | Name                     | Status | Networks                             | Image       | Flavor    |
+--------------------------------------+--------------------------+--------+--------------------------------------+-------------+-----------+
| 8a77d1a5-94a2-4493-b6c5-fa3831ca0d18 | test-k8s-bastion-1       | ACTIVE | k8s-net=10.123.123.241, 10.99.99.107 | ubuntu-2004 | m1.small  |
| 71594ac5-6d6a-414a-b6a6-7d704433038c | test-k8s-k8s-node-nf-1   | ACTIVE | k8s-net=10.123.123.28                | ubuntu-2004 | m1.medium |
| a050ca26-c0c0-4e52-8041-1d3f923f840b | test-k8s-k8s-master-nf-1 | ACTIVE | k8s-net=10.123.123.245               | ubuntu-2004 | m1.medium |
| c024ab39-6330-4926-9111-6ad13761f9f8 | test-k8s-k8s-node-nf-2   | ACTIVE | k8s-net=10.123.123.152               | ubuntu-2004 | m1.medium |
+--------------------------------------+--------------------------+--------+--------------------------------------+-------------+-----------+

 

Verify Ansible connectivity

 

Once the bastion server has finished booting, try running an Ansible command.

If you are prompted to accept host fingerprints, accept them.

# ansible -m ping -i hosts all
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
test-k8s-bastion-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
test-k8s-k8s-master-nf-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
test-k8s-k8s-node-nf-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
test-k8s-k8s-node-nf-2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

 

Kubespray (Ansible) configuration

 

Add host_vars for etcd member registration.

If the master node is not registered as an etcd member as shown below, etcd member registration fails. See the official Kubespray GitHub page for details.

# mkdir host_vars
# echo "etcd_member_name: etcd1" > host_vars/test-k8s-k8s-master-nf-1

 

Configure the Kubespray group_vars. Compared with the stock defaults, the notable settings here are cloud_provider: external and external_cloud_provider: openstack in all.yml, which enable the OpenStack external cloud controller manager (it shows up later as the openstack-cloud-controller-manager pod).

# grep -Ev "^$|^#" group_vars/all/all.yml 
---
etcd_data_dir: /var/lib/etcd
etcd_kubeadm_enabled: false
bin_dir: /usr/local/bin
loadbalancer_apiserver_port: 6443
loadbalancer_apiserver_healthcheck_port: 8081
cloud_provider: external
external_cloud_provider: openstack
no_proxy_exclude_workers: false


# grep -Ev "^$|^#" group_vars/k8s_cluster/k8s-cluster.yml 
---
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
kube_cert_dir: "{{ kube_config_dir }}/ssl"
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
kube_version: v1.21.1
local_release_dir: "/tmp/releases"
retry_stagger: 5
kube_cert_group: kube-cert
kube_log_level: 2
credentials_dir: "{{ inventory_dir }}/credentials"
kube_network_plugin: calico
kube_network_plugin_multus: false
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_network_node_prefix: 24
enable_dual_stack_networks: false
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
kube_network_node_prefix_ipv6: 120
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443  # (https)
kube_apiserver_insecure_port: 0  # (disabled)
kube_proxy_mode: ipvs
kube_proxy_strict_arp: false
kube_proxy_nodeport_addresses: >-
  {%- if kube_proxy_nodeport_addresses_cidr is defined -%}
  [{{ kube_proxy_nodeport_addresses_cidr }}]
  {%- else -%}
  []
  {%- endif -%}
kube_encrypt_secret_data: false
cluster_name: cluster.local
ndots: 2
dns_mode: coredns
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
nodelocaldns_bind_metrics_host_ip: false
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
enable_coredns_k8s_endpoint_pod_names: false
resolvconf_mode: docker_dns
deploy_netchecker: false
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
container_manager: docker
kata_containers_enabled: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
k8s_image_pull_policy: IfNotPresent
kubernetes_audit: false
dynamic_kubelet_configuration: false
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
dynamic_kubelet_configuration_dir: "{{ kubelet_config_dir | default(default_kubelet_config_dir) }}"
podsecuritypolicy_enabled: false
volume_cross_zone_attachment: false
persistent_volumes_enabled: false
event_ttl_duration: "1h0m0s"
auto_renew_certificates: false

 

Using Octavia as the load balancer will be covered in the next post.

 

Deployment

Run the k8s cluster deployment:

# ansible-playbook -b -i hosts ../../cluster.yml

 

Deployment complete:

PLAY RECAP *********************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
test-k8s-bastion-1         : ok=14   changed=1    unreachable=0    failed=0    skipped=19   rescued=0    ignored=0   
test-k8s-k8s-master-nf-1   : ok=577  changed=125  unreachable=0    failed=0    skipped=1140 rescued=0    ignored=2   
test-k8s-k8s-node-nf-1     : ok=370  changed=77   unreachable=0    failed=0    skipped=631  rescued=0    ignored=1   
test-k8s-k8s-node-nf-2     : ok=370  changed=77   unreachable=0    failed=0    skipped=630  rescued=0    ignored=1

 

Verify the deployment

Use ProxyCommand to check the deployment status on the master through the bastion host:

# ssh -o ProxyCommand="ssh -W %h:%p ubuntu@10.99.99.107" ubuntu@10.123.123.245 "sudo kubectl get pod -A"
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5b4d7b4594-vbkfb           1/1     Running   0          5m55s
kube-system   calico-node-j7xgl                                  1/1     Running   0          7m30s
kube-system   calico-node-lzlfk                                  1/1     Running   0          7m30s
kube-system   calico-node-s5xdc                                  0/1     Running   4          7m30s
kube-system   coredns-8474476ff8-c59sk                           1/1     Running   0          5m
kube-system   coredns-8474476ff8-vb4jt                           1/1     Running   0          4m32s
kube-system   dns-autoscaler-7df78bfcfb-z8w5w                    1/1     Running   0          4m43s
kube-system   kube-apiserver-test-k8s-k8s-master-nf-1            1/1     Running   0          10m
kube-system   kube-controller-manager-test-k8s-k8s-master-nf-1   1/1     Running   0          10m
kube-system   kube-proxy-7rls9                                   1/1     Running   0          8m25s
kube-system   kube-proxy-n54mt                                   1/1     Running   0          8m24s
kube-system   kube-proxy-pxkql                                   1/1     Running   0          8m25s
kube-system   kube-scheduler-test-k8s-k8s-master-nf-1            1/1     Running   0          10m
kube-system   nginx-proxy-test-k8s-k8s-node-nf-1                 1/1     Running   0          8m33s
kube-system   nginx-proxy-test-k8s-k8s-node-nf-2                 1/1     Running   0          8m40s
kube-system   nodelocaldns-56jpt                                 1/1     Running   0          4m37s
kube-system   nodelocaldns-bxqgg                                 1/1     Running   0          4m37s
kube-system   nodelocaldns-t8pwm                                 1/1     Running   0          4m38s
kube-system   openstack-cloud-controller-manager-5psbj           1/1     Running   0          6m13s
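
To manage the cluster from outside, you can pull the admin kubeconfig off the master the same way. Kubespray places it at the standard kubeadm path (/etc/kubernetes/admin.conf); note that the server address inside it points at the cluster's internal network, so using it remotely requires a tunnel through the bastion or editing the server field:

# ssh -o ProxyCommand="ssh -W %h:%p ubuntu@10.99.99.107" ubuntu@10.123.123.245 "sudo cat /etc/kubernetes/admin.conf" > kubeconfig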

 

 
