
[Ceph-pacific] cephadm on ubuntu 20.04 [podman]

Youngju Wang · 2021. 12. 6. 14:34

See the previous post: https://yjwang.tistory.com/119

This time, unlike the previous post, we will build the Ceph cluster on top of podman.

In addition, we will drive the orchestration module with a YAML spec file.

 

Download the ubuntu-cephadm-ansible project, which sets up podman and the other required packages and environment.

# git clone https://github.com/YoungjuWang/ubuntu-cephadm-ansible

 

Edit the inventory.

# cd ubuntu-cephadm-ansible/
# cat ceph.inventory 
virt-go-u20-50 ansible_connection=local
virt-go-u20-51 ansible_host="10.62.62.51"
virt-go-u20-52 ansible_host="10.62.62.52"

[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

 

Generate an SSH key and distribute it to each server.

# ssh-keygen
# ssh-copy-id 10.62.62.51
# ssh-copy-id 10.62.62.52
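
Before running Ansible, you can confirm that key-based login works; each command below should print the remote hostname without asking for a password (the IPs are the ones from the inventory above):

# ssh 10.62.62.51 hostname
# ssh 10.62.62.52 hostname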

 

Check the Ansible connection to each host.

# ansible -m ping -i ceph.inventory all
virt-go-u20-50 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
virt-go-u20-51 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
virt-go-u20-52 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

 

Run the playbook to prepare the environment.

# ansible-playbook -i ceph.inventory preflight.yml
...
PLAY RECAP **********************************************************************************************************
virt-go-u20-50             : ok=13   changed=10   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
virt-go-u20-51             : ok=13   changed=9    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
virt-go-u20-52             : ok=13   changed=10   unreachable=0    failed=0    skipped=5    rescued=0    ignored=0

For reference, the playbook performs the following steps in order (a quick verification example follows the list):

  • Configure /etc/hosts
  • Disable Ubuntu auto-upgrades
  • Update the ca-certificates package
  • Add the container-engine repository key
  • Add the container-engine repository
  • Install the container engine
  • Add the Ceph repository key
  • Add the Ceph repository
  • Install cephadm and ceph-common
  • Configure NTP
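
A quick way to confirm the result is to check that the container engine and the Ceph client packages are now in place on each node; this is only a sanity check, and the exact version strings will differ per environment:

# podman --version
# cephadm version
# ceph --version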

 

Once the preparation is done, create a YAML file describing the hosts and the daemon placement.

# initial-cluster.yml
---
service_type: host
addr: 10.62.62.50
hostname: virt-go-u20-50
labels:
  - _admin
  - mon
  - mgr
  - osd
---
service_type: host
addr: 10.62.62.51
hostname: virt-go-u20-51
labels:
  - mon
  - mgr
  - osd
---
service_type: host
addr: 10.62.62.52
hostname: virt-go-u20-52
labels:
  - mon
  - osd
---
service_type: mon
placement:
  label: "mon"
---
service_type: mgr
placement:
  label: "mgr"
---
service_type: osd
service_id: default_drive_group
placement:
  label: "osd"
data_devices:
  all: true

 

Bootstrap the Ceph cluster using this file.

# cephadm bootstrap --mon-ip 10.62.62.50 --apply-spec initial-cluster.yml --initial-dashboard-user admin --initial-dashboard-password cephadmin
...
Applying initial-cluster.yml to cluster
Adding ssh key to virt-go-u20-51
Adding ssh key to virt-go-u20-52
Added host 'virt-go-u20-50' with addr '10.62.62.50'
Added host 'virt-go-u20-51' with addr '10.62.62.51'
Added host 'virt-go-u20-52' with addr '10.62.62.52'
Scheduled mon update...
Scheduled mgr update...
Scheduled osd.default_drive_group update...

You can access the Ceph CLI with:

	sudo /usr/sbin/cephadm shell --fsid b14a814e-5654-11ec-be8d-0d74dd311fe3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.
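
The bootstrap output suggests entering a containerized shell, but because the preflight playbook already installed ceph-common on the host, the same commands can also be run directly from the host, which is what the rest of this post does. Either form works:

# cephadm shell -- ceph -s
# ceph -s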

The cluster should now have been created as specified in initial-cluster.yml. Note that daemons such as the OSDs and monitors are deployed over SSH after the initial cluster setup, so they need some extra time to appear; please wait.
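
If you want to follow the deployment while waiting, the orchestrator can be watched, for example:

# ceph orch ps
# ceph -W cephadm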

 

Check that the hosts were registered correctly and that the labels were applied as intended.

# ceph orch host ls
HOST            ADDR         LABELS              STATUS  
virt-go-u20-50  10.62.62.50  _admin mon mgr osd          
virt-go-u20-51  10.62.62.51  mon mgr osd                 
virt-go-u20-52  10.62.62.52  mon osd
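
If a label is ever missing or needs to change later, it can be adjusted with the orchestrator host commands; the example below is hypothetical and not needed for this cluster:

# ceph orch host label add virt-go-u20-52 mgr
# ceph orch host label rm virt-go-u20-52 mgr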

 

Check that the daemons are running according to their placement settings and in the expected numbers.

# ceph orch ls
NAME                     PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager             ?:9093,9094      1/1  10s ago    6m   count:1    
crash                                     3/3  13s ago    6m   *          
grafana                  ?:3000           1/1  10s ago    6m   count:1    
mgr                                       2/2  13s ago    5m   label:mgr  
mon                                       3/3  13s ago    5m   label:mon  
node-exporter            ?:9100           3/3  13s ago    6m   *          
osd.default_drive_group                  9/12  13s ago    5m   label:osd  
prometheus               ?:9095           1/1  10s ago    6m   count:1
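
The specs the orchestrator is currently applying can also be exported back as YAML, which is convenient for comparing against initial-cluster.yml:

# ceph orch ls --export
# ceph orch ls osd --export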

 

Check that the OSDs are running correctly.

# ceph orch ps --daemon_type=osd
NAME   HOST            PORTS  STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
osd.0  virt-go-u20-52         running (105s)    57s ago  105s    28.1M    4096M  16.2.6   02a72919e474  4db6c721bf09  
osd.1  virt-go-u20-50         running (104s)    54s ago  104s    27.9M    4096M  16.2.6   02a72919e474  d6978b339555  
osd.2  virt-go-u20-51         running (104s)    57s ago  103s    28.4M    4096M  16.2.6   02a72919e474  7316c82b7495  
osd.3  virt-go-u20-50         running (94s)     54s ago   94s    27.3M    4096M  16.2.6   02a72919e474  8bede12640f4  
osd.4  virt-go-u20-52         running (97s)     57s ago   97s    29.4M    4096M  16.2.6   02a72919e474  7e0cfc95e388  
osd.5  virt-go-u20-51         running (95s)     57s ago   94s    26.4M    4096M  16.2.6   02a72919e474  f7dc104f56a2  
osd.6  virt-go-u20-50         running (82s)     54s ago   82s    27.1M    4096M  16.2.6   02a72919e474  434f92706adb  
osd.7  virt-go-u20-52         running (86s)     57s ago   86s    27.9M    4096M  16.2.6   02a72919e474  a5a4156e0845  
osd.8  virt-go-u20-51         running (83s)     57s ago   83s    30.3M    4096M  16.2.6   02a72919e474  d551f2644c09  

# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA      OMAP  META     AVAIL   %USE  VAR   PGS  STATUS  TYPE NAME              
-1         0.08817         -  90 GiB   57 MiB   3.0 MiB   0 B   54 MiB  90 GiB  0.06  1.00    -          root default           
-7         0.02939         -  30 GiB   19 MiB  1012 KiB   0 B   18 MiB  30 GiB  0.06  1.00    -              host virt-go-u20-50
 1    hdd  0.00980   1.00000  10 GiB  5.8 MiB   336 KiB   0 B  5.4 MiB  10 GiB  0.06  0.91   74      up          osd.1          
 3    hdd  0.00980   1.00000  10 GiB  7.5 MiB   336 KiB   0 B  7.2 MiB  10 GiB  0.07  1.18  103      up          osd.3          
 6    hdd  0.00980   1.00000  10 GiB  5.8 MiB   340 KiB   0 B  5.4 MiB  10 GiB  0.06  0.91   79      up          osd.6          
-5         0.02939         -  30 GiB   19 MiB  1016 KiB   0 B   18 MiB  30 GiB  0.06  1.00    -              host virt-go-u20-51
 2    hdd  0.00980   1.00000  10 GiB  5.8 MiB   340 KiB   0 B  5.5 MiB  10 GiB  0.06  0.92   74      up          osd.2          
 5    hdd  0.00980   1.00000  10 GiB  5.8 MiB   336 KiB   0 B  5.5 MiB  10 GiB  0.06  0.92   93      up          osd.5          
 8    hdd  0.00980   1.00000  10 GiB  7.5 MiB   340 KiB   0 B  7.1 MiB  10 GiB  0.07  1.17   89      up          osd.8          
-3         0.02939         -  30 GiB   19 MiB  1016 KiB   0 B   18 MiB  30 GiB  0.06  1.00    -              host virt-go-u20-52
 0    hdd  0.00980   1.00000  10 GiB  5.9 MiB   336 KiB   0 B  5.6 MiB  10 GiB  0.06  0.93   91      up          osd.0          
 4    hdd  0.00980   1.00000  10 GiB  5.8 MiB   340 KiB   0 B  5.5 MiB  10 GiB  0.06  0.92   90      up          osd.4          
 7    hdd  0.00980   1.00000  10 GiB  7.4 MiB   340 KiB   0 B  7.1 MiB  10 GiB  0.07  1.16   75      up          osd.7          
                       TOTAL  90 GiB   57 MiB   3.0 MiB   0 B   54 MiB  90 GiB  0.06                                            
MIN/MAX VAR: 0.91/1.18  STDDEV: 0.01

# ceph orch device ls
Hostname        Path      Type  Serial  Size   Health   Ident  Fault  Available  
virt-go-u20-50  /dev/vda  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-50  /dev/vdb  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-50  /dev/vdc  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-51  /dev/vda  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-51  /dev/vdb  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-51  /dev/vdc  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-52  /dev/vda  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-52  /dev/vdb  hdd           10.7G  Unknown  N/A    N/A    No         
virt-go-u20-52  /dev/vdc  hdd           10.7G  Unknown  N/A    N/A    No
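
Available shows No here because every device has already been consumed by the default_drive_group OSD service. If you later need to return a device to the available pool (for example after removing its OSD), it can be wiped through the orchestrator; the command below is only a sketch and it destroys all data on that device:

# ceph orch device zap virt-go-u20-50 /dev/vdb --force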

 

Finally, check the overall state of the Ceph cluster.

# ceph -s
  cluster:
    id:     b14a814e-5654-11ec-be8d-0d74dd311fe3
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum virt-go-u20-50,virt-go-u20-51,virt-go-u20-52 (age 3m)
    mgr: virt-go-u20-50.kkjhbk(active, since 5m), standbys: virt-go-u20-51.pxytsi
    osd: 9 osds: 9 up (since 2m), 9 in (since 2m)
 
  data:
    pools:   1 pools, 256 pgs
    objects: 0 objects, 0 B
    usage:   58 MiB used, 90 GiB / 90 GiB avail
    pgs:     256 active+clean
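
Since bootstrap was run with an initial dashboard user and password, you can also look up the dashboard URL exposed by the active mgr (the address and port will differ per environment):

# ceph mgr services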