
[Ceph-pacific] cephadm on ubuntu 20.04 [docker]

Youngju Wang · 2021. 3. 17. 15:53


Ceph deployment considerations


I had previously always deployed Ceph with ceph-ansible,

but as the official documentation below states, cephadm is now the recommended tool and is where new features land.

Note, however, that cephadm only supports Octopus and newer releases and must run in a container environment.

Recommended Methods

 

I will be installing the Pacific release.

Ceph Active Releases

 

Three servers will be used, as shown below.

# virt-go list
---------------------------------------------
 RESOURCE     STATE                          
---------------------------------------------
 Data-Dir     /data/virt-go                  
 virt-go-net  10.62.62.xxx                   
 Images       c76  c79  c82  c83  u20  u2104 
                                             
---------------------------------------------


------------------------------------------------------------
 NUMBER  NAME             IP            SIZE   DESCRIPTION  
------------------------------------------------------------             
 20      virt-go-u20-20   10.62.62.20   20 GB  Ceph 20      
 30      virt-go-u20-30   10.62.62.30   20 GB  Ceph 30      
 40      virt-go-u20-40   10.62.62.40   20 GB  Ceph 40

 

Preparation


Install prerequisite

Install ansible on the server that will become the first node.

root@virt-go-u20-20:~# apt update; apt install -y ansible

 

Clone the ubuntu-cephadm-ansible project.

root@virt-go-u20-20:~# git clone https://github.com/YoungjuWang/ubuntu-cephadm-ansible.git

 

Change into the ubuntu-cephadm-ansible directory and edit vars.yml.

root@virt-go-u20-20:~# cd ubuntu-cephadm-ansible/
# cat vars.yml 
container_engine: "docker"
ceph_origin: 'community'
ceph_release: 'pacific'
timezone: 'Asia/Seoul'
ntp_server: '0.kr.pool.ntp.org'

 

Edit the ansible inventory.

root@virt-go-u20-20:~# cat ceph.inventory 
virt-go-u20-20 ansible_connection=local
virt-go-u20-30 ansible_host="10.62.62.30"
virt-go-u20-40 ansible_host="10.62.62.40"

[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
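
If you want a quick sanity check that ansible parses the inventory as expected (not part of the original workflow, just a convenience), ansible-inventory can print it back:

root@virt-go-u20-20:~# ansible-inventory -i ceph.inventory --graph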

 

Generate an ssh key and copy it to the other nodes.

root@virt-go-u20-20:~# ssh-keygen -N "" -f ~/.ssh/id_rsa

root@virt-go-u20-20:~# ssh-copy-id 10.62.62.30
root@virt-go-u20-20:~# ssh-copy-id 10.62.62.40
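
With more nodes, the same step can be scripted with a small loop (just a convenience; adjust the host list to your environment):

root@virt-go-u20-20:~# for host in 10.62.62.30 10.62.62.40; do ssh-copy-id "$host"; done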

 

Use ansible to verify that the ssh connections work.

root@virt-go-u20-20:~# ansible -i ceph.inventory -m ping all
virt-go-u20-20 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
virt-go-u20-30 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
virt-go-u20-40 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

 

Run the playbook to install the packages each server needs for the Ceph deployment.

root@virt-go-u20-20:~# ansible-playbook -i ceph.inventory preflight.yml
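
Optionally, once the playbook finishes you can confirm on every node that the container engine and cephadm are in place. This assumes the preflight playbook installs docker and cephadm as requested in vars.yml:

root@virt-go-u20-20:~# ansible -i ceph.inventory -m shell -a "docker --version && cephadm version" all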

 

Bootstrap new cluster

Now bootstrap the Ceph cluster.

root@virt-go-u20-20:~# cephadm bootstrap --mon-ip 10.62.62.20
...
Ceph Dashboard is now available at:

	     URL: https://virt-go-u20-20:8443/
	    User: admin
	Password: teyoxeimla

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

	sudo /usr/sbin/cephadm shell --fsid bf21e546-50d2-11ec-859e-91611f4703f0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.

The command above performs the following steps:

  • Create a monitor and manager daemon for the new cluster on the local host.
  • Generate a new SSH key for the Ceph cluster and add it to the root user’s /root/.ssh/authorized_keys file.
  • Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
  • Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
  • Write a copy of the public key to /etc/ceph/ceph.pub.

 

Check that the local node has been registered as a mon and mgr.

root@virt-go-u20-20:~# ceph -s
  cluster:
    id:     bf21e546-50d2-11ec-859e-91611f4703f0
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum virt-go-u20-20 (age 4m)
    mgr: virt-go-u20-20.swgqax(active, since 60s)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

Adding Host


https://docs.ceph.com/en/latest/cephadm/host-management/#cephadm-adding-hosts

Now add the remaining nodes.

 

First, distribute the cluster's ssh key.

root@virt-go-u20-20:~# ssh-copy-id -f -i /etc/ceph/ceph.pub root@10.62.62.30
root@virt-go-u20-20:~# ssh-copy-id -f -i /etc/ceph/ceph.pub root@10.62.62.40

 

Add Host

root@virt-go-u20-20:~# ceph orch host add virt-go-u20-30 10.62.62.30 _admin
Added host 'virt-go-u20-30' with addr '10.62.62.30'

root@virt-go-u20-20:~# ceph orch host add virt-go-u20-40 10.62.62.40 _admin
Added host 'virt-go-u20-40' with addr '10.62.62.40'

 

Check that the hosts have been registered.

root@virt-go-u20-20:~# ceph orch host ls
HOST            ADDR         LABELS  STATUS  
virt-go-u20-20  10.62.62.20  _admin          
virt-go-u20-30  10.62.62.30  _admin          
virt-go-u20-40  10.62.62.40  _admin
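
The _admin label used here tells cephadm (from Pacific on) to copy ceph.conf and the admin keyring to those hosts, which is why ceph -s can later be run from any of the three nodes. Labels can also be added or removed afterwards, for example (the rgw label below is just an example name):

root@virt-go-u20-20:~# ceph orch host label add virt-go-u20-30 rgw
root@virt-go-u20-20:~# ceph orch host label rm virt-go-u20-30 rgw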

 

Then check that the mon daemons have been deployed.

root@virt-go-u20-40:~# ceph -s
  cluster:
    id:     bf21e546-50d2-11ec-859e-91611f4703f0
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum virt-go-u20-20,virt-go-u20-30,virt-go-u20-40 (age 5m)
    mgr: virt-go-u20-20.swgqax(active, since 16m), standbys: virt-go-u20-30.dstwys
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
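
cephadm manages mon placement itself and will deploy up to five mons by default as more hosts join. If you want to pin the monitors to exactly these three hosts, a placement spec can be applied; this is optional, and the hostnames are simply the ones used in this lab:

root@virt-go-u20-20:~# ceph orch apply mon --placement="virt-go-u20-20,virt-go-u20-30,virt-go-u20-40"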

 

Deploy OSD


Check which devices can be added as OSDs.

root@virt-go-u20-20:~# ceph orch device ls
Hostname        Path      Type  Serial  Size   Health   Ident  Fault  Available  
virt-go-u20-20  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    Yes        
virt-go-u20-20  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    Yes        
virt-go-u20-30  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    Yes        
virt-go-u20-30  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    Yes        
virt-go-u20-40  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    Yes        
virt-go-u20-40  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    Yes
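
A device only shows up as Available when it is empty (no partitions, LVM, or filesystem). If a disk you expect to use shows No, it can be wiped through the orchestrator; this is destructive, so double-check the host and path (the /dev/vdb below is just an example):

root@virt-go-u20-20:~# ceph orch device zap virt-go-u20-20 /dev/vdb --force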

 

Let's add only vda on each server for now.

root@virt-go-u20-20:~# ceph orch daemon add osd virt-go-u20-20:/dev/vda
Created osd(s) 0 on host 'virt-go-u20-20'
root@virt-go-u20-20:~# ceph orch daemon add osd virt-go-u20-30:/dev/vda
Created osd(s) 1 on host 'virt-go-u20-30'
root@virt-go-u20-20:~# ceph orch daemon add osd virt-go-u20-40:/dev/vda
Created osd(s) 2 on host 'virt-go-u20-40'

 

Check the OSDs that were just added.

root@virt-go-u20-20:~# ceph -s
  cluster:
    id:     bf21e546-50d2-11ec-859e-91611f4703f0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum virt-go-u20-20,virt-go-u20-30,virt-go-u20-40 (age 14m)
    mgr: virt-go-u20-20.swgqax(active, since 25m), standbys: virt-go-u20-30.dstwys
    osd: 3 osds: 3 up (since 64s), 3 in (since 90s)

 

Add the remaining devices all at once with the command that picks up every available device.

root@virt-go-u20-20:~# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

root@virt-go-u20-20:~# ceph -s
  cluster:
    id:     bf21e546-50d2-11ec-859e-91611f4703f0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum virt-go-u20-20,virt-go-u20-30,virt-go-u20-40 (age 16m)
    mgr: virt-go-u20-20.swgqax(active, since 27m), standbys: virt-go-u20-30.dstwys
    osd: 6 osds: 6 up (since 31s), 6 in (since 60s)
    
root@virt-go-u20-20:~# ceph orch device ls
Hostname        Path      Type  Serial  Size   Health   Ident  Fault  Available  
virt-go-u20-20  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    No         
virt-go-u20-20  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    No         
virt-go-u20-30  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    No         
virt-go-u20-30  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    No         
virt-go-u20-40  /dev/vda  hdd           21.4G  Unknown  N/A    N/A    No         
virt-go-u20-40  /dev/vdb  hdd           21.4G  Unknown  N/A    N/A    No

 

With this service in place, any disk added in the future will automatically be turned into an OSD. To disable that behavior, run the following:
# ceph orch apply osd --all-available-devices --unmanaged=true
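
Instead of the blanket --all-available-devices rule, OSDs can also be described declaratively with an OSD service specification and applied with ceph orch apply -i. Below is a minimal sketch based on the cephadm OSD spec format; the file name, service_id, and the all-devices filter are illustrative, not from this setup. The --dry-run flag previews what would be created before actually applying it.

root@virt-go-u20-20:~# cat osd_spec.yml
service_type: osd
service_id: all_available_devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true

root@virt-go-u20-20:~# ceph orch apply -i osd_spec.yml --dry-run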

 

root@virt-go-u20-20:~# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME              
-1         0.11691         -  120 GiB   40 MiB  2.0 MiB   0 B   38 MiB  120 GiB  0.03  1.00    -          root default           
-3         0.03897         -   40 GiB   13 MiB  696 KiB   0 B   13 MiB   40 GiB  0.03  1.00    -              host virt-go-u20-20
 0    hdd  0.01949   1.00000   20 GiB  7.6 MiB  348 KiB   0 B  7.2 MiB   20 GiB  0.04  1.15   69      up          osd.0          
 5    hdd  0.01949   1.00000   20 GiB  5.7 MiB  348 KiB   0 B  5.3 MiB   20 GiB  0.03  0.85   59      up          osd.5          
-5         0.03897         -   40 GiB   13 MiB  696 KiB   0 B   13 MiB   40 GiB  0.03  1.00    -              host virt-go-u20-30
 1    hdd  0.01949   1.00000   20 GiB  7.5 MiB  348 KiB   0 B  7.2 MiB   20 GiB  0.04  1.14   69      up          osd.1          
 3    hdd  0.01949   1.00000   20 GiB  5.7 MiB  348 KiB   0 B  5.4 MiB   20 GiB  0.03  0.86   59      up          osd.3          
-7         0.03897         -   40 GiB   13 MiB  696 KiB   0 B   13 MiB   40 GiB  0.03  1.00    -              host virt-go-u20-40
 2    hdd  0.01949   1.00000   20 GiB  7.5 MiB  348 KiB   0 B  7.1 MiB   20 GiB  0.04  1.13   59      up          osd.2          
 4    hdd  0.01949   1.00000   20 GiB  5.8 MiB  348 KiB   0 B  5.4 MiB   20 GiB  0.03  0.87   69      up          osd.4          
                       TOTAL  120 GiB   40 MiB  2.0 MiB   0 B   38 MiB  120 GiB  0.03                                            
MIN/MAX VAR: 0.85/1.15  STDDEV: 0.00

 

Cephadm operation


root@virt-go-u20-20:~# ceph version
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

root@virt-go-u20-20:~# ceph orch status
Backend: cephadm
Available: Yes
Paused: No

root@virt-go-u20-20:~# ceph orch ps
NAME                          HOST            PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
alertmanager.virt-go-u20-20   virt-go-u20-20  *:9093,9094  running (24m)     2m ago  31m    13.7M        -  0.20.0   0881eb8f169f  e8c4533e4521  
crash.virt-go-u20-20          virt-go-u20-20               running (30m)     2m ago  30m    7111k        -  16.2.6   02a72919e474  f112fe01d0b3  
crash.virt-go-u20-30          virt-go-u20-30               running (25m)     2m ago  25m    7212k        -  16.2.6   02a72919e474  419aaa9d9273  
crash.virt-go-u20-40          virt-go-u20-40               running (19m)     2m ago  19m    8479k        -  16.2.6   02a72919e474  efc35bf42f47  
grafana.virt-go-u20-20        virt-go-u20-20  *:3000       running (29m)     2m ago  30m    23.3M        -  6.7.4    557c83e11646  43239ddd6a19  
mgr.virt-go-u20-20.swgqax     virt-go-u20-20  *:9283       running (33m)     2m ago  33m     416M        -  16.2.6   02a72919e474  f1ea70287671  
mgr.virt-go-u20-30.dstwys     virt-go-u20-30  *:8443,9283  running (25m)     2m ago  25m     367M        -  16.2.6   02a72919e474  0e4d1eea7adc  
mon.virt-go-u20-20            virt-go-u20-20               running (33m)     2m ago  33m    64.6M    2048M  16.2.6   02a72919e474  e18f69f830d3  
mon.virt-go-u20-30            virt-go-u20-30               running (24m)     2m ago  24m    76.5M    2048M  16.2.6   02a72919e474  30722bc4de1c  
mon.virt-go-u20-40            virt-go-u20-40               running (18m)     2m ago  18m    51.4M    2048M  16.2.6   02a72919e474  c806a646b723  
node-exporter.virt-go-u20-20  virt-go-u20-20  *:9100       running (30m)     2m ago  30m    9.93M        -  0.18.1   e5a616e4b9cf  c268d91b4431  
node-exporter.virt-go-u20-30  virt-go-u20-30  *:9100       running (24m)     2m ago  24m    9971k        -  0.18.1   e5a616e4b9cf  2ccb8c136e85  
node-exporter.virt-go-u20-40  virt-go-u20-40  *:9100       running (18m)     2m ago  18m    9799k        -  0.18.1   e5a616e4b9cf  7e5970cf4f07  
osd.0                         virt-go-u20-20               running (7m)      2m ago   7m    39.6M    4096M  16.2.6   02a72919e474  e48b70391b33  
osd.1                         virt-go-u20-30               running (5m)      2m ago   5m    41.0M    4096M  16.2.6   02a72919e474  7be96361233e  
osd.2                         virt-go-u20-40               running (5m)      2m ago   5m    37.6M    4096M  16.2.6   02a72919e474  b7cd255b3f03  
osd.3                         virt-go-u20-30               running (2m)      2m ago   2m    11.8M    4096M  16.2.6   02a72919e474  e3d35440e196  
osd.4                         virt-go-u20-40               running (2m)      2m ago   2m    12.1M    4096M  16.2.6   02a72919e474  3aeebfab7a56  
osd.5                         virt-go-u20-20               running (2m)      2m ago   2m        -    4096M  16.2.6   02a72919e474  f81adc71bc7d  
prometheus.virt-go-u20-20     virt-go-u20-20  *:9095       running (18m)     2m ago  29m    40.1M        -  2.18.1   de242295e225  20a87c7cfb68
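
Finally, a few day-to-day commands that are handy with a cephadm-managed cluster. These are standard ceph orch / cephadm subcommands, and the daemon names are taken from the listing above: ceph orch ls summarizes services rather than individual daemons, ceph orch daemon restart restarts a single daemon, and cephadm logs (run on the host where that daemon lives) shows its journal.

root@virt-go-u20-20:~# ceph orch ls
root@virt-go-u20-20:~# ceph orch daemon restart mgr.virt-go-u20-20.swgqax
root@virt-go-u20-20:~# cephadm logs --name mon.virt-go-u20-20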