
[Ceph] Ceph Operation with Ceph-Ansible (Ubuntu 20.04)

왕영주 2021. 3. 18. 15:03

Prerequisite


  • Refer to: https://docs.ceph.com/en/latest/rados/operations/
  • A running Ceph cluster
  • A spare device (to be added as an OSD)
  • A Ceph cluster deployed with Ceph-Ansible does not support the Ceph orch API, so those commands are avoided as much as possible in this post; see the sketch after this list for how cluster commands are run instead.
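
Since orch is unavailable, cluster-level commands are run either by exec-ing into the mon container or with a locally installed ceph CLI (set up later in this post). A minimal sketch, using the mon container name from this cluster:

# run a cluster status check through the containerized mon (no orch module required)
docker exec ceph-mon-yjwang0-ceph-01 ceph -s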

Add OSD


The server currently has a spare, unpartitioned vdb disk:

vdb                                                                                                   252:16   0  9.3G  0 disk 
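
Before handing a disk to ceph-ansible, it is worth confirming it carries no partitions or filesystem signatures; a minimal sketch, assuming the device name above:

# FSTYPE and child entries should be empty for a clean OSD candidate
lsblk -f /dev/vdb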

 

And currently only vda is allocated as an OSD in Ceph:

root@yjwang0-ceph-01:~/ceph-ansible# docker exec ceph-mon-yjwang0-ceph-01 ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME               
-1         0.02728         -   28 GiB   19 MiB  696 KiB   0 B   18 MiB   28 GiB  0.07  1.00    -          root default            
-3         0.00909         -  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00    -              host yjwang0-ceph-01
 0    hdd  0.00909   1.00000  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00  128      up          osd.0           
-5         0.00909         -  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00    -              host yjwang0-ceph-02
 2    hdd  0.00909   1.00000  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00  128      up          osd.2           
-7         0.00909         -  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00    -              host yjwang0-ceph-03
 1    hdd  0.00909   1.00000  9.3 GiB  6.4 MiB  232 KiB   0 B  6.1 MiB  9.3 GiB  0.07  1.00  128      up          osd.1   

 

root@yjwang0-ceph-01:~/ceph-ansible# docker exec ceph-mon-yjwang0-ceph-01 ceph osd ls
0
1
2

 

Add the new OSD device to group_vars:

 root@yjwang0-ceph-01:~/ceph-ansible# grep -vE '^$|^#' group_vars/osds.yml 
---
dummy:
devices:
  - /dev/vda
  - /dev/vdb

 

Next, make sure only yjwang0-ceph-01's vdb is added as an OSD. The inventory file is shown below; the restriction itself is applied with the --limit option when running the playbook.

root@yjwang0-ceph-01:~/ceph-ansible# cat inventory.ini 
[mons]
yjwang0-ceph-01 ansible_host="10.99.70.30"
yjwang0-ceph-02 ansible_host="10.99.70.31"
yjwang0-ceph-03 ansible_host="10.99.70.32"

[osds:children]
mons

[mgrs:children]
mons

[monitoring:children]
mons

[clients:children]
mons

 

Run the playbook (the --limit option lets you add OSDs on a specific node only):

root@yjwang0-ceph-01:~/ceph-ansible# ansible-playbook -i inventory.ini site-container.yml.sample --limit yjwang0-ceph-01
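
Once the play finishes, the new OSD container should be running on the limited host. A quick sketch to confirm from the host itself (the ceph-osd container name prefix is an assumption about the ceph-ansible naming scheme):

# list ceph-osd containers running on this node
docker ps --filter name=ceph-osd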

 

Check the newly added OSD (osd.3 has been added):

root@yjwang0-ceph-01:~/ceph-ansible# docker exec ceph-mon-yjwang0-ceph-01 ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP   META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME               
-1         0.03638         -   37 GiB   32 MiB  1.7 MiB    0 B   31 MiB   37 GiB  0.08  1.00    -          root default            
-3         0.01819         -   19 GiB   13 MiB  852 KiB    0 B   12 MiB   19 GiB  0.07  0.83    -              host yjwang0-ceph-01
 0    hdd  0.00909   1.00000  9.3 GiB  7.5 MiB  424 KiB    0 B  7.1 MiB  9.3 GiB  0.08  0.93   68      up          osd.0           
 3    hdd  0.00909   1.00000  9.3 GiB  5.8 MiB  428 KiB    0 B  5.4 MiB  9.3 GiB  0.06  0.72   60      up          osd.3   
...

 

root@yjwang0-ceph-01:~/ceph-ansible# docker exec ceph-mon-yjwang0-ceph-01 ceph osd ls
0
1
2
3

 

Add the Ceph repository and install the ceph CLI (Ceph client)


https://docs.ceph.com/en/latest/install/get-packages/

Caution
If you do this on a Ceph node, installing the package creates a ceph user on the host on its own and then changes the permissions of /var/lib/ceph, so the user and group must be created in advance with the UID/GID the containers expect. Otherwise, the mon container will stop with a permission denied error.

root@yjwang0-ceph-01:~# groupadd -g 167 ceph
root@yjwang0-ceph-01:~# useradd -g 167 -u 167 -d /var/lib/ceph -c "Ceph storage service" -s /usr/sbin/nologin ceph
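
A quick check that the IDs resolve as intended (sketch):

# both uid and gid should be 167, matching the containerized daemons
id ceph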

Ubuntu

# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
# echo deb https://download.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# apt update

Install the Ceph CLI

root@yjwang0-ceph-01:~# apt install ceph-common -y

Afterwards, the permissions must look like this:

# ls -ld /var/lib/ceph/
drwxr-x--- 15 ceph ceph 4096 Mar 17 18:07 /var/lib/ceph/
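
If the ownership ended up different (for example, because ceph-common was installed before the user and group above existed), a hedged fix is to restore it and restart the affected container:

# give the data directory back to the ceph user/group the containers run as (UID/GID 167)
chown -R ceph:ceph /var/lib/ceph
# restart the mon container on this host so it can reopen its store
docker restart ceph-mon-yjwang0-ceph-01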

Confirm

root@yjwang0-ceph-01:~# ceph health
HEALTH_OK

 

Health Check


check cluster status

root@yjwang0-ceph-01:~# ceph -s
  cluster:
    id:     590e311e-f12f-4d3e-ac01-89a8e039dae3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum yjwang0-ceph-01,yjwang0-ceph-02,yjwang0-ceph-03 (age 113m)
    mgr: yjwang0-ceph-02(active, since 3h), standbys: yjwang0-ceph-03, yjwang0-ceph-01
    osd: 4 osds: 4 up (since 2h), 4 in (since 3h)

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   34 MiB used, 37 GiB / 37 GiB avail
    pgs:     128 active+clean

 

check osd status

root@yjwang0-ceph-01:~# ceph osd status
ID  HOST              USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  yjwang0-ceph-01  7000k  9525M      0        0       0        0   exists,up  
 1  yjwang0-ceph-03  10.5M  9521M      0        0       0        0   exists,up  
 2  yjwang0-ceph-02  10.5M  9521M      0        0       0        0   exists,up  
 3  yjwang0-ceph-01  6236k  9525M      0        0       0        0   exists,up 

 

osd utilization

 root@yjwang0-ceph-01:~# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME               
-1         0.03638         -   37 GiB   34 MiB  1.8 MiB      0 B   32 MiB   37 GiB  0.09  1.00    -          root default            
-3         0.01819         -   19 GiB   13 MiB  932 KiB      0 B   12 MiB   19 GiB  0.07  0.76    -              host yjwang0-ceph-01
 0    hdd  0.00909   1.00000  9.3 GiB  6.8 MiB  464 KiB      0 B  6.4 MiB  9.3 GiB  0.07  0.80   68      up          osd.0           
 3    hdd  0.00909   1.00000  9.3 GiB  6.1 MiB  468 KiB      0 B  5.6 MiB  9.3 GiB  0.06  0.71   60      up          osd.3           
-5         0.00909         -  9.3 GiB   11 MiB  460 KiB      0 B   10 MiB  9.3 GiB  0.11  1.24    -              host yjwang0-ceph-02
 2    hdd  0.00909   1.00000  9.3 GiB   11 MiB  460 KiB      0 B   10 MiB  9.3 GiB  0.11  1.24  128      up          osd.2           
-7         0.00909         -  9.3 GiB   11 MiB  460 KiB      0 B   10 MiB  9.3 GiB  0.11  1.24    -              host yjwang0-ceph-03
 1    hdd  0.00909   1.00000  9.3 GiB   11 MiB  460 KiB      0 B   10 MiB  9.3 GiB  0.11  1.24  128      up          osd.1           
                       TOTAL   37 GiB   34 MiB  1.8 MiB  1.9 KiB   32 MiB   37 GiB  0.09                                             
MIN/MAX VAR: 0.71/1.24  STDDEV: 0.02

 

ceph mgr / mon stat

root@yjwang0-ceph-01:~# ceph mgr stat
{
    "epoch": 46,
    "available": true,
    "active_name": "yjwang0-ceph-02",
    "num_standby": 2
}

root@yjwang0-ceph-01:~# ceph mon stat
e1: 3 mons at {yjwang0-ceph-01=[v2:10.99.70.30:3300/0,v1:10.99.70.30:6789/0],yjwang0-ceph-02=[v2:10.99.70.31:3300/0,v1:10.99.70.31:6789/0],yjwang0-ceph-03=[v2:10.99.70.32:3300/0,v1:10.99.70.32:6789/0]}, election epoch 38, leader 0 yjwang0-ceph-01, quorum 0,1,2 yjwang0-ceph-01,yjwang0-ceph-02,yjwang0-ceph-03
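
For more detail than ceph mon stat, the quorum can also be dumped as JSON (a minimal sketch):

# show quorum membership, the current leader, and monmap details
ceph quorum_status -f json-pretty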

 

Add a Ceph RBD


RBD is managed with the rbd command:

root@yjwang0-ceph-01:~# which rbd
/usr/bin/rbd

 

Create an RBD pool

root@yjwang0-ceph-01:~# ceph osd pool create testp1
pool 'testp1' created

 

Initialize the RBD pool

root@yjwang0-ceph-01:~# rbd pool init testp1
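
rbd pool init also tags the pool with the rbd application; this can be confirmed with (sketch):

# should report the rbd application enabled on the pool
ceph osd pool application get testp1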

 

Create an RBD image

Create a 2 GB device image named testp1-1 in the testp1 pool created above:

root@yjwang0-ceph-01:~# rbd create --size 2048 testp1/testp1-1
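
The image should now show up when listing the pool (sketch):

# long listing of images in the pool, including size and format
rbd ls -l testp1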

 

Check the information of the created RBD image:

root@yjwang0-ceph-01:~# rbd info testp1/testp1-1
rbd image 'testp1-1':
    size 2 GiB in 512 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: ad585f6648dd
    block_name_prefix: rbd_data.ad585f6648dd
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Thu Mar 18 14:02:41 2021
    access_timestamp: Thu Mar 18 14:02:41 2021
    modify_timestamp: Thu Mar 18 14:02:41 2021

 

Mount the RBD as a volume on the OS

The client needs the rbd command available and a Ceph keyring. For this test, we will work directly on a Ceph node.

Map the created image (using the admin keyring):

root@yjwang0-ceph-01:~# rbd device map testp1/testp1-1 --id admin
/dev/rbd0

 

The block device has been mapped through the kernel RBD module, as shown below:

root@yjwang0-ceph-01:~# lsblk /dev/rbd0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 251:0    0   2G  0 disk 
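
The mapping can also be listed from the rbd side (sketch):

# show image-to-device mappings on this host
rbd device list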

 

Format the filesystem and mount it (create the /mnt/rbd0 mount point with mkdir first):

root@yjwang0-ceph-01:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@yjwang0-ceph-01:~# mount /dev/rbd0 /mnt/rbd0

root@yjwang0-ceph-01:~# df -h /mnt/rbd0
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       2.0G   47M  2.0G   3% /mnt/rbd0
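
When the test volume is no longer needed, it can be cleaned up in reverse order (a minimal sketch):

# unmount the filesystem, then release the kernel mapping
umount /mnt/rbd0
rbd device unmap /dev/rbd0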

 

Add CephFS


To use CephFS, an MDS (Metadata Server) is required. The currently deployed cluster has no MDS, so we add one.
Note that when using Ceph-Ansible, running the playbook may also pick up any remaining spare devices as OSDs; to prevent that, the MDS would have to be added manually rather than through Ansible.

First, edit the inventory.

root@yjwang0-ceph-01:~/ceph-ansible# cat inventory.ini 
[mons]
yjwang0-ceph-01 ansible_host="10.99.70.30"
yjwang0-ceph-02 ansible_host="10.99.70.31"
yjwang0-ceph-03 ansible_host="10.99.70.32"

[osds:children]
mons

[mdss:children]
mons
...

 

Run the playbook.

root@yjwang0-ceph-01:~/ceph-ansible# ansible-playbook -i inventory.ini site-container.yml.sample

 

Check the MDS server status:

root@yjwang0-ceph-01:~/ceph-ansible# ceph mds stat
cephfs:1 {0=yjwang0-ceph-03=up:active} 2 up:standby

 

Create a CephFS volume:

root@yjwang0-ceph-01:~/ceph-ansible# ceph fs volume create testfs
Volume created successfully (no MDS daemons created)
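
The MDS assignment and pools of the new filesystem can be checked with (sketch):

# per-filesystem MDS ranks, standbys, and pool usage
ceph fs status testfs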

 

Verify:

root@yjwang0-ceph-01:~/ceph-ansible# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
name: testfs, metadata pool: cephfs.testfs.meta, data pools: [cephfs.testfs.data ]

 

Mount CephFS with the kernel module
If the file below does not exist, install ceph-common by following the Ceph CLI installation section earlier in this post.

root@yjwang0-ceph-01:~/ceph-ansible# ls -l /usr/sbin/mount.ceph 
-rwxr-xr-x 1 root root 194984 Feb 23 23:22 /usr/sbin/mount.ceph

 

Create a CephFS mount directory:

root@yjwang0-ceph-01:~/ceph-ansible# mkdir /mnt/cephfs

 

We will mount using the admin keyring:

root@yjwang0-ceph-01:~/ceph-ansible# mount -t ceph 10.99.70.30:6789,10.99.70.31:6789,10.99.70.32:6789:/ /mnt/cephfs -o name=admin

 

Verify the mount:

root@yjwang0-ceph-01:~/ceph-ansible# df -h /mnt/cephfs
Filesystem                                            Size  Used Avail Use% Mounted on
10.99.70.30:6789,10.99.70.31:6789,10.99.70.32:6789:/   16G     0   16G   0% /mnt/cephfs
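
To make this mount survive reboots, an /etc/fstab entry along these lines can be used (a sketch; the secretfile path is an assumption, and the file must contain only the client's key):

# /etc/fstab entry: kernel CephFS mount using the admin client
10.99.70.30:6789,10.99.70.31:6789,10.99.70.32:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0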

 

Create and verify a subvolume:

root@yjwang0-ceph-01:~/ceph-ansible# ceph fs subvolume create cephfs sub1

root@yjwang0-ceph-01:~/ceph-ansible# ceph fs subvolume ls cephfs
[
    {
        "name": "sub1"
    }
]

 

Mount the subvolume:

root@yjwang0-ceph-01:~/ceph-ansible# ceph fs subvolume getpath cephfs sub1
/volumes/_nogroup/sub1/1d021e2f-66c9-430f-9158-ef0df94edbc4

root@yjwang0-ceph-01:~/ceph-ansible# mount -t ceph 10.99.70.30:6789,10.99.70.31:6789,10.99.70.32:6789:/volumes/_nogroup/sub1/1d021e2f-66c9-430f-9158-ef0df94edbc4 /mnt/cephfs/sub1 -o name=admin

root@yjwang0-ceph-01:~/ceph-ansible# df -h /mnt/cephfs/sub1
Filesystem                                                                                                      Size  Used Avail Use% Mounted on
10.99.70.30:6789,10.99.70.31:6789,10.99.70.32:6789:/volumes/_nogroup/sub1/1d021e2f-66c9-430f-9158-ef0df94edbc4   16G     0   16G   0% /mnt/cephfs/sub1
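
The mounts above use the admin keyring for simplicity; for regular use, a client restricted to the subvolume path can be created instead (a hedged sketch; the client name client.sub1 is an assumption, and the path comes from the getpath output above):

# create a client capped to the subvolume path and save its keyring (hypothetical client name)
ceph fs authorize cephfs client.sub1 /volumes/_nogroup/sub1 rw | tee /etc/ceph/ceph.client.sub1.keyring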