
[Rook] Kubernetes 1.20.x + Ceph Deployment

왕영주 2021. 3. 25. 15:16

This post covers deploying Ceph on top of Kubernetes using Rook.
Rook v1.5 is used here.

Prerequisite


Load the rbd kernel module on each worker node.

root@yjwang0-k8s-03:~# modprobe rbd
root@yjwang0-k8s-04:~# modprobe rbd
root@yjwang0-k8s-05:~# modprobe rbd
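
modprobe only loads the module for the current boot. If you also want rbd loaded automatically after a reboot, one common approach (not part of the steps above, just a sketch) is to register it with systemd-modules-load on each worker node:

echo rbd > /etc/modules-load.d/rbd.conf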

Deploying Ceph on k8s with Rook


git clone

root@yjwang0-k8s-01:~# git clone --single-branch --branch v1.5.9 https://github.com/rook/rook.git

Deploy using the example manifests.

root@yjwang0-k8s-01:~# cd rook/cluster/examples/kubernetes/ceph
root@yjwang0-k8s-01:~/rook/cluster/examples/kubernetes/ceph# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
root@yjwang0-k8s-01:~/rook/cluster/examples/kubernetes/ceph# kubectl create -f cluster.yaml
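
By default the example cluster.yaml consumes every node and every empty device for OSDs (useAllNodes: true, useAllDevices: true). If you only want specific disks used, the storage section can be edited before running kubectl create; a sketch of that kind of change (the deviceFilter pattern here is only an example, adjust it to your disks):

  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[bc]"   # example: only use /dev/sdb and /dev/sdc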

Verify the deployment

root@yjwang0-k8s-01:~# kubectl get pod -n rook-ceph 
NAME                                                       READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2brf2                                     3/3     Running     0          19m
csi-cephfsplugin-dgrh6                                     3/3     Running     0          19m
csi-cephfsplugin-provisioner-bc5cff84-t4ph2                6/6     Running     0          19m
csi-cephfsplugin-provisioner-bc5cff84-tcjqb                6/6     Running     0          19m
csi-cephfsplugin-scm2q                                     3/3     Running     0          3m37s
csi-rbdplugin-62lkr                                        3/3     Running     0          19m
csi-rbdplugin-gw2sr                                        3/3     Running     0          19m
csi-rbdplugin-provisioner-97957587f-2qg9v                  6/6     Running     0          19m
csi-rbdplugin-provisioner-97957587f-vb92v                  6/6     Running     0          19m
csi-rbdplugin-xwkzr                                        3/3     Running     0          3m37s
rook-ceph-crashcollector-yjwang0-k8s-03-655499d6bb-rvkjb   1/1     Running     0          88s
rook-ceph-crashcollector-yjwang0-k8s-04-66b6c8dfc9-h9f7s   1/1     Running     0          3m30s
rook-ceph-crashcollector-yjwang0-k8s-05-74d548969d-kxrbh   1/1     Running     0          3m6s
rook-ceph-mgr-a-5f754b6447-275tr                           1/1     Running     0          89s
rook-ceph-mon-a-6cd54b9578-2bmdz                           1/1     Running     0          3m33s
rook-ceph-mon-b-987c779fc-l697q                            1/1     Running     0          3m7s
rook-ceph-mon-c-588b6f6bc5-qjpwm                           1/1     Running     0          2m38s
rook-ceph-operator-6f7f6b96d-msg2l                         1/1     Running     0          21m
rook-ceph-osd-prepare-yjwang0-k8s-03-trbjg                 0/1     Completed   2          87s
rook-ceph-osd-prepare-yjwang0-k8s-04-p2cz5                 0/1     Completed   0          16s
rook-ceph-osd-prepare-yjwang0-k8s-05-wxxg6                 0/1     Completed   0          14s
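
The rook-ceph-osd-prepare jobs above have just completed; the actual OSD pods (rook-ceph-osd-0, 1, 2, ...) show up shortly afterwards. To list only those, you can filter by the app=rook-ceph-osd label that Rook applies to OSD pods:

kubectl -n rook-ceph get pod -l app=rook-ceph-osd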

Deploying the Toolbox
First, prepare the manifest. The file is also included in the directory cloned with git.
https://rook.io/docs/rook/v1.5/ceph-toolbox.html

root@yjwang0-k8s-01:~# cat toolbox.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v1.5.9
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
          - name: ROOK_CEPH_USERNAME
            valueFrom:
              secretKeyRef:
                name: rook-ceph-mon
                key: ceph-username
          - name: ROOK_CEPH_SECRET
            valueFrom:
              secretKeyRef:
                name: rook-ceph-mon
                key: ceph-secret
        volumeMounts:
          - mountPath: /etc/ceph
            name: ceph-config
          - name: mon-endpoint-volume
            mountPath: /etc/rook
      volumes:
        - name: mon-endpoint-volume
          configMap:
            name: rook-ceph-mon-endpoints
            items:
            - key: data
              path: mon-endpoints
        - name: ceph-config
          emptyDir: {}
      tolerations:
        - key: "node.kubernetes.io/unreachable"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 5

Proceed with the deployment

root@yjwang0-k8s-01:~# kubectl create -f rook/cluster/examples/kubernetes/ceph/toolbox.yaml
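
The toolbox pod name includes a generated suffix, so instead of typing it out you can look the pod up by the app=rook-ceph-tools label defined in the manifest above:

kubectl -n rook-ceph get pod -l app=rook-ceph-tools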

Verify the deployment

root@yjwang0-k8s-01:~# kubectl get pod -n rook-ceph rook-ceph-tools-6f58686b5d-dh6b7
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-6f58686b5d-dh6b7   1/1     Running   0          28s
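
Once the pod is Running, you can also open an interactive shell inside the toolbox and run Ceph commands directly from there:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash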

Check the Ceph cluster status

root@yjwang0-k8s-01:~# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
  cluster:
    id:     7da18f42-bfd5-4af3-8522-b770371ae949
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 99s)
    mgr: a(active, since 80s)
    osd: 3 osds: 3 up (since 9s), 3 in (since 9s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
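
Besides ceph -s, a few other standard Ceph commands are handy for a quick look at capacity and OSD layout, run the same way through the toolbox:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree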

If you run into problems during the deployment, refer to the link below.

https://rook.io/docs/rook/v1.5/ceph-common-issues.html#osd-pods-are-not-created-on-my-devices
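
For the most common case, OSDs not being created, the logs of the rook-ceph-osd-prepare pods usually explain the reason (this sketch assumes the default app=rook-ceph-osd-prepare label Rook puts on those pods):

kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --all-containers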
