Libvirt - Ansible (using 'ansible-role-libvirt-vm')

yjwang · 2021. 2. 18. 16:34

Setup


> I was happily using Terraform, but Terraform 0.14 does not seem to work well with the libvirt provider, and I did not want to be stuck managing that version dependency, so I switched to Ansible. Terraform is much faster, though.

 

Run the following in the directory where you want to keep the Ansible playbooks:

[root@cloud-test-2 libvirt_ansible]# git clone https://github.com/stackhpc/ansible-role-libvirt-vm.git
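
The role is also available from Ansible Galaxy (the playbook comment further down references it as stackhpc.libvirt-vm), so instead of cloning you could install it that way and reference the Galaxy role name in the playbook:

[root@cloud-test-2 libvirt_ansible]# ansible-galaxy install stackhpc.libvirt-vm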

For now, three nodes for Ceph storage will be created; think of it as spinning up three test servers.

First, install the required packages:

# yum -y install ansible sshpass python3-libvirt python3-lxml libguestfs-tools
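
If libvirt itself is not yet running on the hypervisor, the daemon also needs to be enabled (assumed to already be the case on this host):

[root@cloud-test-2 libvirt_ansible]# systemctl enable --now libvirtd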

Then create the inventory and ansible.cfg files and check connectivity with the ping module.

[root@cloud-test-2 libvirt_ansible]# cat inventory.ini 
[hypervisor]
cloud-test-2 ansible_host="127.0.0.1"

[hypervisor:vars]
ansible_connection=ssh
ansible_user=root
ansible_ssh_pass=xxxxx

### General Values 
root_path="/root/yjwnag/libvirt_ansible"
vm_header="yjwang"
env_ver="0"
management_net_address="10.99.99"

# Ubuntu 20.04
base_image='/usr/vm-template/focal-server-cloudimg-amd64.qcow2'

# CentOS 8.2
# base_image='/usr/vm-template/CentOS-8.2.qcow2.qcow2'

### Ceph Values
ceph_pool_path="/data/yjwang/ceph"
ceph_net_address="10.99.70"
ceph_root_gb=30

Disable host key checking in ansible.cfg:

[root@cloud-test-2 libvirt_ansible]# cat ansible.cfg 
[defaults]
host_key_checking = False

Check the connection with an ad-hoc command:

[root@cloud-test-2 libvirt_ansible]# ansible -m ping -i inventory.ini all
cloud-test-2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}

Next, create the playbook for the management network:

[root@cloud-test-2 libvirt_ansible]# cat 00-00-Create-Management-Net.yaml 
---
- name: create management network
  hosts: hypervisor
  tasks:
          - name: create management network
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-management'
                    command: define
                    xml: '{{ lookup("template", "templates/management_net.xml.j2") }}'

          - name: Make net activate
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-management'
                    state: active

          - name: Make net autostart when boot
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-management'
                    autostart: yes
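
This playbook is not part of the helper script shown at the end, so it can be run on its own; the resulting network can then be checked with virsh (the name below assumes vm_header=yjwang and env_ver=0 from the inventory):

[root@cloud-test-2 libvirt_ansible]# ansible-playbook -i inventory.ini 00-00-Create-Management-Net.yaml
[root@cloud-test-2 libvirt_ansible]# virsh net-list --all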

Next, the playbook for the Ceph storage pool:

[root@cloud-test-2 libvirt_ansible]# cat 01-00-Create-Ceph-Pool.yaml 
---
- name: create ceph pool
  hosts: hypervisor
  tasks:
          - name: create pool path if it's not exist
            file:
                    path: '{{ ceph_pool_path }}'
                    state: directory
                    mode: '0755'

          - name: Set fcontext of above directory
            sefcontext:
                    target: '{{ ceph_pool_path }}(/.*)?'
                    setype: virt_image_t
                    state: present

          - name: restorecon
            command: restorecon -irv '{{ ceph_pool_path }}'

          - name: Create Ceph Pool
            virt_pool:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    command: define
                    xml: '{{ lookup("template", "templates/ceph_pool.xml.j2") }}'

          - name: Make pool activate
            virt_pool:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    state: active

          - name: Make pool autostart when boot
            virt_pool:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    autostart: yes
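
Once applied, the pool should show up as active with autostart enabled:

[root@cloud-test-2 libvirt_ansible]# virsh pool-list --all
[root@cloud-test-2 libvirt_ansible]# virsh pool-info yjwang0-ceph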

And finally the Ceph network:

[root@cloud-test-2 libvirt_ansible]# cat 01-01-Create-Ceph-Net.yaml
---
- name: create ceph network
  hosts: hypervisor
  tasks:
          - name: create ceph network
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    command: define
                    xml: '{{ lookup("template", "templates/ceph_net.xml.j2") }}'

          - name: Make net activate
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    state: active

          - name: Make net autostart when boot
            virt_net:
                    name: '{{ vm_header }}{{ env_ver }}-ceph'
                    autostart: yes

Next, create the Jinja template files referenced above. (IP addresses are assigned automatically by dnsmasq, the DHCP server behind each libvirt network, using the static host entries defined below.)

[root@cloud-test-2 libvirt_ansible]# head -v -z templates/*
==> templates/ceph_netplan_config.yaml <==
network:
    version: 2
    renderer: networkd
    ethernets:
        ens2:
            dhcp4: true
        ens3:
            dhcp4: true

==> templates/ceph_net.xml.j2 <==
<network connections="1">
  <name>{{ vm_header }}{{ env_ver }}-ceph</name>
  <forward mode="nat">
    <nat>
      <port start="1024" end="65535"/>
    </nat>
  </forward>
  <ip family="ipv4" address="{{ ceph_net_address }}.1" prefix="24">
    <dhcp>
      <range start="{{ ceph_net_address }}.2" end="{{ ceph_net_address }}.254"/>
      <!-- Ceph -->
      <host mac='ca:fe:02:00:c1:30' name='{{ vm_header }}{{ env_ver }}-ceph01' ip='{{ ceph_net_address }}.30'/>
      <host mac='ca:fe:02:00:c1:31' name='{{ vm_header }}{{ env_ver }}-ceph02' ip='{{ ceph_net_address }}.31'/>
      <host mac='ca:fe:02:00:c1:32' name='{{ vm_header }}{{ env_ver }}-ceph03' ip='{{ ceph_net_address }}.32'/>
      <!-- k8s -->
      <host mac='ca:fe:02:00:c1:10' name='{{ vm_header }}{{ env_ver }}-k8s01' ip='{{ ceph_net_address }}.10'/>
      <host mac='ca:fe:02:00:c1:11' name='{{ vm_header }}{{ env_ver }}-k8s02' ip='{{ ceph_net_address }}.11'/>
      <host mac='ca:fe:02:00:c1:12' name='{{ vm_header }}{{ env_ver }}-k8s03' ip='{{ ceph_net_address }}.12'/>
      <host mac='ca:fe:02:00:c1:13' name='{{ vm_header }}{{ env_ver }}-k8s04' ip='{{ ceph_net_address }}.13'/>
      <host mac='ca:fe:02:00:c1:14' name='{{ vm_header }}{{ env_ver }}-k8s05' ip='{{ ceph_net_address }}.14'/>
      <!-- Openstack -->
      <host mac='ca:fe:02:00:c1:20' name='{{ vm_header }}{{ env_ver }}-stack01' ip='{{ ceph_net_address }}.20'/>
      <host mac='ca:fe:02:00:c1:21' name='{{ vm_header }}{{ env_ver }}-stack02' ip='{{ ceph_net_address }}.21'/>
      <host mac='ca:fe:02:00:c1:22' name='{{ vm_header }}{{ env_ver }}-stack03' ip='{{ ceph_net_address }}.22'/>
      <host mac='ca:fe:02:00:c1:23' name='{{ vm_header }}{{ env_ver }}-stack04' ip='{{ ceph_net_address }}.23'/>
      <host mac='ca:fe:02:00:c1:24' name='{{ vm_header }}{{ env_ver }}-stack05' ip='{{ ceph_net_address }}.24'/>
    </dhcp>
  </ip>
</network>

==> templates/ceph_pool.xml.j2 <==
<pool type="dir">
  <name>{{ vm_header }}{{ env_ver }}-ceph</name>
  <source>
  </source>
  <target>
    <path>{{ ceph_pool_path }}</path>
  </target>
</pool>

==> templates/management_net.xml.j2 <==
<network connections="1">
  <name>{{ vm_header }}{{ env_ver }}-management</name>
  <forward mode="nat">
    <nat>
      <port start="1024" end="65535"/>
    </nat>
  </forward>
  <ip family="ipv4" address="{{ management_net_address }}.1" prefix="24">
    <dhcp>
      <range start="{{ management_net_address }}.2" end="{{ management_net_address }}.254"/>
      <!-- Ceph -->
      <host mac='ca:fe:02:00:c0:30' name='{{ vm_header }}{{ env_ver }}-management01' ip='{{ management_net_address }}.30'/>
      <host mac='ca:fe:02:00:c0:31' name='{{ vm_header }}{{ env_ver }}-management02' ip='{{ management_net_address }}.31'/>
      <host mac='ca:fe:02:00:c0:32' name='{{ vm_header }}{{ env_ver }}-management03' ip='{{ management_net_address }}.32'/>
      <!-- k8s -->
      <host mac='ca:fe:02:00:c0:10' name='{{ vm_header }}{{ env_ver }}-k8s01' ip='{{ management_net_address }}.10'/>
      <host mac='ca:fe:02:00:c0:11' name='{{ vm_header }}{{ env_ver }}-k8s02' ip='{{ management_net_address }}.11'/>
      <host mac='ca:fe:02:00:c0:12' name='{{ vm_header }}{{ env_ver }}-k8s03' ip='{{ management_net_address }}.12'/>
      <host mac='ca:fe:02:00:c0:13' name='{{ vm_header }}{{ env_ver }}-k8s04' ip='{{ management_net_address }}.13'/>
      <host mac='ca:fe:02:00:c0:14' name='{{ vm_header }}{{ env_ver }}-k8s05' ip='{{ management_net_address }}.14'/>
      <!-- Openstack -->
      <host mac='ca:fe:02:00:c0:20' name='{{ vm_header }}{{ env_ver }}-stack01' ip='{{ management_net_address }}.20'/>
      <host mac='ca:fe:02:00:c0:21' name='{{ vm_header }}{{ env_ver }}-stack02' ip='{{ management_net_address }}.21'/>
      <host mac='ca:fe:02:00:c0:22' name='{{ vm_header }}{{ env_ver }}-stack03' ip='{{ management_net_address }}.22'/>
      <host mac='ca:fe:02:00:c0:23' name='{{ vm_header }}{{ env_ver }}-stack04' ip='{{ management_net_address }}.23'/>
      <host mac='ca:fe:02:00:c0:24' name='{{ vm_header }}{{ env_ver }}-stack05' ip='{{ management_net_address }}.24'/>
    </dhcp>
  </ip>
</network>
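
After the two network playbooks have run, the rendered definitions, including the static DHCP host entries above, can be inspected on the hypervisor:

[root@cloud-test-2 libvirt_ansible]# virsh net-dumpxml yjwang0-management
[root@cloud-test-2 libvirt_ansible]# virsh net-dumpxml yjwang0-ceph

The netplan template is not used by libvirt at all; it is copied into each guest by virt-customize in the next playbook so that ens2 and ens3 come up with DHCP and pick up the reservations above.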

Next, create the VMs using the role together with pre_tasks. The pre_tasks copy the base cloud image into the pool, resize it, and customize it with virt-customize (hostname, timezone, root password, netplan config, SSH key) before the role defines and boots the domains.

[root@cloud-test-2 libvirt_ansible]# cat 01-02-Ceph-Storage-3-Nodes.yaml 
# Ceph-Cluster Nodes

---
- name: Create VMs
  hosts: hypervisor
  pre_tasks:
          - name: Create OS image volumes
            shell:
                    cmd: | 
                            if [ ! -e {{ ceph_pool_path }}/{{ vm_header }}{{ env_ver }}-{{ item }}.qcow2 ]
                            then
                            cp {{ base_image }} {{ ceph_pool_path }}/{{ vm_header }}{{ env_ver }}-{{ item }}.qcow2
                            fi
            loop:
                    - ceph-01
                    - ceph-02
                    - ceph-03

          - name: Resize image volumes
            shell:
                    cmd: 'qemu-img resize {{ ceph_pool_path }}/{{ vm_header }}{{ env_ver }}-{{ item }}.qcow2 {{ ceph_root_gb }}G'
            loop:
                    - ceph-01
                    - ceph-02
                    - ceph-03
            ignore_errors: yes

          - name: Customize image volumes
            shell:
                    cmd: > 
                            /usr/bin/virt-customize -a {{ ceph_pool_path }}/{{ vm_header }}{{ env_ver }}-{{ item }}.qcow2
                            --hostname {{vm_header }}{{ env_ver }}-{{ item }}
                            --timezone Asia/Seoul
                            --root-password password:testtest
                            --uninstall cloud-init
                            --copy-in {{ root_path }}/templates/ceph_netplan_config.yaml:/etc/netplan/
                            --ssh-inject root
                            --run-command 'ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""'
            loop:
                    - ceph-01
                    - ceph-02
                    - ceph-03

  roles:
    - role: ansible-role-libvirt-vm # stackhpc.libvirt-vm # https://github.com/stackhpc/ansible-role-libvirt-vm
      libvirt_vms:
        - state: present # absent
          name: '{{ vm_header }}{{ env_ver }}-ceph-01'
          memory_mb: 8192 # 8GB # 16384(16GB)
          vcpus: 4
          volumes:
            - name: '{{ vm_header }}{{ env_ver }}-ceph-01.qcow2' 
              type: 'file'
              file_path: '{{ ceph_pool_path }}'
              format: 'qcow2'
              target: 'sda'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-01-osd1'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-01-osd2'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

          interfaces:
            - network: '{{ vm_header }}{{ env_ver }}-management'
              mac: 'ca:fe:02:00:c0:30'

            - network: '{{ vm_header }}{{ env_ver }}-ceph'
              mac: 'ca:fe:02:00:c1:30'

        - state: present # absent
          name: '{{ vm_header }}{{ env_ver }}-ceph-02'
          memory_mb: 8192 # 8GB # 16384(16GB)
          vcpus: 4
          volumes:
            - name: '{{ vm_header }}{{ env_ver }}-ceph-02.qcow2' 
              type: 'file'
              file_path: '{{ ceph_pool_path }}'
              format: 'qcow2'
              target: 'sda'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-02-osd1'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-02-osd2'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

          interfaces:
            - network: '{{ vm_header }}{{ env_ver }}-management'
              mac: 'ca:fe:02:00:c0:31'

            - network: '{{ vm_header }}{{ env_ver }}-ceph'
              mac: 'ca:fe:02:00:c1:31'

        - state: present # absent
          name: '{{ vm_header }}{{ env_ver }}-ceph-03'
          memory_mb: 8192 # 8GB # 16384(16GB)
          vcpus: 4
          volumes:
            - name: '{{ vm_header }}{{ env_ver }}-ceph-03.qcow2' 
              type: 'file'
              file_path: '{{ ceph_pool_path }}'
              format: 'qcow2'
              target: 'sda'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-03-osd1'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

            - name: '{{ vm_header }}{{ env_ver }}-ceph-03-osd2'
              device: 'disk'
              format: 'qcow2'
              capacity: '10GB'
              pool: '{{ vm_header }}{{ env_ver }}-ceph'

          interfaces:
            - network: '{{ vm_header }}{{ env_ver }}-management'
              mac: 'ca:fe:02:00:c0:32'

            - network: '{{ vm_header }}{{ env_ver }}-ceph'
              mac: 'ca:fe:02:00:c1:32'

Now run the playbooks in order; once they finish, you can SSH into the servers with the injected key.

[root@cloud-test-2 libvirt_ansible]# cat sh/01-create-ceph-node.sh 
#! /bin/bash

ansible-playbook -i ../inventory.ini ../01-00-Create-Ceph-Pool.yaml
ansible-playbook -i ../inventory.ini ../01-01-Create-Ceph-Net.yaml
ansible-playbook -i ../inventory.ini ../01-02-Ceph-Storage-3-Nodes.yaml
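
The script references ../inventory.ini, so run it from inside the sh/ directory. Once it finishes, the three domains and their disks can be checked on the hypervisor:

[root@cloud-test-2 libvirt_ansible]# cd sh && bash 01-create-ceph-node.sh && cd ..
[root@cloud-test-2 libvirt_ansible]# virsh list --all
[root@cloud-test-2 libvirt_ansible]# virsh domblklist yjwang0-ceph-01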

Connect to a server and confirm everything was applied:

[root@cloud-test-2 libvirt_ansible]# ssh 10.99.99.30
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-52-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Feb 18 16:34:08 KST 2021

  System load:  0.0               Processes:             135
  Usage of /:   70.6% of 1.96GB   Users logged in:       0
  Memory usage: 2%                IPv4 address for ens2: 10.99.99.30
  Swap usage:   0%                IPv4 address for ens3: 10.99.70.30


0 updates can be installed immediately.
0 of these updates are security updates.


Last login: Thu Feb 18 16:28:57 2021 from 10.99.99.1
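
Inside the guest you can also confirm that both NICs and the two extra OSD disks are present (exact device names depend on the bus the role assigns):

ip -br a      # ens2 on the 10.99.99.0/24 management net, ens3 on 10.99.70.0/24 ceph net
lsblk         # ~30G root disk plus two 10G OSD disks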