StarlingX Rook Ceph App installation guide

Source

Tarball Package

  • Get the .tgz file located at:
   /usr/local/share/applications/helm/
  • Or build it yourself using the Debian Build environment:
   build-pkgs -c -p rook-ceph-helm,python3-k8sapp-rook-ceph,stx-rook-ceph-helm
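  • Optionally, confirm which version is bundled before uploading (a quick check; the exact .tgz filename varies by release):
    ls /usr/local/share/applications/helm/ | grep -i rook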

Configuration

  • In a StarlingX system installed without a storage backend, add the Rook storage backend
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook
   
   WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED. 
   
   Please set the 'confirmed' field to execute this operation for the ceph-rook backend.
   
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook --confirmed
   
   System configuration has changed.
   Please follow the administrator guide to complete configuring the system.
   
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   | name            | backend   | state       | task                                   | services | capabilities | 
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   | ceph-rook-store | ceph-rook | configuring | {'controller-0': 'applying-manifests'} | None     |              |
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-list    
   +-----------------+-----------+------------+------+----------+--------------+
   | name            | backend   | state      | task | services | capabilities |
   +-----------------+-----------+------------+------+----------+--------------+
   | ceph-rook-store | ceph-rook | configured | None | None     |              |
   +-----------------+-----------+------------+------+----------+--------------+
  • Upload the app (if needed)
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-upload ~/rook-ceph-apps-23.09-42.tgz
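    Note that both storage-backend-add and application-upload complete asynchronously. A minimal wait loop (a sketch only; it assumes openrc has been sourced and uses arbitrary poll intervals):
    while ! system storage-backend-list | grep -q 'configured'; do sleep 10; done
    while ! system application-list | grep 'rook-ceph-apps' | grep -q 'uploaded'; do sleep 10; done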
    Then follow the configuration that matches your system:
   [AIO-SX Simplex]
   [AIO-DX Duplex]
   [Standard]

AIO-SX - Simplex

  • Add system labels for rook-ceph
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mon-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 719c3828-3806-4711-8824-774c1037316d |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mgr-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 11bec756-0d76-4fce-89a6-c1f0a0677f4a |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              | 
   +-------------+--------------------------------------+
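    Optionally, double-check the assignment before proceeding (both keys should be listed with value 'enabled'):
    system host-label-list controller-0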
    Proceed to OSD configuration [OSD Configurations]

AIO-DX - Duplex

  • Add system labels for rook-ceph
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mon-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 719c3828-3806-4711-8824-774c1037316d |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mgr-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 11bec756-0d76-4fce-89a6-c1f0a0677f4a |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              | 
   +-------------+--------------------------------------+
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-1 ceph-mgr-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 9d92bac5-ba77-483a-9b2e-dfb57302a54b |
   | host_uuid   | f56121ea-cf37-48c2-9e77-5708417a6963 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-1 ceph-mon-placement=enabled
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 9720faf3-7a77-4072-8c1e-ab6f236821a1 |
   | host_uuid   | f56121ea-cf37-48c2-9e77-5708417a6963 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
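    Optionally, double-check both controllers before proceeding (a small loop sketch; both keys should be listed with value 'enabled' on each host):
    for h in controller-0 controller-1; do system host-label-list $h; done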
    Proceed to OSD configuration [OSD Configurations]

Standard

  • Add system labels for rook-ceph
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mon-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 719c3828-3806-4711-8824-774c1037316d |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mgr-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 11bec756-0d76-4fce-89a6-c1f0a0677f4a |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              | 
   +-------------+--------------------------------------+
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-1 ceph-mgr-placement=enabled
   
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 9d92bac5-ba77-483a-9b2e-dfb57302a54b |
   | host_uuid   | f56121ea-cf37-48c2-9e77-5708417a6963 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-1 ceph-mon-placement=enabled
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 9720faf3-7a77-4072-8c1e-ab6f236821a1 |
   | host_uuid   | f56121ea-cf37-48c2-9e77-5708417a6963 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
  • Add a label for mon placement on your worker
    [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign compute-0 ceph-mon-placement=enabled
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 2e267859-e845-48e1-be61-ca3b376b4b30 |
   | host_uuid   | c102ceb8-7102-4fb7-acd7-0fea2a4fe8a1 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
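    Optionally, double-check all labelled hosts before proceeding (a small loop sketch):
    for h in controller-0 controller-1 compute-0; do system host-label-list $h; done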
    Proceed to OSD configuration [OSD Configurations]

OSD configurations

  • Identify the disk to use for bluestore OSD
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list 1
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
   | device_no | device_ | device_ | size_ | available_ | rpm | device_path                                |
   | de        | num     | type    | gib   | gib        |     |                                            |
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
   | /dev/sda  | 2048    | SSD     | 520.0 | 0.0        | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
   | /dev/sdb  | 2064    | SSD     | 520.0 | 519.996    | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
   | /dev/sdc  | 2080    | SSD     | 520.0 | 519.996    | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
   
  • Make sure that the disk has been purged
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
   
  • Set up the overrides with all desired OSD configurations (e.g., using sdb as the OSD; a multi-node variant is sketched at the end of this section)
   [sysadmin@controller-0 ~(keystone_admin)]$ cat <<EOF > /home/sysadmin/ceph-values.yml
   cephClusterSpec:
     storage:
       useAllNodes: false
       useAllDevices: false
       nodes:
       - name: controller-0
         devices:
         - name: /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0
   EOF
   [sysadmin@controller-0 ~(keystone_admin)]$ cat /home/sysadmin/ceph-values.yml
  • Set the overrides
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-list rook-ceph-apps
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-update rook-ceph-apps rook-ceph-cluster rook-ceph --values /home/sysadmin/ceph-values.yml
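    For AIO-DX and Standard deployments, the nodes list takes one entry per host. A sketch, assuming the OSD disk has the same device path on each controller (substitute the paths reported by host-disk-list for your hosts):
    cephClusterSpec:
      storage:
        useAllNodes: false
        useAllDevices: false
        nodes:
        - name: controller-0
          devices:
          - name: /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0
        - name: controller-1
          devices:
          - name: /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0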
    Proceed to the Installation step [Installation]

Installation

  • Verify current overrides
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-cluster rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-provisioner rook-ceph
  • Apply the app
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-apply rook-ceph-apps
   +---------------+----------------------------------+
   | Property      | Value                            |
   +---------------+----------------------------------+
   | active        | False                            |
   | app_version   | 23.09-41                         |
   | created_at    | 2023-12-18T23:25:07.201915+00:00 |
   | manifest_file | fluxcd-manifests                 |
   | manifest_name | rook-ceph-apps-fluxcd-manifests  |
   | name          | rook-ceph-apps                   |
   | progress      | None                             |
   | status        | applying                         |
   | updated_at    | 2023-12-19T16:13:46.924152+00:00 |
   +---------------+----------------------------------+
   Please use 'system application-list' or 'system application-show rook-ceph-apps' to view the current progress.
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-list
   +--------------------------+----------+-------------------------------------------+----------+-----------+
   | application              | version  | manifest name                             | status   | progress  |
   +--------------------------+----------+-------------------------------------------+----------+-----------+
   | cert-manager             | 1.0-69   | cert-manager-fluxcd-manifests             | applied  | completed |
   | dell-storage             | 1.0-6    | dell-storage-fluxcd-manifests             | uploaded | completed |
   | nginx-ingress-controller | 1.0-50   | nginx-ingress-controller-fluxcd-manifests | applied  | completed |
   | oidc-auth-apps           | 1.0-45   | oidc-auth-apps-fluxcd-manifests           | uploaded | completed |
   | platform-integ-apps      | 1.2-119  | platform-integ-apps-fluxcd-manifests      | uploaded | completed |
   | rook-ceph-apps           | 23.09-41 | rook-ceph-apps-fluxcd-manifests           | applied  | completed |
   +--------------------------+----------+-------------------------------------------+----------+-----------+
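    application-apply also runs asynchronously; a minimal wait loop (a sketch with an arbitrary interval, assuming openrc has been sourced):
    while ! system application-list | grep 'rook-ceph-apps' | grep -q 'applied'; do sleep 15; done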

Verifying the Installation

  • Verify that rook is running
   [sysadmin@controller-0 ~(keystone_admin)]$ NS=rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get pods -n ${NS} -o wide -w 
   
   NAME                                            READY   STATUS      RESTARTS   AGE     IP              NODE         
   csi-cephfsplugin-provisioner-86d7d5d9b7-98gn2   6/6     Running     0          11m     172.16.192.101   controller-0
   csi-rbdplugin-nd8xk                             3/3     Running     0          11m     192.168.206.2    controller-0
   csi-rbdplugin-provisioner-c9cd4ffd7-mmfp9       6/6     Running     0          11m     172.16.192.100   controller-0
   rook-ceph-mds-kube-cephfs-a-7b544db87d-4ctrf    1/1     Running     0          11m     192.168.206.2    controller-0
   rook-ceph-mds-kube-cephfs-b-5fcbb478fd-qjw4g    1/1     Running     0          11m     192.168.206.2    controller-0
   rook-ceph-mgr-a-69fbb9d7b4-tcbt4                1/1     Running     0          11m     192.168.206.2    controller-0
   rook-ceph-mon-a-55bbd66678-9cvt6                1/1     Running     0          12m     192.168.206.2    controller-0
   rook-ceph-operator-5ddc68f467-kqbwt             1/1     Running     0          12m     172.16.192.95    controller-0
   rook-ceph-osd-0-bfbf89745-679kp                 1/1     Running     0          11m     192.168.206.2    controller-0
   rook-ceph-osd-1-69799854f5-qvgvg                1/1     Running     0          11m     192.168.206.2    controller-0
   rook-ceph-osd-prepare-controller-0-jmxmz        0/1     Completed   0          11m     192.168.206.2    controller-0
   rook-ceph-provision-g8qfg                       0/1     Completed   0          5m20s   172.16.192.104   controller-0
   rook-ceph-tools-644f4dbc4b-8qc2m                1/1     Running     0          12m     192.168.206.2    controller-0
   stx-ceph-manager-66764cb49d-hxt4d               1/1     Running     0          11m     172.16.192.98    controller-0
   stx-ceph-osd-audit-28484346-r5nfq               0/1     Completed   0          8s      192.168.206.2    controller-0
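    A scripted readiness check (a sketch): the command below exits 0 once no pod in the namespace is outside the Running or Completed states.
    ! kubectl get pods -n rook-ceph --no-headers | grep -vE 'Running|Completed'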


  • Verify that the cluster is operational
   [sysadmin@controller-0 ~(keystone_admin)]$ ceph -s
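    The summary should report HEALTH_OK once all monitors and OSDs are up; a one-line check (sketch):
    ceph -s | grep -q HEALTH_OK && echo "cluster healthy"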
  • Verify the ceph version
    [sysadmin@controller-0 ~(keystone_admin)]$ ceph version
   ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
  • Make sure that the disk is reported used
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
   | device_no | device_ | device_ | size_ | available_ | rpm | device_path                                |
   | de        | num     | type    | gib   | gib        |     |                                            |
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
   | /dev/sda  | 2048    | SSD     | 520.0 | 0.0        | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
   | /dev/sdb  | 2064    | SSD     | 520.0 | 0.0        | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
   | /dev/sdc  | 2080    | SSD     | 520.0 | 519.996    | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
   +-----------+---------+---------+-------+------------+-----+--------------------------------------------+
  • Make sure that the volume group is reported
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-lvg-list controller-0
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
   | LVG Name          | State       | Access | Total Size (GiB) | Avail Size (GiB) | Current PVs | Current LVs |
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
   | cgts-vg           | provisioned | wz--n- | 488.406          | 261.593          | 1           | 15          |
   | ceph-2e5d9569-... | provisioned | wz--n- | 519.996          | 0.0              | 1           | 1           |
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
  • Make sure that the physical volume is reported
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0 --nowrap
    +-------------+--------------+--------------------------------------------------+----------+-----------+--------------+
    | lvm_pv_name | disk_or_part | disk_or_part_device_path                         | pv_state | pv_type   | lvm_vg_name  |
    |             | _device_node |                                                  |          |           |              |
    +-------------+--------------+--------------------------------------------------+----------+-----------+--------------+
    | /dev/sdb    | /dev/sdb     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0       | provisnd | disk      | ceph-2e5d... |
    | /dev/sda5   | /dev/sda5    | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0-part5 | provisnd | partition | cgts-vg      |
    +-------------+--------------+--------------------------------------------------+----------+-----------+--------------+