Application: Rook Ceph

Source

Building

  • From the Debian Build environment:
   build-pkgs -c -p rook-helm,python3-k8sapp-rook,stx-rook-ceph-helm
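  • If only one of these packages has changed, the same command can be scoped to it (a sketch reusing the flags above; adjust the package name as needed):
   build-pkgs -c -p stx-rook-ceph-helm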

Configuring

AIO-SX - Simplex

AIO-DX - Duplex

Standard

Testing

  • Add the Rook storage backend
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook
   
   WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED. 
   
   Please set the 'confirmed' field to execute this operation for the ceph-rook backend.
   
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook --confirmed
   
   System configuration has changed.
   Please follow the administrator guide to complete configuring the system.
   
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   | name            | backend   | state       | task                                   | services | capabilities | 
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   | ceph-rook-store | ceph-rook | configuring | {'controller-0': 'applying-manifests'} | None     |              |
   +-----------------+-----------+-------------+----------------------------------------+----------+--------------+
   
   [sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-list    
   +-----------------+-----------+------------+------+----------+--------------+
   | name            | backend   | state      | task | services | capabilities |
   +-----------------+-----------+------------+------+----------+--------------+
   | ceph-rook-store | ceph-rook | configured | None | None     |              |
   +-----------------+-----------+------------+------+----------+--------------+
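   # Optionally watch the backend move from 'configuring' to 'configured' while the
   # manifests are applied (a sketch using the standard watch utility around the command above):
   [sysadmin@controller-0 ~(keystone_admin)]$ watch -n 5 system storage-backend-list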
  • Label the node for Ceph mon and mgr placement
   sysadmin@controller-0:~$ source /etc/platform/openrc
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mon-placement=enabled
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 719c3828-3806-4711-8824-774c1037316d |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mon-placement                   |
   | label_value | enabled                              |
   +-------------+--------------------------------------+
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mgr-placement=enabled
   +-------------+--------------------------------------+
   | Property    | Value                                |
   +-------------+--------------------------------------+
   | uuid        | 11bec756-0d76-4fce-89a6-c1f0a0677f4a |
   | host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
   | label_key   | ceph-mgr-placement                   |
   | label_value | enabled                              | 
   +-------------+--------------------------------------+
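   # Optionally verify that both placement labels are visible on the Kubernetes node
   # (a sketch; the tr/grep filtering is only for readability):
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get node controller-0 --show-labels | tr ',' '\n' | grep ceph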
   
  • Identify the disk to use for the BlueStore OSD
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list 1
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
   | device_node | device_num | device_type | size_gib | available_gib | rpm | device_path                                |
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
   | /dev/sda    | 2048       | SSD         | 520.0    | 0.0           | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
   | /dev/sdb    | 2064       | SSD         | 520.0    | 519.996       | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
   | /dev/sdc    | 2080       | SSD         | 520.0    | 519.996       | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
   
  • Make sure that the disk has been purged
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
   
  • Upload the app (if needed)
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-upload ~/rook-ceph-apps-23.09-42.tgz
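   # Optionally confirm that the app shows as 'uploaded' before applying
   # (same commands the apply step below refers to):
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-list
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-show rook-ceph-apps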
   
  • Set up the overrides (e.g., using sdb as the OSD)
   [sysadmin@controller-0 ~(keystone_admin)]$ cat <<EOF > /home/sysadmin/ceph-values.yml
   cephClusterSpec:
     storage:
       useAllNodes: false
       useAllDevices: false
       nodes:
       - name: controller-0
         devices:
         - name: /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0
   EOF
   [sysadmin@controller-0 ~(keystone_admin)]$ cat /home/sysadmin/ceph-values.yml
  • Set the overrides
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-list rook-ceph-apps
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-update rook-ceph-apps rook-ceph-cluster rook-ceph --values /home/sysadmin/ceph-values.yml
   
  • Show the current overrides
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-cluster rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-provisioner rook-ceph
   
  • Apply the app
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-apply rook-ceph-apps
   +---------------+----------------------------------+
   | Property      | Value                            |
   +---------------+----------------------------------+
   | active        | False                            |
   | app_version   | 23.09-41                         |
   | created_at    | 2023-12-18T23:25:07.201915+00:00 |
   | manifest_file | fluxcd-manifests                 |
   | manifest_name | rook-ceph-apps-fluxcd-manifests  |
   | name          | rook-ceph-apps                   |
   | progress      | None                             |
   | status        | applying                         |
   | updated_at    | 2023-12-19T16:13:46.924152+00:00 |
   +---------------+----------------------------------+
   Please use 'system application-list' or 'system application-show rook-ceph-apps' to view the current progress.
   [sysadmin@controller-0 ~(keystone_admin)]$ system application-list
   +--------------------------+----------+-------------------------------------------+----------+-----------+
   | application              | version  | manifest name                             | status   | progress  |
   +--------------------------+----------+-------------------------------------------+----------+-----------+
   | cert-manager             | 1.0-69   | cert-manager-fluxcd-manifests             | applied  | completed |
   | dell-storage             | 1.0-6    | dell-storage-fluxcd-manifests             | uploaded | completed |
   | nginx-ingress-controller | 1.0-50   | nginx-ingress-controller-fluxcd-manifests | applied  | completed |
   | oidc-auth-apps           | 1.0-45   | oidc-auth-apps-fluxcd-manifests           | uploaded | completed |
   | platform-integ-apps      | 1.2-119  | platform-integ-apps-fluxcd-manifests      | uploaded | completed |
   | rook-ceph-apps           | 23.09-41 | rook-ceph-apps-fluxcd-manifests           | applied  | completed |
   +--------------------------+----------+-------------------------------------------+----------+-----------+
  • Verify that Rook is running
   [sysadmin@controller-0 ~(keystone_admin)]$ NS=rook-ceph
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get pods -n ${NS} -o wide -w 
   NAME                                            READY   STATUS      RESTARTS   AGE     IP              NODE         
   csi-cephfsplugin-ks2wb                          3/3     Running     0          15m     192.168.206.2   controller-0   
   csi-cephfsplugin-provisioner-6584bd6d89-4v2gs   6/6     Running     0          15m     172.16.192.90   controller-0 
   csi-rbdplugin-provisioner-f44bf5585-9hmpf       6/6     Running     0          15m     172.16.192.88   controller-0 
   csi-rbdplugin-sfsfl                             3/3     Running     0          15m     192.168.206.2   controller-0 
   rook-ceph-mds-kube-cephfs-a-b554b68f9-rk4gg     1/1     Running     0          15m     172.16.192.85   controller-0 
   rook-ceph-mds-kube-cephfs-b-5fdc77cd4f-g7j28    1/1     Running     0          15m     172.16.192.66   controller-0 
   rook-ceph-mgr-a-c9fd598f9-ktb95                 1/1     Running     0          15m     172.16.192.93   controller-0 
   rook-ceph-mon-a-6d9787757d-6pcn4                1/1     Running     0          16m     172.16.192.87   controller-0 
   rook-ceph-operator-766c496b49-ph8s5             1/1     Running     0          2m29s   172.16.192.89   controller-0 
   rook-ceph-osd-0-7b46d779bf-79gjz                1/1     Running     0          64s     172.16.192.98   controller-0 
   rook-ceph-osd-prepare-controller-0-v4jx5        0/1     Completed   0          99s     172.16.192.97   controller-0 
   rook-ceph-tools-6dd564bffd-qnvvd                1/1     Running     0          16m     172.16.192.65   controller-0 
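  • (Optional) The Rook operator also records cluster state in the CephCluster custom resource; a quick way to check it (a sketch):
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get cephclusters -n rook-ceph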
  • Verify that the Ceph cluster is operational
   [sysadmin@controller-0 ~(keystone_admin)]$ ceph -s
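   # If the ceph CLI is not available directly on the controller, the same check can be run
   # from the rook-ceph-tools deployment listed in the pod output above (a sketch):
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s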
  • Verify the Ceph version
   [sysadmin@controller-0 ~(keystone_admin)]$ ceph version
   ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
  • Make sure that the disk is now reported as in use
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
   | device_node | device_num | device_type | size_gib | available_gib | rpm | device_path                                |
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
   | /dev/sda    | 2048       | SSD         | 520.0    | 0.0           | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
   | /dev/sdb    | 2064       | SSD         | 520.0    | 0.0           | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
   | /dev/sdc    | 2080       | SSD         | 520.0    | 519.996      | N/A | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
   +-------------+------------+-------------+----------+---------------+-----+--------------------------------------------+
  • Make sure that the volume group is reported
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-lvg-list controller-0
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
   | LVG Name          | State       | Access | Total Size (GiB) | Avail Size (GiB) | Current PVs | Current LVs |
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
   | cgts-vg           | provisioned | wz--n- | 488.406          | 261.593          | 1           | 15          |
   | ceph-2e5d9569-... | provisioned | wz--n- | 519.996          | 0.0              | 1           | 1           |
   +-------------------+-------------+--------+------------------+------------------+-------------+-------------+
  • Make sure that the physical volume is reported
   [sysadmin@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0 --nowrap
   +-------------+--------------------------+--------------------------------------------------+----------+-----------+--------------+
   | lvm_pv_name | disk_or_part_device_node | disk_or_part_device_path                         | pv_state | pv_type   | lvm_vg_name  |
   +-------------+--------------------------+--------------------------------------------------+----------+-----------+--------------+
   | /dev/sdb    | /dev/sdb                 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0       | provisnd | disk      | ceph-2e5d... |
   | /dev/sda5   | /dev/sda5                | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0-part5 | provisnd | partition | cgts-vg      |
   +-------------+--------------------------+--------------------------------------------------+----------+-----------+--------------+
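  • (Optional) As a quick end-to-end check, list the storage classes created by the app and confirm that a small test PVC binds against one of them. This is only a sketch: 'general' is an assumed storage class name (substitute one reported by the first command) and 'rook-test-pvc' is a throwaway name.
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get storageclass
   [sysadmin@controller-0 ~(keystone_admin)]$ cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: rook-test-pvc          # throwaway name used only for this check
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: general    # assumption: replace with a storage class listed above
   EOF
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl get pvc rook-test-pvc
   [sysadmin@controller-0 ~(keystone_admin)]$ kubectl delete pvc rook-test-pvc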