Application: Rook Ceph
Source
Building
- From the Debian build environment:
build-pkgs -c -p rook-helm,python3-k8sapp-rook,stx-rook-ceph-helm
Testing
- Add the Rook storage backend
sysadmin@controller-0:~$ source /etc/platform/openrc
[sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook

WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.

Please set the 'confirmed' field to execute this operation for the ceph-rook backend.

[sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph-rook --confirmed

System configuration has changed.
Please follow the administrator guide to complete configuring the system.

+----------+-----------------+-----------+-------------+----------------------------------------+----------+--------------+
| uuid     | name            | backend   | state       | task                                   | services | capabilities |
+----------+-----------------+-----------+-------------+----------------------------------------+----------+--------------+
| ######   | ceph-rook-store | ceph-rook | configuring | {'controller-0': 'applying-manifests'} | None     |              |
+----------+-----------------+-----------+-------------+----------------------------------------+----------+--------------+

[sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+-----------------+-----------+------------+------+----------+--------------+
| uuid                                 | name            | backend   | state      | task | services | capabilities |
+--------------------------------------+-----------------+-----------+------------+------+----------+--------------+
| 4b685f39-d7e8-47d2-9050-008681d7a395 | ceph-rook-store | ceph-rook | configured | None | None     |              |
+--------------------------------------+-----------------+-----------+------------+------+----------+--------------+
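The backend stays in the configuring state while the manifests are applied. One way to poll until it reports configured is to re-run the list command shown above on an interval; a minimal sketch using only commands from this step:

# Re-check every 5 seconds until the state column reads "configured"
watch -n 5 system storage-backend-list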
- Label node
sysadmin@controller-0:~$ source /etc/platform/openrc
[sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mon-placement=enabled
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| uuid        | 719c3828-3806-4711-8824-774c1037316d |
| host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
| label_key   | ceph-mon-placement                   |
| label_value | enabled                              |
+-------------+--------------------------------------+
[sysadmin@controller-0 ~(keystone_admin)]$ system host-label-assign controller-0 ceph-mgr-placement=enabled
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| uuid        | 11bec756-0d76-4fce-89a6-c1f0a0677f4a |
| host_uuid   | aed595fc-9fba-4aab-981a-ae2c04193689 |
| label_key   | ceph-mgr-placement                   |
| label_value | enabled                              |
+-------------+--------------------------------------+
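To confirm both placement labels landed on the host, the assigned labels can be listed back; a sketch (assumes the host-label-list subcommand is available in this build):

# Expect ceph-mon-placement=enabled and ceph-mgr-placement=enabled in the output
system host-label-list controller-0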
- Identify the disk to use for the BlueStore OSD
[sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list 1
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm | serial_id           | device_path                                |
|                                      | de        | num     | type    | gib   | gib        |     |                     |                                            |
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
| 2f17be84-6e5f-4dcd-b998-227e9a2da5e0 | /dev/sda  | 2048    | SSD     | 520.0 | 0.0        | N/A | VB1657fd25-17a08d15 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 3c14caa3-084a-4c97-b498-68459e3ca0e8 | /dev/sdb  | 2064    | SSD     | 520.0 | 519.996    | N/A | VB1b640dbf-f500b14e | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 7d43b443-0066-401c-9725-00f6b110281d | /dev/sdc  | 2080    | SSD     | 520.0 | 519.996    | N/A | VBf08c1b36-a97efc64 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
- Make sure that the disk has been purged
[sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
- Upload the app (if needed)
[sysadmin@controller-0 ~(keystone_admin)]$ system application-upload ~/rook-ceph-apps-23.09-42.tgz
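The upload runs asynchronously; before setting overrides it is worth confirming the app reaches the uploaded state. A sketch reusing commands that appear later on this page:

# Status should progress from "uploading" to "uploaded"
system application-show rook-ceph-apps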
- Set up the overrides
[sysadmin@controller-0 ~(keystone_admin)]$ cat <<EOF > /home/sysadmin/ceph-values.yml
cephClusterSpec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: controller-0
      devices:
      - name: sdb
EOF
[sysadmin@controller-0 ~(keystone_admin)]$ cat /home/sysadmin/ceph-values.yml
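This override restricts Rook to an explicit node and device list instead of letting it claim every disk. If a second empty disk should also become an OSD, the same structure extends with another entry; a hypothetical variant (sdc is taken from the disk listing above and would need the same wipe first):

cephClusterSpec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: controller-0
      devices:
      - name: sdb
      - name: sdc    # hypothetical second OSD disk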
- Set the overrides
[sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-list rook-ceph-apps
[sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-update rook-ceph-apps rook-ceph-cluster rook-ceph --values /home/sysadmin/ceph-values.yml
- Show the current overrides
[sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph rook-ceph
[sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-cluster rook-ceph
[sysadmin@controller-0 ~(keystone_admin)]$ system helm-override-show rook-ceph-apps rook-ceph-provisioner rook-ceph
- Apply the app
[sysadmin@controller-0 ~(keystone_admin)]$ system application-apply rook-ceph-apps
+---------------+----------------------------------+
| Property      | Value                            |
+---------------+----------------------------------+
| active        | False                            |
| app_version   | 23.09-41                         |
| created_at    | 2023-12-18T23:25:07.201915+00:00 |
| manifest_file | fluxcd-manifests                 |
| manifest_name | rook-ceph-apps-fluxcd-manifests  |
| name          | rook-ceph-apps                   |
| progress      | None                             |
| status        | applying                         |
| updated_at    | 2023-12-19T16:13:46.924152+00:00 |
+---------------+----------------------------------+
Please use 'system application-list' or 'system application-show rook-ceph-apps' to view the current progress.
[sysadmin@controller-0 ~(keystone_admin)]$ system application-list
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
| application              | version  | manifest name                             | manifest file    | status   | progress  |
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
| cert-manager             | 1.0-69   | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
| dell-storage             | 1.0-6    | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
| nginx-ingress-controller | 1.0-50   | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
| oidc-auth-apps           | 1.0-45   | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
| platform-integ-apps      | 1.2-119  | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
| rook-ceph-apps           | 23.09-41 | rook-ceph-apps-fluxcd-manifests           | fluxcd-manifests | applied  | completed |
+--------------------------+----------+-------------------------------------------+------------------+----------+-----------+
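Applying can take several minutes. Per the hint in the apply output above, progress can be polled until it completes; a minimal sketch:

# Re-run until status shows "applied" and progress shows "completed"
watch -n 10 "system application-show rook-ceph-apps"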
- Verify that Rook is running
[sysadmin@controller-0 ~(keystone_admin)]$ NS=rook-ceph
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl get pods -n ${NS} -o wide -w
NAME                                            READY   STATUS      RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
csi-cephfsplugin-ks2wb                          3/3     Running     0          15m     192.168.206.2   controller-0   <none>           <none>
csi-cephfsplugin-provisioner-6584bd6d89-4v2gs   6/6     Running     0          15m     172.16.192.90   controller-0   <none>           <none>
csi-rbdplugin-provisioner-f44bf5585-9hmpf       6/6     Running     0          15m     172.16.192.88   controller-0   <none>           <none>
csi-rbdplugin-sfsfl                             3/3     Running     0          15m     192.168.206.2   controller-0   <none>           <none>
rook-ceph-mds-kube-cephfs-a-b554b68f9-rk4gg     1/1     Running     0          15m     172.16.192.85   controller-0   <none>           <none>
rook-ceph-mds-kube-cephfs-b-5fdc77cd4f-g7j28    1/1     Running     0          15m     172.16.192.66   controller-0   <none>           <none>
rook-ceph-mgr-a-c9fd598f9-ktb95                 1/1     Running     0          15m     172.16.192.93   controller-0   <none>           <none>
rook-ceph-mon-a-6d9787757d-6pcn4                1/1     Running     0          16m     172.16.192.87   controller-0   <none>           <none>
rook-ceph-operator-766c496b49-ph8s5             1/1     Running     0          2m29s   172.16.192.89   controller-0   <none>           <none>
rook-ceph-osd-0-7b46d779bf-79gjz                1/1     Running     0          64s     172.16.192.98   controller-0   <none>           <none>
rook-ceph-osd-prepare-controller-0-v4jx5        0/1     Completed   0          99s     172.16.192.97   controller-0   <none>           <none>
rook-ceph-tools-6dd564bffd-qnvvd                1/1     Running     0          16m     172.16.192.65   controller-0   <none>           <none>
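As a non-interactive alternative to watching the pod list, the operator deployment can be waited on directly; a sketch (deployment name taken from the pod listing above):

# Block until the rook-ceph-operator deployment reports all replicas ready
kubectl -n ${NS} rollout status deployment/rook-ceph-operator --timeout=300s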
- Verify that the cluster is operational
[sysadmin@controller-0 ~(keystone_admin)]$ ROOK_TOOLS_POD=$(kubectl -n ${NS} get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n ${NS} exec -it $ROOK_TOOLS_POD -- ceph -s
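For a healthy single-node cluster, the status should report HEALTH_OK with one mon, one mgr, and one OSD up and in. Illustrative output only; the cluster id, ages, and sizes below are placeholders, not captured from a real run:

  cluster:
    id:     <fsid>
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 17m)
    mgr: a(active, since 16m)
    mds: kube-cephfs:1 {0=kube-cephfs-a=up:active} 1 up:standby-replay
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)

  data:
    pools:   3 pools, 192 pgs
    objects: 22 objects, 2.2 KiB
    usage:   1.0 GiB used, 519 GiB / 520 GiB avail
    pgs:     192 active+clean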
- Verify the Ceph version
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n ${NS} exec -it $ROOK_TOOLS_POD -- ceph version
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
- Make sure that the disk is reported as in use
[sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm | serial_id           | device_path                                |
|                                      | de        | num     | type    | gib   | gib        |     |                     |                                            |
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
| 2f17be84-6e5f-4dcd-b998-227e9a2da5e0 | /dev/sda  | 2048    | SSD     | 520.0 | 0.0        | N/A | VB1657fd25-17a08d15 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 3c14caa3-084a-4c97-b498-68459e3ca0e8 | /dev/sdb  | 2064    | SSD     | 520.0 | 0.0        | N/A | VB1b640dbf-f500b14e | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 7d43b443-0066-401c-9725-00f6b110281d | /dev/sdc  | 2080    | SSD     | 520.0 | 519.996    | N/A | VBf08c1b36-a97efc64 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-----------+---------+---------+-------+------------+-----+---------------------+--------------------------------------------+
- Make sure that the volume group is reported
[sysadmin@controller-0 ~(keystone_admin)]$ system host-lvg-list controller-0
+--------------------------------------+-------------------------------------------+-------------+--------+------------------+------------------+-------------+-------------+
| UUID                                 | LVG Name                                  | State       | Access | Total Size (GiB) | Avail Size (GiB) | Current PVs | Current LVs |
+--------------------------------------+-------------------------------------------+-------------+--------+------------------+------------------+-------------+-------------+
| 457be756-90ae-45e0-8c2e-40eda89ada74 | cgts-vg                                   | provisioned | wz--n- | 488.406          | 261.593          | 1           | 15          |
| e3c7eb8e-0c64-4d38-a933-95dc9f3deec5 | ceph-2e5d9569-a999-4966-9035-8ddb50082ec7 | provisioned | wz--n- | 519.996          | 0.0              | 1           | 1           |
+--------------------------------------+-------------------------------------------+-------------+--------+------------------+------------------+-------------+-------------+
- Make sure that the physical volume is reported
[sysadmin@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0 --nowrap
+--------------------------------------+-------------+--------------------------------------+--------------------------+--------------------------------------------------+-------------+-----------+-------------------------------------------+--------------------------------------+
| uuid                                 | lvm_pv_name | disk_or_part_uuid                    | disk_or_part_device_node | disk_or_part_device_path                         | pv_state    | pv_type   | lvm_vg_name                               | ihost_uuid                           |
+--------------------------------------+-------------+--------------------------------------+--------------------------+--------------------------------------------------+-------------+-----------+-------------------------------------------+--------------------------------------+
| c0f3a423-49a6-489d-ba56-78bd4e580082 | /dev/sdb    | 3c14caa3-084a-4c97-b498-68459e3ca0e8 | /dev/sdb                 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0       | provisioned | disk      | ceph-2e5d9569-a999-4966-9035-8ddb50082ec7 | aed595fc-9fba-4aab-981a-ae2c04193689 |
| d682bfe0-f842-4fa1-b945-5f1520c47c23 | /dev/sda5   | 4b7b7956-238c-4da3-b777-8ff2d5262669 | /dev/sda5                | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0-part5 | provisioned | partition | cgts-vg                                   | aed595fc-9fba-4aab-981a-ae2c04193689 |
+--------------------------------------+-------------+--------------------------------------+--------------------------+--------------------------------------------------+-------------+-----------+-------------------------------------------+--------------------------------------+
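The same OSD can also be cross-checked from inside the cluster, reusing the toolbox pod variable from the verification step above; a sketch:

# The tree should show osd.0 with status "up" hosted on controller-0
kubectl -n ${NS} exec -it $ROOK_TOOLS_POD -- ceph osd tree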