StarlingX/Containers/Applications/app-ceph
Application: platform-armada-app
Note: This application repo will be renamed in the future to app-ceph so that the armada reference is dropped and it is aligned with the new naming convention.
Source
Building
- From the Debian Build environment:
TBD
Testing
Once you have platform-integ-apps applied, some of the tests you can run are:
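Before running the tests you can confirm that the application is in the applied state; a minimal check (the exact output columns may vary between releases) is:
   system application-list | grep platform-integ-apps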
Using Cephfs provisioner
1. Create a PVC
   a. Consider the PVC example.
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: cephfs-pvc
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: cephfs
   b. Use "kubectl create -f <file.yaml>"
   c. Check with "kubectl get pvc" if the PVC status is Bound.
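For example, if the manifest above is saved as cephfs-pvc.yaml (a file name chosen here only for illustration), steps b and c look roughly like this, with the STATUS column expected to show Bound:
   kubectl create -f cephfs-pvc.yaml
   kubectl get pvc cephfs-pvc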
2. Create a pod
   a. Consider the pod example.
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: csi-cephfs-demo-pod
      spec:
        containers:
          - name: web-server
            image: docker.io/library/nginx:latest
            volumeMounts:
              - name: mypvc
                mountPath: /var/lib/www
        volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: cephfs-pvc
              readOnly: false
   b. Use "kubectl create -f <file.yaml>"
   c. Check with "kubectl get pods" that the pod is running and successfully attached to the PVC.
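To confirm the claim is actually attached, you can inspect the pod; these commands are illustrative and assume the names from the example above:
   kubectl get pod csi-cephfs-demo-pod
   kubectl describe pod csi-cephfs-demo-pod | grep -A3 mypvc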
3. Resize the PVC created
   a. Use "kubectl edit pvc <pvc_name>"
   b. Increase the PVC size in the resources.requests.storage field
   c. After a few seconds, check with "kubectl get pvc" or "kubectl describe pvc <pvc_name>" that the PVC capacity changed
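If you prefer a non-interactive resize over "kubectl edit", a patch along these lines should also work (2Gi is just an example target size):
   kubectl patch pvc cephfs-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
   kubectl get pvc cephfs-pvc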
4. Create a Volume Snapshot Class
   a. Check existing overrides for the cephfs-provisioner chart. You will refer to this information in the following step.
      system helm-override-show platform-integ-apps cephfs-provisioner kube-system
   b. Update the 'snapshotClass.create' field to 'true' via helm
      system helm-override-update platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=true
   c. Apply the overrides
      system application-apply platform-integ-apps
   d. After a few seconds, confirm the creation of the Volume Snapshot Class
      ~(keystone_admin)]$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
      NAME              DRIVER                DELETIONPOLICY   AGE
      cephfs-snapshot   cephfs.csi.ceph.com   Delete           5s
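The re-apply may take a little while; one way to follow it (command shown for illustration) is to poll the application until its status returns to applied:
   system application-show platform-integ-apps | grep status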
5. Create a PVC snapshot
   a. Consider the Cephfs Volume Snapshot yaml example
      ---
      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: <cephfs-pvc-snapshot-name>
      spec:
        volumeSnapshotClassName: cephfs-snapshot
        source:
          persistentVolumeClaimName: <cephfs-pvc-name>
   b. Replace the values in the 'persistentVolumeClaimName' and 'name' fields
   c. Create the Volume Snapshot
      kubectl create -f cephfs-volume-snapshot.yaml
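Once created, the snapshot can be listed with kubectl; the READYTOUSE column should eventually show true (exact output shape may vary by release):
   kubectl get volumesnapshot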
Using RBD provisioner
1. Create a PVC
   a. Consider the PVC example.
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: rbd-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: general
   b. Use "kubectl create -f <file.yaml>"
   c. Check with "kubectl get pvc" if the PVC status is Bound.
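As with the CephFS case, saving the manifest (for example as rbd-pvc.yaml, a name chosen only for illustration) and creating it should leave the claim Bound against the general storage class:
   kubectl create -f rbd-pvc.yaml
   kubectl get pvc rbd-pvc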
2. Create a pod
   a. Consider the pod example.
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: csi-rbd-demo-pod
      spec:
        containers:
          - name: web-server
            image: docker.io/library/nginx:latest
            volumeMounts:
              - name: mypvc
                mountPath: /var/lib/www/html
        volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: rbd-pvc
              readOnly: false
   b. Use "kubectl create -f <file.yaml>"
   c. Check with "kubectl get pods" that the pod is running and successfully attached to the PVC.
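A quick sanity check that the RBD-backed volume is mounted where expected (commands are illustrative and use the names from the example above):
   kubectl get pod csi-rbd-demo-pod
   kubectl exec csi-rbd-demo-pod -- df -h /var/lib/www/html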
3. Resize the PVC created
   a. Use "kubectl edit pvc <pvc_name>"
   b. Increase the PVC size in the resources.requests.storage field
   c. After a few seconds, check with "kubectl get pvc" or "kubectl describe pvc <pvc_name>" that the PVC capacity changed
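The same non-interactive patch shown for the CephFS PVC applies here as well, for example:
   kubectl patch pvc rbd-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'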
4. Create a Volume Snapshot Class
   a. Check existing overrides for the rbd-provisioner chart. You will refer to this information in the following step.
      system helm-override-show platform-integ-apps rbd-provisioner kube-system
   b. Update the 'snapshotClass.create' field to 'true' via helm
      system helm-override-update platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=true
   c. Apply the overrides
      system application-apply platform-integ-apps
   d. After a few seconds, confirm the creation of the Volume Snapshot Class
      ~(keystone_admin)]$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
      NAME           DRIVER             DELETIONPOLICY   AGE
      rbd-snapshot   rbd.csi.ceph.com   Delete           5s
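As in the CephFS case, the re-apply can be monitored until platform-integ-apps shows the applied status again, for example:
   system application-list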
5. Create a PVC snapshot
   a. Consider the RBD Volume Snapshot yaml example
      ---
      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: <rbd-pvc-snapshot-name>
      spec:
        volumeSnapshotClassName: rbd-snapshot
        source:
          persistentVolumeClaimName: <rbd-pvc-name>
   b. Replace the values in the 'persistentVolumeClaimName' and 'name' fields
   c. Create the Volume Snapshot
      kubectl create -f rbd-volume-snapshot.yaml
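One simple way to verify the snapshot is usable once it has been created is to query its readyToUse status (illustrative; assumes the snapshot name used above):
   kubectl get volumesnapshot <rbd-pvc-snapshot-name> -o jsonpath='{.status.readyToUse}'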