
StarlingX/stx.2018.10 Testplan Instructions

# DOMAIN TEST CASE STEPS/PARAMETERS EXPECTED RESULTS PRIORITY SIMPLEX BM SIMPLEX VIRT DUPLEX BM DUPLEX VIRT MULTINODE LOCAL STORAGE BM MULTINODE LOCAL STORAGE VIRT MULTINODE EXTERNAL STORAGE BM MULTINODE EXTERNAL STORAGE VIRT StoryBoard ID
1 System Power down/up recovery "- Cannot 'Power Off' an unlocked host; the host must be locked first.
- Cannot 'lock' or 'power off' an active controller in Multinode via CLI or Horizon; check whether it is possible on Simplex and Duplex.
- An active controller can be powered off/on via VM.
- All hosts can be powered off and powered on via VM.

$ source /etc/nova/openrc

To show the hosts to power down/up:
$ system host-list

To lock a host:
$ system host-lock <hostname>

To power off a host:
$ system host-power-off <hostname>

To power on a host:
$ system host-power-on <hostname>

To unlock a host:
$ system host-unlock <hostname>
"
"- Hosts can lock, turn off and turn on.
- When unlock, a host reboot is extected
- Verify system is up

Note:
- If an active controller is power off via VM, the second controller turns ""Active"" and such powered off turns ""Standby"" when it is powered on."
1 x x x x x x x x
2 System Lock unlock active controller "Lock and unlock the active controller. Although the name says AIO, this test can be run on all configurations with at least 2 VMs up and running
$ source /etc/nova/openrc

- Cannot 'lock' an active controller in Multinode via CLI or Horizon; check whether it is possible on Simplex and Duplex.

To unlock active controller:
$ system host-unlock controller-0
"
- Verify all VMs are up and running. -1 x x x x x x x x
3 System pmon monitored process "Get the list of processes monitored by pmon (/etc/pmon.d). A verification sketch is given after this test case.

$ source /etc/nova/openrc

To list all monitored processes:
$ ls /etc/pmon.d/
"
"- File should be available
- File should contain a list of process"
1 x x x x x x x x
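For the preceding pmon test case, a minimal shell sketch to confirm that each monitored process is actually running. It assumes each file in /etc/pmon.d/ is named after the process it monitors (<process>.conf), which may not hold for every entry; long process names may also need pgrep -f instead of -x:

for conf in /etc/pmon.d/*.conf; do
    proc=$(basename "$conf" .conf)        # assumed: file name matches the monitored process name
    if pgrep -x "$proc" > /dev/null; then
        echo "$proc: running"
    else
        echo "$proc: NOT running"
    fi
done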
4 System Verify branding "Display System Information via CLI and GUI
Horizon: Admin -> System -> System Information

$ source /etc/nova/openrc

To display System Information via CLI:
$ system service-list

To display System Information via Horizon:
Project -> Admin -> System -> System Information
"
"In system information you should look and see
Services
Compute Services
Block Storage Services
Network Agent
*This can be different in Starling-X*"
1 x x x x x x x x
5 System Lock/unlock 10 times "- Cannot 'lock' an active controller in Multinode via CLI or Horizon; check whether it is possible on Simplex and Duplex.

$ source /etc/nova/openrc

To show hosts to lock/unlock:
$ system host-list

To lock a host:
$ system host-lock <hostname>

To unlock a host:
$ system host-unlock <hostname>

Repeat 10 times (a loop sketch is given after this test case)
"
"- At the begining hosts must be 'unlocked', 'enabled' and 'available'.
- After lock the host should be 'locked'.
- When unlock, a host reboot is expected.
- When reboot finished hosts must be 'unlocked', 'enabled' and 'available'.
- No issues during this process

Notes:
Cannot lock an active controller via CLI nor Horizon"
-1 x x x x x x x x
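A minimal loop sketch for the repetition in the preceding test case; the hostname and the sleep values are placeholders and should be adjusted so each unlock reboot completes before the next iteration:

source /etc/nova/openrc
HOST=compute-0                      # placeholder hostname
for i in $(seq 1 10); do
    echo "Iteration $i"
    system host-lock $HOST
    sleep 60                        # wait for the host to report 'locked'
    system host-unlock $HOST
    sleep 600                       # wait for the unlock reboot to finish
    system host-list | grep $HOST   # expect 'unlocked', 'enabled', 'available'
done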
6 System Verify host lock/unlock soak - 10 iterations of all host types "controller:
$ source /etc/nova/openrc
~(keystone_admin)$ system host-unlock controller-0
10 times

- Cannot 'lock' an active controller in Multinode via CLI or Horizon; check whether it is possible on Simplex and Duplex.

$ source /etc/nova/openrc

To show hosts to lock/unlock:
$ system host-list

To lock a host:
$ system host-lock <hostname>

To unlock a host:
$ system host-unlock <hostname>

Repeat 10 times
"
"- At the begining hosts must be 'unlocked', 'enabled' and 'available'.
- After lock the host should be 'locked'.
- When unlock, a host reboot is expected.
- When reboot finished hosts must be 'unlocked', 'enabled' and 'available'.
- No issues during this process
- No issues during soak operation

Notes:
Cannot lock an active controller via CLI nor Horizon
"
-1 x x x x x x x x
7 System Verify several controller swacts "To show the controllers:
$ system host-list | grep ""controller""

To swact controller:
$ system host-swact <hostname>
"
"- If you swact controller-0, controller-0 must be as 'Controller-Standby' and controller-1 must be as 'Controller-Active' or vice-versa
- No error messages after each swact action

Notes:
- swact can be done only to controllers"
-1 - - x x x x x x
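A minimal repetition sketch for the preceding swact test case, alternating the swact between the two controllers; the iteration count and settle time are placeholders, and each swact must be issued against the controller that is currently active:

source /etc/nova/openrc
for i in $(seq 1 5); do
    system host-swact controller-0   # when controller-0 is active
    sleep 300                        # let services stabilize on controller-1
    system host-swact controller-1   # when controller-1 is active
    sleep 300
done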
8 System "Power down/up recovery recovery of system with storage
(dead office recovery)"
"1) Perform DOR
power down all nodes using vlm and script
wait for few sec
power up all nodes using vlm
"
"- 1) All nodes in the system recover after DOR
2) Storage node can be successfully deleted and re-added
3) Verify all alarms are raised and cleared within reasonable time.
4) Verify all VMs are assigned with Ip Address.
5)Verify the status of VMs are in ERROR after controller is up and before the computes are up.
"
1 - - - - x x x x
9 System Evacuation of VM via rebooting a compute node Evacuation of VM via rebooting a compute node VMs should be working -1 - - - - x x x x
10 System Guest failures: kill VM Guest failures: kill VM 1 x x x x x x x x
11 System Guest failures: reboot VM Guest failures: reboot VM -1 x x x x x x x x
12 System Halt -f on active controller and confirm other controller takes activity; also then confirm can launch new VMs Halt -f on active controller Launch a VM after the activity is switched "1) confirm other controller takes activity
2) No change to the running services
3) No traffic loss
4) VM is launched successfully
5) Verify VM can read/write to a file"
1 - - x x x x x x
13 System kill Compute node and re-schedule VM elsewhere and validate no issues with re-attaching to Cinder Volume "Boot several VMs using cinder volumes (using CEPH for storage)
VMs are writing to disk while the test is performed.
use the following (or similar) script to write to VM's cinder volume

while (true)
do
date >> /root/write-test.bin
dd if=/dev/random of=/root/write-test.bin bs=1024k count=1
sleep 1
done
"
"- Verify that after reboot -f of the compute node (where the VM is running) VM is relaunched successfully

and that it is able to continue writing to disk

(the dd script/command will not be persistent, so it will have to be re-executed)
"
-1 - - x x x x x x
14 System Kill kvm process multiple times-10 verify VM instance recovery Verify Kill of kvm process multiple times-10 verify VM instance recovery 1 x x x x x x x x
15 System Lock/unlock standby controller and swact 10 times "To show the controllers:
$ system host-list | grep ""controller""

To swact controller:
$ system host-swact <hostname>
"
"- At the begining hosts must be 'unlocked', 'enabled' and 'available'.
- After lock the host should be 'locked'.
- When unlock, a host reboot is expected.
- When reboot finished hosts must be 'unlocked', 'enabled' and 'available'.
- No issues during this process

Notes:
Cannot lock an active controller via CLI nor Horizon

- If you swact controller-0, controller-0 must be as 'Controller-Standby' and controller-1 must be as 'Controller-Active' or vice-versa
- No error messages after each swact action

Notes:
- swact can be done only to controllers"
1 - - x x x x x x
16 System Lock/unlock a compute "1.Start with 2 (and later 10) VMs on Compute with traffic
2.VMs with cinder volumes and pgbench running
3.Run mtce lock and then unlock


$ source /etc/nova/openrc

To show hosts to lock/unlock:
$ system host-list

To lock a host:
$ system host-lock <hostname>

To unlock a host:
$ system host-unlock <hostname>


"
"1.Lock success seeing state changes on GUI for migrating instances
2.No impact on traffic or ping tests to VMs being moved (or any other VMs in system)
3.No impact on controller activity (ie no swact on controller – this has been seen once before so keep an eye out) or on any other Compute blades.
4.Unlock successful with reboot completing in 3-4 minutes.
Note:

Ping loss can occur on management IPs when routers exist on the compute that is going to reboot, since the router is forced to migrate as well as the VMs.

Use a compute without a router on it (to check: neutron host-list, neutron router-show); then no ping loss should occur. If a router exists on the compute, ping loss is expected.

If the router is not on the compute being reset, no more than about 1 second of message loss per VM should be seen. But if the compute being reset also hosts a router, up to 20 seconds or so of interruption can be seen while the router is taken down and moved to the other compute
"
1 - - x x x x x x
17 System Recovery after DOS "1) power down the storage nodes.
2) power up the storage node"
"Verify Storage node come online and available after power on
Verify that the VM can still use the cider volumes.
Verify that a ne vm can be launced and cider volume is usable.
"
1 - - - - - - x x
18 System Run 10 Controller swacts triggered by reboot -f command "0) Validate via system host-list sanity of the base config (computes in service, controller in active/standby, all VMs in Active/Running state)
1) Validate routing to VMs
2) Use sm-dump command to check that all resources are running on the active controller. Can also continuously monitor the controller services via running the following command in an ssh session to each controller:
$ watch -n 1 -d sm-dump

3) Ensure at least 2 VMs are running, both using cinder volumes, and at least one on each compute. Within the VM run the following dd command to monitor Guest access to the filesystem:
$ while true; do date; dd if=/dev/urandom of=output.txt bs=1k count=1 conv=fsync || break ; echo ; sleep 1; done 2>&1 | tee trace.txt

1.Start with 10 VMs spread on different computes (4 on virtual env)
2. Pings to the 10 VMs management ip for monitoring
3.Choose at least 2 VMs on different compute nodes and do a ssh session per VM (login as root/root) and run the following to write to the rootfs filesystem every second:
$ while true; do date; dd if=/dev/urandom of=output.txt bs=1k count=1 conv=fsync || break ; echo ; sleep 1; done 2>&1 | tee trace.txt
4.On the active controller, issue the reboot -f command to reboot the controller.
5.After controller reboot is complete, ensure you can launch a new VM, ping or login to that new VM, and delete the VM.
6.Wait 3 to 5 minutes and then repeat steps 1-5.
"
"- 1.Ensure no ping loss to any VM
2.Ensure all VMs stay in ACTIVE/Running state
3.Ensure no traffic loss
4.Ensure inactive controller (the controller that is not rebooted) takes activity and all resources are running on the newly active controller within 30 seconds. (Monitor via sm-dump command)
5.Ensure that all resources stay on the newly active Controller (ie: the controller that is not rebooted) and no swact back occurs within next 3-5 minutes before repeat TC.
6.Ensure all filesystems in Guest remain R/W - use mount command
7.Ensure the script (while command) continues to run in each of the 2 VMs. Record how long the script hangs during the swact. It is expected to be less than 20 seconds (or certainly less than 30 secs).
8.Ensure no GUI extended hang
9. Verify drbd-cinder recovered cleanly and went back in sync (drbdadm status)
"
1 - - - - x x x x
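To check the 'no ping loss' criterion in the preceding test case, a simple timestamped ping monitor can be left running against each VM's management IP for the duration of the swacts (the IP below is a placeholder):

VM_IP=192.168.204.50                 # placeholder VM management IP
ping -i 1 $VM_IP | while read line; do echo "$(date +%T) $line"; done | tee ping-$VM_IP.log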
19 System Run 10 evacuate tests "1.Start with 5 VMs per compute node (and 2 in virtual environment)
2.Identify which controller is the inactive controller
3.All VMs are in Active/Running State
4.All VMs are pingable
5.All resources are running on the active controller
6.Horizon GUI is up
7.Cinder Volumes are setup (all VMs boot from a cinder volume)
8.Exercise the filesystem inside the VMs
while (true)
do
date >> /root/write-test.bin
dd if=/dev/random of=/root/write-test.bin bs=1024k count=1
sleep 1
done

Start by identifying a compute node which is running VMs

1.Trigger a 'VM evacuation' by issuing reboot -f to a compute node
2.Observe all VMs relaunch on remaining compute nodes in the system
3.Note the outage time
4.Repeat the steps 1-3 10 times
"
"1.All VMs successfully relaunch
2.Ping/ICMP test successfully continues after relaunch
3.All VMs remain in Active/Running state
4.No impact to Horizon GUI
5.There are no stale NFS mounts on the VMs - use mount cmd to verify that the rootfs is r/w
6.Relaunched VMs are able to continue writing to the FS (re-run the dd command)
7.There is no impact to other VMs
"
1 - - x x x x x x
20 System Run 10 VM reboots and ensure automatic recovery "1) ping to the VM from the controller
2) From the VM console window, issue a reboot command.

Run 9 times more after the VM recovers
"
"1. VM should recover and reboot complete in a reasonable amount of time (depends on guest image).
2. Ping to VM will stop for duration of the reboot but should resume shortly after the VM reboot is complete.
3. Should be able to ssh to the VM after the VM recovers.
4. Check the interfaces via ifconfig in the VM console or an ssh session
"
1 x x x x x x x x
21 System Run 15 Controller swacts triggered by reboot command "0) Validate via system host-list sanity of the base config (computes in service, controller in active/standby, all VMs in Active/Running state)
1) Validate routing to VMs
2) Validate ingress/egress traffic using Ixia
3) Use the crm status command to check that all resources are running on the active controller
1.Start with 20 VMs and traffic
2.From NAT box, use monitor_timestamp.py tool to monitor pings to 20 VMs management ip
3.Choose 2 VMs per compute node (8 in total) and from the NAT box set up an ssh session per VM (login as root/root) and run the following script which will write to the rootfs filesystem every second:
1.while true; do date; dd if=/dev/urandom of=output.txt bs=1k count=1 || break ; echo ; sleep 1; done 2>&1 | tee trace.txt
4.On the active controller, issue the reboot command to reboot the controller.
5.After the controller reboot is complete, wait 5 minutes and then repeat the sequence.
"
"- 1.Ensure no ping loss to any VM

2.Ensure all VMs stay in ACTIVE/Running state

3.Ensure no traffic loss

4.Ensure inactive controller (the controller that is not rebooted) takes activity and all resources are running on the newly active controller within 60 seconds. (Monitor via crm status command)

5.Ensure that all resources stay on the newly active Controller (ie: the controller that is not rebooted) and no swact back occurs within next 5 minutes before repeat TC.

6.Ensure all filesystems in Guest remain R/W - use mount command

7.Ensure the script (while command) continues to run in each of the 8 VMs

8.Ensure no GUI extended hang
"
-1 - - x x x x x x
22 System Run 5 controller swacts via cli followed by VM launch/delete "0) Validate via system host-list sanity of the base config (computes in service, controller in active/standby, all VMs in Active/Running state)
1) Validate routing to VMs
2) Use sm-dump command to check that all resources are running on the active controller. Can also continuously monitor the controller services via running the following command in an ssh session to each controller: watch -n 1 -d sm-dump
3) Run dd command from within the guest on at least 2 VMs, with VMs on separate computes. Also VMs must be using cinder volumes.

while true; do date; dd if=/dev/urandom of=output.txt bs=1k count=1 conv=fsync || break ; echo ; sleep 1; done 2>&1 | tee trace.txt
Initiate swact tests in the following sequence:

Loop the following block 5 times:
Using the new SM command swact the services from controller-1
<Verify that in less than 30 seconds all services are restarted on the active controller>
<Verify there is no loss of ping to all VMs>
<Verify there is no loss of traffic>
Wait 60 seconds after all services have changed activity and are stable.

Using the new SM command swact the services from controller-0
<Verify that in less than 30 seconds all services are restarted on the active controller>
<Verify there is no loss of ping to all VMs>
<Verify there is no loss of traffic>
"
"- In less than 30 seconds all services are moved and started from acitve to standby controller.
While swact is in progress traffic routing through the VMs cannot be impacted
No interrupt to ping tests:
(used to validate ping)
All VMs remain in Active/Running state
Horizon GUI is available within 30 seconds
No stale NFS on the compute nodes - use mount cmd to verify that the rootfs is r/w
Validate access to Guest to filesystem is interrupted during swact but recovers after the swact. Record how long the filesystem access is interrupted for each test. Should be less than 30 seconds.
"
1 - - x x x x x x
23 System Run cold/live migrations (10 times) "1.Start with 5 VMs on each compute node (2 on virtual env)
2.Identify which controller is the inactive controller
3.All VMs are in Active/Running State
4.All VMs are pingable
5.Ixia is injecting/receiving traffic (routed through the system)
6.All resources are running on the active controller
7.Horizon GUI is up
1.Using a 'one liner' script, find all VMs running on the previously identified compute and migrate all of them using (a possible form is sketched after this test case):
nova migrate <instance id>
nova resize-confirm <instance id>
2.Observe all VMs migrate/relaunch on remaining compute nodes in the system
3.Note the outage time
4.Repeat the steps 1-3 10 times
"
"1.All VMs successfully migrate
2.There is some interrupt to ping tests
3.All VMs remain in Active/Running state
4.Horizon GUI is available within 30 seconds
5.There are no stale NFS on the VMs - use mount cmd to verify that the rootfs is r/w
6.Migrated VMs continue routing traffic after live migration is completed
7.There is no impact to other VMs"
1 - - x x x x x x
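A possible form of the 'one liner' referenced in the preceding test case, assuming the openstack client is available and admin credentials are sourced; the compute name is a placeholder. Note that nova migrate performs a cold migration; nova live-migration can be used for live moves:

COMPUTE=compute-1                    # placeholder: compute node being drained
for vm in $(openstack server list --all-projects --host $COMPUTE -f value -c ID); do
    nova migrate "$vm"
done
# once each instance reaches VERIFY_RESIZE:
#   nova resize-confirm <instance id>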
24 Maintenance Verify add/Delete controller "This action can be performed on the active controller: delete the second controller from Horizon (Admin -> Host Inventory -> Delete Host).
"
The active controller should be able to delete the second controller 1 - - x x x x x x
25 Maintenance Neutron - Verify that the same actions that were allowed on horizon can still take place with authentication "Part 1.

Verify neutron host-list
Expected result:
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| id                                   | name      | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | down         | 3      | 0       | 0       | 0     |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
Part 2
Verify the neutron host-update command

Input and expected results:
[root@controller-0 ~(keystone_admin)]# neutron host-update compute-0 --availability=down
Updated host: compute-0
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| id                                   | name      | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | down         | 0      | 0       | 0       | 0     |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
[root@controller-0 ~(keystone_admin)]# neutron host-update compute-0 --availability=up
Updated host: compute-0
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| id                                   | name      | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | up           | 0      | 0       | 0       | 0     |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+

Part 3

Verify neutron host-show compute-0

Expected result:
+--------------+--------------------------------------+
| Field        | Value                                |
+--------------+--------------------------------------+
| agents       |                                      |
| availability | up                                   |
| created_at   | 2014-07-31 23:55:54.112188           |
| id           | 821571de-783e-48a4-b552-bfb0d0d8992c |
| name         | compute-0                            |
| ports        | 0                                    |
| routers      | 0                                    |
| subnets      | 0                                    |
| updated_at   | 2014-08-01 00:09:08.063177           |
+--------------+--------------------------------------+
"
"Part 1.

Verify neutron host-list
Expected result:
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| id | name | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | down | 3 | 0 | 0 | 0 |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
Part 2
Verify neutron host-updated command

Input and expected results:
[root@controller-0 ~(keystone_admin)]# neutron host-update compute-0 --availability=down
Updated host: compute-0
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| id | name | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | down | 0 | 0 | 0 | 0 |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+
root@controller-0 ~(keystone_admin)]# neutron host-update compute-0 --availability=up
Updated host: compute-0
[root@controller-0 ~(keystone_admin)]# neutron host-list
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| id | name | availability | agents | subnets | routers | ports |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+| 821571de-783e-48a4-b552-bfb0d0d8992c | compute-0 | up | 0 | 0 | 0 | 0 |
+--------------------------------------+-----------+--------------+--------+---------+---------+-------+

Part 3

Verify neutron host-show compute-0

Expected result:
+--------------+--------------------------------------+| Field | Value |
+--------------+--------------------------------------+| agents | | | availability | up | | created_at | 2014-07-31 23:55:54.112188 | | id | 821571de-783e-48a4-b552-bfb0d0d8992c | | name | compute-0 | | ports | 0 | | routers | 0 | | subnets | 0 | | updated_at | 2014-08-01 00:09:08.063177 |
+--------------+--------------------------------------+
"
2 - - x x x x x x
26 Maintenance Verify host power-down/power-up feature for locked compute node via CLI "1. Lock compute
system host-modify <compute-name/id> action=lock
2. Verify that compute is locked and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
3. Power down compute
system host-power-off <compute name/id>
3. Wait until host is powered-down and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'power-off'
4. Power up compute
system host-power-on <compute name/id>
5. Wait until host is powered-on and has following states:
Verify that host is powered-on and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'online', ('offline' state before 'online' can be missing)
"
"- 3. Host is powered down and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'power-off'
5. Host is powered on and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'online'
"
1 - - - - x o o x
27 Maintenance Verify host power-down/power-up feature for locked compute node via GUI "1. Lock compute
Click on 'More' button and choose 'Lock Host' option from drop-down menu
2. Verify that compute is locked and has following states:
-'Admin State' is 'Locked'
-'Operational State' is 'Disabled'
3. Power down compute
Click on 'More' button and choose 'Power-Off Host' option from drop-down menu
3. Wait until host is powered-down and has following states:
-'Admin State' is 'Locked'
-'Operational State' is 'Disabled'
-'Availability State' state is 'Power-Off'
4. Power up compute
Click on 'More' button and choose 'Power-On Host' option from drop-down menu
5. Wait until host is powered-on and has following states:
-'Admin State' is 'Locked'
-'Operational State' is 'Disabled'
-'Availability State' is 'Online'
"
"- 3. Host is powered down and has following states:
-'Admin State' is 'Locked'
-'Operational State' is 'Disabled'
-'Availability State' state is 'Power-Off'
5. Host is powered on and has following states:
-'Admin State' is 'Locked'
-'Operational State' is 'Disabled'
-'Availability State' is 'Online'
"
1 - - - - o x x o
28 Maintenance Verify host power-down/power-up feature for locked controller node via CLI "1. Lock controller
system host-modify <controller-name/id> action=lock
2. Verify that controller is locked and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
3. Power down controller
system host-power-off <controller name/id>
3. Wait until host is powered down and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'power-off'
4. Power up controller
system host-power-on <controller name/id>
5. Wait until host is powered-on and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'online'
"
"- 3. Host is powered down and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'power-off'
5. Host is powered on and has following states:
-'administrative' state is 'locked'
-'operational' state is 'disabled'
-'availability' state is 'online', ('offline' state before 'online' can be missing)

"
1 - - x o o x x o
29 Maintenance Verify host power-down/power-up feature for unlocked compute node via CLI "1. Verify that compute is unlocked and has following states:
-'administrative' state is 'unlocked'
-'operational' state is 'enabled'
2. Power down compute
system host-power-off <compute name/id>
3. Verify that power-down operation is rejected
"
"- 3. power-down operation is rejected. The following is seen on the cli
'power-down' is not a supported maintenance action
"
1 - - - - o x x o
30 Maintenance Verify host power-down/power-up feature for unlocked compute node via GUI "1. Verify that compute is unlocked and has following states:
-'Admin State' is 'Unlocked'
-'Operational State' is 'Enabled'
2. Verify that no 'Power-Off Host' option
Click on 'More' button and check that no 'Power-Off Host' option on drop-down menu
"
"- 2. No 'Power-Off Host' option




"
1 - - - - x o o x
31 Maintenance Verify host power-down/power-up feature for unlocked controller node via CLI "1. Verify that controller is unlocked and has following states:
-'administrative' state is 'unlocked'
-'operational' state is 'enabled'
2. Power down controller
system host-power-off <controller name/id>
3. Verify that power-down operation is rejected
"
"- 3. that power-own operation is rejected. Following returned reason is seen.Cannot 'Power-Off' an 'unlocked' Host; Please 'Lock' first





"
1 x o o x x o o x
32 Maintenance Verify host power-down/power-up feature for unlocked controller node via GUI "1. Verify that controller is unlocked and has following states:
-'Admin State' is 'Unlocked'
-'Operational State' is 'Enabled'
-'Availability State' is 'Available'
2. Verify that no 'Power-Off Host' option
Click on 'More' button and check that no 'Power-Off Host' option on drop-down menu
"
"- 2. no 'Power-Off Host' option





"
1 o x x o o x x o
33 Maintenance Verify host power-down/power-up feature for unlocked storage node via CLI "1. Verify that storage is unlocked and has following states:
-'administrative' state is 'unlocked'
-'operational' state is 'enabled'
2. Power down storage
system host-modify <storage name/id> action=power-off
3. Verify that power down operation is rejected
"
"- 3. Power down operation is rejected





"
1 - - - - - - x x
34 Maintenance Verify host power-down/power-up feature for unlocked storage node via GUI "1. Verify that storage is unlocked and has following states:
-'Admin State' is 'Unlocked'
-'Operational State' is 'Enabled'
2. Verify that no 'Power-Off Host' option
Click on 'More' button and check that no 'Power-Off Host' option on drop-down menu
"
"- 2. No 'Power-Off Host' option




"
1 - - - - - - x x
35 Maintenance Verify audit interval is 120 seconds for all groups after initial provisioning "
1) provision a host with BMC
2) Get the list of available sensors using ""system host-sensor-list controller-0""
3) Note the UUID of one available sensor and show its settings using:
""system host-sensor-show controller-0 <sensor-uuid>""
4) Verify that the audit interval value is defaulted to 120 secs
"
Audit interval parameter is 120 secs 2 x - x - x - x -
36 Maintenance Verify BMC auto monitoring "Via GUI:
1. Horizon -> Inventory -> Hosts -> Edit Hosts -> Board Management

Board Management Controller Type - Integrated Lights Out External
Board Management Controller MAC Address - Node MAC Address
Board Management Controller IP Address - x.x.x.x
Board Management Controller User Name - sysadmin
Board Management Controller Password - superuser
2. Verify the sensors values are read
Horizon -> Inventory -> Hosts -> <node> -> sensor
"
"1. information is displayed and matches with BIOS information
2. Check sensors have real-time values"
2 x - x - x - x -
37 Maintenance Verify BMC deprovisioning behavior (no board management controller) "Via GUI:

1. Horizon -> Inventory -> Hosts -> Edit Hosts -> Board Management
Remove all provisioned data
"
2 x - x - x - x -
38 Maintenance Verify BMC provisioning with bad IP leads to BMC access alarm "Via GUI:

1. Horizon -> Inventory -> Hosts -> Edit Hosts -> Board Management
Provide an invalid IP address
"
2 x - x - x - x -
39 Maintenance Verify changing BMC MAC or IP address to invalid values (sensors go offline) "Via GUI:

1. Horizon -> Inventory -> Hosts -> Edit Hosts -> Board Management
Provide an invalid MAC address
"
Check sensors go offline 2 x - x - x - x -
40 Maintenance Double fault evacuation of VM with controller and compute nodes "Description:
This test is a double fault scenario. Initially a power failure scenario is executed on standby controller-1. Then the power cable is pulled for compute-1 and plugged back in after a few (~30) seconds. The VMs should evacuate immediately to compute-0. Note that the VMs should not enter a fail state and wait for compute-1 to come up.
Steps:
Pre-requisite: start with a system that has been stable for up to 20 minutes
Reboot the active controller.
After the active controller recovers, reboot (or power cycle) the standby controller and 1 compute node with VMs.
Should see the new alarm for a brief period of time.
After the alarm clears, VM recovery should begin within a relatively short period of time, including evacuation of VMs that were on the compute that was rebooted.
"
"Verification points:
Ensure the VMs that were on the compute that was rebooted have been evacuated
Verify appropriate alarms are raised and cleared after a brief period of time
"
2 - - - - x x x x
41 Maintenance Controller Node HeartBeat Failure Handling "1. Reboot inactive controller
reboot -f
"
"2. Verify that controller rebooted only once
"
2 - - x x x x x x
42 Maintenance Verify mtce heartbeat parameters can be modified by user and result are accurate "Try to make changes to the file /etc/mtc.ini directly from CLI
"
"Check system service-parameter-list and check the changes are not been applied.
"
-2 x x x x x x x x
43 Maintenance Lock force a compute with instances that won’t or can’t move (compute should be rebooted). "Install latest load.
Launch VMs
Lock one of the computes
Force-lock the second compute
Unlock the second compute
"
"- Verify the VMs
- If the VMs are still present, verify they migrate to other computes
- Verify the compute reboots and is locked.
- Verify the unlock was successful.
"
-2 - - x x x x x x
44 Maintenance Pull Management Cable on Active Controller "Pull the management cable on the active controller node
(Disable the port in a virtual environment)"
"- Ensure the active controller is automatically swacted and alarms are generated indicating the OAM interface is down.
"
1 - - x x x x x x
45 Maintenance Verify that compute host will reboot if management network is admin down "1. ssh to compute host
2. disable management network (eth1)
ifconfig eth1 down
3. Verify that compute-0 start rebooting
4. Verify that compute-0 rebooted and become active
"
"4. compute-0 rebooted and become active




"
1 - - - - x x x x
46 Maintenance Verify that inactive controller host will reboot if management network is admin down "1. ssh to inactive (aka standby) controller host
2. disable management network (eth1)
ifconfig eth1 down
3. Verify that controller start rebooting
4. Verify that controller rebooted and become active
"
"4. controller rebooted and become active




"
1 - - x x x x x x
47 Maintenance Compute Resource Maintenance and Local Alarm "CLARIFY STEPS
1. Lock compute-1
2. create an instance (since compute-1 is locked, the instance runs on compute-0)
3. ssh to compute-0 from the active controller and modify the CPU alarm threshold to a reachable level by modifying /etc/rmon.d/cpu_resource.conf,
change the lines below to
minor_threshold = 20 ; minor cpu utilization threshold percentage
major_threshold = 35 ; major cpu utilization threshold percentage
critical_threshold = 50 ; critical cpu utilization threshold percentage (use 101 if unused)

(Note: the maintenance agent, mtcAgent, only runs on the active controller, so in the case of the compute alarm the compute will get degraded by mtcAgent
running on the active controller. In this case, to create the alarm condition on the compute, ssh to the compute from the active controller and then verify the alarm code and the degrade of the
compute on the active controller.)
4. Issue the command on active controller: system alarm-list
Make sure there are no alarms listed with Alarm ID 100.101 or 200.007. If there are delete these alarm with the command: system alarm-clear -u <uuid of alarm>

5. Put load on the processors by issuing the command (on instance):
dd if=/dev/zero of=/dev/null &
it may need to run multiple of this command to reach cpu threshold.

6. Run top command (on compute-0), once the cpu usage (user + system) reach above 50%

7. wait for 1 minute. Then issue command ""system alarm-list"" to see the alarm with alarm ids: 100.101 is present in the table.

8. Adjust the number of ""dd if=/dev/zero of=/dev/null &"" processes (start more or kill some) so that the CPU usage is between 20%~35%; repeat step 7 and verify the alarm severity is ""minor""

9. Adjust the number of ""dd if=/dev/zero of=/dev/null &"" processes so that the CPU usage is between 35%~50%; repeat step 7 and verify the alarm severity is ""major""
Do a: system host-list and make sure the node is in a ""degraded"" state.


10. Adjust the number of ""dd if=/dev/zero of=/dev/null &"" processes so that the CPU usage is above 50%; repeat step 7 and verify the alarm severity is ""critical""
Do a: system host-list and make sure the node is in a ""degraded"" state.

11. Kill the dd load processes: pkill -INT dd


12. wait for 1 minute, issue system host-list command to see the node become available.

13. Do a: system alarm-list and make sure that alarms with alarm ids: 100.101 is not present in the table.
"
"- 1. The table has alarm ids of: 100.101 and 200.007 after the processor load is applied, the compute node is in the degraded state.

2. The alarm ids: 100.101 and 200.007 are cleared after the processor load is removed, the node is in the available state.
"
2 - - - - x x x x
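A rough sketch for driving the compute's CPU usage into one of the bands used in the preceding test case by spawning several dd workers; the worker count N is a placeholder that depends on the number of cores, so tune it while watching top:

N=4                                  # placeholder: number of dd workers; increase or kill some to hit the band
for i in $(seq 1 $N); do
    dd if=/dev/zero of=/dev/null &
done
top                                  # confirm user+system CPU usage is in the desired band
# ...check 'system alarm-list' on the active controller as per steps 7-10 above...
pkill dd                             # remove the load when done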
48 Maintenance Auto-Provision and Auto-Enable of controller-0 "Boot controller-0, execute config_controller, and wait for success. Open the UI.
"
"- You should see an unlocked-enabled-available controller-0
"
-1 x x x x x x x x
49 Maintenance Delete Compute Host "Verify Delete compute
Verify re-add deleted compute
"
"Verify VM actions after compute is added
"
-2 - - - - x x x x
50 Maintenance Verify Host 'Delete' Feature "Test Points:

1. Host should be removed from system host-list
2. Hostname should not show up in Nova service list
3. Hostname should not show up in Neutron host list

Optional: Host should be able to be re-added with same hostname after delete. Instructions to re-add are not included in these test case development instructions.

Actions:
system host-lock <host-id>
system host-delete <id>
"
"- Verify hostname is not in the tables that are displayed when the following commands are executed. Each command maps to a specific test point listed in the ""Validation Input"" section.

1. system host-list
2. nova service-list
3. neutron host-list
"
1 - - - - x x x x
51 Maintenance Verify if an alarm is present on node it is cleared after it is deleted "Generate an alarm
"
-2 - - - - x x x x
52 Maintenance Lock Compute Host With Live Instance Migration - Failure path "FIND A WAY TO MAKE LOCK FAIL
Action: Lock Compute-1
"
"Verify Compute-1 fails to lock
Verify all but one of Compute-1 VM instances are live migrated to Compute-2
"
-2 - - - - x x x x
53 Maintenance Lock Compute Host With Live Instance Migration - Success path "Action: Lock Compute-1


"
"- Verify:
Verify lock of Compute-1 completes and goes locked-disabled
Verify all VM Instances are 'running' on Compute-2
"
-1 - - - - x x x x
54 Maintenance swact rejected inactive controller "login to active controller
system host-swact <standby controller>
"
"- veirfy host-swact failure
"
-2 - - - - x x x x
55 Maintenance reject swact to failed controller "1. Make sure the inactive controller is ""degraded"" or ""failed""
2. login to active controller

system host-swact <standby controller>
"
"- verify host-swact failure
"
2 - - - - x x x x
56 Maintenance verify that personality cannot be changed for host "1. Check host list (system host-list)
2. Attempt to set personality for controller (system host-update <host_name> personality=<value> - CLI, System->Inventory->Hosts->Edit Host - GUI)
3. Verify personality can not be changed for host, error reported.
"
"- The following fields can not be modified because this host <name> has been configured: hostname, personality, subfunctions


"
2 x o o x x o o x
57 Maintenance verify compute can't be swacted "1. Check host list (system host-list)
2. Swact compute (system host-swact <compute_name/id> - CLI, System->Inventory->Hosts->More->.... - GUI)
3. Verify compute can not be swacted, error reported (CLI).
4. Verify no option for compute swact (GUI)
"
2 - - - - x o o x
58 Maintenance name cannot be changed on unlocked host "1. Check host list (system host-list)
2. Set name for host (system host-modify <host_name/id> hostname=<new_name>)
3. Verify name can not be changed, error reported.
"
2 x o o x x o o x
59 Maintenance Verify SysInv DB rejects hosts with same hostname or mgmt_mac or mgmt_ip "Test Procedure:
1. Attempt to add system host-add of hosts with duplicate hostname/mgmt_mac/mgmt_ip are rejected from being put into database.
"
"- Verify that system host-show only returns the original db entry (if duplicate hostname) or none if rejected due to duplicate mgmt._mac/mgmt._ip

"
2 - - - - o x x o
60 Maintenance "Verify alarms show correct time zone
"
"Check alarms.
"
-2 x x x x x x x x
61 Maintenance Time stamp correct in Alarms after time zone has been updated "Verify alarms show correct time zone
Verify all logs show correct time zone
"
"Check alarms
Check all logs:
(list all logs under /var/log/…)
TiS logs should be written in the same time zone to avoid confusion
"
1 x o o x x o o x
62 Maintenance Time stamp correct in Logs after time zone has been updated "Verify it is possible to revert back to Universal UTC timezone
"
2 x x x x x x x x
63 Maintenance Time zone can revert to default setting "Verify timezone still in place after controller lock/unlock
"
2 - - x o o x x o
64 Maintenance Time zone persistent after controller lock and unlock "Verify timezone still in place after standby controller reboot
"
1 - - - - o x x o
65 Maintenance Time zone persistent after controller reboot "Verify timezone still in place after controller swact
"
2 - - - - x o o x
66 Maintenance Verify that the CPU data is seen on the Host Details Page in Horizon "1) Login to Horizon
2) Verify that the cpu info is seen on the Host Details Page for every host:

ex:
http://x.x.x.x:8080/admin/inventory/1/detail/
"
"- Verify that the following info is seen, ex:


CPU Topology

Processor Model:
Intel(R) Xeon(R) CPU E5-2690 V2 @ 3.00GHz
Processors:
2
Physical Cores Per Processor:
10
Hyper-Threading:
No
"
1 x x x x x x x x
67 Maintenance Verify that the Memory data is seen on the Host Details Page in Horizon "1) Login to Horizon
2) Verify that the memory info is seen on the Host Details Page.

"
"- Verify that the following info is seen.
–Memory Topology
DIMM, type, size, …
"
1 x x x x x x x x
68 High Availability Controlled Swact Timing "1. Verify that SM process is not using more than 10% of CPU at steady state on active controller.
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
2. Verify that SM process is not using more than 10% of CPU at steady state on standby controller.
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)
3. Find beginning timestamp for all services for both controllers
Also find beginning timestamp for all service-group for both controllers
[root@controller-0 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
[root@controller-1 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
i.e. timestamp for cinder_ip service before swact:
[root@controller-1 ~(keystone_admin)]# sm-logs service
4426. cinder_ip enabled-active disabling 2018-07-22 04:44:39.571
4. Start monitoring of CPU usage for SM process on both controllers
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)
5. Execute swact from GUI or via CLI
6. Verify that swact takes no longer than 30 seconds
Find end timestamp for all services and all service-group for both controllers
[root@controller-0 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
[root@controller-1 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
i.e. timestamp for cinder_ip service after swact:
[root@controller-1 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
4501. cinder_ip disabling disabled 2018-07-22 04:44:42.001
7. Verify that SM process is not using more than 10% of CPU during swact on both controllers.
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)"
"1. SM process is not using more than 10% of CPU at steady state on active controller.
2. SM process is not using more than 10% of CPU at steady state on standby controller.
6. controllers swact takes no longer than 30 seconds
7. SM process is not using more than 10% of CPU during swact on both controllers."
2 - - x x x x x x
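For the timestamp comparison in the preceding test case, the state transitions of a single service can be pulled from the customer log on each controller before and after the swact; the log path and the cinder_ip service name are the ones used in the steps, adjust as needed:

grep cinder_ip /var/log/smcustomer.log | tail -20    # run on both controllers, before and after the swact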
69 High Availability Uncontrolled Swact Timing "1. Verify that SM process is not using more than 10% of CPU at steady state on active controller.
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
2. Verify that SM process is not using more than 10% of CPU at steady state on standby controller.
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)
3. Find beginning timestamp for all services for both controllers.Also find beginning timestamp for all service-group for both controllers
[root@controller-0 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
[root@controller-1 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
4. Start monitoring of CPU usage for SM process on both controllers
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)
5. Reboot active controller
reboot -f
6. Verify that the standby controller starts to go-active in under a second
7. Verify that swact takes no longer than 30 seconds
Find end timestamp for all services and all service-group for both controllers
[root@controller-0 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
[root@controller-1 ~(keystone_admin)]# tail -f /var/log/smcustomer.log
8. Verify that SM process is not using more than 10% of CPU during swact on both controllers.
[root@controller-0 ~(keystone_admin)]# top -p $(pidof sm)
[root@controller-1 ~(keystone_admin)]# top -p $(pidof sm)"
"1. SM process is not using more than 10% of CPU at steady state on active controller.
2. SM process is not using more than 10% of CPU at steady state on standby controller.
6. standby controller starts to go-active in under a second
7. swact takes no longer than 30 seconds
8. SM process is not using more than 10% of CPU during swact on both controllers."
2 - - x x x x x x
70 High Availability Controller HA: Swact the controllers 20 times. Verify VM launch and deletions after every swact "INCLUDE THE VM LAUNCH AND DELETE
Swact action can be performed only on duplex-multinode configuration
Admin
Platform
Host Inventory
Active Controller--> Swact
"
Swact action should be able to perform without issues -1 - - x x x x x x
71 High Availability Halt standby controller from the Linux command line (halt -f) "1. ssh to standby controller and issue halt -f command:
halt -f
2. Wait 2 minutes
3. From VLM, send halted controller for a reboot"
"1. Standby controller halts (can only tell this from console)
2. During 2 minutes ensure no state change for any service on the active controller.
3. Standby controller reboots. Wait one minute after reboot is complete and then validate all services on standby controller are enabled-standby (via sm-dump tool).
Throughout the duration of this test, verify no state change for any service on the active controller"
1 - - x x x x x x
72 High Availability Standby controller lock (GUI) Repeat sequence in TC945 but use GUI instead of CLI. instruction should be rejected 2 - - x x x x x x
73 High Availability Validate that you can use the https address to access Horizon Using a browser login to Horizon using https://<ip address of the lab> - Login is successful. 1 x x x x x x x x
74 High Availability Kill all services (one by one) managed by SM (on active controller) "On the active controller, one service/process at a time do the following:
kill <process id>
Wait for up to 60 or more seconds.
"
"- Process should be restarted within 60 seconds or so. (Wait up to 2 minutes.)
SM may or may not move the service to disabled for a short amount of time.
SM will set the current state of the service back to enabled-active if it changed the state to disabled when the process was killed.
"
1 x x x x x x x x
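A hedged per-service sketch for the preceding test case; the mapping from an SM service name to its process name is an assumption, so confirm the actual process name (e.g. via ps) before killing it:

sm-dump                              # identify a service that is enabled-active on this controller
PROC=glance-api                      # placeholder process name for the chosen service
PID=$(pgrep -o -f $PROC)             # oldest matching PID
kill $PID
sleep 60
pgrep -f $PROC                       # a new PID should appear once the service is restarted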
75 High Availability Kill Critical Process on the active Controller node "Prep:
a) Locate the process conf file:
e.g. cat /etc/lighttpd/lighttpd.conf
b) Locate the name of the daemon pid file that is configured in the conf file:
e.g. server.pid-file = ""/var/run/lighttpd.pid""
c) Rename the server pid-file so that the critical process is not respawned after it is killed:
e.g. mv /var/run/lighttpd.pid /var/run/lighttpd.pid~

Execution:
1) Kill a Critical process on the active controller:
pkill guestAgent
pkill lighttpd

could not observe active controller reset (it is transient)
The standby controller takes over - becomes active.
The previously active controller becomes standby.
The processes on the previously active controller which were killed come back up
"
"1) Verify that the controller goes for a reset and recovers.
2) Verify that the standby controller becomes active
3) Verify after the controller comes up it is the standby controller.
"
1 - - x x x x x x
76 High Availability Kill Critical Process on the standby Controller node "Prep:

Perform the following steps on the standby controller:

a) Locate a critical process that is pmon managed:

-> cd /etc/pmon.d
-> grep -r ""severity = critical""

b) E.g. cat /etc/pmon.d/sm.conf
c) Locate the name of the daemon pid file or startup script file that is configured in the conf file:
e.g. server.pid-file = ""/var/run/lighttpd.pid"" or script = /etc/init.d/sm
d) Rename the startup script file so that the critical process is not respawned after it is killed:
e.g. mv /etc/init.d/sm /etc/init.d/sm~

Execution:
1) Kill the critical process on the standby controller


"
"1) Verify that the controller goes for a reset and recovers.




















"
1 - - x x x x x x
77 High Availability Kill Major Process on the standby Controller node "1) Continuously kill the Major process on the node until the threshold is reached
2) Continue killing the process
3) Stop killing the process
"
"1) Verify that the process id is changing for every kill.
2) Once the threshold is reached the node goes degraded
3) After you stop killing the process and the process stays up the node changes to available.
"
1 - - - - x x x x
78 High Availability Controller: Service Group Redundancy Alarm "1. Check standby controller (system host-list, system sda-list, sm-dump)
2. ssh to standby controller (ssh controller-1)
3. Execute reboot command (reboot -f)
4. Wait until standby controller is in failed state (system host-list)
5. Check that alarm raised (system alarm-list)
6. Wait until standby controller reboots
7. Verify that Cloud_Services group is standby on the standby controller (sm-dump)
8. Wait 2 minutes
9. Verify that alarm cleared.
"
"- Step 5 - Alarm raised

SM_ALARM_SERVICE_GROUP_REDUNDANCY:
fm_alarm_id = ""400.002""
fm_entity_type_id = ""host.service_domain.service_group""
fm_entity_instance_id = ""host=%s.service_domain=Controller.service_group=%s""

Step 9 - Alarm cleared

"
1 - - - - x x x x
79 High Availability Service Group State alarm xargs kill;sleep 1; system alarm-list
2. Verify that alarm entry is represented in GUI and includes:
- alarm ID - 300.003
- severity - Major
- entity instance ID service_domain=<domain_name>.service_group=<group_name>
- proposed repair action
Contact next level of support
- reason text
Service group failure; <list of affected services>.
Service group degraded; <list of affected services>.
Service group warning; <list of affected services>.
"
"Step 1 - Alarm raised
SM_ALARM_SERVICE_GROUP_STATE:
fm_alarm_id = ""400.001""
fm_entity_type_id = ""host.service_domain.service_group""
fm_entity_instance_id =
""host=%s.service_domain=Controller.service_group=%s""
Step 4 - Alarm cleared"
1 - - - - x x x x
80 High Availability system CLI - display HA system servicegroup list and state "1. source /etc/nova/openrc
2. system service-list
3. system servicegroup-list
"
"- Verify the following service groups are shown and their status reflected properly:
$ system servicegroup-list
+----+--------------------+---------+
| id | servicename        | state   |
+----+--------------------+---------+
| 1  | Cloud Services     | enabled |
| 4  | Database Services  | enabled |
| 3  | Messaging Services | enabled |
| 2  | Platform Services  | enabled |
+----+--------------------+---------+
"
1 x x x x x x x x
81 High Availability System CLI - display HA system service list and state "Test Procedure:
1. source /etc/nova/openrc
2. system service-list
"
"[root@controller-0 0000:80:03.0(keystone_admin)]# system service-list
[root@controller-0 ~(keystone_admin)]# system sda-list
+--------------------------------------+---------------------+--------------+---------+--------+
| uuid                                 | service_group_name  | node_name    | state   | status |
+--------------------------------------+---------------------+--------------+---------+--------+
| c1af52b3-1892-4865-88a8-ca926264daa2 | Cloud_Services      | controller-1 | standby | none   |
| ec4ec3ee-bf54-4f01-8e8a-96396069fd67 | Cloud_Services      | controller-0 | active  | none   |
| 12151a38-0127-4b84-84fc-26f772d58e15 | Controller_Services | controller-1 | standby | none   |
| f020c3e7-020d-48eb-a5b7-4da9847ed437 | Controller_Services | controller-0 | active  | none   |
| a7c2b098-3e09-465c-848e-4fc4934a20a3 | Directory_Services  | controller-0 | active  | none   |
| fa2efbda-566e-41f8-927c-4cfa20a8236f | Directory_Services  | controller-1 | active  | none   |
| 7b888cd9-87ad-4c69-ba2b-cffbd60ef550 | OAM_Services        | controller-0 | active  | none   |
| 88f594c7-815c-4f60-9925-b67d92d48802 | OAM_Services        | controller-1 | standby | none   |
| 0feddef5-b7c2-4df8-ab32-727f1e421cb4 | Web_Services        | controller-1 | active  | none   |
| 70704519-8dbe-4bb8-96c2-faafd8ecd883 | Web_Services        | controller-0 | active  | none   |
+--------------------------------------+---------------------+--------------+---------+--------+
"
1 x x x x x x x x
82 High Availability "With both controllers running, confirm crm_mon command and validate all services running on at least one controller" "Test Procedure:
1. Run crm status
2. Validate key items in output
3. Repeat steps 1-2 at least 5 times over the course of a few minutes.
"
"- - Should see controller-0 & controller-1 online
- Among other services, should see these OpenStack services started on 1 controller:
keystone glance-reg glance-api neutron-svr nova-api nova-sched nova-conductor nov
"
2 - - x x x x x x
83 Fault Management Alarms: Verify order of columns is correct "Go to alarms (Fault Management), historical alarms tab (Events), and verify the order is Alarm ID / Reason Text / Entity Instance ID / Severity / Timestamp

To check Alarms Table via Horizon:
Admin -> Platform -> Fault Management

To check Alarms Table via CLI:
$ fm alarm-list
"
"Column order is: Alarm ID | Reason Text | Entity Instance ID | Severity | Timestamp" 1 x x x x x x x x
84 Fault Management Verify order of alarms reverse chronologically "To check Alarms Table via Horizon:
Admin -> Platform -> Fault Management

To check Alarms Table via CLI:
$ fm alarm-list
"
- Timestamps should be shown in reverse chronological order 1 x o o x x o o x
85 Fault Management Hierarchical Suppression of Alarms for Locked Compute Node "$ source /etc/nova/openrc

1. Lock compute node:
$ system host-lock compute-0

2. Verify that compute locked alarm displayed:
$ fm alarm-list

3. Verify that entity-instance-field and suppression set to True not shown:
$ fm alarm-show <uuid>   (use the UUID of the host-locked alarm from the alarm list)

4. Unlock compute:
$ system host-unlock compute-0

5. Verify that alarm is cleared:
$ fm alarm-list
"
"- After lock compute, one alarm should be showed.
- When unlock compute, a reboot is expected.
- Alarm shold be gone after the problem is solved"
1 x x x x x x x x
86 Fault Management Hierarchical Suppression of Alarms for Locked Controller Node "$ source /etc/nova/openrc

1. Lock inactive controller:
$ system host-lock controller-1

2. Verify that controller locked alarm displayed:
$ fm alarm-list

3. Verify that entity-instance-field and suppression set to True not shown:
$ fm alarm-show <uuid>   (use the UUID of the controller-locked alarm from the alarm list)

4. Unlock controller:
$ system host-unlock controller-1

5. Verify that alarm is cleared:
$ fm alarm-list
"
"- After lock controller, several alarms should be showed.
- When unlock controller, a reboot is expected.
- When controller boot, alarms should be gone after the problem is solved."
1 - - x x x x x x
87 Fault Management Suppress alarm and verify alarms on CLI "DEFINE CLI COMMANDS (a possible CLI sequence is sketched after this test case)


1) Reboot Controller-1
2) In Horizon Event Log tab , watch new Alarms appear
3) Note ID for one specific alarm
4) Switch to Events Suppression tab, and perform Event Suppress for alarm ID (= Event ID column) noted above
5) Go back to Event Log tab, and search for alarm ID (use entry filter to perform search)
6) Still in Event Log tab, select ""Show Suppressed"" filter button
7) Use ID from previous test
8) Switch to Events Suppression tab, and perform Event Unsuppress for alarm ID (= Event ID column) noted above
9) Go back to Event Log tab, and search for alarm ID (use entry filter to perform search)
10) Still in Event Log tab, select ""Hide suppressed"" filter button
"
"---> PASS Criteria 1: Alarms with suppressed alarm ID do not appear in alarm list
---> PASS Criteria 2: Alarms with suppressed alarm ID now appear in alarms list
---> PASS Criteria 3: Alarms with unsuppressed alarm ID still appears in alarms list
---> PASS Criteria 4: Alarms with unsuppressed alarm ID still appears in alarms list"
-1 - - x x x x x x
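The CLI commands are not spelled out in the steps above; a possible sequence, based on the event-suppress command shown in test case 89 below (the event-unsuppress command name is an assumption mirroring event-suppress):

system event-suppress --alarm_id <alarm id noted in step 3>
system alarm-list                    # suppressed alarms should no longer be listed
system event-unsuppress --alarm_id <alarm id>
system alarm-list                    # the alarm should be listed again if still active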
88 Fault Management Suppress alarm and verify alarms on GUI "1) Reboot Controller-1
2) In Horizon Event Log tab , watch new Alarms appear
3) Note ID for one specific alarm
4) Switch to Events Suppression tab, and perform Event Suppress for alarm ID (= Event ID column) noted above
5) Go back to Event Log tab, and search for alarm ID (use entry filter to perform search)
6) Still in Event Log tab, select ""Show Suppressed"" filter button
7) Use ID from previous test
8) Switch to Events Suppression tab, and perform Event Unsuppress for alarm ID (= Event ID column) noted above
9) Go back to Event Log tab, and search for alarm ID (use entry filter to perform search)
10) Still in Event Log tab, select ""Hide suppressed"" filter button
"
"---> PASS Criteria 1: Alarms with suppressed alarm ID do not appear in alarm list
---> PASS Criteria 2: Alarms with suppressed alarm ID now appear in alarms list
---> PASS Criteria 3: Alarms with unsuppressed alarm ID still appears in alarms list
---> PASS Criteria 4: Alarms with unsuppressed alarm ID still appears in alarms list"
1 - - x x x x x x
89 Fault Management Verify new Alarms are not listed when they are suppressed "[wrsroot@controller-0 ~(keystone_admin)]$ system event-suppress --alarm_id 200.001
Alarm ID: 200.001 suppressed.
+----------+------------+
| Event ID | Status     |
+----------+------------+
| 200.001 | suppressed |
+----------+------------+
[wrsroot@controller-0 ~(keystone_admin)]$
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | compute | unlocked | enabled | available |
| 4 | compute-1 | compute | unlocked | enabled | available |
| 5 | compute-2 | compute | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
[wrsroot@controller-0 ~(keystone_admin)]$ system alarm-list
+----------+---------------------------------------------+----------------+----------+----------------------------+
| Alarm ID | Reason Text                                 | Entity ID      | Severity | Time Stamp                 |
+----------+---------------------------------------------+----------------+----------+----------------------------+
| 100.103  | Platform Memory Usage threshold exceeded;   | host=compute-0 | minor    | 2017-03-27T15:39:16.488724 |
|          | threshold: 70%, actual: 70.01%.             |                |          |                            |
| 100.103  | Platform Memory Usage threshold exceeded;   | host=compute-1 | minor    | 2017-03-27T15:37:20.791919 |
|          | threshold: 70%, actual: 70.02%.             |                |          |                            |
| 100.103  | Platform Memory Usage threshold exceeded;   | host=compute-2 | minor    | 2017-03-27T15:59:38.711994 |
|          | threshold: 70%, actual: 70.00%.             |                |          |                            |
+----------+---------------------------------------------+----------------+----------+----------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ system event-suppress --alarm_id 100.103
Alarm ID: 100.103 suppressed.
+----------+------------+
| Event ID | Status |
+----------+------------+
| 100.103 | suppressed |
| 200.001 | suppressed |
+----------+------------+
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lock compute-0
[wrsroot@controller-0 ~(keystone_admin)]$ system alarm-list
+----------+-------------+-----------+----------+------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+-------------+-----------+----------+------------+
+----------+-------------+-----------+----------+------------+
[wrsroot@controller-0 ~(keystone_admin)]$

"
1 x x x x x x x x
90 Fault Management SNMP CLI - Communities: Create new community. "All commands must be executed on the active controller's console, which can be accessed using
the OAM floating IP address. You must acquire Keystone admin credentials in order to execute
the commands.

$ source /etc/nova/openrc

1. Create community:
$ system snmp-comm-add -c <community_name>

2. Verify that created community displayed:
$ system snmp-comm-list
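
Example, using the community name 'test' as in the later SNMP test cases:
$ system snmp-comm-add -c test
$ system snmp-comm-list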
"
"- Community created.

"
-1 x x x x x x x x
91 Fault Management SNMP CLI - Communities: Check community details "All commands must be executed on the active controller's console, which can be accessed using
the OAM floating IP address. You must acquire Keystone admin credentials in order to execute
the commands.

$ source /etc/nova/openrc

1. Verify that created community displayed:
$ system snmp-comm-list

2. Check community details:
$ system snmp-comm-show <community>
"
"- Properties showed:
- Access
- Community
- Created_at
- UUID
- View "
1 x x x x x x x x
92 Fault Management SNMP CLI - Communities: Community can be deleted "1. Verify that created community displayed:
$ system snmp-comm-list

2. Check community details:
$ system snmp-comm-show <community>

3. Delete community:
$ system snmp-comm-delete <community>

4. Verify that community deleted:
$ system snmp-comm-list
"
- Community deleted. 1 x x x x x x x x
93 Fault Management SNMP V2C Agent init and launching on HA Controller(s) (w/default config) "1. Perform command ""ps -ef | grep snmpd"" on controller-0.
2. Verify that snmp agent process is running.
3. Perform command ""cat /opt/platform/config/snmp/snmpd.conf"" on controller-0.
4. Verify that snmpd.conf file contains correct default configuration.
5. Verify that no rocommunity and trap2sink defined.
6. Perform command ""ps -ef | grep snmpd"" on controller-1.
7. Verify that there is no snmp agent process running.
8. Swact controllers (use command ""system host-modify controller-0 action=swact"")
9. Wait until all services started and master controller equals to Controller-1
10. Perform command ""ps -ef | grep snmpd"" on controller-1.
11. Verify that snmp agent process is running on controller-1.
12. Perform command ""cat /opt/platform/config/snmp/snmpd.conf"" on controller-1.
13. Verify that snmpd.conf file contains correct default configuration.
14. Verify that no rocommunity and trap2sink defined.
"
"- 2. '/usr/sbin/snmpd' process shall be displayed.
4. ""sysObjectID 1.3.6.1.4.1.731.10"" shall be displayed in snmpd.conf.
5. No rocommunity and trap2sink shall be defined.
7. No snmp agent process running.
11. '/usr/sbin/snmpd' process shall be displayed.
13. ""sysObjectID 1.3.6.1.4.1.731.10"" shall be displayed in snmpd.conf.
14. No rocommunity and trap2sink shall be defined.
"
1 - - x o o x x o
94 Fault Management SNMP CLI - System Group: Verify SNMP get/getNext for system group "1. Check SNMP get:
[root@controller-0 ~(keystone_admin)]# snmpget -v 2c -c test 128.224.151.243 1.3.6.1.2.1.1.5.0
SNMPv2-MIB::sysName.0 = STRING: controller-0
2. Check SNMP getNext
[root@controller-0 ~(keystone_admin)]# snmpgetnext -v 2c -c test 128.224.151.243 1.3.6.1.2.1.1.5.0
SNMPv2-MIB::sysLocation.0 = STRING: Unknown
"
"- Expected result:
SNMP get/getNext work
"
2 x o o x x o o x
95 Fault Management SNMP CLI - Trap dest: Verify trap receiver "1. Check the SNMP community details (system snmp-comm-list / system snmp-comm-show), e.g. access=ro, uuid=a08ebe90-b0e2-4c42-8ab8-d0d1392f1514, community=tets, view=1.3.6.1.4.1.731.10
2. Create trapdest (IP - local host IP):
system snmp-trapdest-add -i 147.11.119.53 -c tets
3. Configure SNMP manager:
Host - OAM controller IP ([root@controller-0 ~(keystone_admin)]# cat /etc/hosts -> e.g. 128.224.151.243 oamcontroller)
Community - 'SNMP community' in system snmp-comm-list
Object Id - 'View' in system snmp-comm-list
4. Run 'TrapViewer' (set community to name of created community) -> 'Start'
5. Restart snmpd (/etc/init.d/snmpd restart)
"
"- Expected result:
Verify that trap is received by the trap listener.
"
2 x o o x x o o x
96 Fault Management snmpd process recovers properly "ps -ef|grep -v grep|grep snmpd
/usr/sbin/snmpd oamcontroller -lsd -lf /dev/null -p /var/run/snmpd.pid
date
sudo kill -9 6964
ps -ef|grep -v grep|grep snmpd
ps -ef|grep -v grep|grep snmpd
ps -ef|grep -v grep|grep snmpd
sudo service snmpd status
ps -ef|grep -v grep|grep snmpd
sudo service snmpd start
starting network management services: is already running ok
ps -ef|grep -v grep|grep snmpd
sudo service snmpd restart
snmpd is not running
starting network management services: is already running ok
ps -ef|grep -v grep|grep snmpd

verification points:
verify that the snmpd process can be successfully killed.
verify that when the process is restarted, it outputs the correct information and a new pid is generated for the service
"
"verification points: verify that the snmpd process can be successfully killed. verify that when the process is restarted, it outputs the correct information and a new pid is generated for the service" 1 x x x x x x x x
97 Fault Management Verification of SNMP get/getnext/set with and without configured community "1. Add communities by 'system snmp-comm-add -c test'
2. Verify that the 'test' community has been created
3. Verify that snmpd.conf is updated correctly.
4. Verify that an SNMP get/getnext, with a configured community string, is successful.
5. Verify that an SNMP set, with a configured community string, is unsuccessful (this is not supported in R3)
6. Delete community
7. Verify that community is deleted and snmpd.conf is updated correctly.
8. Verify that an SNMP get/getnext, with a community string that is not configured, is unsuccessful.
9. Verify that an SNMP set, with community that is not configured, is unsuccessful.
"
"- 1. system snmp-comm-add –c test
2. use command 'system snmp-comm-show test' or 'system snmp-comm-list'

3. cat /opt/platform/config/snmp/snmpd.conf
'rocommunity test' line shall be displayed.

4. snmpgetnext -v 2c -c <community> <host_ip> <OID>
where, host_ip is IP address of host where SNMP is running (by default snmpd is runing on oamcontroller, use cat /etc/hosts to get IP)
e.g.:
[root@controller-0 ~(keystone_admin)]# snmpgetnext -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.2
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.731.10
[root@controller-0 ~(keystone_admin)]# snmpget -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.2.0
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.731.10

5. snmpset -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.5.0 s ""test""
check that nothing changed by snmpget ( This is not supported in R3)

6. system snmp-comm-delete test

7. 'system snmp-comm-show test' or 'system snmp-comm-list
cat /opt/platform/config/snmp/snmpd.conf
'rocommunity test' line shall not be displayed.

8. [root@controller-0 ~(keystone_admin)]# snmpgetnext -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.2
Timeout: No Response from 128.224.151.212
[root@controller-0 ~(keystone_admin)]# snmpget -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.2.0
Timeout: No Response from 128.224.151.212

9. snmpset -v 2c -c test 128.224.151.212 1.3.6.1.2.1.1.5.0 s ""test""
check that nothing changed by snmpget
"
2 x o o x x o o x
98 Fault Management Verify that the SNMP warmStart trap is properly generated when controllers switchover "1. Provision a SNMP trap destination with the IP where the SNMP trap listener is running ('system snmp-trapdest-add -i <ip_address> -c <community>')
2. Open the SNMP trap listener and enter proper community string or agent IP(run 'TrapViewer' (set community to name of created community) -> 'Start')
3. Perform controllers swact ('system host-modify controller-0 action=swact')
4. Verify that the SNMP trap is received by the SNMP trap listener
5. Verify that the message contains all necessary fields
"
2 - - x o o x x o
99 Fault Management SNMP - audit logging configuration "Purpose is to see that snmp audit logging can be successfully configured.

1. Create snmp system details:
system modify contact=""site-contact""
system modify location=""site-location""
system modify name=""system-name""
100 Fault Management SNMP - successful GETBULK request logging "On the CLI interface of the lab, use the following command to get bulk updates from the MIB table :

snmpbulkget -c <comm> -v2c <addr> OID(s)
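
For example, reusing the community and controller address from the GET/GETNEXT examples in this plan (illustrative values only):
snmpbulkget -v2c -c test 128.224.151.212 1.3.6.1.2.1.1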
"
"- Verify an audit log entry is directed to a file called /var/log/snmp-api.log

Verify the format of the log:
2017-02-07T13:47:51.000 controller-0 snmpd[2864]: info snmp-auditor transport:udp remote:10.10.10.30 reqid:1756865082 msg-type:GETNEXT version:v2c
2017-02-07T13:47:51.000 controller-0 snmpd[2864]: info snmp-auditor reqid:1756865082 oid:SNMPv2-MIB::sysDescr.0
2017-02-07T13:47:51.000 controller-0 snmpd[2864]: info snmp-auditor reqid:1756865082 oid:SNMPv2-MIB::sysLocation.0
2017-02-07T13:47:51.000 controller-0 snmpd[2864]: info snmp-auditor reqid:1756865082 oid:SNMPv2-MIB::sysObjectID.0 status:pass
2017-02-07T13:47:51.000 controller-0 snmpd[2864]: info snmp-auditor reqid:1756865082 oid:SNMPv2-MIB::sysServices.0 status:pass

The first line contains the request header information; the following lines contain (1) the variable names (OIDs) in the request, and (2) the OIDs in the reply (not necessarily the same).
The community name in the PDU should not be logged.
Thus for all successful ‘get’, ‘get-next’ and ‘get-bulk’ requests, logs contain the OID(s) in the requests, followed by OID(s) in the response. The latter contain status information (either ‘pass’ or an error code: ‘NoSuchInstance’ , ‘NoSuchObject’ or ‘EndOfMibView’).

The expectation is that the msg-type in the log records matches the snmp command (GET, GETNEXT, GETBULK). ‘snmpwalk’ generates a large number of get-next requests.

The request id (reqid) is unique – and allows correlation of log records.
"
2 x o o x x o o x
101 Fault Management SNMP - Verify log permissions and ownership "Validate the permissions and privileges on the /var/log/snmp-api.log file.
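For example (standard Linux commands; the exact expected permissions are not specified in this plan):
$ ls -l /var/log/snmp-api.log
$ stat -c '%a %U %G' /var/log/snmp-api.log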
"
1 x x x x x x x x
102 Networking Off-line static configuration "1. Verify that default configuration is applied
system config-list
system config-section-list
2. Verify that default DNS configuration is correct
cat /opt/platform/config/cgcs_config
NAMESERVER_1=8.8.8.8
NAMESERVER_2=8.8.4.4
3. Power-off all hosts except active controller
system host-modify <host-name/id> action=lock
system host-modify <host name/id> action=power-off
4. Verify that all hosts become unavailable
5. Add new configuration and modify DNS section in it to the new value
system config-add
system config-list
system config-section-list
system config-section-modify <icsection_uuid>
6. Apply modified configuration
system modify <iconfig_uuid> action=apply
7. Verify that new configuration is applied
8. Power-on all hosts
system host-modify <host name/id> action=power-on
system host-modify <host name/id> action=unlock
9. Wait until all hosts become active
10. Verify that configuration is applied for all hosts when they become available.
system host-show compute-0"
"1. Default configuration is applied
5. New config is added
DNS section is in list of modifiable sections
DNS section is changed
7. New config is applied only for active controller
10. Configuration is applied for all hosts when they
become available."
1 x x x x x x x x
103 Networking "Off-line static configuration for ""External OAM"" Interface" "1. Verify that default configuration is applied
system config-list
2. Check default OAM config
system oam-show
3.Modify cEXT_OAM section with compatible configuration
system oam-modify oam_c1_ip=10.10.10.11
4. Verify that new configuration is accepted
system oam-show
5. Verify that secstate for cEXT_OAM section is changed to
'modified'
system config-list
6. Apply modified configuration
system config-modify <iconfig_uuid> action=apply
7. Verify that state of config is changed to 'applying'
system config-list
8. Verify that iconfig_applied is equal to uuid of applied config
(before applied) for all hosts
system host-show <host id>
9. Verify that iconfig_target is equal to uuid of applying config
(new config) for all hosts
system host-show <host id>
10. Unlock all locked hosts. Wait until all hosts become available
system host-lock controller-1
system host-show
system host-unlock controller-1
system host-show
11. verify that ip address for eth0 interface is changed to new
value and verify that oam controller IP address is changed to new
address. Repeat this step for all hosts
ssh to host
ifconfig (only for controllers)
cat /etc/hosts
12. Swact controllers. Wait until new master (controller-1) controller
appeared
13. Verify that keystone endpoints updated. It may take about 2
minutes for the endpoint to be updated after a swact.
keystone endpoint-list
14. Verify that iconfig_applied and iconfig_target are equal to
uuid of new applied config for new master host
system host-show <new master host id>"
"1. default configuration is applied
3. new configuration is accepted
4. secstate for cEXT_OAM section is changed to 'modified'
7. state of config is changed to 'applying'
8. iconfig_applied is equal to uuid of applied config (before
applied) for all hosts
9. iconfig_target is equal to uuid of applying config (new
config) for all hosts
11. ip address for eth0 interface is changed to new value
and verify that oam controller IP address is changed to new
address. Repeat this step for all hosts
13. keystone endpoints updated
14. iconfig_applied and iconfig_target are equal to uuid of
new applied config for new master host
system host-show <new master host id>"
1 x x x x x x x x
104 Networking On-line Static configuration validation: compatible NTP configuration should be applied via CLI "1. Verify that default configuration is applied
system config-list
system config-section-list
2. Verify that default NTP configuration is correct
cat /opt/platform/config/cgcs_config
NTP_SERVER_1=0.pool.ntp.org
NTP_SERVER_2=1.pool.ntp.org
NTP_SERVER_3=2.pool.ntp.org
3. Create new configuration
system config-add newconfig
4. Verify that new config is added
system config-list
system config-section-list
5. Modify NTP section with compatible configuration
system config-section-modify
6. Apply created config for controller-0
system modify <iconfig_uuid> action=apply
7. Check if a lock/unlock action is required.
If iconfig_applied != iconfig_target in system host-show, an additional
action is required: lock/unlock of the host.
Lock/Unlock the host and wait until it becomes available
8. Verify that new configuration is applied
New configuration should appear in
/opt/platform/config/cgcs_config_reconfig
Also new configuration should be displayed via CLI
system config-section-show <section_uuid>"
"1. Default configuration is applied
2. Default NTP configuration is correct
4. New config is added
8. New configuration is applied
New configuration appeared in
/opt/platform/config/cgcs_config_reconfig
new configuration is displayed via CLI"
1 x x x x x x x x
105 Networking "On-line Static configuration validation: Not compatible configuration for ""External OAM"" Interface should not be applied" "1. Verify that default configuration is applied
system config-list
2. Check default OAM config
system oam-show
3. Modify cEXT_OAM section with not compatible configuration
system oam-modify oam_c0_ip=abcdafd
4. Verify that error message is returned
5. Verify that changes for the oam_c0_ip keys which do not
correspond to IP/subnet changes will also be rejected."
"1. default configuration is applied
2. show OAM configurration
3. error message is returned
5. changes for theoam_c0_ipkeys which do not correspond to IP/subnet changes will also be rejected."
2 x x x x x x x x
106 Networking "On-line Static configuration validation : compatible configuration for ""External OAM"" Interface should be applied via CLI" "1. Verify that default configuration is applied
system config-list
2. Check default OAM
config system oam-show
3. Modify cEXT_OAM section with compatible configuration
system oam-modify oam_subnet=10.10.0.0/16
4. Verify that the change for the oam_subnet key is applied"
"1. default configuration is applied
2. show OAM configurration
3. show new OAM configuration
4.changes for the oam_subnet keys is applied"
1 x x - - - - - -
107 Networking On-line Static configuration validation: swact controllers should be rejected until controller-0 config is not up-to-date "1.Verify that default configuration is applied
system config-list
2. Check default OAM config
system oam-show
3. Modify cEXT_OAM section with compatible configuration
system oam-modify oam_c1_ip=10.10.10.11
system oam-modify oam_c0_ip=10.10.10.10
4. Verify that new configuration is accepted
system oam-show
5. Verify that secstate for cEXT_OAM section is changed to 'modified'
system config-list
6. Apply modified configuration
system config-modify <iconfig_uuid> action=apply
7. Verify that state of config is changed to 'applying'
system config-list
8. Verify that controller-1 cstatus is changed to ""Config out-of-date""
system host-show controller-1
9. Lock and unlock the standby controller. Wait until the standby controller becomes available.
system host-lock controller-1
system host-show
system host-unlock controller-1
system host-show
10. ssh to controller-1 and verify that ip address for eth0 interface is changed
ifconfig
11. Verify that oam controller IP address is changed to new address
system oam-show
12. Swact controllers. Wait until new master (controller-1) controller appeared
13. Verify that keystone endpoints updated
keystone endpoint-list
14. Verify that iconfig_applied and iconfig_target are equal to uuid of new applied config for new master host
system host-show <new master host id>
15. Try to swact controllers.
16. Verify that swact of controllers via CLI/GUI does not work until the controller-0 config is up-to-date.
Note:The User is prevented but the system (e.g. SM) is not prevented"
"1. Default configuration is applied
4. New configuration is accepted
5. secstate for cEXT_OAM section is changed to 'modified'
7. State of config is changed to 'applying'
10. oam controller IP address is changed to new address
13. keystone endpoints updated
14. iconfig_applied and iconfig_target are equal to uuid of new applied config for new master host
15. swact of controllers does not work until the controller-0 config is up-to-date"
1 - - x x x x x x
108 Networking Provider network Down Alarm "1. Lock the compute node
2. Verify via GUI that appropriate alarm is raised - ID 300.003
3. Verify appropriate severity is displayed <major>
4. Verify that alarm entry is represented in GUI and includes:
- alarm ID - 300.003
- severity - Major
- entity instance ID host=<hostname>.
interface=<interface-uuid>
- proposed repair action
Enable compute hosts with required provider network connectivity.
- reason text
No enabled compute host with connectivity to provider network.

5. Verify alarm is present via CLI
system alarm-list
6. Verify correct information is shown
system alarm-show
7. Unlock computes and wait until they are available
8. Verify that alarm is cleared"
Alarm Raised with valid details. 1 x x x x x x x x
109 Networking Static configuration validation: compatible NTP configuration should be applied via GUI "1. Verify that default configuration is applied
2. Verify that default NTP configuration is correct
3. Create new configuration via GU
4. Verify that new config is added via GUI
5. Modify NTP section with compatible configuration via GUI
7. Verify that NTP section is changed to correct
8. Apply created config for controller-0 via GUI
9. Verify that configuration is applied
Note:
To apply configuration it is required to:
lock/unlock controller-1
swact controller-0
lock/unlock controller-0
swact controller-1 back to controller-0"
"1. Default configuration is applied
2. Default NTP configuration is correct
4. New config is added via GUI
7. NTP section is changed to correct
9. Configuration is applied
"
1 - - x x x x x x
110 Networking Verify alarm generation for neutron DHCP agent scheduling states "1. Stop DHCP agent
a) On the Compute node, go to /etc/pmon.d
b) vi neutron-dhcpagent.conf
c) Change mode = passive;
TO
mode = ignore;
save the file
d) stop the process using
/etc/init.d/neutron-dhcpagent stop
2. Verify alarm is generated
system alarm-list
3. Verify correct information is shown
system alarm-show
4. Verify that alarm is cleared after DHCP agent is restarted
a) Change back the mode to passive.
b) Restart the process on the compute
/etc/init.d/neutron-dhcpagent start
Starting neutron-dhcpagent...done."
"2. Alarm is present in 'system alarm-list' table 300.002 | host=compute-0.agent=DHCP agent | major| 2014-07-15T15:33:29.366915 | Agent did not report status within alive timeout interval |
+--------------------------------------+----------+---------------------------------+----------+----------------------------+------------------
-----------------------------------------+
3. Information is correct
4. Alarm is cleared"
1 - - x o x o o x
111 Networking Verify alarm generation for neutron L3 agent scheduling states "1. Stop L3 agent
a) On the compute node, go to /etc/pmon.d
b) vi neutron-l3-agent.conf
c) Change mode = passive ;
TO
mode = ignore ;
save the file
d) stop the process using
/etc/init.d/neutron-l3-agent stop
2. Verify alarm is generated
system alarm-list
3. Verify correct information is shown in
system alarm-show
4. Verify that alarm is cleared after L3 agent is restarted
a) Change back the mode to passive.
b) Restart the process on the compute
/etc/init.d/neutron-l3-agent start"
"2. Alarm is present in 'system alarm-list' table, e.g.:
f6bb169c-bd76-4be0-b877-aac87ab3c78d | 300.002 | host=compute-0.agent=L3 agent | major | 2014-07-15T15:23:29.329892 | Agent did not report status within alive timeout interval
3. Information is correct
4. Alarm is cleared"
1 - - o x o x x o
112 Networking Verify alarm generation for neutron provider network state "1. Lock all computes that host a particular provider network.
2. Verify alarm is generated via 'system alarm-list'
3. Verify correct information is shown in 'system alarm-show' output
4. Unlock computes and wait until they are available
5. Verify that alarm is cleared"
"2. Alarm is present in 'system alarm-list' table
3. Information is correct
5. Alarm is cleared"
-1 - - x x x x x x
113 Networking "On-line Static configuration validation: swact controllers should be rejected until controller-0 config for ""External OAM"" Interface is done" "1.Verify that default configuration is applied
system config-list
2. Check default OAM config
system oam-show
3. Modify cEXT_OAM section with compatible configuration
system oam-modify oam_c1_ip=10.10.10.11
system oam-modify oam_c0_ip=10.10.10.10
4. Verify that new configuration is accepted
system oam-show
5. Verify that secstate for cEXT_OAM section is changed to 'modified'
system config-list
6. Apply modified configuration
system config-modify <iconfig_uuid> action=apply
7. Verify that state of config is changed to 'applying'
system config-list
8. Verify that controller-1 cstatus is changed to ""Config out-of-date""
system host-show controller-1
9. Lock and unlock the standby controller. Wait until the standby controller becomes available.
system host-lock controller-1
system host-show
system host-unlock controller-1
system host-show
10. ssh to controller-1 and verify that ip address for eth0 interface is changed
ifconfig
11. Verify that oam controller IP address is changed to new address
system oam-show
12. Swact controllers. Wait until new master (controller-1) controller appeared
13. Verify that keystone endpoints updated
keystone endpoint-list
14. Verify that iconfig_applied and iconfig_target are equal to uuid of new applied config for new master host
system host-show <new master host id>
15. Try to swact controllers.
16. Verify that swact of controllers via CLI/GUI does not work until the controller-0 config is up-to-date.
Note:The User is prevented but the system (e.g. SM) is not prevented"
"1. Default configuration is applied
4. New configuration is accepted
5. secstate for cEXT_OAM section is changed to
'modified'
7. State of config is changed to 'applying'
10. oam controller IP address is changed to new
address
13. keystone endpoints updated
14. iconfig_applied and iconfig_target are equal to
uuid of new applied config for new master host
15. swact of controllers does not work until the controller-0
config is up-to-date"
-1 - - x x x x x x
114 Networking "Verify appropriate values should be used for modifying of interfaces (CLI, GUI)" "1. Lock host and check host is administrative locked
2. Check list interfaces for this host (system host-if-list <hostname or id>)
3. Attempt to modify interface with wrong value.
4. Verify can not be modified and appropriate message appeared.
5. a) Attempt to modify interface with wrong value: System->Inventory->Hosts->Interfaces->Actions->Edit
Interface->Interface Name: test
b) Verify can't be modified and verify it in list interfaces(system host-if-list <hostname or id>)"
Interfaces cannot be modified 2 x x x x x x x x
115 Networking Verify ethernet management interface is updated successfully on controller "1. Ensure that ethernet type is configured on MGMT interface
2. Lock inactive controller.
3. Change MTU value for MGMT interface by command 'system host-if-modify controller-X ethX -m 9000'
4. Unlock inactive controller and swact.
5. Verify that /opt/platform/packstack/manifests/<IP>_interfaces.pp is updated correctly.
system host-if-show
6. Verify that 'ifconfig' shows new value of MTU for MGMT.
7. Lock/unlock one compute.
8. Verify that MTU is changed on compute node.
9. Verify that destination compute could be pinged via MGMT interface.
10. Execute 'mount' to display the NFS rsize/wsize values that should be a multiple of 1024 and less than the MTU on the interface the mount is using"
1 - - x x x x x x
116 Networking Verify ethernet OAM interface is updated successfully on controller. "1. Ensure that ethernet type is configured on OAM interface
2. Lock inactive controller.
3. Change MTU value for OAM interface by command 'system host-if-modify controller-0 ethX -m
9000'
4. Unlock inactive controller and swact
5. Verify that /opt/platform/packstack/manifests/<IP>_interfaces.pp is updated correctly.
system host-if-show
6. Verify that 'ifconfig' shows new value of MTU for OAM.
7. Verify that destination host could be pinged via OAM interface"
1 - - x x x x x x
117 Networking Verify LAG OAM interface is updated successfully on controller "1. Ensure that LAG type is configured on OAM interface
2. Lock inactive controller.
3. Change MTU value for OAM interface by command 'system host-if-modify controller-0 ethX -m 9000'
4. Unlock inactive controller and swact,
5. Verify that /opt/platform/packstack/manifests/<IP>_interfaces.pp is updated correctly.
system host-if-show
6. Verify that 'ifconfig' shows new value of MTU for OAM.
7. Verify that destination host could be pinged via OAM interface.
8. Execute 'mount' to display the NFS rsize/wsize values that should be a multiple of 1024 and less than the
MTU on the interface the mount is using."
2 - - x x x x x x
118 Networking Verify that internal customer managed tenant network works "1. Create a tenant subnet with ""--unmanaged"" flag in internal network
2. Boot VM with nic connected to this subnet
3. Verify network connectivity of booted VM"
1 x x x x x x x x
119 Networking Verify that System name can be modified via CLI and GUI "Using CLI display system name:
system show
Using CLI modify system name:
system modify name=ip1-4
Using CLI modify system description:
system modify description=""This system belongs to""
Verify that system description has been changed"
"In the GUI Inventory section under Systems tab verify that both ""Name"" and
""Description"" has changed
Using CLI verify that system description and name has been changed"
1 x x x x x x x x
120 Networking "Verify that unlocked powered off host can not be deleted (CLI, GUI)" "1. Check host is administrative unlocked(system host-list - CLI, System->Inventory->Hosts->Admin State:Unlocked)
2. Turn off host and check host availability is offline(System->Inventory->Hosts->Availability State:Offline, system
host-list->availability:offline - CLI)
3. Attempt to delete host(system host-delete <hostname or id> [<hostname or id> ...]-CLI, System->Inventory-
>Hosts->Actions->More - GUI)
4. Verify host can not be deleted because host is in administrative state 'unlocked' and appropriate message appeared:
'Unable to complete the action delete because Host <hostname> is in administrative state = unlocked.'
- CLI
5. Verify no action for host delete when host is administratively unlocked(System->Inventory->Hosts->Actions->More - GUI)"
"4. Verify host can not be deleted because host is in administrative state 'unlocked' and appropriate message
appeared: 'Unable to complete the action delete because Host <hostname> is in administrative state = unlocked.'
- CLI
5. Verify no action for host delete when host is administratively unlocked(System->Inventory->Hosts->Actions->More
- GUI)"
2 - - x o o x x o
121 Networking Verify that Vswitch CPU/Numa isolation parameters can be applied via iprofile "[root@controller-0~(keystone_admin)]# system host-modify 3 action=lock
1) create a profile
2) make some manual changes to the cpu assignments on the host
3) verify that the manual changes to effect
4) apply the profile
5) verify that the changes were applied by the profile"
"1) The profile is created, e.g.:
[root@controller-0 ~(keystone_admin)]# system cpuprofile-list
| uuid                                 | profilename | processors | phy cores per proc | hyperthreading | platform cores | vswitch cores    | vm cores                             |
| b1bd1e14-e609-4cc9-9ffc-ffa057a7493e | profile1    | 2          | 10                 | No             | Processor 0: 0 | Processor 0: 1-2 | Processor 0: 3-9, Processor 1: 10-19 |
2) The manual change is accepted, e.g.:
[root@controller-0 ~(keystone_admin)]# system host-cpu-modify compute-0 19 function=Vswitch
| logical_core      | 19      |
| numa_node         | 1       |
| physical_core     | 12      |
| assigned_function | Vswitch |
3) The manual change is visible:
[root@controller-0 ~(keystone_admin)]# system host-cpu-list compute-0
| log_core | processor | phy_core | thread | processor_model                           | assigned_function |
| 0        | 0         | 0        | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | Platform          |
| 1        | 0         | 1        | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | Vswitch           |
| 2        | 0         | 2        | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | Vswitch           |
| 3        | 0         | 3        | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | VMs               |
| 4        | 0         | 4        | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | VMs               |
| ...      |           |          |        |                                           |                   |
| 19       | 1         | 12       | 0      | Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz | Vswitch           |
4) [root@controller-0 ~(keystone_admin)]# system host-apply-cpuprofile compute-0 profile1
5) After applying the profile:
[root@controller-0 ~(keystone_admin)]# system host-cpu-list compute-0
The CPU assignments match profile1 (platform cores: Processor 0: 0, vswitch cores: Processor 0: 1-2, vm cores: Processor 0: 3-9, Processor 1: 10-19)
"
1 x x x x x x x x
122 Networking interfaces cannot be deleted on unlocked host "1. Check host is administrative unlocked
2. Check list interfaces for this host(system host-if-list <hostname or id>)
3. Attempt to delete interface(system host-if-delete <hostname or id> <interface name or uuid> - CLI, System-
>Inventory->Hosts->Interfaces - GUI)
4. a) Verify interfaces can not be delete when host is administrative unlock and appropriate message
appeared:'Host must be locked.' -CLI
b) No action for interface delete
5. Attempt to lock host and verify interfaces can be deleted(for computes: system host-if-delete <hostname or
id> <interface name or uuid> - CLI, System->Inventory->Hosts->Interfaces->Actions->More - GUI)"
1 x x x x x x x x
123 Networking Appropriate values should be used to modify if ports "1. Lock host and check host is administrative locked
2. Check list interfaces for this host(system host-if-list <hostname or id>)
3. a) Attempt to modify ports of an interface with wrong value: system host-if-modify-ports [-t 1111] <hostname or
id> <interface name or uuid> <ports> [<ports> ...]
b) Verify ports of an interface can not be modified with wrong value and appropriate message appeared: system
host-if-modify-ports: error: argument -t/--iftype: invalid choice: '1111' (choose from 'ethernet', 'ae')
4. a) No wrong actions for change ports of an interface: System->Inventory->Hosts->Interfaces->Actions->Edit
Interface->Interface Type(GUI)"
2 x x x x x x x x
124 Networking Appropriate values should be used to add new interfaces "1. Check host exists(system host-list)
2. Lock host and check host is administrative locked
3. Attempt to add new interface with wrong value.
4. Verify can not add new interface with wrong value and appropriate message appeared.
5. Verify interface can't be created with invalid mtu value through GUI: System->Inventory->Hosts-><host>-
>Interfaces->Create Interface->MTU"
2 x x x x x x x x
125 Networking Verify GUI support for crypto device config "HOW CAN WE CONFIGURE THE CRYPTO DEVICE?
Description: Verify GUI can be used to view/configure crypto device.
Step-1: View the device list from GUI
Expected Result: Verify that it displays the device info properly
Step-2: Disable/Enable a device from GUI
Expected Result: Verify the device is disabled/enabled properly (use CLI to verify)"
-2 x x x x x x x x
126 Networking Verify Traffic Control Class setup "TRAFFIC CONTROL BEYOND OUR CURRENT UNDERSTANDING
Verify this on following Labs configuration:
1. No infra interface
2. Management and Infra on separate interfaces.
3. Infra consolidated over management.
4. Management and Infra consolidated (vlans) over pxe"
-2 x x x x x x x x
127 Networking Boot Instance with Virtio interfaces using new NIC "TEST FOR CX4 (MELLANOX)
1. Use labsetup script and launch instance that uses Virtio
interface
2. Perform all VM life cycle Operations on VM
- Stop, Pause, Unpause, Suspend, Resume, and Cold Migrate
should all work"
I think we can dismiss this one for the moment -2 - - x x x x x x
128 Networking Audit Provider Network Connectivity Test after all slave computes reboot "Step-1: Reboot one of the slave compute
Step-2: Check if the Provider Network Connectivity Test is listed
neutron providernet-connectivity-test-list"
-2 - - - - - - - -
129 Networking Audit Provider Network Connectivity Test after deleting vxlan segment range "Step-1: Create multiple provider networks of type vxlan
Step-2: Create multiple segment range for the vxlan
provider network type
Step-3: Attach it to the different computes
Step-4: Check if the Provider Network Connectivity Test is
listed
neutron providernet-connectivity-test-list
Step-5: Delete some of the segment range
Step-6: Check if the Provider Network Connectivity Test is
listed
neutron providernet-connectivity-test-list"
-2 - - - - - - - -
130 Networking Audit Provider Network Connectivity Test of type flat "Check the Provider Network Connectivity Test list for the flat provider network:
neutron providernet-connectivity-test-list
e.g.:
| providernet_id                       | providernet_name | type | host_name | segmentation_ids | status |
| 6ad502a3-ebae-46f7-91f4-d430d8f57159 | group0-data0b    | vlan | compute-0 | 700-731          | PASS   |
| ...                                  | group0-ext0      | flat | compute-0 | *                | PASS   |"
-2 - - - - - - - -
131 Networking Providernet Connectivity Test List using CLI "Step-1: Verify all parameters for Providernet-connectivity-test-list
providernet_name, providernet_id, host_id, audit_uuid, host_name, segmentation_id
Expected Result: Verify only the filtered options should be displayed in the output
Example:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-connectivity-test-list
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-connectivity-test-list --providernet_id
413d3e2c-852c-48f5-b469-4f61d6ff4620
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-connectivity-test-list --providernet_name
group0-ext0
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-connectivity-test-list --segmentation_id
2553
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-connectivity-test-list --host-id ea04a045-
734f-4c93-a0a4-e941820e03e5"
1 x x x x x x x x
132 Networking Network Topology page in Horizon after clean install but prior to providernet and compute config "Precondition : Clean install, No provider network or compute hosts
Steps:
Install the lab, login to horizon prior to the configuration of the provider network and compute hosts.
Check the topology display in Horizon before any provider networks have been created"
1 x x x x x x x x
133 Networking Enable/Disable internal dns resolution 2+2 standard system "1) Install any 2+2 system
2) enable dns resolution by issuing the following commands:
system service-parameter-add network ml2 extension_drivers=dns
system service-parameter-add network default dns_domain=wrs_dns.com
3) lock/unlock computes
4) Boot 2 Guests
5) ping/ssh between guests via their hostnames
5a) Perform VM actions: stop/pause/resume
5b) Perform system operations:
lock/unlock computes
reboot compute hosts to trigger VM evacuation
6) Disable DNS resolution by deleting the system service parameter associated with the network/dns service, e.g.:
system service-parameter-delete 33acd5e0-354d-4d5d-a146-b257c8cfb05a
7) Lock unlock/computes
8) Boot 2 new guests
9) Attempt to ping/ssh between guests via their hostnames
"
"Check that
1) ml2 network service parameter was created with value=dns
2) Verify that VMs on the same network can access each other via their hostnames
3) Guests survive VM actions and system actions and that internal dns resolution is not impacted
4) After disabling/deleting internal dns resolution, Guests no longer resolve IPs via hostnames"
1 - - - - x x x x
134 Networking Enable/Disable internal dns resolution All-In-One system "1) Install any AIO duplex system
2) enable dns resolution by issuing the following commands:
system service-parameter-add network ml2 extension_drivers=dns
system service-parameter-add network default dns_domain=wrs_dns.com
3) lock/unlock controllers
4) Boot 2 Guests
5) ping/ssh between guests via their hostnames
5a) Perform VM actions: stop/pause/resume
5b) Perform system operations:
lock/unlock computes
reboot compute hosts to trigger VM evacuation
6) Disable DNS resolution by deleting system service parameter
associated with the network/dns service, e.g.:
system service-parameter-delete 33acd5e0-354d-4d5d-a146-b257c8cfb05a
7) Lock unlock controllers
8) Boot 2 new guests
9) Attempt to ping/ssh between guests via their hostnames"
"Check that
1) ml2 network service parameter was created with value=dns
2) Verify that VMs on the same network can access each other via their hostnames
3) Guests survive VM actions and system actions and that internal dns resolution is not impacted
4) After disabling/deleting internal dns resolution Guests no longer resolve IPs via hostnames"
1 - - x x - - - -
135 Networking "Power down/up recovery test with direct connect CPE mode, followed by swact operations" "Precondition
System installed and running in CPE mode duplex-direct (peer-to-peer network connections)
Instances exist on both controllers
Cut the power to both controllers simultaneously (DOR
test)
Return power to both controllers
Perform swact operation
Repeat swact again"
"Confirm graceful recovery of both controllers and alarms
clearing.
Confirm instances recovery."
2 - - x x - - - -
136 Networking New cli commands for system modify system_mode=duplex-direct "Verify cli 'system modify system_mode' is no longer a read-only attribute
The admin should no longer have the following response returned when the system_mode value specified is valid, i.e. 'duplex' or 'duplex-direct':
""Invalid or Read-Only attribute: system_mode""
Verify that the new cli system modify system_mode command cannot be set to an empty value or invalid values
$system modify system_mode=
$system modify system_mode=NULL"
"feedback indicates the list of valid values
$ system modify system_mode
Invalid value for system_mode, it can only be
modified to 'duplex' or 'duplex-direct'"
2 - - x x - - - -
137 Networking New cli output for system_mode in 'system show' "Precondition:
CPE system installed in default 'duplex' mode
As admin user, run the following command
$system show
Confirm the system_mode is returned as 'duplex'
Precondition:
CPE system installed in 'duplex-direct' mode
As admin user, run the following command
$system show
Confirm the system_mode is returned as 'duplex-direct'"
"'system show' output (default duplex install): name=duplex-78_79, system_type=CPE, system_mode=duplex
'system show' output (duplex-direct install): name=duplex-78_79, system_type=CPE, system_mode=duplex-direct
..."
1 - - x x - - - -
138 System Inventory change the dns server ip addresses using cli "description: change the dns server ip addresses using cli
step-1: change the ips of nameservers using cli:
system dns-modify nameservers=<ip1> […]
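For example, using the nameserver addresses from the default configuration listed earlier in this plan:
system dns-modify nameservers=8.8.8.8,8.8.4.4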
"
"verify: expected result:
- 250.001 configuration out-of-date alarm is raised temporarily, and then cleared automatically by the system
- using other tools (e.g. nslookup) check that the dns servers are working
- the changes are saved to persistent storage

Note:
the order of input DNS servers should be kept
"
1 x o o x x o o x
139 System Inventory change the dns server ip addresses using gui "step-1: change the ips of nameservers using gui: login as ‘admin’, select admin -> system -> system configurations -> dns -> edit dns
"
"expected result:
ips of dns servers changed as expected and no errors found.
250.001 configuration out-of-date alarm is raised temporarily, and then cleared automatically by the system
using other tools (e.g. nslookup) check that the dns servers are working
the changes are saved to persistent storage
"
1 o x x o o x x o
140 System Inventory change the mtu value of the data interface using api "description: change the mtu value of the data interface using api
step-1: lock a compute node using gui
expected result: the node locked without any error
step-2: curl -i -X PATCH http://${OAM_IP}:6385/v1/iinterfaces/<uuid> \
-H ""content-type: application/json"" -H ""accept: application/json"" \
-H ""x-auth-token: $token"" \
-d '[{""path"": ""/imtu"", ""value"": ""new-value"", ""op"": ""replace""}]'
step-3: unlock the node



step-4: repeat the above steps on each compute node
"
"- description: change the mtu value of the data interface using gui
expected result: the node locked without any error

- expected result: the mtu value is changed to the new value



- expected result:
- the node is able to boot into ‘unlock + enabled’ status
- the network is working without any issue on the interface

- expected result: all the compute nodes are able to boot into ‘unlock + enabled’ status
"
2 - - x x x x x x
141 System Inventory change the mtu value of the data interface using cli "description: change the mtu value of the data interface using cli
step-1: lock a compute node
step-2: use the system host-if-modify command to specify the interface and the new mtu value on the node (see the example after step-4)


step-3: unlock the node



step-4: repeat the above steps on each compute node
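
Example, following the host-if-modify usage shown in the MGMT/OAM MTU tests above (the data interface name is a placeholder):
system host-lock compute-0
system host-if-modify compute-0 <data interface> -m 9000
system host-unlock compute-0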
"
"- expected result: the node locked without any error

- expected result: the mtu value is changed to the new value
Note: the mtu must be greater than or equal to the MTU of the underlying provider network

- expected result:
- the node is able to boot into ‘unlock + enabled’ status
- the network is working without any issue on the interface

- expected result: all the compute nodes are able to boot into ‘unlock + enabled’ status
"
1 - - x x x x x x
142 System Inventory change the mtu value of the data interface using gui "description: change the mtu value of the data interface using gui
step-1: lock a compute node using gui
step-2: a) from system > inventory, select the hosts tab
b) from the edit menu for the standby controller, select lock host
c) click the name of the host, and then select the interfaces tab and click edit for the interface you want to change.
d) in the edit interface dialog, edit the mtu field, and then click save
step-3: unlock the node




step-4: repeat the above steps on each compute node
"
"- expected result: the node locked without any error

- expected result: the mtu value is changed to the new value



- expected result:
- the node is able to boot into ‘unlock + enabled’ status
- the network is working without any issue on the interface


- expected result: all the compute nodes are able to boot into ‘unlock + enabled’ status
"
2 - - x x x x x x
143 System Inventory change the mtu value of the oam interface using gui "description: change the mtu value of the oam interface using gui
step-1: lock the standby controller
step-2: login horizon as ‘admin’, select admin -> system -> inventory -> hosts -> <standby controller> -> edit interface, input new mtu and save

step-3: unlock the standby controller



step-4: swact the controllers

step-5: lock the new standby controller

step-6: modify the mtu of the corresponding interface on the standby controller

step-7: unlock the standby controller



"
"- expected result: the standby controller locked without any error

- expected result: the mtu value of the standby controller is changed to the new value

- expected result:
- the node is able to boot into ‘unlock + enabled’ status
- the network is working without any issue on the interface

- expected result: the previous standby controller changed to active controller

- expected result: the standby controller is locked without errors

- expected result: the mtu value changed successfully

- expected result:
- the node is able to boot into ‘unlock + enabled’ status
- the network is working without any issue on the interface
"
1 - - x x x x x x
144 System Inventory change the size of the image storage pool on a ceph-based system using gui "1. Admin > Platform > System Configuration Select the Ceph Storage Pools tab, Edit size of Ceph storage pools and Save

"
"- the sizes are saved, checking with:
ceph osd pool get-quota <name>
"
1 - - - - - - x x
145 System Inventory Check Resource Usage panel working properly "step 1: login to Horizon as admin
step 2: select System > Resource Usage > Usage Report
Click 'Download CSV Summary'
"
"- verify user admin logged in
- verify:
- all reports display without issue
- the usage report in csv format is downloaded in a reasonable period of time
"
1 x o o x x o o x
146 System Inventory export hosts information host-bulk-export api "description: export the information of current hosts using api
step-1: curl -i -X GET http://<url>:6385/v1/ihosts/bulk_export -H ""content-type: application/json"" -H ""accept: application/json"" -H ""x-auth-token: $token"" -d 'null'
"
"expected result: a file with the specified name is generated, containing the correct information of all hosts
"
2 o x x o o x x o
147 System Inventory export hosts information host-bulk-export cli "description: export the information of current hosts using cli
step-1: run system host-bulk-export --filename <hosts-file-name>
"
"expected result: a file with the specified name is generated, containing the correct information of all hosts
"
1 x o o x x o o x
148 System Inventory invalid inputs for number of hugepages will be rejected gui "description: invalid inputs will be rejected gui
step-1: lock a compute node
step-2: login as ‘admin’ to horizon gui, and select admin -> system -> inventory -> <compute-x> -> memory -> update memory. set invalid values to the number of (2m/1g) hugepages

"
"- expected result: the compute node locked

- expected result: the requests will be rejected with error messages

"
2 - - - - x x x x
149 System Inventory modify number of hugepages using cli "Commands to be included; a possible command sequence is sketched after step-4 below.
description: modify number of hugepages using cli
step-1: lock a compute node
step-2: set the number of 2m/1g hugepages

step-3: unlock the compute

step-4: launch vms using hugepage memory
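
A possible command sequence (the host-memory-modify options are an assumption; confirm the syntax with 'system help host-memory-modify'):
system host-lock compute-0
system host-memory-modify -2M <number_of_2M_pages> -1G <number_of_1G_pages> compute-0 <processor>
system host-unlock compute-0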
"
"- expected result: after unlock, the compute node returns to the ‘unlocked | enabled | available’ status

- expected result: the correct number of hugepage memory is consumed by the vms from the correct compute/numa node
"
1 - - - - x x x x
150 System Inventory modify number of hugepages using gui "description: modify number of hugepages using gui
step-1: lock a compute node
step-2: login as ‘admin’ to horizon gui, and select admin -> system -> inventory -> <compute-x> -> memory -> update memory. set the number of 2m/1g hugepages

step-3: unlock the compute

step-4: launch vms using hugepage memory

"
"- expected result: after unlock, the compute node returns to the ‘unlocked | enabled | available’ status

- expected result: the correct number of hugepage memory is consumed by the vms from the correct compute/numa node

"
2 - - - - x x x x
151 System Inventory query the product type on cpe system using cli "description: query the product type on cpe system using cli
step-1: run system show
"
"- verify the value of:
system type
system_mode
"
1 x o o x - - - -
152 System Inventory query the product type on cpe system using gui "description: query the product type on cpe system using gui
step-1: login as (admin), goto Admin > Platform > System Configuration > Systems
"
"- verify the value of:
Name
System Type
System Mode
Description
Version
"
1 o x x o - - - -
153 System Inventory query the product type on std system using cli "description: query the product type on std system using cli
step-1: run system show
expected result: verify the value of ‘system type’ is ‘standard’
"
"- verify the value of:
Name
System Type
System Mode
Description
Version
"
1 - - - - x o o x
154 System Inventory query the product type on std system using gui "description: query the product type on std system using gui
step-1: login as (admin), goto Admin > Platform > System Configuration > Systems
"
"- verify the value of:
Name
System Type
System Mode
Description
Version
"
1 - - - - o x x o
155 System Inventory Test creating and applying interface profile "step 1: create interface profile for a node
-cli: system ifprofile-add
-gui: Admin > Platform > Host Inventory
> Interfaces > Create Interface Profile
step 2: lock the node to test

step 3: delete interfaces of the node


step 4: apply the interface profile

step 5: unlock the node


"
"- verify the profile created successfully



- verify the node locked successfully
- verify the interfaces are deleted
Note:
- cannot delete interface of 'ethernet' type, change its type to 'none' instead
- leave mgmt interface unchanged
- verify the interfaces are recreated successfully

- verify:
- the node can be unlocked and get into unlocked, enabled and available states.
- the interfaces working without any issue
"
1 - - - - x x x x
156 System Inventory verify that alarm can be deleted using cli "description: verify that alarm can be deleted using cli
step-1: delete alarm using cli:
alarm-delete <uuid>
step-2: check if the alarm still exists
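For example, re-check with the alarm listing command used elsewhere in this plan:
fm alarm-list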
"
"- expected result: verify the command is accepted and return success


- expected result: verify that alarm is deleted
"
1 x o o x x o o x
157 System Inventory verify that the cpu data can be seen via cli "description: verify that the cpu data can be seen via cli
step-1: list the cpu processors using system host-cpu-list 1
step-2: show the detailed information of a specific logical core
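For example (the host-cpu-show subcommand is assumed here; confirm it with 'system help'):
system host-cpu-show 1 <log_core or uuid from 'system host-cpu-list 1'>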

"
"- expected result: get the list without any errors

- expected result: the information about numa_node, physical_core, assigned_function, etc. is displayed without any errors

"
1 x x x x x x x x
158 System Inventory verify the system type is read-only and cannot be changed via cli "description: verify the system type is read-only and cannot be changed via cli
step-1: attempt to change the system type:
system modify system_type=’standard’ on cpe system
system modify system_type=’cpe’ on std system
"
"- expected result:
the requests will be rejected
appropriate error messages were produced
the system type remains unchanged
"
2 x o o x x o o x
159 System Inventory verify the system type is read-only and cannot be changed via gui "description: verify the system type is read-only and cannot be changed via gui
step-1: login onto horizon, make sure no interface to change system type
"
"expected result: no user (admin/tenant1…) can change the ‘system type’
"
2 o x x o o x x o
160 System Inventory verify wrong interface profiles will be rejected "description: verify applying wrong profiles after booting new host
step-1: attempt to apply cpu profile with mismatched number of cores
"
"- expected result: the action is rejected with error message

"
2 - - - - x x x x
161 System Inventory Verify CLI system infra-modify rejects action=any_value_not_apply "step 1: change the settings of infra-network
system infra-modify infra_end=<...> somekey=somevalue
"
"- verify:
- CLI failed with non-0 return code with error messages
- settings of infra-network remain unchanged
"
2 x x x x x x x x
162 System Inventory Verify CLI system ntp-modify rejects action=any_value_not_apply "step 1: system ntp-modify ntpservers=<server1...> action=abc123





"
"- verify:
- no error message, and return code is 0
- DB updated with new values (system ntp-show)
- configuration remains untouched
- no Config out-of-date
"
2 x o o x x o o x
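A concrete invocation of the step above, checking the return code and the stored values (server names are placeholders):
$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org action=abc123
$ echo $?                          # expect 0
$ system ntp-show                  # confirm the new ntpservers value is stored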
163 System Inventory Verify CLI system pm-modify does not require option action=apply "step 1: system pm-modify retention_secs=nnnn






"
"- verify:
- no error message
- return code is 0
- new value is populated to DB and can be verified with system pm-show
- configuration file is also updated
- alarms 250.001 Config out-of-date are raised and cleared shortly automatically

"
2 o x x o o x x o
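For example (the retention value is only an illustration; the alarm listing command depends on the release):
$ system pm-modify retention_secs=86400
$ echo $?                          # expect 0
$ system pm-show                   # confirm retention_secs=86400
$ fm alarm-list | grep 250.001     # Config out-of-date should be raised and then clear automatically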
164 System inventory "PTP support - Controller Nodes act as Boundary Clocks and synchronize their clocks to a
Grand Master clock via the OAM interface, and serve as a time source to other nodes.
Compute/Storage Nodes are Slave Clocks (Ordinary Clocks) and synchronize
their clocks to the Controller Nodes via the Management interface."
"API is provided to enable and configure PTP as follows: ""system ptp-modify --enabled=<true/false>"" is there to turn it on/off. Note that NTP must be disabled first before turning PTP service on. ""system ptp-modify --mode=<hardware/software/legacy>"" selects time stamping. Hardware timestamping is the default option and achieves best time syncing. ""system ptp-modify --transport=<l2,udp>"" switches between IEEE 802.3 or UDP network transport for PTP messaging. L2 is the default transport. ""system ptp-modify --mechanism=<e2e,p2p>"" sets the PTP delay mechanism. Options: default delay request-response (E2E) mechanism and peer delay (P2P). ""system ptp-show"" displays the current status of PTP service." 1 x x x x x x x x https://storyboard.openstack.org/#!/story/2002935
165 Horizon Horizon login screen displays StarlingX "In this case, should say ""StarlingX""" Image should be on the first screen during login to Horizon 1 x x x x x x x x
166 Horizon Horizon system type shows correctly "Run ""system show"" on the active controller and grep for ""system""; compare against the system type shown in Horizon" "- ""system_mode"" refers to the number of controllers:
- simplex (one controller)
- duplex (two controllers)
- ""system_type"" refers to the configuration and number of additional servers:
- All-in-one (services on the controller)
- Standard (services on additional servers, i.e. on the computes)"
1 x x x x x x x x
167 Horizon "Edit Image for volume in Horizon and add Instance Auto Recovery, verify metadata updated" 1 o x x o o x x o
168 Horizon "Edit Image of snapshot in Horizon and add Instance Auto Recovery, verify metadata updated" 1 o x x o o x x o
169 Horizon Verify that after installation of the second controller the GUI does not stop working 1 - - x o o x x o
170 Horizon Horizon login time using All-in-one "Attempt to login on horizon using ""admin"", ""tenant1"", and ""tenant2""" - Ensure the login time < 5 seconds 2 x x x x - - - -
171 Horizon Horizon login time using Regular "Attempt to login on horizon using ""admin"", ""tenant1"", and ""tenant2""" - Ensure the login time < 5 seconds 2 - - - - x x - -
172 Horizon Horizon login time using Storage "Attempt to login on horizon using ""admin"", ""tenant1"", and ""tenant2""" - Ensure the login time < 5 seconds 2 - - - - - - x x
173 Storage Verify snapshot via cinder snapshot-create Property | Value |
+--------------------------------------------+--------------------------------------+| created_at | 2015-02-27T17:06:48.101887
None None 70eb8035-35d0-4513-9e8b-3e4f4aad4f0a {} 100% 3ad444f240c54c5598483107416f61cb 1 available db892165-87f9-4903-92d8-4e8fd83de431 None |
+--------------------------------------------+--------------------------------------+
Fri Feb 27 17:07:23 UTC 2015

"
verify that snapshot is created 1 x x x x x x x x
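A minimal end-to-end sketch (option names follow the older cinder client used elsewhere in this plan; newer clients use --name instead of --display-name):
$ cinder create --display-name vol-snap-src 1
$ cinder snapshot-create --display-name snap1 $(cinder show vol-snap-src | awk '/ id / {print $4}')
$ cinder snapshot-list             # snap1 should reach 'available' at 100%
$ cinder snapshot-show <snapshot-id>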
174 Storage 20 10gb volumes - controlled/uncontrolled swact "description: this test will check the controlled and uncontrolled swact performance of the system when there are 20 10gb volumes created.

assumption: lvm-based system is running release 3 with 20 10gb volumes associated with vms, created on the system. no unexpected alarms are present.

step-0: confirm lio is running on the system via 'sudo targetcli ls'
expected result: if the command can be executed, it means we are using lio.

step-1: issue a controlled swact and make note of the time. when the swact completes, make note of the time. note, a controlled swact is where the user issues system host-swact <host>
expected result: the user will make note of the duration of the swact.

step-2: repeat step 1, two more times and record the results.
expected result: the user will make note of the duration of the swact.

step-3: issue an uncontrolled swact and make note of the time. when the swact completes, make note of the time. note, the uncontrolled swact should be issued via running “sudo reboot” on the controller.
expected result: the user will make note of the duration of the swact.

step-4: repeat step 3, two more times.
expected results: the user will make note of the duration of the swact.

"
"- description: this test will check the controlled and uncontrolled swact performance of the system when there are 20 10gb volumes created.

assumption: lvm-based system is running release 3 with 20 10gb volumes associated with vms, created on the system. no unexpected alarms are present.

step-0: confirm lio is running on the system via 'sudo targetcli ls'
expected result: if the command can be executed, it means we are using lio.

step-1: issue a controlled swact and make note of the time. when the swact completes, make note of the time. note, a controlled swact is where the user issues system host-swact <host>
expected result: the user will make note of the duration of the swact.

step-2: repeat step 1, two more times and record the results.
expected result: the user will make note of the duration of the swact.

step-3: issue an uncontrolled swact and make note of the time. when the swact completes, make note of the time. note, the uncontrolled swact should be issued via running “sudo reboot” on the controller.
expected result: the user will make note of the duration of the swact.

step-4: repeat step 3, two more times.
expected results: the user will make note of the duration of the swact.
"
2 - - x x x x x x
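A simple way to record the controlled swact duration from the active controller (timing is taken manually; the standby becomes active):
$ date
$ system host-swact controller-0
# re-login via the floating OAM IP once the swact completes, then:
$ date
$ system host-show controller-1 | grep -E 'administrative|operational|availability'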
175 Storage controlled/uncontrolled swact with no volumes "description: this test performs a baseline measurement of how long it takes to complete a swact in release 3 on an lvm system when there are no volumes created on the controller.

assumption: lvm-based system is running release 3 with no volumes created on the system. no unexpected alarms are present.

step-0: confirm lio is running on the system via 'sudo targetcli ls'
expected result: if the command can be executed, it means we are using lio.

step-1: issue a controlled swact and make note of the time. when the swact completes, make note of the time. note, a controlled swact is where the user issues system host-swact <host>
expected result: the user will make note of the duration of the swact.

step-2: repeat step 1, two more times and record the results.
expected result: the user will make note of the duration of the swact.

step-3: issue an uncontrolled swact and make note of the time. when the swact completes, make note of the time. note, the uncontrolled swact should be issued via running “sudo reboot” on the controller.
expected result: the user will make note of the duration of the swact.

step-4: repeat step 3, two more times.
expected results: the user will make note of the duration of the swact"
"description: this test performs a baseline measurement of how long it takes to complete a swact in release 3 on an lvm system when there are no volumes created on the controller.

assumption: lvm-based system is running release 3 with no volumes created on the system. no unexpected alarms are present.

step-0: confirm lio is running on the system via 'sudo targetcli ls'
expected result: if the command can be executed, it means we are using lio.

step-1: issue a controlled swact and make note of the time. when the swact completes, make note of the time. note, a controlled swact is where the user issues system host-swact <host>
expected result: the user will make note of the duration of the swact.

step-2: repeat step 1, two more times and record the results.
expected result: the user will make note of the duration of the swact.

step-3: issue an uncontrolled swact and make note of the time. when the swact completes, make note of the time. note, the uncontrolled swact should be issued via running “sudo reboot” on the controller.
expected result: the user will make note of the duration of the swact.

step-4: repeat step 3, two more times.
expected results: the user will make note of the duration of the swact.

"
2 - - x x x x x x
176 Storage Volume clone time "Step-1: Clone an existing volume and time how long the clone takes to become 'available', e.g.:
date
cinder create --display-name tst-clone1 --source-volid $(cinder show <source-volume> | awk '/ id / {print $4}') 1
id=”cinder show tst-clone1 | awk '/ status / {print \$4}'”
while [ “$(eval $id)” != “available” ]; do sleep 1; done
date
cinder list
Expected Result: Ensure the cloning takes less than 20 seconds.

Step-2: Run Step-1 a few times to ensure the results are consistently within the same range.
Expected Result: Volume creation times should be fairly consistent, and be less than 20 seconds (as reported in release 15.09).
"
"- cloning completes in less than 20 seconds
- clone times are consistent across iterations
"
2 x x x x x x x x
177 Storage Disable Local Storage "- Lock the local storage computes
- Use the GUI and/or the CLI to delete the nova-local group
- Unlock the node(s)
- Verify that the volume groups are no longer present on the compute nodes
- Verify that the host aggregate group is gone and the computes are all now included in the provider_providernet; host aggregate group
- Verify that launching an instance with a local storage flavour fails since the host aggregate group is gone and no computes have local storage.
"
2 - - - - x x x x
178 Storage Launch an instance using localstorage flavour boot from image "1 localstorage flavour (aggregate_instance_extra_specs=true)
2 boot from image
- verify that it is launched on local storage compute node (compute-0)
- check an LV on the hosting compute is created in nova-local VG
- verify LVs are created for ephemeral and swap disks
"
1 - - - - x x x x
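A sketch of steps 1-2, reusing the storage extra-spec key referenced later in this plan; the flavor name, image and network are placeholders:
$ nova flavor-create local-img-flv auto 1024 2 1
$ nova flavor-key local-img-flv set aggregate_instance_extra_specs:storage=local_image
$ nova boot --flavor local-img-flv --image <image> --nic net-id=<net-id> vm-local
# on the hosting compute (e.g. compute-0):
$ sudo lvs nova-local              # expect LVs for the instance disk, ephemeral and swap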
179 Storage kill VM and restart VM and validate no issues with re-attaching to Cinder Volume with external storage "CEPH external storage
1) In ceph storage system, create two cinder volumes (2G) in size with one bootable. Ensure both volumes are created successfully
2) Launch VM from volume created in step 1 and the VM launch successfully
3) Attach the second non-bootable volume to the VM. On VM, mount the second volume, if not auto-mounted.
4) Power off the VM
5) Restart the VM and validate no issues on re-attaching to the Cinder volumes
"
"VM up and running, with the second cinder volume mounted" 1 - - - - - - x x https://storyboard.openstack.org/#!/story/2002820
180 Storage Verify logs of activity "Steps:
1) Examine the following log to ensure that the above failure scenarios were logged
/var/log/patching.log
2) Verify that appropriate/understandable error messages are in the log
"
verify that all logs are present or not empty 1 x o o x x o o x
181 Storage Enable swift on controllers "Swift can be turned on after config_controller by CLI:
system service-parameter-modify swift config service_enabled=true
system service-parameter-modify swift config fs_size_mb=50 (optional)
system service-parameter-apply swift

By default the filesystem size is 25MB. Users can optionally modify the
size before issuing service-parameter-apply:
system service-parameter-modify swift config fs_size_mb=50"
1 x x x x x x x x
182 Backup and restore Check cinder-backup service is running Check cinder-backup is in the SM services 1 x x x x x x x x https://storyboard.openstack.org/#!/story/2003115
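One way to check, assuming the SM state dump tool is available on the active controller:
$ sudo sm-dump | grep cinder-backup        # expect the service in an enabled-active state
$ ps -ef | grep [c]inder-backup            # confirm the process is running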
183 Backup and restore Generate backup files from existing configuration "execute ""sudo config_controller --backup <backup_name>""" "sudo config_controller --backup <backup_name> reports ""Performing backup (this might take several minutes):"", runs through all steps (Step 16 of 16), and produces the <backup_name>_system.tgz and <backup_name>_images.tgz files"
184 Backup and restore Verify that backup files contain customized files "after creating the tar files, untar them and verify that your customized files exist, such as cirros.img or basic.yaml" the extracted backup should contain the same structure as was located on the active controller 1 x x x x x x x x
185 Backup and restore Restore system "in a clean environment, perform sudo config_controller --restore-system <backup_name_system.tgz>" system should be restored to the same state it was in when the backup files were generated 1 x x x x x x x x
186 Backup and restore Restore images "in a clean environment, perform sudo config_controller --restore-images <backup_name_images.tgz>" images should be restored to the same state they were in when the backup files were generated 1 x x x x x x x x
187 Nova Check Nova Diagnostics "Launch instance
Perform some actions on the instance
Run server diagnostics and confirm output is received

$nova diagnostics <instanceid>


Lock the instance and run nova diagnostics again to confirm output does not change









"
"- nova diagnostics returns a property/value table (e.g. cpu0_time, memory, memory-actual, memory-rss, vda_read, vda_write and vnet counters)


Nova diagnostic
The dictionary of data returned is specific to the driver. The output has changed to include the following Properties instead
config_drive (True or False)
cpu_details utilisation per cpu, eg. id 0, 1 or 2, and time
disk_details read_requests, errors_count, read_bytes, write_requests, write_bytes
driver eg. libvirt
hypervisor eg. kvm
memory_details eg. where flavor is 4 ram {""used"": 4, ""maximum"": 4}
nic_details
num_cpus eg. where flavor has 4 vcpus (4)
num_nics
state eg. running
uptime
- Confirm output did not change for cpuX_time, memory, memory-actual, memory-available, memory-major_fault, memory-minor_fault, memory-rss, memory-swap_in, memory-swap_out, memory-unused, vda_errors, vda_read, vda_read_req, vda_write, vda_write_req

In R5, the output is different. Output for the following will not change while instance is in the lock state.
(Uptime will continue to increment though)

config_drive
cpu_details utilisation per cpu, eg. id 0, 1 or 2, and time
disk_details read_requests, errors_count, read_bytes, write_requests, write_bytes
"
1 x x x x x x x x
188 Nova Flavor - Adding access to the flavor "Create a private flavor ('os-flavor-access:is_public' set to 'False')

Add tenant user access to the flavor
$ nova flavor-access-add <flavorname> <tenant_id>

The corresponding tenant users configured have access to the flavor for creating and resizing instances.
"
"-
Confirm only the tenant user specified can now access the flavor on creating an instance.


Confirm only the tenant user specified can now access the flavor on resizing an instance.

"
1 x o o x x o o x
189 Nova Flavor - Removing access to the flavor "Create a flavor

Remove Nova flavor access

$ nova flavor-show <flavorname>

Remove user access from the flavor
$ nova flavor-access-remove small.float 934cb48930ab4b6c819d0b1b191e795a


The corresponding tenant users removed do not have access to the flavor for creating and resizing instances.
"
"- Confirm the tenant user removed can not access the flavor for creating instance

Confirm the tenant user removed can not access the flavor for resizing instance

(Access permission for others in the access list should be retained).
"
1 o x x o o x x o
190 Nova Flavor - Flavor in use can not be modified (or deleted) "Create a flavor
Launch an instance using the new flavor.
Attempt to modify a setting in the flavor (that is in use by the instance.)

For example attempt to change
Disk size
VCPU setting
For example attempt to change or remove
any extra spec setting


Create a flavor
Launch an instance using the new flavor.
Attempt to delete the flavor (that is in use by the instance.)
"
"The deletion of a flavor should be rejected if in use by the instance


- The deletion of the flavor should be rejected if in use by the instance.


"
1 x o o x x o o x
191 Nova "ImageSnapshot - snapshot a running server, show snapshot and delete image" "Launch an instance on one of the computes as tenant
user eg. tenant1

On the cli, as the same tenant user run the following to get the
instance id
$nova list


Then create a new image (test1) by taking a snapshot of a
running server.
$ nova image-create gfdgfd test1
[wrsroot@controller-0 ~(keystone_tenant1)]$ nova
image-create 1d6e05df-c644-4c4f-8f92-2dbb9aad2ae6 test2

Run the following show command to confirm image is created.
$ nova image-show test1

Delete the snapshot using
$nova image-delete <imagesnapshotid>

Run the following to confirm the image snapshot has been removed.
$nova image-list
"
"- Confirm the image is created and is listed in the Images list in horizon for that user with type “Snapshot”


The image snapshot has been removed.
$nova image-list


Note in release 4:
This testcase requires the use of the glance command instead ie. glance image-list, glance image-delete
..instead of nova image-list
nova command deprecated

WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-glanceclient or openstackclient instead.
ERROR (VersionNotFoundForAPIMethod): API version 'API Version Major: 2, Minor: 37' is not supported on 'list' method.
"
1 x o o x x o o x
192 Nova Aggregates - Update aggregates and host(s) "Add metadata to the aggregate
eg. add the availability zone


$nova aggregate-set-metadata <aggregatename> availability_zone=<value>
where eg. value is ""nova""

Add a host to the new aggregate group
$ nova aggregate-add-host Group1 compute-0

"
"- Confirm the metadata and associated host is listed in the
$ nova aggregate-details <aggregatename>

(Confirm that the group and availability zone is listed in the host aggregates page in Horizon and the availability zone includes the newly added zone in the list when the Launch Instance dialog is used)

"
1 x o o x x o o x
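A worked sequence, starting from aggregate creation (names and values are examples):
$ nova aggregate-create Group1
$ nova aggregate-set-metadata Group1 availability_zone=nova
$ nova aggregate-add-host Group1 compute-0
$ nova aggregate-details Group1            # metadata and compute-0 should be listed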
193 Nova CPUPolicy - CPU Policy set in Horizon extra specs "Pre-requirement: Create a flavor

Nova flavor 'Extra Spec' of 'CPU Policy' hw:cpu_policy
shared , dedicated
"
"- Verify corresponding options listed in Horizon Extra Spec drop down list.
If this extra spec is not added, the cpu policy is 'shared'
"
1 x x x x x x x x
194 Nova "CPUPolicy - From the cli, set CPU Policy in the extra spec (valid value)" "Pre-requirement: Create a flavor

Nova flavor 'Extra Spec' of 'CPU Policy'

hw:cpu_policy
shared , dedicated

From both Horizon and cli, verify setting CPU Policy in the extra spec to a valid value

$ nova flavor-key <flavor> set hw:cpu_policy=dedicated
$ nova flavor-key <flavor> set hw:cpu_policy=shared

Launch instance with each cpu policy setting
"
"- Verify corresponding options listed in Horizon Extra Spec drop down list.

Instance launches successfully.
"
1 x x x x x x x x
195 Nova "CPUPolicy - From both Horizon and cli, test validation of hw:cpu_policy - invalid or empty values" "From both Horizon and cli, test validation of invalid or empty entries for the hw:cpu_policy

$ nova flavor-key <flavor> set hw:cpu_policy=shard
$ nova flavor-key <flavor> set hw:cpu_policy=bogus
$ nova flavor-key <flavor> set hw:cpu_policy=
"
"- ERROR (BadRequest): invalid hw:cpu_policy '<value>', must be one of: dedicated, shared (HTTP 400)




"
-1 x x x x x x x x
196 Nova CPUPolicy - Test case sensitive entries for the value of the hw:cpu_policy "Test case sensitive entries for the hw:cpu_policy value

$ nova flavor-key <flavor> set hw:cpu_policy=SHARED
$ nova flavor-key <flavor> set hw:cpu_policy=DEDICATED
"
"- ERROR (BadRequest): invalid hw:cpu_policy <value>, must be 'dedicated' or 'shared' (HTTP 400)



"
2 x x x x x x x x
197 Nova "CPUScaling - Using cli/Horizon,verify error returned if the extra spec setting for minimum CPU exceeds flavor vCPUs set" "DEFINE STEPS
Nova Test Plan - Setting the CPU Scaling Range

nova/test_cpu_scale.py::
test_flavor_min_vcpus_invalid
"
2 x x x x x x x x
198 Nova CPUScaling - Verify error when attempt to set the minimum CPU to value below minimum valid range- zero or negative "DEFINE STEPS
Nova Test Plan - Setting the CPU Scaling Range


nova/test_cpu_scale.py::
test_flavor_min_vcpus_invalid
"
-1 x x x x x x x x
199 Nova Evacuations of instances with cpu thread policies using cli command "Perform instance evacuate using the following command ensuring that the
instance evacuates to the respective compute node
(if a suitable host is available).

nova host-evacuate (evacuates all the instances)
"
"- The instances all evacuate successfully to the new host.
The instance pinning is preserved following the evacuation.
"
1 - - x x x x x x
200 Nova MemPageSize -Verify invalid or empty values for hw:mem_page_size "Using the cli and horizon gui configuration - Test empty value is prevented, valid value is accepted

$ nova flavor-key <flavor> set hw:mem_page_size=

Valid value small, large, any, 2048, 1048576
"
"- Empty value is rejected with an error; valid values (small, large, any, 2048, 1048576) are accepted" 1 x x x x x x x x
201 Nova MigrationTime - Test Live migrate operation is not allowed (from the GUI) if the other compute is locked "Lock the only other compute(2)
Attempt to live migrate
After unlocking the other compute, the migration will succeed - assuming the timeout value is not too low
"
"- Live migrate will fail as expected.
Migrate of instance <instance> from host compute-0 failed
Migration will succeed if other host available.
"
1 - - x x x x x x
202 Nova MigrationTime - Create Flavor extra spec live migration max downtime within valid range for live migration max downtime "Create flavor with extra spec live migration max downtime setting within valid range for live migration max downtime

$ nova flavor-key <flavor>
set hw:wrs:live_migration_timeout=120
$ nova flavor-key <flavor>
set hw:wrs:live_migration_max_downtime=501
"
2 - - x x x x x x
203 Nova Server Actions -Rebuild interaction "$ nova list --all-tenants

1.Rebuild the instance using name parameter (with the same image) then resize
$ nova rebuild --name newthree <instanceid> <imageid>
$ nova resize <instanceid> <flavorid>
2. Rebuild the instance without the name parameter (with the same image) then resize
$ nova rebuild <instanceid> <imageid>
$ nova resize <instanceid> <flavorid>
3. Try to rebuild while in resize or pause state and confirm conflict is reported
$ nova resize 2d356f9b-7e80-4a81-8297-60faf2f6c7ed decac413-7e69-408b-8ca8-43a6ff45d2d7
[wrsroot@controller-0 ~(keystone_admin)]$ nova rebuild
2d356f9b-7e80-4a81-8297-60faf2f6c7ed 82dcea24-416a-4496-ac4e-6bbb7cc152ce
4. Try to rebuild instance while it is in stopped state. Should succeed.
nova stop <instanceid>
nova rebuild --name newthree <instanceid> <imageid>
Then Start the instance
nova start <instanceid>
5. Try to rebuild instance selecting a different image where the flavor has a disk size that is actually large enough to accommodate (eg. rebuild from cgcs-guest to centos)
6. Try to rebuild instance really soon after previous rebuild request
7. Try to rebuild instance using a different image where the flavor does not have a large enough disk size for the image selected. (eg. lvm backing)"
"- Success




- Success


- conflict is reported

ERROR (Conflict): Cannot 'rebuild' instance 2d356f9b-7e80-4a81-8297-60faf2f6c7ed while it is in vm_state resized (HTTP 409) (Request-ID: req-bf5b3986-4984-4657-9db2-7092c23081ce)

- Success





- Rebuild should succeed.



- Gracefully handle accordingly

- Should provide error feedback and prevent rebuild if disk
size is too small for image.

"
1 x x x x x x x x
204 Nova Server Actions - Nova delete instance in error state "Create image that requires large disk to be specified in the flavor
eg. image Ubuntu or centos

Create a flavor that is insufficient for the image

$nova flavor-create tst-flavor-hatong-Dedicated auto 256
1 1
$nova flavor-key tst-flavor-hatong-Dedicated set
hw:cpu_model=Haswell hw:cpu_policy=dedicated

Attempt to boot the instance (will error) with the image (Ubuntu or centos) and with the flavor that does not meet disk size requirement

$ nova boot --flavor tst-flavor-hatong-Dedicated --image ubutest_14 --nic net-id=250e54fe-9512-413b-8609-30481419c4d5 b


Run $nova show <instanceid> to see the instance in error

Attempt to delete the instance
$ nova delete <instance name>
"
"- nova show reports the instance in error state with:
fault | {""message"": ""Build of instance
88f76943-9a4d-4e28-8995-79792fed559f aborted:
Flavor's disk is too small for requested image. Flavor disk
is 1073741824 bytes, image is 2361393152 bytes."",
""code"": 500, ""created"": ""2016-10-07T17:18:35Z""} |


Should delete successfully




"
1 x x x x x x x x
205 Nova "SharedCPU - Using horizon/cli, create flavor with hw:cpu_policy “dedicated” and valid hw:wrs:shared_vcpu values" DEFINE STEPS 1 x x x x x x x x
206 Nova SharedCPU - Instance with hw:wrs:shared_vcpu respected on resize "Launch instance with a flavor with hw:wrs:shared_vcpu
eg. hw:wrs:shared_vcpu=1

As admin user, Resize instance to a flavor that does not have hw:wrs:shared_vcpu=1
"
"- On launch user will receive feedback when Shared CPU Assignment is not enabled

Instance resize succeeds.
"
1 x x x x x x x x
207 Nova "Local Storage - Neutron port status after successful migration ACTIVE (cold, live, block)" "On a system with at least 2 computes

1.Launch instance
Once active, look at the status of the related neutron ports – they should be reported as ACTIVE

$ neutron port-list --device_id=<instance-id> -c id -c status -c binding:host_id
| id | status
ACTIVE ACTIVE ACTIVE |


2.Cold migrate the instance to another compute and check the related neutron ports to ensure they are reporting ACTIVE

Repeat
On a system with at least 2 computes

1.Launch instance
Once active, look at the status of the related neutron ports – they should be reported as ACTIVE

$ neutron port-list --device_id=e80a0560-42f7-4d81-bbed-68b52664566b -c id -c status -c binding:host_id
| id | status
ACTIVE ACTIVE ACTIVE |

2. Live migrate an instance to another compute
Check the related neutron ports to ensure they are reporting ACTIVE

Repeat


1.Launch instance
Once active, look at the status of the related neutron ports – they should be reported as ACTIVE

$ neutron port-list --device_id=e80a0560-42f7-4d81-bbed-68b52664566b -c id -c status -c binding:host_id
| id | status
ACTIVE ACTIVE ACTIVE |

Live block migrate an instance to another compute
Check the related neutron ports to ensure they are reporting ACTIVE

Repeat
"
"- Expect them to be ACTIVE after cold migration.
- Expect them to be ACTIVE after live migration.
- Expect them to be ACTIVE after live block migration.







"
1 - - x x x x x x
208 Nova "Local Storage - Local LVM backed instance - cold, live, block migration combinations" "Boot instances from glance or cinder with storage type Local LVM Backed and verify migrations (aggregate_instance_extra_specs:storage local_lvm

Migrate (cold) to a compute with the same local_lvm backing.
Attempt live, or live block migrations when instance has local_lvm backing and attempt migrations to wrong storage backing.
System is already configured for specific storage backing
"
1 - - x x x x - -
209 Nova Local Storage - Launch instance from volume snapshot (created using Horizon) with Local_CoW Image Backed storage "Launch as instance from volume snapshot (created using horizon) with Local_CoW Image Backed storage

Precondition: compute(s) with local_image backing

Create a Volume
From horizon, create volume from an image (where image optionally has additional metadata such as cpu policy) and specifying Nova as the availability zone.

Run cinder command to ensure volume created
$cinder list

Create a snapshot using horizon as tenant/admin
Create a snapshot from the volume specifying a snapshot name (and optionally specifying a snapshot description)

Ensure the volume snapshot is listed in horizon and in cli
$ cinder snapshot-list

Run cinder snapshot-show <snapshotid> to view the properties and values of the snapshot

Look for failures in cinder log on create
$grep <id> /var/log/cinder/*.log

Boot instance(s) from volume snapshot (creates new volume)

“Launch as instance” from the volume snapshot in horizon. ‘Boot from volume snapshot (which creates a new volume)’ selecting the newly created Volume Snapshot.
When selecting the flavor, select one with local_image storage type.
(Optionally, select the option to Delete volume on terminate)
"
"- Booting instance from volume snapshot creates a new volume.
The instance launches

Ensure No errors in the cinder.log on creating the snapshot.

Ensure No errors in the nfv-vim.log in cinder snapshot creation.
"
1 x x x x x x x x
210 Nova Local Storage - Launch instance from volume snapshot (created using cli) with Local_CoW Image Backed storage "Launch as instance from volume snapshot (created using cli) with Local_CoW Image Backed storage without force flag option. Run cinder snapshot-show to view the snapshot properties. Look for creation failures in the cinder log.

Precondition: compute(s) with local_image backing

Create Volume
Create volume from an image (where image optionally has additional metadata such as hw_cpu_policy dedicated or shared) and specifying Nova as the availability zone.

Run cinder list command to get the volumeid
$cinder list

Scenario a. Create Snapshot Without Display Description
Snapshot create without --display-description parameter

Create snapshot from volume without any display-description specified

$ cinder snapshot-create --display-name WendySnap1 <volumeid>

List the newly created snapshot
$ cinder snapshot-list

Run cinder snapshot-show <snapshotid> to view the properties and values of the snapshot

$cinder snapshot-show <snapshotid>
Eg. $ cinder snapshot-show <id> displays property as follows:
created_at
display_description
display_name
id
metadata
os-extended-snapshot-attributes:progress
os-extended-snapshot-attributes:project_id
size
status
volume_id
wrs-snapshot:backup_status

Look for failures in cinder log on snapshot creation.
$grep <snapshotid> /var/log/cinder/*.log

Launch as instance from the newly created volume snapshot where the storage type specified in the flavor is local_image


"
1 x x x x x x x x
211 Nova Local Storage - Delete volume snapshot - instance has been launched from it "Delete the volume snapshot where instance has been launched from it

Delete the volume snapshot, where a local_image instance has been launched from the volume snapshot

Create volume(s) from images and list the volume(s)
$ nova volume-list --all-tenants
This should return Status (available, in-use), Display Name, Size, Volume Type and Attached to info)

Create a snapshot from the volume and launch an instance with the volume snapshot that was created prior to deleting the snapshot.

$nova volume-list --all-tenants
Create volume from image and create a snapshot of the new volume
$nova volume-snapshot-create --display-name <somename> <volumeid>

Launch an instance from the new snapshot (using boot from volume snapshot)
Delete the volume snapshot without deleting the instance first
$ nova volume-snapshot-list
$ nova volume-snapshot-delete <snapshotid>
"
"- Deleting the volume snapshot from horizon/ cli should be allowed if you first delete the instance.

"
1 x o o x x o o x
212 Nova Local Storage - Delete volume snapshot - instance has never been launched from it "Delete the volume snapshot where an instance has never been launched from it.

Create volume(s) from images and list the volume(s)
$ nova volume-list --all-tenants
This should return Status (available, in-use), Display Name, Size, Volume Type and Attached to info)

Do not launch an instance from the volume but just create the snapshot of the volume
$nova volume-snapshot-create --display-name noinstance <volumenotinuse>
List then delete the volume snapshot just created
$ nova volume-snapshot-list
$ nova volume-snapshot-delete <snapshotid>

List the volume snapshot to confirm it is deleted
$ nova volume-snapshot-list
"
"- Deleting the volume snapshot from horizon/ cli should be allowed if you first delete the instance.


"
2 x x x x x x x x
213 Nova Local Storage - Update metadata in volume and snapshot "DEFINE INSTRUCTIONS
Update metadata in volume and snapshot"
2 x x x x x x x x
214 Nova RecoveryHeartBeat - Using cli/horizon set flavor extra sw:wrs:guest:heartbeat to valid values "Verify valid values using the cli/GUI (including the following):

$ nova flavor-key <flavor> set sw:wrs:guest:heartbeat=true
$ nova flavor-key <flavor> set sw:wrs:guest:heartbeat=False
$ nova flavor-key <flavor> set sw:wrs:guest:heartbeat=TRUE
$ nova flavor-key <flavor> set sw:wrs:guest:heartbeat=FALSE

Run nova flavor-show <flavor> to confirm extra_spec is set
"
"- ""sw:wrs:guest:heartbeat"": ""<value>""" 2 x o o x x o o x
215 Security Admin Password change "To Update admin password:
$ source /etc/nova/openrc
$ openstack user set --password <new_password> <user>
"
"- Verify services are working as expected.
- Verify if password changed by log in into StarlingX Horizon

Notes:
Each time the password is updated, you should exit the 'keystone_admin' session and source the credentials again.
"
1 x x - - - - - -
216 Security "SSH - SSH root access sshd config file changed, Connection rejected." """Generate an SSH key-pair
$ ssh-keygen -t rsa""
""Copy the Public key over the Lab controller
$ scp ~/.ssh/<id_rsa.pub> wrsroot@<lab.ip>""
Copy the public key from your wrsroot account into the “authorized_keys” file of the “root” account
""Adding ssh key:
login to controller
do sudo su to get to root
create folder/file: /root/.ssh/authorized_keys if they do not exist
cat /home/wrsroot/<id_rsa.pub> >> /root/.ssh/authorized_keys""
""Now login from your desktop using
$ ssh -i <private_key_file> root@<lab.ip>""
""On attempting to ssh as root (with/without password), the user will now get a """"Permission denied"""" error.
Even if the user tries ssh -i <key>, they should not be prompted for a password at all. The denial output should be shown
before any password prompt.""
"
"This generates a set of keys (private key and pub key. The pub one has the .pub extention


This adds your key into the roots authorized_ssh key
"
1 - - x x x x x x
217 Security Passwd - wrsroot Password expiration. "Go to controller-0 node terminal and login as admin user. e.g. ""wrsroot""
""Type
$ sudo timedatectl set-ntp 0 to disable ntp automatic time synchronization.""
""Type
$ sudo timedatectl status
and check """"NTP enabled: no""""""
Take a snapshot of the time-date.
""Set password maximum number during which a password is valid to 1 day by typing:
$ sudo chage -M 1 wrsroot""
""Type:
$ sudo chage -l wrsroot
and make sure the Maximum number of days between password change is set to 1.""
""Wait 24 hours or change the date-time of the system 1 day ahead by typing:
$ sudo timedatectl set-time 'YYYY-MM-DD'
where DD is one day ahead of the real time-date.""
""Type
$ sudo timedatectl status
and check the time-date is 1 day ahead of the real time.""
""From your host Attempt to ssh to the controller-0 after the password ages out
$ ssh -q wrsroot@
218 Security Keystone - Nova user Passwords are protected. """Login to controller-0. At the prompt, type:
$ keyring get CGCS admin""
""On controller-0 node enter as admin user
$ source /etc/nova/openrc""
""Make sure that a file and a directory are created in the following path:
/opt/platform/.keyring/.CREDENTIAL
/opt/platform/.keyring/(keyring specific directory with encrypted passwords)""
""Type:
$ more /etc/nova/openrc

Something similar to the next plain text should be displayed:
controller-0:~$ more /etc/nova/openrc
unset OS_SERVICE_TOKEN

export OS_ENDPOINT_TYPE=internalURL
export CINDER_ENDPOINT_TYPE=internalURL

export OS_USERNAME=admin
export OS_PASSWORD=`TERM=linux /opt/platform/.keyring/18.03/.CREDENTIAL 2>/dev/null`
export OS_AUTH_URL=http://192.168.204.2:5000/v3

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_INTERFACE=internal

if [ ! -z """"${OS_PASSWORD}"""" ]; then
export PS1='[\u@\h \W(keystone_$OS_USERNAME)]\$ '
else
echo 'Openstack Admin credentials can only be loaded from
the active controller.'
export PS1='\h:\w\$ '
fi""
""On controller-1 node enter as admin user
$ source /etc/nova/openrc
you should found the same /etc/nova/openrc content.""
""After sourcing the openrc, type:
$ env""
"
"It should return, the Horizon admin password.
Logged in as an admin successfully.
Files and directories created successfully.
Admin does not have the password stored in clear text.
Logged in as an admin successfully and content should be the same as Controller-0.
""You should be able to find;
OS_PASSWORD=<your admin password>""
"
1 x x x x x x x x
219 Security Keystone - Change admin password. """Change admin password via cli:
$ openstack user password set""
SWACT to standby controller and make sure the controller come up fine.
""At the prompt, type:
$ keyring get CGCS admin""
Lock standby controller (system host-lock controller-1)
""Change admin password via cli:
$ openstack user password set""
""At the prompt, type:
$ keyring get CGCS admin""
Unlock standby controller (system host-unlock controller-1)
""Change admin password via cli:
$ openstack user password set""
Reboot standby controller
To recover from reboot loop lock/unlock standby controller.
""At the prompt, type:
$ keyring get CGCS admin""
""Change admin password via cli on Active controller:
$ openstack user password set""
Log in/out from active controller.
""Try to change admin password via cli on Active controller with empty string:
$ openstack user password set
Current password: <type current pass>
New Password: <Empty>
Repeat New Password: <Empty>""
"
"Admin password is changed successfully.
After SWACT the standby controller became active.
It should return the Horizon admin password.
Verify standby controller is locked
Admin password is changed successfully.
Verify the password is changed (keyring get CGCS admin)
Verify the standby controller comes up fine
Admin password is changed successfully.
Verify that the standby controller goes into reboot loop.
Verify after lock/unlock it recovers.
Verify the password is changed (keyring get CGCS admin)
Admin password is changed successfully.
You are able to logged in/out to active controller successfully.
""Verify empty string is not accepted for keystone admin password and controller get back with following message:
""""No password was supplied, authentication will fail when a suer does not have a pasword.
Specify both the current password and a ne password""""""
"
1 - - x x x x x x
220 Security Keystone - Adding users to keystone user list via horizon. "From horizon as admin user go to identity tab -> users.
Hit ""+ Create User"" button.
""Enter at least required fields:
User Name:
Password:
Confirm Password:
Primary Project.""
Enter ""Create User"" button.
Refresh Identity --> Users List and make sure the new user is displayed.

"
""" Identity /Users "" Frame is displayed successfully.
Create user pop up window is displayed.
Required fields were entered successfully.
Horizon got back with message saying the new user is created successfully.
The new user is displayed successfully in identity --> User list.
"
1 x x x x x x x x
221 Security Passwd - user is capable of changing password quality. "Login to controller-0 admin user.
""To change password quality configuration on the controller, edit /etc/pam.d/common-password.
The password quality validation is configured via the first non-comment line:
""""password required pam_pwquality.so try_first_pass retry=3 authtok_type= difok=3 minlen=7 lcredit=-1 ucredit=-1 ocredit=-1 dcredit=-1 enforce_for_root debug""""
Change the minimum password length by changing the 'minlen' parameter to 9.""
Change the minimum number of characters that must change between subsequent passwords by editing the ""difok"" parameter to 3.
Require at least one uppercase character in the password by adding 'ucredit=-1'
""Change the password on behalf a user. Sign on to """"root"""" or """"su"""" the """"root"""" account. Type:
$ sudo su""
""Make sure you are """"root"""" by typing:
$ whoami""
Change the password on behalf a user by typing ""passwd <user>""
Enter a password with 8 characters, 1 uppercase letter and 1 non-alphanumeric character.
Enter a password with 8 characters, no uppercase letter and 1 non-alphanumeric character.
Enter the same old password with characters appended until the length reaches 9 characters, 1 uppercase letter and 1 non-alphanumeric character.
"
"Logged in to controller-0 successfully.
minlen parameter = 9 changed successfully.
difok parameter = 3 changed successfully, so a password must have at least three characters that are not present in the old password.
ucredit parameter = -1 changed successfully.
Signed on to ""root"" or ""su"" successfully.
By typing whoami the system should get back with ""root"" successfully.
The system should get back with ""New Password:"" prompt request successfully.
The system should get back with ""BAD PASSWORD: The password is shorter than 9 characters"" message successfully.
The system should get back with ""BAD PASSWORD: The password contains less than 1 upper case letters"" message successfully.
""The system should get back with """"BAD PASSWORD""
e.g.
Madawa$ka1
MMMadawa$ka1
MMMMadawa$ka1
MMMMMadawa$ka1
MMMMMadawa$ka122222
MMMMMadawa$ka1222222""
"
1 x x x x x x x x
222 Security Passwd - wrsroot changed password and propagated. "Login to controller-0 using system admin user.
""Change the password on behalf wrsroot. Sign on to """"root"""" or """"su"""" the """"root"""" account. Type:
$ sudo su""
""Make sure you are """"root"""" by typing:
$ whoami""
Change the password on behalf wrsroot by typing ""passwd wrsroot""
Go through every single node in your cluster and make sure the new wrsroot password is propagated.
"
"Logged in to controller-0 successfully.
Signed on to ""root"" or ""su"" successfully.
By typing whoami the system should get back with ""root"" successfully.
The system should get back with ""New Password:"" prompt request successfully.
wrsroot new password is propagated.
"
1 x x x x x x x x
223 Security Horizon - relogin after timed out horizon session. "From horizon as admin user go to identity tab -> users.
Wait 'n' minutes until Horizon session expires.
Once the Horizon session expires make sure you can re-login using same user/password.
"
""" Identity /Users "" Frame is displayed successfully.
Session is expired successfully.
User is able to re-login using the same credentials.
"
1 x x x x x x x x
224 Heat Create a stack from an example template file "Copy basic_instance.yaml to your active controller
execute ""openstack stack create --template basic_instance.yml <stack name>""
"
"openstack stack create returns a field/value table, e.g.:
id: 2076c514-acd9-4439-8250-39bd02924e9a
stack_name: ELIO
description: Simple template to deploy a single compute instance
creation_time: 2018-09-26T09:25:55Z
updated_time: 2018-09-26T09:25:55Z
stack_status: CREATE_IN_PROGRESS
stack_status_reason: Stack CREATE started"
1 x x x x x x x x
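The referenced basic_instance.yaml is not included here; a minimal template consistent with the description above ("Simple template to deploy a single compute instance") could look like the following, with the image, flavor and network names as placeholders:
$ cat > basic_instance.yaml <<'EOF'
heat_template_version: 2015-04-30
description: Simple template to deploy a single compute instance
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: <tenant-network>
EOF
$ openstack stack create --template basic_instance.yaml <stack name>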
225 Heat Show existing stack "execute ""openstack stack show <stack name>""" "openstack stack show returns a field/value table for the stack, including:
id: 2076c514-acd9-4439-8250-39bd02924e9a
stack_name: ELIO
description: Simple template to deploy a single compute instance
creation_time / updated_time: 2018-09-26T09:25:55Z
stack_status: CREATE_COMPLETE
stack_status_reason: Stack CREATE completed successfully
parameters (OS::project_id, OS::stack_id, OS::stack_name), outputs and links
"
1 x x x x x x x x
226 Heat list existing stack "execute ""openstack stack list""" ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+| 2076c514-acd9-4439-8250-39bd02924e9a | ELIO       | 57841a45988840fdbb1b6c5579b10fc8 | CREATE_COMPLETE | 2018-09-26T09:25:55Z | 2018-09-26T09:25:55Z |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
"
1 x x x x x x x x
227 Heat update existing stack "execute ""openstack stack update --template basic_instance.yaml --parameter ""image=<other image than cirros>"" <stack name>""" "openstack stack update returns a field/value table, e.g.:
id: 267a459a-a8cd-4d3e-b5a1-8c08e945764f
stack_name: mystack
description: The heat template is used to demo the 'console_urls' attribute of OS::Nova::Server.
creation_time: 2016-06-08T09:54:15
updated_time: 2016-06-08T10:41:18
stack_status: UPDATE_IN_PROGRESS
stack_status_reason: Stack UPDATE started"
2 x x x x x x x x
228 Installation and Config During config controller enter invalid values and verify that they are rejected - DB Size below and above "Using
system controllerfs-list
system controllerfs-modify database=

to modify DATABASE size to below and above DB size
"
"- The modification will be failed and the error msg shows like
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify database=22
backup size of 60 is insufficient. Minimum backup size of 62 is required based upon cgcs size 20 and database size 22. Rejecting modification request.
"
2 x x x x x x x x
229 Installation and Config During config controller enter invalid values and verify that they are rejected - OAM addresses are not in the oam pool selected "Define OAM pool.

1. Set External OAM gateway address out of pool specified.
2. Set External OAM floating address out of pool specified.
3. Set External OAM address for first controller node out of pool specified.
4. Set External OAM address for second controller node out of pool specified.
"
"1-4. All set values are rejected.

1. External OAM gateway address [10.10.10.1]: 10.10.20.1
Address must be in the external OAM subnet
External OAM gateway address [10.10.10.1]: 192.168.204.4
Address must be in the external OAM subnet
External OAM gateway address [10.10.10.1]:



2. External OAM floating address [10.10.10.2]: 10.10.20.2
Address must be in the external OAM subnet
External OAM floating address [10.10.10.2]: 192.168.204.4
Address must be in the external OAM subnet
External OAM floating address [10.10.10.2]:



3. External OAM address for first controller node [10.10.10.3]: 10.10.20.3
Address must be in the external OAM subnet
External OAM address for first controller node [10.10.10.3]: 192.168.204.4
Address must be in the external OAM subnet
External OAM address for first controller node [10.10.10.3]:



4. External OAM address for second controller node [10.10.10.4]: 10.10.20.4
Address must be in the external OAM subnet
External OAM address for second controller node [10.10.10.4]: 192.168.204.4
Address must be in the external OAM subnet
External OAM address for second controller node [10.10.10.4]:
"
2 x x x x x x x x
230 Installation and Config During config controller enter invalid values and verify that they are rejected - Valid format for IP "1. Set 'Management subnet' to invalid value.

2. Set 'Controller node floating address' to invalid value.

3. Set 'Address for first controller node' to invalid value.

4. Set 'Address for second controller node' to invalid value.

5. Set 'CGCS Resource Group NFS Address' to invalid value.

6. Set 'Platform Resource Group NFS Address' to invalid value
"
"Controller node floating address [192.168.204.2]: aaa
Invalid address - please enter a valid IPv4 address
Controller node floating address [192.168.204.2]:

Address for first controller node [192.168.204.3]: abcd.s
Invalid address - please enter a valid IPv4 address
Address for first controller node [192.168.204.3]:

Address for second controller node [192.168.204.4]: w.t.y.d
Invalid address - please enter a valid IPv4 address
Address for second controller node [192.168.204.4]: 123
Address must be in the management subnet

CGCS Resource Group NFS Address [192.168.204.5]: abc
Invalid address - please enter a valid IPv4 address
CGCS Resource Group NFS Address [192.168.204.5]:

Platform Resource Group NFS Address [192.168.204.6]: adcv
Invalid address - please enter a valid IPv4 address
Platform Resource Group NFS Address [192.168.204.6]:
"
2 x x x x x x x x
231 Installation and Config During config controller enter invalid values and verify that they are rejected -Image Volume Storage "Define minimal image and volume storage and available free disk space.

Step 1. Set value less than minimal image and volume storage in requested 'Image and volume storage' field.

Step 2. Set value greater than available free disk space in requested ' Image and volume storage ' field.
"
"Management subnet [192.168.204.0/24]:

Controller node floating address [192.168.204.2]: aaa
Invalid address - please enter a valid IPv4 address
Controller node floating address [192.168.204.2]:

Address for first controller node [192.168.204.3]: abcd.s
Invalid address - please enter a valid IPv4 address
Address for first controller node [192.168.204.3]:

Address for second controller node [192.168.204.4]: w.t.y.d
Invalid address - please enter a valid IPv4 address
Address for second controller node [192.168.204.4]: 123
Address must be in the management subnet

CGCS Resource Group NFS Address [192.168.204.5]: abc
Invalid address - please enter a valid IPv4 address
CGCS Resource Group NFS Address [192.168.204.5]:

Platform Resource Group NFS Address [192.168.204.6]: adcv
Invalid address - please enter a valid IPv4 address
Platform Resource Group NFS Address [192.168.204.6]:
- This system has 40 GiB free disk space available for the following:
- database storage (5 GiB minimum)
- image and volume storage (8 GiB minimum)

Step 1. Value less than minimal image and volume storage rejected:

Image and volume storage in GiB [12]: 7

Minimum size restriction not met

Image and volume storage in GiB [12]: 6

Minimum size restriction not met

Image and volume storage in GiB [12]: 0

Minimum size restriction not met

Image and volume storage in GiB [12]: -1

Minimum size restriction not met

Step 2. Value greater than available free disk space rejected (Note: after value is rejected, need to be entered 'data base storage' value again):

Image and volume storage in GiB [12]: 18

Disk size exceeded - please retry

Database storage in GiB [5]:

Image and volume storage in GiB [12]: 19
Disk size exceeded - please retry

Database storage in GiB [5]:

Image and volume storage in GiB [12]: 20

Disk size exceeded - please retry

Database storage in GiB [5]:

Image and volume storage in GiB [12]: 25

Disk size exceeded - please retry

Database storage in GiB [5]:
"
2 x x x x x x x x
232 Installation and Config Import profile and apply to multiple nodes "Review: this test may overlap with the procedure already used to create VMs" "- .xml is created
- profile has been imported
- applied to all nodes successfully"
2 - - x x x x x x
233 Installation and Config Swact controllers during configure compute -2 - - - - - - - -
234 Installation and Config "SysInv: ifprofile create, show attr, delete" "system host-list
system ifprofile-add data computedata compute-0
system ifprofile-list

NOTE: For Titanium Cloud Simplex or Duplex systems, hardware profiles are not
supported.

"
"- The profile lists processors, phy cores per proc, hyperthreading information as well as the following output for each processor:
platform cores, vswitch cores, vm cores, shared cores
The ifprofile is created according to the configured compute node selected.

For example

$ system cpuprofile-add test5 1
+--------------------+-----------------------------------------------------------+| Property | Value |
+--------------------+-----------------------------------------------------------+| uuid | 6a8ae40b-1edf-4166-a529-31ed872e08f4
test5 2 10 Yes Processor 0: 0,2,20,22 Processor 1: 1,21 Processor 0: 4,24, Processor 1: 3,23 Processor 0: 6,8,10,12,14,16,18,26,28,30,32,34,36,38, Processor 1: 5,7,9,11,13,15,17,19,25,27,29,31,33,35,37,39 2015-12-21T20:13:09.747216+00:00 None

Verify show output for the newly created cpu profile contains the same output for the Property and Value.

$ system cpuprofile-list
$ system cpuprofile-show <id>

Verify that the system host-cpu-list output contains a list of the following and that it corresponds to the cpuprofile that was created:
log_core, processor, phy_core, thread, processor_model, assigned_function

$ system host-cpu-list <node>
e.g. log_cores for assigned_function ""Platform""
0,2,20,22
e.g. log_cores for assigned_function ""Shared""
3,4,23,24
"
-1 - - - - x x x x
235 Installation and Config Ceph back-end using static IP addressing - using non-interactive config_controller "Modify TiS_config.ini_centos

DYNAMIC_ALLOCATION = N
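
For example, the value could be switched in place with a generic edit (sed shown only as an illustration, assuming the key already exists in the file; any text editor works equally well):
$ sed -i 's/^DYNAMIC_ALLOCATION.*/DYNAMIC_ALLOCATION = N/' TiS_config.ini_centos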
sudo config_controller --config-file TiS_config.ini_centos
"
"- TiS_config.ini_centos is modified
- config successfully
"
-2 x x x x x x x x
236 Installation and Config "Validate ability to edit Host Personality, Hostname in Horizon prior to node provisioning" "On a standard or storage system, delete nodes (e.g. compute nodes)
After the nodes reboot, check the Horizon host inventory
Provision the personality on the nodes
"
"- Nodes were deleted and rebooted
- The nodes should show up in the Horizon inventory
- The nodes can be edited to assign personality and hostname.
"
2 - - - - x x x x
237 Installation and Config Verify that logical volumes can be resized up on the controller (CLI/GUI) "Using system controllerfs-modify to change the LV size:
$ system controllerfs-modify <fs name>=<size>
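For example (filesystem name and size are illustrative only; pick a filesystem reported by 'system controllerfs-list', and note the new size must be larger than the current one):
$ system controllerfs-modify database=10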

Using Horizon to change the LV size:
The System Configuration pane is available from Admin > Platform > System
Configuration in the left-hand pane.
Select the Controller Filesystem tab
"
"- lv size is changed



"
1 x x x x x x x x
238 Installation and Config Attempt to decrease partition size "- Attempt to decrease the size of a partition (i.e. before it has an association with the PV)

Modify the size to the same size, e.g.:
$ system host-disk-partition-modify -s 147679 controller-0 52bc8839-27d5-4b89-ab6c-c8f7fb07ce5c
Requested partition size must be larger than current size: 147679 < 147679

Modify the size to a smaller size, e.g.:
$ system host-disk-partition-modify -s 147600 controller-0 52bc8839-27d5-4b89-ab6c-c8f7fb07ce5c
Requested partition size must be larger than current size: 147600 < 147679

Attempt to decrease the size of the partition via Horizon.


"
"- Ensure it is rejected with an informative error message
Requested partition size must be larger than current size: newsize < currentsize
- Confirm size reduction is not allowed and appropriate feedback is returned.
(Note: This should not result in focus loss in Horizon, as it currently does)
"
2 x x x x x x x x
239 Installation and Config Attempt to increase partition size "$ system host-disk-partition-modify -s 1111111111111111111111111 controller-0 /dev/disk/by-path/pci-0000:00:1f.2-ata-2.0-part4
"
"- Invalid input for field/attribute size_mib. Value: '1111111111111111111111111'. Wrong type. Expected '<type 'int'>', got '<type 'unicode'>'
"
2 x x x x x x x x
240 Installation and Config Attempt unlock host where nova-local lvg exists but does not have physical volume "$ system host-lvg-list <host>
7e2c49bd-543b-4d26-bfc7-bdd9284f079a | nova-local | removing (on unlock) | wz--n- | 532676608 | 1 | 1

Add nova-local volume group and confirm state changes to provisioned
$ system host-lvg-add controller-1 nova-local

Attempt unlocking the host without adding the PV for nova-local volume group
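For example, using the same unlock command used elsewhere in this plan:
$ system host-unlock controller-1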
"
- Error: Unable to unlock host: controller-1. Reason/Action: A host with compute functionality requires a nova-local volume group prior to being enabled. The nova-local volume group does not contain any physical volumes in the adding or provisioned state. 2 - - x x x x x x
241 Installation and Config Controller node basic provisioning check Install a regular or storage system and ensure that cgts-vg uses the entire root disk of the controller node. "- Do a 'sudo vgdisplay' and confirm only cgts-vg is listed:
[wrsroot@controller-0 ~(keystone_admin)]$ sudo vgdisplay
Password:
--- Volume group ---
VG Name cgts-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 10
Open LV 10
Max PV 0
Cur PV 1
Act PV 1
VG Size 203.53 GiB
PE Size 32.00 MiB
Total PE 6513
Alloc PE / Size 5146 / 160.81 GiB
Free PE / Size 1367 / 42.72 GiB
VG UUID sHzbCh-MJwJ-fesN-Odzv-7r05-IxHn-RSjrUn


- system host-disk-list controller-0 to get disk uuid or device path
system host-disk-partition-show controller-0 a128a34b-54c4-4410-a5c8-88445dd2e515
system host-disk-partition-show controller-0 /dev/disk/by-path/pci-0000:09:00.0-sas-0x5001e67657b7e002-lun-0


Should return:
Partition not found by device path or uuid: a128a34b-54c4-4410-a5c8-88445dd2e515
Partition not found by device path or uuid: /dev/disk/by-path/pci-0000:09:00.0-sas-0x5001e67657b7e002-lun-0

"
1 - - - - x x x x
242 Installation and Config Validate modified system host-disk-list CLI output "Sample output columns:
uuid | device_node | device_num | device_type | size_mib | available_mib | rpm | serial_id | device_path

Sample row:
| 787e8247-9daf-4e83-9619-6a44e2728794 | /dev/sda | 2048 | HDD | 857856 | 1 | Undetermined | 00d6919a2d9109481c0036abe060f681 | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0 |
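
The listing above would typically come from the disk list command used elsewhere in this plan (hostname assumed):
$ system host-disk-list controller-0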
"
"- Confirm available_mib column is present

"
2 x x x x x x x x
243 Installation and Config Validate Partition Modify operations "From the CLI, attempt to modify a partition size without specifying the -s parameter

$ system host-disk-partition-modify controller-0 <partitionid>
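
For contrast, a valid modify would include the -s size parameter, following the syntax used in the partition tests above (size value illustrative only):
$ system host-disk-partition-modify -s 153600 controller-0 <partitionid>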
"
"- The command that is missing the -s parameter should provide the user with appropriate feedback that it is required.

No update parameters specified, partition is unchanged.
"
2 x x x x x x x x
244 Installation and Config "Verify system install success in progress bar: preinstall, installing, post install, installed" "Description: validate system install success with the display of preinstall, installing, post install, installed in the progress bar

1. Start the installation according to the installation procedure.
2. Once the controller-0 install has started, verify the Horizon host inventory.
3. Verify the system host-show command on the CLI for install_state and install_state_info (see the example below).
4. Repeat steps 2 and 3 for controller-1, compute, and storage nodes.
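
For example (hostname assumed; fields as referenced in step 3):
$ system host-show controller-0 | grep install_state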
"
"expected results:
verify horizon bar and cli for the state preinstall, installing, post install, installed
verify horizon for install progress example: 2/1020
ensure installation success as per state.
"
1 x x x x x x x x
245 Duplex Add up to 4 compute nodes to Duplex deployment "FIND OUT THE PROCEDURE FIRST -
(taken from commit)
The AIO-DX system type was enhanced to allow the customer to
provision additional compute hosts.

However, the maintenance system made a global assumption about the
'system type' and would run the enable subfunction FSMs against
these compute only hosts. In doing so the unlock of these added
hosts would timeout and heartbeat would not enable.

This update ensures that the subfunction FSM is only run against
hosts that have both controller and compute functions."
-2 - - x x - - - - https://storyboard.openstack.org/#!/story/2002821
246 Simplex Unsupported sysInv Commands "Try to swact on Simplex; this test is just for Simplex. Go to Horizon > Admin > Host Inventory, or use the CLI (""swact controller-0""; see the example below).
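
A minimal sketch of the CLI attempt, assuming the standard sysinv swact command is what is meant here:
$ source /etc/nova/openrc
$ system host-swact controller-0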
"
Instruction should be rejected 1 x x - - - - - -
247 Simplex Verify memory assignment on Simplex grep ""host""

To lock such a host:
$ system host-lock <host>

To unlock such a host:
$ system host-unlock <host>
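
To inspect the memory assignment itself, a hedged example (assuming the sysinv memory listing is the intended view):
$ system host-memory-list <host>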
"
"- System Memory should be able to perform lock using system memory
- A host can be locked if a vm is up and running on it.
- The vm should be migrated of host if the host is locked.
- When unlock, a reboot is expected.
"
2 x x x x x x x x
248 Simplex Verify migration is rejected on Simplex grep ""host""

To migrate vm:
$ openstack server migrate --live <target_host> <server_name>
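
A cold migration attempt can be checked the same way (sketch only; server name placeholder as above):
$ openstack server migrate <server_name>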

"
"- Flavor created successfully
- Image created successfully
- Network created successfully
- Subnet created successfully
- Verify migration is rejected."
1 x x - - - - - -
249 Simplex Verify resize and rebuild on Simplex "- You need a VM up and running in order to resize it with a new flavor. Use the VM created during the previous test case.

$ source /etc/nova/openrc

To create a new flavor:
$ openstack flavor create --ram 2048 --disk 10 --vcpus 1 m1.small

To resize a vm running:
$ openstack server resize --flavor m1.small cirros-vm

To confirm resize (when the Status field is VERIFY_RESIZE):
$ openstack server resize --confirm cirros-vm

To check new flavor used for vm:
$ openstack server list (Flavor field)
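
The rebuild half of this test is not spelled out above; a minimal sketch, assuming the VM is rebuilt from the same image it was booted with (image name is a placeholder):
To rebuild the vm:
$ openstack server rebuild --image <image_name> cirros-vm
Wait for vm status: ACTIVE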
"
"- When resizing, the vm status should be: RESIZE
- Should be able to confirm the resize of the VM
- Wait for vm status: ACTIVE
- Verify the resize operation is successful in the Flavor field of the VM table.
- VM should be up and running."
1 x x x x x x x x
250 Simplex Reboot system 10 times "- This action can be performed on the active controller by forcing a reboot from the VM; on bare metal it would mean pressing the reset button.
- Cannot reboot an unlocked host. Need to lock the host first.
- Cannot 'lock' nor 'reboot' an active controller in Multinode via CLI nor via Horizon, please check if it is possible on Simplex and Duplex.

$ source /etc/nova/openrc

To show hosts to reboot
$ system host-list

To lock a host:
$ system host-lock <hostname>

To reboot a host:
$ system host-reboot <hostname>

To unlock host:
$ system host-unlock <hostname>
"
"- System should be able to recover from reboot, no need for instances
- When unlock, a host reboot is expected.
- All hosts must be 'unlocked', 'enabled' and 'available'.
- If the active controller is rebooted, the second controller turns ""Active"" and the rebooted turns ""Standby"" when boot."
1 x x - - - - - -