StarlingX/Testing Guide
This document contains the steps for validating that a StarlingX system has been installed correctly.
Requirements
The recommended minimum requirements include:
System Requirements
- A StarlingX System
Launch an Instance
Download CirrOS Image
Download a CirrOS image in QCOW2 format from the CirrOS download page:
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Transfer the CirrOS QCOW2 image to the StarlingX System:
$ scp cirros-0.4.0-x86_64-disk.img wrsroot@10.10.10.3:~/
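Before booting from the image it is worth confirming that the download and transfer were not corrupted. A minimal sketch, assuming md5sum is available; the expected hash would come from the MD5SUMS file published alongside the image on the CirrOS download page (the helper name is ours, not a StarlingX tool):

```shell
# Compare a file's md5 against an expected value. The expected hash should
# be taken from the MD5SUMS file published next to the image download.
verify_md5() {
  local file=$1 expected=$2
  local actual
  actual=$(md5sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}
```

Usage: `verify_md5 cirros-0.4.0-x86_64-disk.img "<hash from MD5SUMS>" && echo OK` — run it once locally after the wget and once on the controller after the scp.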
Acquire administrative privileges
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Create OpenStack Images
~(keystone_admin)]$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros
Create OpenStack Flavors
~(keystone_admin)]$ openstack flavor create --id 1 --ram 64 --disk 1 --vcpus 1 --public flavor.nano
~(keystone_admin)]$ openstack flavor create --id 2 --ram 128 --disk 2 --vcpus 1 --public flavor.micro
Create OpenStack Network
~(keystone_admin)]$ openstack network create network.one
Create OpenStack Subnet
~(keystone_admin)]$ openstack subnet create --network network.one --ip-version 4 --subnet-range 192.168.1.0/24 --dhcp subnet.one
Create OpenStack Servers
~(keystone_admin)]$ openstack server create --flavor flavor.nano --image cirros --nic net-id=network.one server.nano
~(keystone_admin)]$ openstack server create --flavor flavor.micro --image cirros --nic net-id=network.one server.micro
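The two server create calls return immediately while the guests build in the background. A small polling helper can gate later checks on the servers reaching ACTIVE. This is a sketch: get_status is our wrapper around the real openstack CLI call and is the only assumption about the environment:

```shell
# Return the current status of a server (wraps the OpenStack CLI).
get_status() {
  openstack server show -f value -c status "$1"
}

# Poll until the named server reports ACTIVE, or give up after $2 tries.
wait_active() {
  local name=$1 tries=${2:-30}
  local i
  for i in $(seq "$tries"); do
    [ "$(get_status "$name")" = "ACTIVE" ] && return 0
    sleep 2
  done
  echo "$name did not reach ACTIVE" >&2
  return 1
}
```

Usage: `wait_active server.nano && wait_active server.micro` before moving on to the OVS/DPDK checks.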
Check OVS/DPDK
Check Neutron Agent List
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
~(keystone_admin)]$ neutron agent-list
Get Compute Node IP Address
~(keystone_admin)]$ system host-show compute-0 | grep mgmt_ip
| mgmt_ip | 192.168.204.119 |
~(keystone_admin)]$ system host-show compute-1 | grep mgmt_ip
| mgmt_ip | 192.168.204.113 |
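The management IPs can also be pulled out of the system host-show table programmatically, which is handy for scripting the per-compute checks below. A sketch that parses the table row format shown above (the helper name is ours):

```shell
# Extract the value column from a `system host-show <host> | grep mgmt_ip`
# row, i.e. a line shaped like: | mgmt_ip | 192.168.204.119 |
parse_mgmt_ip() {
  awk -F'|' '/mgmt_ip/ { gsub(/ /, "", $3); print $3 }'
}
```

Usage: `ip=$(system host-show compute-0 | parse_mgmt_ip)`.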
Check Compute-0 Interface Type
Log in to compute-0 via ssh:
~(keystone_admin)]$ ssh 192.168.204.119
Verify that the dpdk type is set for the eth0 port using the Open vSwitch utility:
compute-0:~$ sudo ovs-vsctl show
...
Port "eth0"
Interface "eth0"
type: dpdk
options: {dpdk-devargs="0000:00:09.0", n_rxq="1"}
...
ovs_version: "2.9.0"
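For a scripted sanity pass, the same check can be done non-interactively by grepping the ovs-vsctl show output. A sketch, assuming the Port/Interface/type layout shown above (the helper name is ours):

```shell
# Succeed if the named port's interface block reports "type: dpdk".
# On a compute node $1 would be the output of: sudo ovs-vsctl show
is_dpdk_port() {
  local output=$1 port=$2
  echo "$output" | grep -A2 "Port \"$port\"" | grep -q 'type: dpdk'
}
```

Usage: `is_dpdk_port "$(sudo ovs-vsctl show)" eth0 && echo "eth0 is dpdk"`.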
Check Compute-1 Interface Type
Log in to compute-1 via ssh:
~(keystone_admin)]$ ssh 192.168.204.113
Verify that the dpdk type is set for the eth0 port using the Open vSwitch utility:
compute-1:~$ sudo ovs-vsctl show
...
Port "eth0"
Interface "eth0"
type: dpdk
options: {dpdk-devargs="0000:00:09.0", n_rxq="1"}
...
ovs_version: "2.9.0"
Sanity
Simplex
Multinode
NAME | SUMMARY | PRECONDITIONS | STEPS | EXPECTED RESULTS
---|---|---|---|---
Test_Install | Test the installation of AIO simplex for R5 | 1. Install Ubuntu 16.04. 2. Set proxies. 3. Install VirtualBox 5.1.30 (https://www.virtualbox.org/wiki/Download_Old_Builds_5_1). 4. Download the ISO and license (bootimage-current.iso, wrslicenseR5.txt) via ssh madawaska@madbuild01.ostc.intel.com (password "Madawa$ka1"). 5. Clone the tic_vb repo: https://github.intel.com/Madawaska/tic_vb.git. 6. Copy "boot-current.iso" into the tic_vb repo and rename it "bootimage.iso". | Follow the steps defined at https://github.intel.com/Madawaska/tic_vb/blob/master/README.md#steps-for-simplex-r5, using http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img on private-net0. | Installation should complete successfully.
Test_create_instance | Test that instances can be created successfully | Test_simplex_R5_Install = PASS | Upload an image following the steps at https://docs.openstack.org/horizon/latest/user/manage-images.html#upload-an-image, then launch an instance following the steps at https://docs.openstack.org/horizon/latest/user/launch-instances.html#launch-an-instance | Image is uploaded correctly and the instance launches correctly.
Test_ping_ssh_beetwen_2_instances | Test ping and ssh connectivity between two instances on the same controller | Test_create_instance = PASS | |
Test_nova_actions_dedicated_auto_recover | Launch a VM using a cirros/ubuntu image with dedicated CPU policy, set the VM to error state via the nova CLI, wait for the VM to auto-recover, and ensure the VM is in active state and still pingable | Instances running | 1. On the controller: source /etc/nova/openrc. 2. Get the instance ID: $ openstack server list. 3. Set the instance to error state: $ openstack server set --state error <Instance-ID> | In Horizon at Project > Instances, or on the CLI with "$ openstack server list", verify the status.
Test_nova_actions_shared_suspend_resume | Launch a VM using a cirros/centos-guest image with shared CPU policy, suspend and resume it, and ensure the VM is in active state and still pingable | Instances running | |
test_horaizon_auto_recovery_volume | Edit an image for volume in Horizon, add Instance Auto Recovery, and verify the metadata is updated | simplex R5 installed | In the Titanium Cloud GUI select the following options: 1. Images. 2. Edit Metadata. 3. Select Auto Recovery. | Image for volume metadata updated with the Instance Auto Recovery option added.
test_horaizon_auto_recovery_snapshot | Edit an image for snapshot in Horizon, add Instance Auto Recovery, and verify the metadata is updated | 1. simplex R5 installed. 2. Snapshot created. | In the Titanium Cloud GUI select the following options: 1. Snapshots. 2. Edit Metadata. 3. Select Auto Recovery. | Image for snapshot metadata updated with the Instance Auto Recovery option added.
test_horaizon_sw_wrs_auto_recovery_metadata_update_volume | Update the metadata of a volume in Horizon, add the property sw_wrs_auto_recovery, and verify that it can be set | 1. simplex R5 installed. 2. Volume created. | In the Titanium Cloud GUI select the following options: 1. Go to Volumes > select a volume > Edit Volume > Update Metadata. 2. In the Update Volume Metadata window search for sw_wrs_auto_recovery. 3. Select sw_wrs_auto_recovery. | Property sw_wrs_auto_recovery added successfully to the volume.
test_horaizon_sw_wrs_auto_recovery_metadata_update_snapshot | Update the metadata of a snapshot in Horizon, add the property sw_wrs_auto_recovery, and verify that it can be set | 1. simplex R5 installed. 2. Snapshot created. | In the Titanium Cloud GUI select the following options: 1. Edit Snapshot. 2. Edit Metadata. 3. Select sw_wrs_auto_recovery. | Property sw_wrs_auto_recovery added successfully to the snapshot.
Test_vm_meta_data_retrieval | Launch a VM from an image, ssh to the VM and wget the instance_id metadata; ensure the instance_id metadata is the same as instance_name in the "openstack server show <vm-name>" output | 1. simplex R5 previously configured. 2. Controller key pair added into Horizon (Project > Compute > Key Pairs); create an ssh key for the controller with: ssh-keygen -f controller -t rsa -P "" | In the Titanium Cloud GUI select the following options: 1. Project. 2. Instances. 3. Launch Instance. | A dialog appears for creating the instance.
Test_horizon_login_time_regular | Login time to Horizon set up as regular multinode should be less than 5 seconds | Akraino product should be installed and set up as regular multinode | Open Horizon in a web browser, using the default IP set during installation. |
test_horizon_login_time_storage | Login time to Horizon set up as storage should be less than 5 seconds | Akraino product should be installed and set up as storage | |
test_horizon_login_time_all_in_one | Login time to Horizon set up as simplex should be less than 5 seconds | Akraino product should be installed and set up as simplex | | Login web page should be displayed.
test_vif_model_from_image[avp] | Check that the hw_vif_model from image metadata is applied to the vif when the vif_model is not specified | | 1. Create a glance image with hw_vif_model image metadata set to avp: $ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros_avp, then $ openstack image set --property hw_vif_model=avp cirros_avp. 2. Create a volume off the created image: $ nova boot --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --flavor 1 --block-device source=image,id=dc7e4edb-24f8-4cec-8288-71da1e074c5b,dest=volume,size=5,shutdown=preserve,bootindex=0 avpInstanceFromVolume. 3. Create a VM with 3 vNICs (over mgmt-net with virtio vif_model, over tenant-net with no vif_model specified, over internal-net with avp vif_model): $ nova boot --flavor 1 --image cirros-avp --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --nic net-id=db172669-4c08-4460-a3da-2cc5775744fc,vif-model=avp --nic net-id=2ad6ee79-bb52-49a5-875d-12f4fa053150,vif-model=virtio test-av-vm | 1. The property was set correctly: $ openstack image show cirros_avp reports hw_vif_model='avp', store='file'. 2. The volume is created correctly ($ cinder list). 3. Via $ nova show test-av-vm, check in the "wrs-if:nics" field that vif_model=avp is applied to the vif that did not specify a vif_model (nic2), e.g. {"nic1": {"vif_model": "avp", "network": "net", "port_id": "fdfcfed8-66c9-436b-91f9-163f645b66f2", "mtu": 1500, "mac_address": "fa:16:3e:1b:c7:6c", "vif_pci_address": ""}}.
test_ceilometer_<meter>_port_samples | Query ceilometer samples for meters, as well as resource ID, ensuring samples exist | Akraino product should be installed and set up as multinode with 2 controllers, 1 compute and a vswitch | On the controller (~(keystone_admin)) run ceilometer sample-list, where -m <name> is the name of the ceilometer meter, -l <number> is the maximum number of samples to return, and -q <query> is a list of metadata filters of the form 'metadata.type=filter_value; metadata.type=filter_value; ...', e.g. ceilometer sample-list -m platform.cpu.util -l 10 -q 'metadata.host=controller-0' | Samples exist for each meter. Platform meters (Type gauge or cumulative; Unit MHz, percent or ns) report Resource ID controller-0_controller-0 with User ID and Project ID None; vswitch meters (Type delta, Unit %) report resource, user and project UUIDs (e.g. Resource ID 14f6bfca-7286-4470-a805-3dfab4ce1b89, User ID 0dcd12579b3c42f195d96ebecd9697bf, Project ID 17a35bdd8d024aaea88af6b112dcf697). The resource metadata includes {"host": "controller-0"}.
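The pass criterion for these ceilometer checks — samples exist for the meter — can be scripted by counting data rows in the pretty-printed output. A sketch, assuming the standard table border style with a single header row (the helper name is ours):

```shell
# Count data rows on stdin (lines starting with "|"), excluding the header.
# stdin would be the output of e.g.: ceilometer sample-list -m <meter> -l 10
count_samples() {
  grep -c '^|' | awk '{ print ($1 > 0 ? $1 - 1 : 0) }'
}
```

Usage: `n=$(ceilometer sample-list -m platform.cpu.util -l 10 | count_samples)` and assert n is greater than zero.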
test_nova_actions[ubuntu_14-shared-stop-start] | Cirros | simplex R5 previously configured | Show the details of the p1.medium flavor (properties include aggregate_instance_extra_specs:storage='local_image' and hw:cpu_policy='shared'; RAM 1024, disk 10, VCPUs 2) and list the flavors (s.p2: RAM 2048, disk 25, VCPUs 2; s.p1: RAM 512, disk 1, VCPUs 1; p1.medium: RAM 1024, disk 10, VCPUs 2; all public) | The cpu_policy shared must be present in the output.
test_statistics_for_one_meter[image.size] | Check ceilometer statistics for meter 'image.size' | | On the controller (~(keystone_admin)) query the ceilometer statistics for the image.size meter | A statistics table is returned with Period Start/End, Max, Min, Avg, Sum, Count, Duration and Duration Start/End columns, e.g. Max/Min/Avg 12716032.0, Sum 5493325824.0, Count 432, Duration 87608.0 over 2018-04-25T19:10:08 to 2018-04-26T19:30:16.
Test_heat_template [WR_Neutron_ProviderNetRange.yaml] | Create a new provider net and range via heat, and ensure the new providernet and range are listed in neutron providernet-list | | Click Create Provider Network. In the Create Provider Network window, complete the fields as required: Name (the name of the provider network); Description (a free-text field for reference); Type (flat: mapped directly to the physical network; vlan: supports multiple tenant networks using VLAN IDs; vxlan: supports multiple tenant networks using VXLAN VNIs); MTU (the maximum transmission unit for the Ethernet segment used to access the network; note that to attach to the provider network, data interfaces must be configured with an equal or larger MTU). Click the Create Provider Network button. |
test_system_alarms_and_events_on_lock_unlock_compute | Lock a compute host and ensure the relevant alarms and system events are generated; unlock the host and ensure the alarms and events are cleared | Akraino product should be installed and set up as multinode with 2 controllers and at least 1 compute | |