Difference between revisions of "StarlingX/Testing Guide"
2. Set proxies
3. Install virtualbox 5.1.30 --> https://www.virtualbox.org/wiki/Download_Old_Builds_5_1
4. Download ISO and License
- bootimage-current.iso
- wrslicenseR5txt
5. Clone repo tic_vb
https://git.openstack.org/cgit/openstack/stx-tools/tree/?id=a60042bbd11c387e95157fad255986712227dab6
6. Copy ""bootimage-current.iso"" to the tic_vb repo and rename it as ""bootimage.iso"""
|| "Follow the steps defined on https://github.intel.com/Madawaska/tic_vb/blob/master/README.md#steps-for-simplex-r5
Launch a VM using a cirros/ubuntu image with dedicated cpu policy, set the VM to error state via nova cli, wait for the VM to auto-recover, ensure the VM is in active state and still pingable
|| Instances running
||
* Launch Instance with image with dedicated cpu policy
* Ping to instance
* Set to error state as follows:
- in the controller run: source /etc/nova/openrc
- Get the Instance ID
$ openstack server list
- Set the Instance to error state
$ openstack server set --state error <Instance-ID>
* Wait about 30 s - 1 min
* Ping to instance.
||
* Instance is running correctly
* Packets are transmitted correctly
* Verify that Instance is in error state:
|-
| test_horizon_auto_recovery_volume
|| Edit Image for volume in Horizon and add Instance Auto Recovery; verify the metadata is updated
|| simplex R5 installed || "Go to Titanium Cloud GUI and select the following options:
1. Images
2. Edit Metadata
3. Select Auto Recovery"
|| Image for Volume with Metadata updated with the Instance Auto Recovery option added
|-

| test_horizon_auto_recovery_snapshot || Edit Image for snapshot in Horizon and add Instance Auto Recovery; verify the metadata is updated
||
1. simplex R5 installed
2. Snapshot created
|| "Go to Titanium Cloud GUI and select the following options:
1. Snapshots
2. Edit Metadata
3. Select Auto Recovery"
|| Image for Snapshot with Metadata updated with the Instance Auto Recovery option added
|-

| test_horizon_sw_wrs_auto_recovery_metadata_update_volume || Update Metadata of Volume in Horizon, add property sw_wrs_auto_recovery and verify that it can be set
||
1. simplex R5 installed
2. Volume created || "Go to Titanium Cloud GUI and select the following options:
1. Go to Volumes > Select a Volume > Edit Volume > Update Metadata
2. In the Update Volume Metadata window search for: sw_wrs_auto_recovery
3. Select sw_wrs_auto_recovery"
|| property sw_wrs_auto_recovery added successfully to Volume
|-

| test_horizon_sw_wrs_auto_recovery_metadata_update_snapshot || Update Metadata of Snapshot in Horizon, add property sw_wrs_auto_recovery and verify that it can be set
||
1. simplex R5 installed
2. Snapshot created
|| "Go to Titanium Cloud GUI and select the following options:
1. Edit Snapshot
2. Edit Metadata
3. Select sw_wrs_auto_recovery"
|| property sw_wrs_auto_recovery added successfully to Snapshot
|-

| Test_vm_meta_data_retrieval || "Launch a VM from image, ssh to the VM and wget the instance_id metadata; ensure the instance_id
metadata is the same as instance_name in the ""openstack server show <vm-name>"" command"
|| 1. simplex R5 previously configured
2. controller Key Pair added into Horizon: Project > Compute > Key Pairs
Create an ssh key for the controller with the following command:
ssh-keygen -f controller -t rsa -P ""
|| "Go to Titanium Cloud GUI and select the following options:
1- Project
2- Instances
3- Launch instance"
|| A dialog appears in order to create the instance
|-

| test_horizon_login_time_all_in_one || Login time to Horizon set as All-in-One should be less than 5 seconds || Akraino product should be installed and set as Simplex
||
* Open Horizon in a web browser, using the default IP set while installing
* Fill out user and password fields
* Click the "Sign In" button and start a time counter
||
* Login web page should be displayed
* You can write in both fields
* Horizon web page should be displayed in less than 5 seconds
|-

| test_vif_model_from_image[avp] || Check that the hw_vif_model from image metadata is applied to the vif when the vif_model is not specified ||
* Flavor created
* Cirros image uploaded
|| "Create a glance image with hw_vif_model image metadata set to avp
$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros_avp
$ openstack image set --property hw_vif_model=avp cirros_avp"
"Create a volume-backed instance from the created image

$ nova boot --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --flavor 1 --block-device source=image,id=dc7e4edb-24f8-4cec-8288-71da1e074c5b,dest=volume,size=5,shutdown=preserve,bootindex=0 avpInstanceFromVolume"
"Create a VM with 3 vnics: over mgmt-net with virtio vif_model, over tenant-net with no vif_model specified, over internal-net with avp vif_model.

$ nova boot --flavor 1 --image cirros_avp \
--nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 \
--nic net-id=db172669-4c08-4460-a3da-2cc5775744fc,vif-model=avp \
--nic net-id=2ad6ee79-bb52-49a5-875d-12f4fa053150,vif-model=virtio \
test-av-vm"
|| "Check that the property was set correctly
$ openstack image show cirros_avp

| properties | hw_vif_model='avp', store='file' |
"
"Volume created correctly

$ cinder list"
"Check via nova show that the hw_vif_model from image metadata is applied to the vif that did not specify the vif_model.
With the command below, verify in field ""wrs-if:nics"" that ""vif_model=avp"" for nic2
$ nova show test-av-vm

| wrs-if:nics | {""nic1"": {""vif_model"": ""avp"", ""network"": ""net"", ""port_id"": ""fdfcfed8-66c9-436b-91f9-163f645b66f2"", ""mtu"": 1500, ""mac_address"": ""fa:16:3e:1b:c7:6c"", ""vif_pci_address"": """"}} |
|-

| test_ceilometer_<meter>_port_samples || Query ceilometer samples for meters, as well as resource id, ensuring samples exist || "Akraino product should be installed and set as multi node with 2 controllers, 1 compute and a vswitch."
||
* Go to controller node terminal and log in as the admin user: $ . /etc/nova/openrc
* List the meter types in your controller environment by typing $ ceilometer metertype-list
* Now list the available meters and their Resource ID by typing $ ceilometer meter-list
* "To view a set of samples for a meter, type the following command $ ceilometer sample-list [-m name] [-l number] [-q query] where
name: is the name of the Ceilometer meter
number: is the maximum number of samples to return
query: is a list of metadata filters to apply to the samples, in the form 'metadata_type=filter_value;
metadata_type=filter_value; ...'
e.g. ceilometer sample-list -m platform.cpu.util -l 10 -q 'metadata.host=controller-0'"
* Identify a meter name and its resource id and make a query by typing $ ceilometer statistics -m platform.cpu.util -p 10 -q 'metadata.host=controller-0'
* "To list the metadata associated with a meter, type the following: $ ceilometer resource-show [resource_id]"
||
* "Login as admin succeeds, with a prompt as follows:
controller-X ~(keystone_admin)]$ "
* "You should get a list of the meter types available in your controller, e.g.
+---------------------------------+------------+---------+
| Name | Type | Unit |
+---------------------------------+------------+---------+
| compute.node.cpu.frequency | gauge | MHz |
| compute.node.cpu.idle.percent | gauge | percent |
| compute.node.cpu.idle.time | cumulative | ns |
| compute.node.cpu.iowait.percent | gauge | percent |
| compute.node.cpu.iowait.time | cumulative | ns |
| compute.node.cpu.kernel.percent | gauge | percent |
| compute.node.cpu.kernel.time | cumulative | ns |
| compute.node.cpu.percent | gauge | percent |
| compute.node.cpu.user.percent | gauge | percent |
| compute.node.cpu.user.time | cumulative | ns |
| platform.cpu.util | delta | % |
| platform.fs.util | delta | % |
| platform.mem.util | delta | % |
+---------------------------------+------------+---------+"
* "You should get a list of the available meters and their resource id, e.g.
+---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| compute.node.cpu.frequency | gauge | MHz | controller-0_controller-0 | None | None |
| compute.node.cpu.idle.percent | gauge | percent | controller-0_controller-0 | None | None |
| compute.node.cpu.idle.time | cumulative | ns | controller-0_controller-0 | None | None |
| compute.node.cpu.iowait.percent | gauge | percent | controller-0_controller-0 | None | None |
| compute.node.cpu.iowait.time | cumulative | ns | controller-0_controller-0 | None | None |
| compute.node.cpu.kernel.percent | gauge | percent | controller-0_controller-0 | None | None |
| compute.node.cpu.kernel.time | cumulative | ns | controller-0_controller-0 | None | None |
| compute.node.cpu.percent | gauge | percent | controller-0_controller-0 | None | None |
| compute.node.cpu.user.percent | gauge | percent | controller-0_controller-0 | None | None |
| compute.node.cpu.user.time | cumulative | ns | controller-0_controller-0 | None | None |
| platform.cpu.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
| platform.fs.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
| platform.mem.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
+---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+"
* "You should see samples for the meter you picked, e.g.
+--------------------------------------+-------------------+-------+--------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+--------------------------------------+-------------------+-------+--------+------+---------------------+
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 34.0 | % | 2018-04-27T13:33:45 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 30.0 | % | 2018-04-27T13:28:45 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 27.0 | % | 2018-04-27T13:23:44 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 41.0 | % | 2018-04-27T13:18:44 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 35.0 | % | 2018-04-27T13:13:45 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 36.0 | % | 2018-04-27T13:08:44 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 31.0 | % | 2018-04-27T13:03:44 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 29.0 | % | 2018-04-27T12:58:43 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 29.0 | % | 2018-04-27T12:53:44 |
| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 30.0 | % | 2018-04-27T12:48:44 |
+--------------------------------------+-------------------+-------+--------+------+---------------------+"
* "You should be able to see the meter's resource-show successfully, e.g. $ ceilometer resource-show 14f6bfca-7286-4470-a805-3dfab4ce1b89
+-------------+--------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------+
| metadata | {""host"": ""controller-0""} |
| project_id | 17a35bdd8d024aaea88af6b112dcf697 |
| resource_id | 14f6bfca-7286-4470-a805-3dfab4ce1b89 |
| source | 17a35bdd8d024aaea88af6b112dcf697:openstack |
| user_id | 0dcd12579b3c42f195d96ebecd9697bf |
+-------------+--------------------------------------------+"
|-

| test_nova_actions[ubuntu_14-shared-stop-start] || Launch a VM using an ubuntu_14 image with shared cpu policy, stop and start it, and ensure the VM is active again || simplex R5 previously configured
||
* "In the controller create a flavor with shared cpu policy with the following command: openstack flavor create --public <flavor-name> --id auto --ram <value-in-megas> --disk <value-in-gigas> --vcpus <#-virtual-cpus> --property hw:cpu_policy=shared"
* "In the controller list the flavors in order to get the new flavor's ID with the command: openstack flavor list"
* "To confirm that the flavor was created correctly, type the following command in the controller: openstack flavor show <flavorID> | grep properties"
* "Proceed to create an image: Project > Compute > Images > Create Image"
* "Proceed to create an instance: Project > Compute > Instances > Launch instance"
* "Stop the instance: Project > Compute > Instances > Actions > Pause instance"
* "Resume the instance: Project > Compute > Instances > Actions > Resume instance"
||
* "The flavor with shared cpu policy must be created without any issue, e.g.:

+----------------------------+------------------------------------------------------------------------------+
| Field | Value |
+----------------------------+------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | f743498f-e66f-4f21-8290-ee67045120ed |
| name | p1.medium |
| os-flavor-access:is_public | True |
| properties | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |"
* "You will have an output like this:

+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| bdbd31c1-166f-45ad-8e0f-de649f95d555 | s.p2 | 2048 | 25 | 0 | 2 | True |
| e0ec7bd8-fe18-4b4b-8328-08d786a1b4cc | s.p1 | 512 | 1 | 0 | 1 | True |
| f743498f-e66f-4f21-8290-ee67045120ed | p1.medium | 1024 | 10 | 0 | 2 | True |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+"
* "You will have an output like this: | properties | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' |
The cpu_policy set to shared must be present in the output"
* The image is created without any issues
* The instance is created without any issues
* The instance is stopped successfully
* The instance resumes successfully
|-

| test_statistics_for_one_meter[image.size] || Check ceilometer statistics for meter 'image.size' ||
||
* Go to controller node terminal and log in as the admin user: $ . /etc/nova/openrc
* List the meter statistics by typing $ ceilometer statistics -m image.size
||
* "Login as admin succeeds, with a prompt as follows:
controller-X ~(keystone_admin)]$ "
* "You should see an entry with non-zero values for count, min, max, avg.
+--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+
| Period | Period Start | Period End | Max | Min | Avg | Sum | Count | Duration | Duration Start | Duration End |
+--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+
| 0 | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 | 12716032.0 | 12716032.0 | 12716032.0 | 5493325824.0 | 432 | 87608.0 | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 |
+--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+"
|-

| Test_heat_template[WR_Neutron_ProviderNetRange.yaml] || Create a new provider net and range via heat, and ensure the new providernet and range are listed in neutron providernet-list ||
||
* Open a browser with the controller IP address to open Horizon, e.g. https://10.10.10.10
* Go to Admin --> Platform --> Provider Networks
* Create a provider network.

- Click Create Provider Network; in the Create Provider Network window, complete the fields as required.
- Name: the name of the provider network.
- Description: a free-text field for reference.
- Type: the type of provider network to be created.
- flat: mapped directly to the physical network.
- vlan: supports multiple tenant networks using VLAN IDs.
- vxlan: supports multiple tenant networks using VXLAN VNIs.
- MTU: the maximum transmission unit for the Ethernet segment used to access the network.

* NOTE: To attach to the provider network, data interfaces must be configured with an equal or larger MTU.

- "Click the Create Provider Network button."
||
* Akraino Edge login page should be displayed.
* Following the path you should be able to see the Provider Networks list.
* "The new provider network is added to the Provider Networks list successfully."
|-

| test_system_alarms_and_events_on_lock_unlock_compute || Lock a compute host and ensure the relevant alarms and system events are generated; unlock the host and ensure the alarms and events are cleared. || Akraino product should be installed and set as multi node with 2 controllers and at least 1 compute.
||
* Open a browser with the controller IP address to open Horizon, e.g. https://10.10.10.10
* Go to Admin --> Platform --> Provider Network Topology
* Select the compute node you want to lock.
* Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab. Check the current status of the messages the compute node has.
* Go to Admin --> Host Inventory.
* In the Actions column, select lock compute in the drop-down list.
* Make any proper modification to your compute node.
* Go back to Admin --> Platform --> Provider Network Topology.
* Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab. Check that there are new messages on the compute node coming from the changes/modifications you made.
* Go to Admin --> Host Inventory.
* In the Actions column, select unlock compute in the drop-down list.
* Go to Admin --> Platform --> Provider Network Topology
* Select the compute node you already unlocked.
* Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab.

||
* Akraino Edge login page should be displayed.
* Following the path you should be able to see the Provider Networks graphic.
* The compute to lock is identified.
* You were able to identify the current log of messages.
* You will be able to see the list of controller and compute nodes.
* Compute host is locked successfully.
* Modification is done on the compute node.
* Following the path you should be able to see the Provider Networks graphic.
* You were able to identify the new messages.
* You will be able to see the list of controller and compute nodes.
* Compute host is unlocked/rebooted successfully.
* Following the path you should be able to see the Provider Networks graphic.
* The unlocked compute is identified.
* Check that the current list of messages for the compute node is empty.
|-

|}

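Several of the tests above (auto-recovery, nova actions) boil down to polling ""openstack server list"" until an instance reaches a wanted status. A minimal sketch of that status check as a shell helper; the function name, the sample row, and its IDs are illustrative assumptions, not part of the test suite:

```shell
# check_status: scan "openstack server list"-style table output on stdin
# for an instance name and succeed only if its Status column matches.
# Hypothetical helper; sample data below is made up for illustration.
check_status() {
  name="$1"; want="$2"
  awk -v name="$name" -v want="$want" -F'|' '
    $0 ~ name {
      gsub(/ /, "", $4)          # $4 is the Status column; strip padding
      if ($4 == want) found = 1
    }
    END { exit found ? 0 : 1 }'
}

# Illustrative captured row (ID and address are invented):
sample='| 1b2c3d4e | vm-1 | ACTIVE | private-net0=10.0.0.5 |'

if printf '%s\n' "$sample" | check_status vm-1 ACTIVE; then
  echo "vm-1 reached ACTIVE"
fi
```

In a live run the sample line would be replaced by real `openstack server list` output inside a retry loop (e.g. every 5 s across the 30 s - 1 min auto-recovery window the tests allow).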
=== Multinode ===

{| class="wikitable"
|-
! NAME !! SUMMARY !! PRECONDITIONS !! STEPS !! EXPECTED RESULTS
|-
| Test_Install || Test the Installation for AIO simplex for R5 || "1. Install Ubuntu 16.04
2. Set proxies
3. Install virtualbox 5.1.30 --> https://www.virtualbox.org/wiki/Download_Old_Builds_5_1
4. Download ISO and License
- bootimage-current.iso
- wrslicenseR5txt
5. Clone repo tic_vb
https://git.openstack.org/cgit/openstack/stx-tools/tree/?id=a60042bbd11c387e95157fad255986712227dab6
6. Copy ""bootimage-current.iso"" to the tic_vb repo and rename it as ""bootimage.iso"""
|| "Follow the steps defined on https://github.intel.com/Madawaska/tic_vb/blob/master/README.md#steps-for-simplex-r5

use http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img on private-net0."
|| Installation should be completed successfully.

|-
| Test_create_instance || Test that instances can be created successfully || Test_simplex_R5_Install = PASS || "Upload an image following the steps defined on:
https://docs.openstack.org/horizon/latest/user/manage-images.html#upload-an-image"

"Launch an instance following the steps defined on:
https://docs.openstack.org/horizon/latest/user/launch-instances.html#launch-an-instance"
|| Image is uploaded correctly
Instance should be launched correctly

|-
| Test_ping_ssh_between_2_instances || Test ping and ssh connection between 2 instances on the same controller || Test_create_instance = PASS ||
* Launch Instance-1
* Launch Instance-2
* Ping from Instance-1 to Instance-2
* Ping from Instance-2 to Instance-1
* Establish SSH connection from Instance-1 to Instance-2
||
* Instance is running correctly
* Instance is running correctly
* Packets are transmitted correctly
* Packets are transmitted correctly
* Connection is established correctly

|-
| Test_nova_actions_dedicated_auto_recover ||
Launch a VM using a cirros/ubuntu image with dedicated cpu policy, set the VM to error state via nova cli, wait for the VM to auto-recover, ensure the VM is in active state and still pingable
|| Instances running
||
* Launch Instance with image with dedicated cpu policy
* Ping to instance
* Set to error state as follows:
- in the controller run: source /etc/nova/openrc
- Get the Instance ID
$ openstack server list
- Set the Instance to error state
$ openstack server set --state error <Instance-ID>
* Wait about 30 s - 1 min
* Ping to instance.
||
* Instance is running correctly
* Packets are transmitted correctly
* "Verify that Instance is in error state:
In horizon at Project --> Instances
on CLI run ""$ openstack server list"" and verify the status"
* "Verify that Instance is in active state again:
In horizon at Project --> Instances
on CLI run ""$ openstack server list"" and verify the status"
* Packets are transmitted correctly
|-

| Test_nova_actions_shared_suspend_resume || "Launch a vm using cirros/centos-guest with shared cpu policy, suspend/resume it, ensure vm in active
state and still pingable"
||
Instances running
||
* Launch Instance with image with shared cpu policy
* Suspend the instance
* Resume the instance
* Ping to instance
||
* Instance is running correctly
* Instance is suspended successfully
* Instance is in active state again
* Packets are transmitted correctly
|-

| test_horizon_auto_recovery_volume
|| Edit Image for volume in Horizon and add Instance Auto Recovery; verify the metadata is updated
|| simplex R5 installed || "Go to Titanium Cloud GUI and select the following options:
1. Images
2. Edit Metadata
3. Select Auto Recovery"
|| Image for Volume with Metadata updated with the Instance Auto Recovery option added
|-

| test_horizon_auto_recovery_snapshot || Edit Image for snapshot in Horizon and add Instance Auto Recovery; verify the metadata is updated
||
1. simplex R5 installed
2. Snapshot created
|| "Go to Titanium Cloud GUI and select the following options:
1. Snapshots
2. Edit Metadata
3. Select Auto Recovery"
|| Image for Snapshot with Metadata updated with the Instance Auto Recovery option added
|-

| test_horizon_sw_wrs_auto_recovery_metadata_update_volume || Update Metadata of Volume in Horizon, add property sw_wrs_auto_recovery and verify that it can be set
||
1. simplex R5 installed
2. Volume created || "Go to Titanium Cloud GUI and select the following options:
1. Go to Volumes > Select a Volume > Edit Volume > Update Metadata
2. In the Update Volume Metadata window search for: sw_wrs_auto_recovery
3. Select sw_wrs_auto_recovery"
|| property sw_wrs_auto_recovery added successfully to Volume
|-

| test_horizon_sw_wrs_auto_recovery_metadata_update_snapshot || Update Metadata of Snapshot in Horizon, add property sw_wrs_auto_recovery and verify that it can be set
||
1. simplex R5 installed
2. Snapshot created
|| "Go to Titanium Cloud GUI and select the following options:
1. Edit Snapshot
2. Edit Metadata
3. Select sw_wrs_auto_recovery"
|| property sw_wrs_auto_recovery added successfully to Snapshot
|-

| Test_vm_meta_data_retrieval || "Launch a VM from image, ssh to the VM and wget the instance_id metadata; ensure the instance_id
metadata is the same as instance_name in the ""openstack server show <vm-name>"" command"
|| 1. simplex R5 previously configured
2. controller Key Pair added into Horizon: Project > Compute > Key Pairs
Create an ssh key for the controller with the following command:
ssh-keygen -f controller -t rsa -P ""
|| "Go to Titanium Cloud GUI and select the following options:
1- Project
2- Instances
3- Launch instance"
|| A dialog appears in order to create the instance
|-

| Test_horizon_login_time_regular || Login time to Horizon set as Regular Multinode should be less than 5 seconds || Akraino product should be installed and set as Regular Multinode
||
* Open Horizon in a web browser, using the default IP set while installing
* Fill out user and password fields
* Click the "Sign In" button and start a time counter
||
* Login web page should be displayed
* You can write in both fields
* Horizon web page should be displayed in less than 5 seconds
|-

| test_horizon_login_time_storage || Login time to Horizon set as Storage should be less than 5 seconds || Akraino product should be installed and set as Storage
||
* Open Horizon in a web browser, using the default IP set while installing
* Fill out user and password fields
* Click the "Sign In" button and start a time counter
||
* Login web page should be displayed
* You can write in both fields
* Horizon web page should be displayed in less than 5 seconds
|-

| test_horizon_login_time_all_in_one || Login time to Horizon set as All-in-One should be less than 5 seconds || Akraino product should be installed and set as Simplex
||
* Open Horizon in a web browser, using the default IP set while installing
* Fill out user and password fields
* Click the "Sign In" button and start a time counter
||
* Login web page should be displayed
* You can write in both fields
* Horizon web page should be displayed in less than 5 seconds
|-
+ | |||
+ | | test_vif_model_from_image[avp] || Check that the hw_vif_model from image metadata is applied to the vif when the vif_model is not specified || | ||
+ | * Flavor created | ||
+ | * Cirros image uploaded | ||
+ | || "Create a glance image with hw_vif_model image metadata set to avp | ||
+ | $ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros_avp | ||
+ | $ openstack image set --property hw_vif_model=avp cirros_avp" | ||
+ | "create a volume off that the created image | ||
+ | |||
+ | $ nova boot --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --flavor 1 --block-device source=image,id=dc7e4edb-24f8-4cec-8288-71da1e074c5b,dest=volume,size=5,shutdown=preserve,bootindex=0 avpInstanceFromVolume" | ||
+ | "Create a vm with 3 vnics: over mgmt-net with virtio vif_model, over tenant-net with no vif_model specified, over internal-net with avp vif_model. | ||
+ | |||
+ | $ nova boot --flavor 1 --image cirros-avp \ | ||
+ | --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 \ | ||
+ | --nic net-id=db172669-4c08-4460-a3da-2cc5775744fc,vif-model=avp \ | ||
+ | --nic net-id=2ad6ee79-bb52-49a5-875d-12f4fa053150,vif-model=virtio \ | ||
+ | test-av-vm " | ||
+ | || "Check that property was set correclty | ||
+ | $ openstack image show cirros_avp | ||
+ | |||
+ | ************************************************************************** | ||
+ | | properties | hw_vif_model='avp', store='file' | | ||
+ | " | ||
+ | "Volume created correctly | ||
+ | |||
+ | $ cinder list" | ||
+ | "Check via nova show that the hw_vif_model from image metadata is applied to the vif that did not specify the vif_model. | ||
+ | With the below command, verify in the field ""wrs-if:nics"" that ""vif_model=avp"" for nic2 | ||
+ | $ nova show test-av-vm | ||
+ | |||
+ | |||
+ | ************************************************************************** | ||
+ | | wrs-if:nics | {""nic1"": {""vif_model"": ""avp"", ""network"": ""net"", ""port_id"": ""fdfcfed8-66c9-436b-91f9-163f645b66f2"", ""mtu"": 1500, ""mac_address"": ""fa:16:3e:1b:c7:6c"", ""vif_pci_address"": """"}" | ||
+ | |- | ||
+ | |||
+ | | test_ceilometer_<meter>_port_samples || Query ceilometer samples for meters, as well as resource id, ensuring samples exist || "Akraino product should be installed and set as multi-node with 2 controllers, 1 compute and a vswitch." | ||
+ | || | ||
+ | * Go to controller node terminal and login as admin user, type $ . /etc/nova/openrc | ||
+ | * List your meter type list in your controller environment by typing $ ceilometer metertype-list | ||
+ | * Now list the available meters and their Resource ID by typing $ ceilometer meter-list | ||
+ | * "To view a set of samples for a meter, type the following command $ ceilometer sample-list [-m name] [-l number] [-q query] where | ||
+ | name: is the name of the Ceilometer meter | ||
+ | number: is the maximum number of samples to return | ||
+ | query: is a list of metadata filters to apply to the samples, in the form 'metadata_type=filter_value; | ||
+ | metadata_type=filter_value; ... | ||
+ | e.g. ceilometer sample-list -m platform.cpu.util -l 10 -q 'metadata.host=controller-0'" | ||
+ | * Identify a meter name and their resource id and make a query by typing $ ceilometer statistics -m platform.cpu.util -p 10 -q 'metadata.host=controller-0' | ||
+ | * "To list the metadata associated with a meter, type the following: $ ceilometer resource-show [resource_id] " | ||
+ | || | ||
+ | * "Logged in as admin is done successfully with a prompt as follows : | ||
+ | controller-X ~(keystone_admin)]$ " | ||
+ | * "You should be getting a list of your meter type available in your controller. e.g.+---------------------------------+------------+---------+ | ||
+ | | Name | Type | Unit | | ||
+ | +---------------------------------+------------+---------+ | ||
+ | | compute.node.cpu.frequency | gauge | MHz | | ||
+ | | compute.node.cpu.idle.percent | gauge | percent | | ||
+ | | compute.node.cpu.idle.time | cumulative | ns | | ||
+ | | compute.node.cpu.iowait.percent | gauge | percent | | ||
+ | | compute.node.cpu.iowait.time | cumulative | ns | | ||
+ | | compute.node.cpu.kernel.percent | gauge | percent | | ||
+ | | compute.node.cpu.kernel.time | cumulative | ns | | ||
+ | | compute.node.cpu.percent | gauge | percent | | ||
+ | | compute.node.cpu.user.percent | gauge | percent | | ||
+ | | compute.node.cpu.user.time | cumulative | ns | | ||
+ | | platform.cpu.util | delta | % | | ||
+ | | platform.fs.util | delta | % | | ||
+ | | platform.mem.util | delta | % | | ||
+ | +---------------------------------+------------+---------+" | ||
+ | * "You should be getting a list of available meters and their resource id successfully. e.g.+---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+ | ||
+ | | Name | Type | Unit | Resource ID | User ID | Project ID | | ||
+ | +---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+ | ||
+ | | compute.node.cpu.frequency | gauge | MHz | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.idle.percent | gauge | percent | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.idle.time | cumulative | ns | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.iowait.percent | gauge | percent | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.iowait.time | cumulative | ns | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.kernel.percent | gauge | percent | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.kernel.time | cumulative | ns | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.percent | gauge | percent | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.user.percent | gauge | percent | controller-0_controller-0 | None | None | | ||
+ | | compute.node.cpu.user.time | cumulative | ns | controller-0_controller-0 | None | None | | ||
+ | | platform.cpu.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 | | ||
+ | | platform.fs.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 | | ||
+ | | platform.mem.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 | | ||
+ | +---------------------------------+------------+---------+--------------------------------------+----------------------------------+---" | ||
+ | * You should be seeing a sample meter you picked up like the below example: +--------------------------------------+-------------------+-------+--------+------+---------------------+| Resource ID | Name | Type | Volume | Unit | Timestamp |+--------------------------------------+-------------------+-------+--------+------+---------------------+| 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 34.0 | % | 2018-04-27T13:33:45 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 30.0 | % | 2018-04-27T13:28:45 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 27.0 | % | 2018-04-27T13:23:44 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 41.0 | % | 2018-04-27T13:18:44 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 35.0 | % | 2018-04-27T13:13:45 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 36.0 | % | 2018-04-27T13:08:44 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 31.0 | % | 2018-04-27T13:03:44 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 29.0 | % | 2018-04-27T12:58:43 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 29.0 | % | 2018-04-27T12:53:44 || 14f6bfca-7286-4470-a805-3dfab4ce1b89 | platform.cpu.util | delta | 30.0 | % | 2018-04-27T12:48:44 |+--------------------------------------+-------------------+-------+--------+------+---------------------+ | ||
+ | |||
+ | * "You should be able to see meter resource-show successfully. e.g. $ ceilometer resource-show 14f6bfca-7286-4470-a805-3dfab4ce1b89 | ||
+ | +-------------+--------------------------------------------+ | ||
+ | | Property | Value | | ||
+ | +-------------+--------------------------------------------+ | ||
+ | | metadata | {""host"": ""controller-0""} | | ||
+ | | project_id | 17a35bdd8d024aaea88af6b112dcf697 | | ||
+ | | resource_id | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | | ||
+ | | source | 17a35bdd8d024aaea88af6b112dcf697:openstack | | ||
+ | | user_id | 0dcd12579b3c42f195d96ebecd9697bf | | ||
+ | +-------------+--------------------------------------------+" | ||
+ | |- | ||
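The ceilometer flow in the row above (list meters, pick a Resource ID, query samples) can be partly scripted. A minimal sketch of the extraction step, parsing `ceilometer meter-list` style output with awk; the embedded two-line table is a trimmed copy of the sample output above, and piping live CLI output through the same filter is an assumption of this sketch:

```shell
# Extract the Resource ID column for a given meter name from
# "ceilometer meter-list" style output. A trimmed sample table is
# embedded so the sketch runs stand-alone; on a live controller the
# listing would come from the CLI itself.
meter="platform.cpu.util"
listing='| platform.cpu.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 |
| platform.mem.util | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 |'

# Split on "|", trim spaces, match the meter name, print its Resource ID.
resource_id=$(printf '%s\n' "$listing" |
    awk -F'|' -v m="$meter" '{gsub(/ /,"",$2)} $2==m {gsub(/ /,"",$5); print $5}')
echo "$resource_id"
```

The same filter can then feed `ceilometer resource-show` with the extracted ID.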
+ | |||
+ | | test_nova_actions[ubuntu_14-shared-stop-start] || Cirros || simplex R5 previously configured | ||
+ | || | ||
+ | * In the controller create a flavor with shared cpu policy with the following command: openstack flavor create --public <flavor-name> --id auto --ram <ram-in-MB> --disk <disk-in-GB> --vcpus <num-vcpus> --property hw:cpu_policy=shared | ||
+ | * "In the controller list the flavor created with openstack in order to get its ID with the command: openstack flavor list" | ||
+ | * "In order to confirm if the flavor was created correctly please type the following command in the controller: openstack flavor show <flavorID> | grep properties" | ||
+ | * "proceed to create a Image Project > Compute > Images > Create Image" | ||
+ | * "proceed to create an instance Project > Compute > Instances > Launch instance" | ||
+ | * "Stop the instance Project > compute > Instances > Actions > Pause instance" | ||
+ | * "Resume the instance Project > compute > Instances > Actions > Resume instance" | ||
+ | || | ||
+ | * "The flavor with shared cpu policy must be created without any issue, e.g: | ||
+ | |||
+ | +----------------------------+------------------------------------------------------------------------------+ | ||
+ | | Field | Value | | ||
+ | +----------------------------+------------------------------------------------------------------------------+ | ||
+ | | OS-FLV-DISABLED:disabled | False | | ||
+ | | OS-FLV-EXT-DATA:ephemeral | 0 | | ||
+ | | disk | 10 | | ||
+ | | id | f743498f-e66f-4f21-8290-ee67045120ed | | ||
+ | | name | p1.medium | | ||
+ | | os-flavor-access:is_public | True | | ||
+ | | properties | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' | | ||
+ | | ram | 1024 | | ||
+ | | rxtx_factor | 1.0 | | ||
+ | | swap | | | ||
+ | | vcpus | 2 |" | ||
+ | * "You will have a output like this: | ||
+ | |||
+ | +--------------------------------------+-----------+------+------+-----------+-------+-----------+ | ||
+ | | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | | ||
+ | +--------------------------------------+-----------+------+------+-----------+-------+-----------+ | ||
+ | | bdbd31c1-166f-45ad-8e0f-de649f95d555 | s.p2 | 2048 | 25 | 0 | 2 | True | | ||
+ | | e0ec7bd8-fe18-4b4b-8328-08d786a1b4cc | s.p1 | 512 | 1 | 0 | 1 | True | | ||
+ | | f743498f-e66f-4f21-8290-ee67045120ed | p1.medium | 1024 | 10 | 0 | 2 | True | | ||
+ | +--------------------------------------+-----------+------+------+-----------+-------+-----------+" | ||
+ | * "You will have a output like this: | properties | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' | | ||
+ | The cpu_policy as shared must be present in the output" | ||
+ | * The image will be created without any issues | ||
+ | * The instance will be created without any issues | ||
+ | * The instance will stop successfully | ||
+ | * The instance will start successfully | ||
+ | |- | ||
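The verification in the row above (`openstack flavor show <flavorID> | grep properties`) amounts to looking for `hw:cpu_policy='shared'` in the properties line. A minimal sketch of that check; the embedded line is copied from the expected results above and stands in for live CLI output, which is an assumption of this sketch:

```shell
# Verify the shared cpu policy on a flavor by checking the
# "properties" line of "openstack flavor show" output. The sample
# line below is taken from the expected results; feeding the real
# CLI output through the same check is an assumption.
props="| properties | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' |"

case "$props" in
    *"hw:cpu_policy='shared'"*) policy_ok=yes ;;
    *)                          policy_ok=no  ;;
esac
echo "shared cpu policy present: $policy_ok"
```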
+ | |||
+ | | test_statistics_for_one_meter[image.size] || Check ceilometer statistics for meter ‘image.size’ || | ||
+ | || | ||
+ | * Go to controller node terminal and enter as admin user, type $ . /etc/nova/openrc | ||
+ | * List your meter statistics typing $ ceilometer statistics -m image.size || | ||
+ | "Logged as admin is done successfully with a prompt as follows : | ||
+ | controller-X ~(keystone_admin)]$ " | ||
+ | "You Should be seeing an entry with non-zero values for count, min, max, avg. +--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+ | ||
+ | | Period | Period Start | Period End | Max | Min | Avg | Sum | Count | Duration | Duration Start | Duration End | | ||
+ | +--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+ | ||
+ | | 0 | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 | 12716032.0 | 12716032.0 | 12716032.0 | 5493325824.0 | 432 | 87608.0 | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 | | ||
+ | +--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------" | ||
+ | |||
+ | |- | ||
+ | |- | ||
+ | |||
+ | | Test_heat_template [WR_Neutron_ProviderNetRange.yaml] || Create new provider net and range via heat, and ensure new providernet and range is listed in neutron providernet-list || | ||
+ | || | ||
+ | * Open a Browser with controller IP Address to open Horizon. e.g. https://10.10.10.10 | ||
+ | * Go to Admin --> Platform --> Provider Networks | ||
+ | * Create a provider network. | ||
+ | |||
+ | - Click Create Provider Network; in the Create Provider Network window, complete the fields as required. | ||
+ | - Name: the name of the provider network. | ||
+ | - Description: a free-text field for reference. | ||
+ | - Type: the type of provider network to be created. | ||
+ | - flat: mapped directly to the physical network. | ||
+ | - vlan: supports multiple tenant networks using VLAN IDs. | ||
+ | - vxlan: supports multiple tenant networks using VXLAN VNIs. | ||
+ | - MTU: the maximum transmission unit for the Ethernet segment used to access the network. | ||
+ | |||
+ | * NOTE: To attach to the provider network, data interfaces must be configured with an equal or larger MTU. | ||
+ | |||
+ | - "Click Provider Network button." | ||
+ | |||
+ | || | ||
+ | * Akraino Edge login page should be displayed. | ||
+ | * Going to the path you should be able to see the Provider Networks list. | ||
+ | * "The new provider network is added to the Provider Networks list successfully." | ||
+ | |- | ||
+ | |||
+ | | test_system_alarms_and_events_on_lock_unlock_compute || Lock a compute host, and ensure the relevant alarms and system events are generated. Unlock the host and ensure alarms and events are cleared. || Akraino product should be installed and set as multi-node with 2 controllers and at least 1 compute. | ||
+ | || | ||
+ | * Open a Browser with controller IP Address to open Horizon. e.g. https://10.10.10.10 | ||
+ | * Go to Admin --> Platform --> Provider Network Topology | ||
+ | * Select the compute node you want to lock. | ||
+ | * Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab. Check the current status of the messages the compute node has. | ||
+ | * Go to Admin --> Host Inventory. | ||
+ | * In the Actions column, select the lock action from the drop-down list. | ||
+ | * Make any appropriate modification to your compute node. | ||
+ | * Go back to Admin --> Platform --> Provider Network Topology. | ||
+ | * Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab. Check that there are new messages on the compute node resulting from the changes you made. | ||
+ | * Go to Admin --> Host Inventory. | ||
+ | * In the Actions column, select the unlock action from the drop-down list. | ||
+ | * Go to Admin --> Platform --> Provider Network Topology | ||
+ | * Select the compute node you just unlocked. | ||
+ | * Go to Selected Entity: <compute_node_name> and select the "Related Alarms" tab. | ||
+ | || | ||
+ | * Akraino Edge login page should be displayed. | ||
+ | * Going to the path you should be able to see the Provider Networks graphic. | ||
+ | * Compute lock is identified. | ||
+ | * You were able to identify the current log of messages. | ||
+ | * You will be able to see the list of controller and compute nodes. | ||
+ | * Compute host is locked successfully. | ||
+ | * Modification is done in the compute node. | ||
+ | * Going to the path you should be able to see the Provider Networks graphic. | ||
+ | * You were able to identify the new messages. | ||
+ | * You will be able to see the list of controller and compute nodes. | ||
+ | * Compute host is unlocked/rebooted successfully. | ||
+ | * Going to the path you should be able to see the Provider Networks graphic. | ||
+ | * Compute unlock is identified. | ||
+ | * The message list of the compute node is now empty. | ||
+ | |||
+ | |||
|} | |} |
Latest revision as of 21:39, 27 August 2018
This document contains the steps for validating a StarlingX System has been installed correctly.
Requirements
The recommended minimum requirements include:
System Requirements
- A StarlingX System
Launch an Instance
Download CirrOS Image
Download a CirrOS image in QCOW2 format from the CirrOS download page:
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Transfer the CirrOS QCOW2 image to the StarlingX System:
$ scp cirros-0.4.0-x86_64-disk.img wrsroot@10.10.10.3:~/
Acquire administrative privileges
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Create OpenStack Images
~(keystone_admin)]$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros
Create OpenStack Flavors
~(keystone_admin)]$ openstack flavor create --id 1 --ram 64 --disk 1 --vcpus 1 --public flavor.nano
~(keystone_admin)]$ openstack flavor create --id 2 --ram 128 --disk 2 --vcpus 1 --public flavor.micro
Create OpenStack Network
~(keystone_admin)]$ openstack network create network.one
Create OpenStack Sub Network
~(keystone_admin)]$ openstack subnet create --network network.one --ip-version 4 --subnet-range 192.168.1.0/24 --dhcp subnet.one
Create OpenStack Servers
~(keystone_admin)]$ openstack server create --flavor flavor.nano --image cirros --nic net-id=network.one server.nano
~(keystone_admin)]$ openstack server create --flavor flavor.micro --image cirros --nic net-id=network.one server.micro
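Before moving on, both servers should reach ACTIVE. A small self-contained sketch of checking that; the embedded listing stands in for real `openstack server list -f value -c Name -c Status` output, which is an assumption of this sketch:

```shell
# Check that every server in a Name/Status listing is ACTIVE.
# On a live system the listing would come from:
#   openstack server list -f value -c Name -c Status
# A sample listing is embedded so the sketch runs stand-alone.
listing='server.nano ACTIVE
server.micro ACTIVE'

# Collect the names of any servers whose status is not ACTIVE.
not_active=$(printf '%s\n' "$listing" | awk '$2 != "ACTIVE" {print $1}')
if [ -z "$not_active" ]; then
    echo "all servers ACTIVE"
else
    echo "not ACTIVE: $not_active"
fi
```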
Check OVS/DPDK
Check Neutron Agent List
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
~(keystone_admin)]$ neutron agent-list
Get Compute Node IP Address
~(keystone_admin)]$ system host-show compute-0 | grep mgmt_ip
| mgmt_ip | 192.168.204.119 |
~(keystone_admin)]$ system host-show compute-1 | grep mgmt_ip
| mgmt_ip | 192.168.204.113 |
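The mgmt_ip value can also be pulled out of the `system host-show` output programmatically. A minimal sketch, with the sample line from above embedded so it runs stand-alone; applying the same filter to live output is an assumption:

```shell
# Pull the management IP out of a "system host-show ... | grep mgmt_ip"
# style line. The sample mirrors the output shown above; running this
# against live "system host-show" output is an assumption.
line='| mgmt_ip | 192.168.204.119 |'

# Split on "|": field 3 holds the value; strip the padding spaces.
mgmt_ip=$(printf '%s\n' "$line" | awk -F'|' '{gsub(/ /,"",$3); print $3}')
echo "$mgmt_ip"
```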
Check Compute-0 Interface Type
Login to Compute-0 via ssh
~(keystone_admin)]$ ssh 192.168.204.119
Verify dpdk type is set for eth0 port using Open vSwitch utility:
compute-0:~$ sudo ovs-vsctl show
...
Port "eth0"
Interface "eth0"
type: dpdk
options: {dpdk-devargs="0000:00:09.0", n_rxq="1"}
...
ovs_version: "2.9.0"
Check Compute-1 Interface Type
Login to Compute-1 via ssh
~(keystone_admin)]$ ssh 192.168.204.113
Verify dpdk type is set for eth0 port using Open vSwitch utility:
compute-1:~$ sudo ovs-vsctl show
...
Port "eth0"
Interface "eth0"
type: dpdk
options: {dpdk-devargs="0000:00:09.0", n_rxq="1"}
ovs_version: "2.9.0"
...
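The DPDK check on both computes boils down to finding a `type: dpdk` line for the port in the `ovs-vsctl show` output. A self-contained sketch, with a trimmed copy of the output above embedded; running it against live `sudo ovs-vsctl show` output on the compute node is an assumption:

```shell
# Confirm that eth0 is attached to the vswitch as a DPDK port by
# scanning "ovs-vsctl show" style output for a "type: dpdk" line.
# A trimmed sample is embedded so the check runs stand-alone.
ovs_out='Port "eth0"
    Interface "eth0"
        type: dpdk
        options: {dpdk-devargs="0000:00:09.0", n_rxq="1"}'

if printf '%s\n' "$ovs_out" | grep -q 'type: dpdk'; then
    dpdk_ok=yes
else
    dpdk_ok=no
fi
echo "dpdk port configured: $dpdk_ok"
```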
Sanity
Simplex
NAME | SUMMARY | PRECONDITIONS | STEPS | EXPECTED RESULTS | ||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Test_Install | Test the Installation for AIO simplex for R5 | "1. Install Ubuntu 16.04
2. Set proxies 3. Install virtualbox 5.1.30 --> https://www.virtualbox.org/wiki/Download_Old_Builds_5_1 4. Download ISO and License - bootimage-current.iso - wrslicenseR5txt 5. Clone repo tic_vb https://git.openstack.org/cgit/openstack/stx-tools/tree/?id=a60042bbd11c387e95157fad255986712227dab6 6. Copy ""boot-current.iso"" to tic_vb repo and rename it as ""bootimage.iso""" |
"Follow the Steps defined on https://github.intel.com/Madawaska/tic_vb/blob/master/README.md#steps-for-simplex-r5
use http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img on private-net0." |
Installation should be completed successfully. | ||||||||||||||||||||||||||||||||||
Test_create_instance | Test that instances can be created successfully | Test_simplex_R5_Install = PASS | "Upload an image following steps defined on:
https://docs.openstack.org/horizon/latest/user/manage-images.html#upload-an-image" "Launch an instance following steps defined on: https://docs.openstack.org/horizon/latest/user/launch-instances.html#launch-an-instance" |
Image is uploaded correctly
Instance should be launched correctly | ||||||||||||||||||||||||||||||||||
Test_ping_ssh_beetwen_2_instances | Test ping and ssh connection between 2 instances on the same controller | Test_create_instance = PASS |
|
| ||||||||||||||||||||||||||||||||||
Test_nova_actions_dedicated_auto_recover |
Launch a vm using cirros/ubuntu image with dedicated cpu policy, set vm to error state via nova cli, wait for vm to be autorecover, ensure vm in active state and still pingable |
Instances running |
- In the controller run: source /etc/nova/openrc - Get the instance ID: $ openstack server list - Set the instance to error state: $ openstack server set --state error <Instance-ID>
|
In Horizon at Project --> Instances, or on CLI with ""$ openstack server list"", verify the status is ERROR"
After auto-recovery, in Horizon at Project --> Instances, or on CLI with ""$ openstack server list"", verify the status is ACTIVE"
| ||||||||||||||||||||||||||||||||||
Test_nova_actions_shared_suspend_resume | "Launch a vm using cirros/centos-guest with shared cpu policy, suspend/resume it, ensure vm in active
state and still pingable" |
Instances running |
|
| ||||||||||||||||||||||||||||||||||
test_horaizon_auto_recovery_volume | Edit Image for volume in Horizon and add Instance Auto Recovery, verify metadata updated | simplex R5 installed | "Go to Titanium Cloud GUI and select the following options:
1. Images 2. Edit Metadata 3. Select Auto Recovery" |
Image for Volume with Metadata updated with Instance Recovery option added | ||||||||||||||||||||||||||||||||||
test_horaizon_auto_recovery_snapshot | Edit Image for snapshot in Horizon and add Instance Auto Recovery, verify metadata updated |
1. simplex R5 installed 2. Snapshot created |
"Go to Titanium Cloud GUI and select the following options:
1. Snapshots 2. Edit Meta Data 3. Select Auto Recovery" |
Image for Snapshot with Metadata updated with Instance Recovery option added | ||||||||||||||||||||||||||||||||||
test_horaizon_sw_wrs_auto_recovery_metadata_update_volume | Update Metadata of Volume in Horizon, add property sw_wrs_auto_recovery and verify that it can be set |
1. simplex R5 installed 2. Volume created || "Go to Titanium Cloud GUI and select the following options: 1. Go to Volumes > Select a Volume > Edit Volume > Update Metadata 2. In the Update Volume Metadata window search for: sw_wrs_auto_recovery 3. Select sw_wrs_auto_recovery" |
property sw_wrs_auto_recovery added successfully to Volume | |||||||||||||||||||||||||||||||||||
test_horaizon_sw_wrs_auto_recovery_metadata_update_snapshot | Update Metadata of Snapshot in Horizon, add property sw_wrs_auto_recovery and verify that it can be set |
1. simplex R5 installed 2. Snapshot created |
"Go to Titanium Cloud GUI and select the following options:
1. Edit Snapshot 2. Edit Metadata 3. Select sw_wrs_auto_recovery" |
property sw_wrs_auto_recovery added successfully to Snapshot | ||||||||||||||||||||||||||||||||||
Test_vm_meta_data_retrieval | "Launch a vm from image, ssh to vm and wget the instance_id metadata; ensure the instance_id
metadata is the same as instance_name in ""openstack server show <vm-name>"" command" |
1.simplex R5 previously configured
2. Controller key pair added into Horizon: Project > Compute > Key Pairs; create an ssh key for the controller with the following command: ssh-keygen -f controller -t rsa -P "" |
"Go to Titanium Cloud GUI and select the following options:
1- Project 2- Instances 3- Launch instance" |
A dialog appears in which to create the instance |
test_horizon_login_time_all_in_one | Login time to horizon set as storage should be less than 5 seconds | Akraino product should be installed and set as Simplex |
|
* Log in web page should be displayed
| ||||||||||||||||||||||||||||||||||
test_vif_model_from_image[avp] | Check that the hw_vif_model from image metadata is applied to the vif when the vif_model is not specified |
|
"Create a glance image with hw_vif_model image metadata set to avp
$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros_avp $ openstack image set --property hw_vif_model=avp cirros_avp" "Create a volume from the created image $ nova boot --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --flavor 1 --block-device source=image,id=dc7e4edb-24f8-4cec-8288-71da1e074c5b,dest=volume,size=5,shutdown=preserve,bootindex=0 avpInstanceFromVolume" "Create a vm with 3 vnics: over mgmt-net with virtio vif_model, over tenant-net with no vif_model specified, over internal-net with avp vif_model. $ nova boot --flavor 1 --image cirros-avp \ --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 \ --nic net-id=db172669-4c08-4460-a3da-2cc5775744fc,vif-model=avp \ --nic net-id=2ad6ee79-bb52-49a5-875d-12f4fa053150,vif-model=virtio \ test-av-vm " |
"Check that property was set correclty
$ openstack image show cirros_avp |
hw_vif_model='avp', store='file' |
" "Volume created correctly $ cinder list" "Check via nova show that the hw_vif_model from image metadata is applied to the vif that did not specify the vif_model. With below command verify on field ""wrs-if:nics"" taht ""vif_model=avp"" for nic2 $ nova show test-av-vm |
{""nic1"": {""vif_model"": ""avp"", ""network"": ""net"", ""port_id"": ""fdfcfed8-66c9-436b-91f9-163f645b66f2"", ""mtu"": 1500, ""mac_address"": ""fa:16:3e:1b:c7:6c"", ""vif_pci_address"": """"}" | ||||||||||||||||||||||||||||||||
test_ceilometer_<meter>_port_samples | Query ceilometer samples for meters, as well as resource id, ensuring samples exist | "Akraino product should be installed and set as multi-node with 2 controllers, 1 compute and a vswitch." |
name: is the name of the Ceilometer meter number: is the maximum number of samples to return query: is a list of metadata filters to apply to the samples, in the form 'metadata_type=filter_value; metadata_type=filter_value; ... e.g. ceilometer sample-list -m platform.cpu.util -l 10 -q 'metadata.host=controller-0'"
|
controller-X ~(keystone_admin)]$ "
|
Type | Unit |
+---------------------------------+------------+---------+ |
gauge | MHz | | gauge | percent | | cumulative | ns | | gauge | percent | | cumulative | ns | | gauge | percent | | cumulative | ns | | gauge | percent | | gauge | percent | | cumulative | ns | | delta | % | | delta | % | | delta | % |
+---------------------------------+------------+---------+"
|
Type | Unit | Resource ID | User ID | Project ID |
+---------------------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+ |
gauge | MHz | controller-0_controller-0 | None | None | | gauge | percent | controller-0_controller-0 | None | None | | cumulative | ns | controller-0_controller-0 | None | None | | gauge | percent | controller-0_controller-0 | None | None | | cumulative | ns | controller-0_controller-0 | None | None | | gauge | percent | controller-0_controller-0 | None | None | | cumulative | ns | controller-0_controller-0 | None | None | | gauge | percent | controller-0_controller-0 | None | None | | gauge | percent | controller-0_controller-0 | None | None | | cumulative | ns | controller-0_controller-0 | None | None | | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 | | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 | | delta | % | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
+---------------------------------+------------+---------+--------------------------------------+----------------------------------+---"
+-------------+--------------------------------------------+ |
Value |
+-------------+--------------------------------------------+ |
{""host"": ""controller-0""} | | 17a35bdd8d024aaea88af6b112dcf697 | | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | | 17a35bdd8d024aaea88af6b112dcf697:openstack | | 0dcd12579b3c42f195d96ebecd9697bf |
+-------------+--------------------------------------------+" |
test_nova_actions[ubuntu_14-shared-stop-start] | Cirros | simplex R5 previously configured |
|
+----------------------------+------------------------------------------------------------------------------+ |
Value |
+----------------------------+------------------------------------------------------------------------------+ |
False | | 0 | | 10 | | f743498f-e66f-4f21-8290-ee67045120ed | | p1.medium | | True | | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' | | 1024 | | 1.0 | | | | 2 |"
+--------------------------------------+-----------+------+------+-----------+-------+-----------+ |
Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+ |
s.p2 | 2048 | 25 | 0 | 2 | True | | s.p1 | 512 | 1 | 0 | 1 | True | | p1.medium | 1024 | 10 | 0 | 2 | True |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+"
The cpu_policy as shared must be present in the output"
| ||||||||||||||||||
test_statistics_for_one_meter[image.size] | Check ceilometer statistics for meter ‘image.size’ |
|
controller-X ~(keystone_admin)]$ "
|
Period Start | Period End | Max | Min | Avg | Sum | Count | Duration | Duration Start | Duration End |
+--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+ |
2018-04-25T19:10:08 | 2018-04-26T19:30:16 | 12716032.0 | 12716032.0 | 12716032.0 | 5493325824.0 | 432 | 87608.0 | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 |
+--------+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------" || | |||||||||||||||||||||||||||||||||
Test_heat_template [WR_Neutron_ProviderNetRange.yaml] | Create a new provider net and range via heat, and ensure the new providernet and range are listed in neutron providernet-list |
- Click Create Provider Network; in the Create Provider Network window, complete the fields as required:
  - Name: the name of the provider network.
  - Description: a free-text field for reference.
  - Type: the type of provider network to be created:
    - flat: mapped directly to the physical network.
    - vlan: supports multiple tenant networks using VLAN IDs.
    - vxlan: supports multiple tenant networks using VXLAN VNIs.
  - MTU: the maximum transmission unit for the Ethernet segment used to access the network. NOTE: to attach to the provider network, data interfaces must be configured with an equal or larger MTU.
- Click the Create Provider Network button. |
| |||||||||||||||||||||||||||||||||||
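Since the summary says the providernet is created via heat, the Horizon steps above can also be driven from the CLI; a sketch, where the stack name is an assumption and the template parameters are not taken from the file:

```shell
# Hedged sketch: create the provider net/range through the heat CLI and then
# verify it with neutron providernet-list (stack name is hypothetical).
create_providernet_stack() {
  heat stack-create -f WR_Neutron_ProviderNetRange.yaml providernet-range-stack
  # once the stack reaches CREATE_COMPLETE, the new providernet and range
  # should appear in the listing:
  neutron providernet-list
}
# source /etc/nova/openrc && create_providernet_stack
```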
test_system_alarms_and_events_on_lock_unlock_compute | Lock a compute host, and ensure the relevant alarms and system events are generated. Unlock the host and ensure the alarm and events are cleared. | Akraino product should be installed and set up as multi-node with 2 controllers and at least 1 compute. |
|
|
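The lock/unlock sequence can be sketched from the controller CLI; the host name `compute-0` is an assumption, and the alarm-listing command follows the Titanium R5 `system` CLI:

```shell
# Hedged sketch: lock a compute, check that an alarm is raised, unlock it,
# then check that the alarm clears (host name is an assumption).
lock_unlock_compute() {
  local host="${1:-compute-0}"
  system host-lock "$host"
  system alarm-list        # expect a "host locked" alarm for $host
  system host-unlock "$host"
  system alarm-list        # once the host is available, the alarm should clear
}
# source /etc/nova/openrc && lock_unlock_compute compute-0
```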
Multinode
NAME | SUMMARY | PRECONDITIONS | STEPS | EXPECTED RESULTS | ||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Test_Install | Test the installation of AIO simplex for R5 | 1. Install Ubuntu 16.04
2. Set proxies
3. Install VirtualBox 5.1.30 --> https://www.virtualbox.org/wiki/Download_Old_Builds_5_1
4. Download the ISO and license:
- bootimage-current.iso
- wrslicenseR5txt
5. Clone the repo tic_vb:
https://git.openstack.org/cgit/openstack/stx-tools/tree/?id=a60042bbd11c387e95157fad255986712227dab6
6. Copy "boot-current.iso" into the tic_vb repo and rename it "bootimage.iso" |
Follow the steps defined at https://github.intel.com/Madawaska/tic_vb/blob/master/README.md#steps-for-simplex-r5
Use http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img on private-net0. |
Installation should be completed successfully. | ||||||||||||||||||||||||||||||||||
Test_create_instance | Test that instances can be created successfully | Test_simplex_R5_Install = PASS | Upload an image following the steps defined at:
https://docs.openstack.org/horizon/latest/user/manage-images.html#upload-an-image
Launch an instance following the steps defined at:
https://docs.openstack.org/horizon/latest/user/launch-instances.html#launch-an-instance |
Image is uploaded correctly.
Instance should be launched correctly. | ||||||||||||||||||||||||||||||||||
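The Horizon upload-and-launch steps above have a CLI equivalent; a sketch, where the image file, flavor and network names are assumptions:

```shell
# Hedged sketch: upload a Cirros image and boot an instance from the CLI
# (flavor m1.tiny and network private-net0 are assumed names).
launch_test_instance() {
  openstack image create --file cirros-0.4.0-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public cirros-0.4.0
  openstack server create --image cirros-0.4.0 --flavor m1.tiny \
    --network private-net0 vm-test-1
  openstack server list          # vm-test-1 should reach ACTIVE
}
# source /etc/nova/openrc && launch_test_instance
```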
Test_ping_ssh_beetwen_2_instances | Test ping and ssh connection between 2 instances on the same controller | Test_create_instance = PASS |
|
| ||||||||||||||||||||||||||||||||||
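The ping/ssh check between the two instances can be sketched as follows; run it from inside the first instance (or its console), with the second instance's fixed IP taken from `openstack server list`:

```shell
# Hedged sketch: connectivity check between two instances on the same controller.
check_vm_connectivity() {
  local ip2="$1"
  ping -c 3 "$ip2" || return 1
  # cirros 0.4.0 images accept password auth (user cirros, password gocubsgo)
  ssh -o StrictHostKeyChecking=no cirros@"$ip2" hostname
}
# check_vm_connectivity <fixed-ip-of-second-instance>
```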
Test_nova_actions_dedicated_auto_recover |
Launch a vm using a cirros/ubuntu image with dedicated cpu policy, set the vm to error state via the nova CLI, wait for the vm to auto-recover, and ensure the vm is in active state and still pingable |
Instances running |
- On the controller, run: source /etc/nova/openrc
- Get the instance ID: $ openstack server list
- Set the instance to error state: $ openstack server set --state error <Instance-ID>
- Wait about 30 s - 1 min
- Ping the instance |
In Horizon at Project --> Instances, or on the CLI with "$ openstack server list", verify that the instance status returns to active
| ||||||||||||||||||||||||||||||||||
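The "wait for auto-recovery" step can be made deterministic with a small polling helper; `wait_for_status` is a hypothetical name, and the 60-second default matches the 30 s - 1 min wait window in the steps:

```shell
# Hedged sketch: poll "openstack server show" until the vm reaches the wanted
# status (e.g. ACTIVE after auto-recovery), or give up after N tries.
wait_for_status() {
  local vm="$1" want="$2" tries="${3:-12}"
  local i status
  for i in $(seq "$tries"); do
    status=$(openstack server show "$vm" -f value -c status)
    [ "$status" = "$want" ] && return 0
    sleep 5
  done
  return 1
}
# openstack server set --state error <Instance-ID>
# wait_for_status <Instance-ID> ACTIVE && ping -c 3 <instance-ip>
```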
Test_nova_actions_shared_suspend_resume | Launch a vm using cirros/centos-guest with shared cpu policy, suspend/resume it, ensure the vm is in active state and still pingable |
Instances running |
|
| ||||||||||||||||||||||||||||||||||
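The suspend/resume cycle can be sketched from the CLI; the vm name is an assumption:

```shell
# Hedged sketch: suspend and resume the shared-cpu-policy vm, checking the
# reported status after each action (vm name is hypothetical).
suspend_resume_vm() {
  local vm="${1:-test-vm}"
  openstack server suspend "$vm"
  openstack server show "$vm" -f value -c status   # expect SUSPENDED
  openstack server resume "$vm"
  openstack server show "$vm" -f value -c status   # expect ACTIVE; then ping the vm
}
# source /etc/nova/openrc && suspend_resume_vm test-vm
```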
test_horaizon_auto_recovery_volume | Edit an Image for volume in Horizon and add Instance Auto Recovery; verify the metadata is updated | simplex R5 installed | Go to the Titanium Cloud GUI and select the following options:
1. Images
2. Edit Metadata
3. Select Auto Recovery |
Image for Volume with metadata updated and the Instance Auto Recovery option added | ||||||||||||||||||||||||||||||||||
test_horaizon_auto_recovery_snapshot | Edit an Image for snapshot in Horizon and add Instance Auto Recovery; verify the metadata is updated |
1. simplex R5 installed
2. Snapshot created |
Go to the Titanium Cloud GUI and select the following options:
1. Snapshots
2. Edit Metadata
3. Select Auto Recovery |
Image for Snapshot with metadata updated and the Instance Auto Recovery option added | ||||||||||||||||||||||||||||||||||
test_horaizon_sw_wrs_auto_recovery_metadata_update_volume | Update the metadata of a Volume in Horizon, add the property sw_wrs_auto_recovery and verify that it can be set |
1. simplex R5 installed
2. Volume created || Go to the Titanium Cloud GUI and select the following options:
1. Go to Volumes > select a Volume > Edit Volume > Update Metadata
2. In the Update Volume Metadata window, search for: sw_wrs_auto_recovery
3. Select sw_wrs_auto_recovery |
Property sw_wrs_auto_recovery added successfully to the Volume | |||||||||||||||||||||||||||||||||||
test_horaizon_sw_wrs_auto_recovery_metadata_update_snapshot | Update the metadata of a Snapshot in Horizon, add the property sw_wrs_auto_recovery and verify that it can be set |
1. simplex R5 installed
2. Snapshot created |
Go to the Titanium Cloud GUI and select the following options:
1. Edit Snapshot
2. Edit Metadata
3. Select sw_wrs_auto_recovery |
Property sw_wrs_auto_recovery added successfully to the Snapshot | ||||||||||||||||||||||||||||||||||
Test_vm_meta_data_retrieval | Launch a vm from image, ssh to the vm and wget the instance_id metadata; ensure the instance_id metadata is the same as instance_name in the "openstack server show <vm-name>" command |
1. simplex R5 previously configured
2. Controller key pair added into Horizon: Project > Compute > Key Pairs. Create an ssh key for the controller with the following command: ssh-keygen -f controller -t rsa -P "" |
Go to the Titanium Cloud GUI and select the following options:
1. Project
2. Instances
3. Launch Instance |
A dialog appears to create the instance | ||||||||||||||||||||||||||||||||||
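The metadata-retrieval comparison in the summary can be sketched as follows: fetch the instance id from the metadata service inside the vm, and read the matching field on the controller (the `OS-EXT-SRV-ATTR:instance_name` field is the standard nova location for it):

```shell
# Hedged sketch. Inside the vm, fetch the instance id from the metadata service:
#   wget -qO- http://169.254.169.254/latest/meta-data/instance-id
# On the controller, read the value it should match:
show_instance_name() {
  openstack server show "$1" -f value -c OS-EXT-SRV-ATTR:instance_name
}
# [ "$(show_instance_name <vm-name>)" = "<value fetched inside the vm>" ]
```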
Test_horizon_login_time_regular | Login time to horizon set as Regular Multinode should be less than 5 seconds | Akraino product should be installed and set as Regular Multinode | Open Horizon in a web browser, using the default IP set while installing |
| ||||||||||||||||||||||||||||||||||
test_horizon_login_time_storage | Login time to horizon set as Storage should be less than 5 seconds | Akraino product should be installed and set as Storage |
|
| ||||||||||||||||||||||||||||||||||
test_horizon_login_time_all_in_one | Login time to horizon set as Simplex (all-in-one) should be less than 5 seconds | Akraino product should be installed and set as Simplex |
|
* Login web page should be displayed
| ||||||||||||||||||||||||||||||||||
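The under-5-seconds criterion in the three login-time tests above can be measured from the client with curl; the OAM/floating IP below is an assumption, so substitute the IP set during installation:

```shell
# Hedged sketch: measure how long the Horizon login page takes to load
# (10.10.10.2 is a hypothetical OAM IP; replace with your deployment's IP).
horizon_load_time() {
  local url="${1:-http://10.10.10.2/}"
  curl -o /dev/null -s -w '%{time_total}\n' "$url"
}
# horizon_load_time    # the printed total time should be well under 5 seconds
```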
test_vif_model_from_image[avp] | Check that the hw_vif_model from image metadata is applied to the vif when the vif_model is not specified |
|
Create a glance image with hw_vif_model image metadata set to avp:
$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros_avp
$ openstack image set --property hw_vif_model=avp cirros_avp
Create a volume from the created image:
$ nova boot --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 --flavor 1 --block-device source=image,id=dc7e4edb-24f8-4cec-8288-71da1e074c5b,dest=volume,size=5,shutdown=preserve,bootindex=0 avpInstanceFromVolume
Create a vm with 3 vnics: over mgmt-net with virtio vif_model, over tenant-net with no vif_model specified, over internal-net with avp vif_model:
$ nova boot --flavor 1 --image cirros-avp \
    --nic net-id=093d28e1-f0cd-4866-b45b-ecce07b1df55 \
    --nic net-id=db172669-4c08-4460-a3da-2cc5775744fc,vif-model=avp \
    --nic net-id=2ad6ee79-bb52-49a5-875d-12f4fa053150,vif-model=virtio \
    test-av-vm |
Check that the property was set correctly:
$ openstack image show cirros_avp
hw_vif_model='avp', store='file'
Volume created correctly:
$ cinder list
Check via nova show that the hw_vif_model from image metadata is applied to the vif that did not specify a vif_model. With the command below, verify in the field "wrs-if:nics" that "vif_model=avp" for nic2:
$ nova show test-av-vm
{"nic1": {"vif_model": "avp", "network": "net", "port_id": "fdfcfed8-66c9-436b-91f9-163f645b66f2", "mtu": 1500, "mac_address": "fa:16:3e:1b:c7:6c", "vif_pci_address": ""}} | ||||||||||||||||||||||||||||||||
test_ceilometer_<meter>_port_samples | Query ceilometer samples for meters, as well as resource id, ensuring samples exist | Akraino product should be installed and set as multi-node with 2 controllers, 1 compute and a vswitch. |
Query samples with ceilometer sample-list -m <name> -l <number> -q <query>, where:
- name: the name of the Ceilometer meter
- number: the maximum number of samples to return
- query: a list of metadata filters to apply to the samples, in the form 'metadata_type=filter_value; metadata_type=filter_value; ...'
e.g. ceilometer sample-list -m platform.cpu.util -l 10 -q 'metadata.host=controller-0' |
Logged in as admin, with a prompt as follows:
controller-X ~(keystone_admin)]$ |
Meters are listed, e.g.:
+------------+---------+
| Type       | Unit    |
+------------+---------+
| gauge      | MHz     |
| gauge      | percent |
| cumulative | ns      |
| gauge      | percent |
| cumulative | ns      |
| gauge      | percent |
| cumulative | ns      |
| gauge      | percent |
| gauge      | percent |
| cumulative | ns      |
| delta      | %       |
| delta      | %       |
| delta      | %       |
+------------+---------+
Samples are listed for each resource, e.g.:
+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| Type       | Unit    | Resource ID                          | User ID                          | Project ID                       |
+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
| gauge      | MHz     | controller-0_controller-0            | None                             | None                             |
| gauge      | percent | controller-0_controller-0            | None                             | None                             |
| cumulative | ns      | controller-0_controller-0            | None                             | None                             |
| gauge      | percent | controller-0_controller-0            | None                             | None                             |
| cumulative | ns      | controller-0_controller-0            | None                             | None                             |
| gauge      | percent | controller-0_controller-0            | None                             | None                             |
| cumulative | ns      | controller-0_controller-0            | None                             | None                             |
| gauge      | percent | controller-0_controller-0            | None                             | None                             |
| gauge      | percent | controller-0_controller-0            | None                             | None                             |
| cumulative | ns      | controller-0_controller-0            | None                             | None                             |
| delta      | %       | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
| delta      | %       | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
| delta      | %       | 14f6bfca-7286-4470-a805-3dfab4ce1b89 | 0dcd12579b3c42f195d96ebecd9697bf | 17a35bdd8d024aaea88af6b112dcf697 |
+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
Resource details are shown, e.g.:
+-------------+--------------------------------------------+
| Property    | Value                                      |
+-------------+--------------------------------------------+
| metadata    | {"host": "controller-0"}                   |
| project_id  | 17a35bdd8d024aaea88af6b112dcf697           |
| resource_id | 14f6bfca-7286-4470-a805-3dfab4ce1b89       |
| source      | 17a35bdd8d024aaea88af6b112dcf697:openstack |
| user_id     | 0dcd12579b3c42f195d96ebecd9697bf           |
+-------------+--------------------------------------------+ |
test_nova_actions[ubuntu_14-shared-stop-start] | Stop and start a vm launched from a Cirros/Ubuntu image with shared cpu policy; ensure the vm returns to active state | simplex R5 previously configured |
Check the flavor used by the instance (e.g. nova flavor-show p1.medium):
+----------------------------+------------------------------------------------------------------------------+
| Property                   | Value                                                                        |
+----------------------------+------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                                        |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                            |
| disk                       | 10                                                                           |
| id                         | f743498f-e66f-4f21-8290-ee67045120ed                                         |
| name                       | p1.medium                                                                    |
| os-flavor-access:is_public | True                                                                         |
| extra_specs                | aggregate_instance_extra_specs:storage='local_image', hw:cpu_policy='shared' |
| ram                        | 1024                                                                         |
| rxtx_factor                | 1.0                                                                          |
| swap                       |                                                                              |
| vcpus                      | 2                                                                            |
+----------------------------+------------------------------------------------------------------------------+
+-----------+------+------+-----------+-------+-----------+
| Name      | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+-----------+------+------+-----------+-------+-----------+
| s.p2      | 2048 | 25   | 0         | 2     | True      |
| s.p1      | 512  | 1    | 0         | 1     | True      |
| p1.medium | 1024 | 10   | 0         | 2     | True      |
+-----------+------+------+-----------+-------+-----------+
The cpu_policy as shared must be present in the output |
| ||||||||||||||||||
test_statistics_for_one_meter[image.size] | Check ceilometer statistics for meter 'image.size' |
Logged in as admin successfully, with a prompt as follows:
controller-X ~(keystone_admin)]$ |
You should see an entry with non-zero values for Count, Min, Max and Avg:
+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+
| Period Start        | Period End          | Max        | Min        | Avg        | Sum          | Count | Duration | Duration Start      | Duration End        |
+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+
| 2018-04-25T19:10:08 | 2018-04-26T19:30:16 | 12716032.0 | 12716032.0 | 12716032.0 | 5493325824.0 | 432   | 87608.0  | 2018-04-25T19:10:08 | 2018-04-26T19:30:16 |
+---------------------+---------------------+------------+------------+------------+--------------+-------+----------+---------------------+---------------------+ |
| |||||||||||||||||||||||||||||||||||
Test_heat_template [WR_Neutron_ProviderNetRange.yaml] | Create a new provider net and range via heat, and ensure the new providernet and range are listed in neutron providernet-list |
- Click Create Provider Network; in the Create Provider Network window, complete the fields as required:
  - Name: the name of the provider network.
  - Description: a free-text field for reference.
  - Type: the type of provider network to be created:
    - flat: mapped directly to the physical network.
    - vlan: supports multiple tenant networks using VLAN IDs.
    - vxlan: supports multiple tenant networks using VXLAN VNIs.
  - MTU: the maximum transmission unit for the Ethernet segment used to access the network. NOTE: to attach to the provider network, data interfaces must be configured with an equal or larger MTU.
- Click the Create Provider Network button. |
| |||||||||||||||||||||||||||||||||||
test_system_alarms_and_events_on_lock_unlock_compute | Lock a compute host, and ensure the relevant alarms and system events are generated. Unlock the host and ensure the alarm and events are cleared. | Akraino product should be installed and set up as multi-node with 2 controllers and at least 1 compute. |
|
|