= StarlingX/Installation Guide Virtual Environment/Dedicated Storage =
{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current version of the StarlingX documentation please see the [https://docs.starlingx.io/ documentation website].'''}}
  
== Preparing Servers ==
=== Bare Metal ===

Required Servers:

* Controllers: 2
* Storage
** Replication factor of 2: 2 - 8
** Replication factor of 3: 3 - 9
* Computes: 2 - 100

==== Hardware Requirements ====

The recommended minimum requirements for the physical servers on which StarlingX Dedicated Storage will be deployed are:
* Minimum Processor:
** Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
* Memory:
** 64 GB Controller, Storage
** 32 GB Compute
* BIOS:
** Hyper-Threading Tech: Enabled
** Virtualization Technology: Enabled
** VT for Directed I/O: Enabled
** CPU Power and Performance Policy: Performance
** CPU C State Control: Disabled
** Plug & Play BMC Detection: Disabled
* Primary Disk:
** 500 GB SSD or NVMe Controller
** 120 GB (min. 10K RPM) Compute, Storage
* Additional Disks:
** 1 or more 500 GB disks (min. 10K RPM) Storage, Compute
* Network Ports:
** Management: 10GE Controller, Storage, Compute
** OAM: 10GE Controller
** Data: n x 10GE Compute
=== Virtual Environment ===
Run the libvirt/qemu setup scripts. Set up the virtualized OAM and management networks:

<pre><nowiki>
$ bash setup_network.sh
</nowiki></pre>

Build the XML definitions of the virtual servers:

<pre><nowiki>
$ bash setup_standard_controller.sh -i <starlingx iso image>
</nowiki></pre>
==== Accessing Virtual Server Consoles ====
The XML definitions for the virtual servers in the stx-tools repo, under deployment/libvirt, provide both graphical and text consoles.

Access the graphical console in virt-manager by right-clicking on the domain (the server) and selecting "Open".

Access the textual console with the command "virsh console $DOMAIN", where DOMAIN is the name of the server shown in virsh.

When booting controller-0 for the first time, both the serial and graphical consoles present the initial configuration menu for the cluster. You can select either the serial or the graphical console for controller-0. For the other nodes, however, only the serial console is used, regardless of which option is selected.

Open the graphical console on all servers before powering them on, to observe the boot device selection and PXE boot progress. Run the "virsh console $DOMAIN" command promptly after power-on to see the initial boot sequence that follows the boot device selection; you have only a few seconds to do this.
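For example, the domains can be inspected and their consoles attached from the workstation shell. This is a sketch; the domain name controller-0 below is an assumption, so use the names reported by virsh list:

<pre><nowiki>
$ sudo virsh list --all            # show the defined domains and their state
$ sudo virsh start controller-0    # power on one domain
$ sudo virsh console controller-0  # attach its text console (exit with Ctrl+])
</nowiki></pre>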
  
 
==Controller-0 Host Installation==
 
  
Installing controller-0 involves initializing a host with software and then applying a bootstrap configuration from the command line. The configured, bootstrapped host becomes Controller-0. <br>
 
Procedure:
 
  
# Power on the server that will be controller-0 with the StarlingX ISO on a USB in a bootable USB slot.
 
# Configure the controller using the config_controller script.
 
  
 
===Initializing Controller-0===
 
This section describes how to initialize StarlingX in host Controller-0. Except where noted, all the commands must be executed from a console of the host.
 
 
Power on the host to be configured as Controller-0, with the StarlingX ISO on a USB flash drive in a bootable USB slot. Wait for the console to show the StarlingX ISO booting options:
* '''Standard Controller Configuration'''
** When the installer is loaded and the installer welcome screen appears on the Controller-0 host, select the installation type "Standard Controller Configuration".
* '''Graphical Console'''
** Select "Graphical Console" as the console to use during installation.
* '''Standard Security Boot Profile'''
** Select "Standard Security Boot Profile" as the Security Profile.
<br>
Monitor the initialization. When it is complete, a reboot is initiated on the Controller-0 host; it briefly displays a GNU GRUB screen and then boots automatically into the StarlingX image.
<br>
 
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password: enter the current password (wrsroot), then enter and confirm a new password.
 
===Configuring Controller-0===
 
  
This section describes how to perform the Controller-0 configuration interactively, just to bootstrap the system with minimum critical data. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).
  
When run interactively, the config_controller script presents a series of prompts for the initial configuration of StarlingX:
* For the Virtual Environment, you can accept all the default values immediately after ‘system date and time’.
* For a Physical Deployment, answer the bootstrap configuration questions with answers applicable to your particular physical setup.
<br>
The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters:
  
 
<pre><nowiki>
controller-0:~$ sudo config_controller
</nowiki></pre>
  
Accept all the default values immediately after ‘system date and time’:

<pre><nowiki>
...
Applying configuration (this will take several minutes):
...
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
</nowiki></pre>
After config_controller bootstrap configuration, REST API, CLI and Horizon interfaces are enabled on the controller-0 OAM IP Address.  The remaining installation instructions will use the CLI.
  
 
==Controller-0 and System Provision==
 
 
 
On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
===Configuring Provider Networks at Installation===

You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.
  
 
Set up one provider network of the vlan type, named providernet-a:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
</nowiki></pre>
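To confirm the result, the provider network and its segmentation range can be listed. These list subcommands are an assumption based on the same neutron providernet extensions used above; check 'neutron help' if they are not available:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-list
</nowiki></pre>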
 
 
===Provisioning Cinder Storage on Controller-0===
 
 
Review the available disk space and capacity, and obtain the UUID of the physical disk:
 
 
<pre><nowiki>
 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
 
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
 
| uuid                                | device_no | device_ | device_ | size_ | available_ | rpm          |...
 
|                                      | de        | num    | type    | gib  | gib        |              |...
 
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
 
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda  | 2048    | HDD    | 200.0 | 0.0        |              |...
 
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb  | 2064    | HDD    | 200.0 | 199.997    |              |...
 
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
 
[wrsroot@controller-0 ~(keystone_admin)]$
 
</nowiki></pre>
 
 
Create the 'cinder-volumes' local volume group:
 
 
<pre><nowiki>
 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
 
+-----------------+--------------------------------------+
 
| Property        | Value                                |
 
+-----------------+--------------------------------------+
 
| lvm_vg_name    | cinder-volumes                      |
 
| vg_state        | adding                              |
 
| uuid            | ece4c755-241c-4363-958e-85e9e3d12917 |
 
| ihost_uuid      | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
 
| lvm_vg_access  | None                                |
 
| lvm_max_lv      | 0                                    |
 
| lvm_cur_lv      | 0                                    |
 
| lvm_max_pv      | 0                                    |
 
| lvm_cur_pv      | 0                                    |
 
| lvm_vg_size_gib | 0.00                                |
 
| lvm_vg_total_pe | 0                                    |
 
| lvm_vg_free_pe  | 0                                    |
 
| created_at      | 2018-08-22T03:59:30.685718+00:00    |
 
| updated_at      | None                                |
 
| parameters      | {u'lvm_type': u'thin'}              |
 
+-----------------+--------------------------------------+
 
</nowiki></pre>
 
 
Create a disk partition to add to the volume group, based on the UUID of the physical disk:
 
 
<pre><nowiki>
 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
 
+-------------+--------------------------------------------------+
 
| Property    | Value                                            |
 
+-------------+--------------------------------------------------+
 
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
 
| device_node | /dev/sdb1                                        |
 
| type_guid  | ba5eba11-0000-1111-2222-000000000001            |
 
| type_name  | None                                            |
 
| start_mib  | None                                            |
 
| end_mib    | None                                            |
 
| size_mib    | 203776                                          |
 
| uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80            |
 
| ihost_uuid  | 150284e2-fb60-4169-ae75-7f444b8ca9bf            |
 
| idisk_uuid  | 89694799-0dd8-4532-8636-c0d8aabfe215            |
 
| ipv_uuid    | None                                            |
 
| status      | Creating                                        |
 
| created_at  | 2018-08-22T04:03:40.761221+00:00                |
 
| updated_at  | None                                            |
 
+-------------+--------------------------------------------------+
 
</nowiki></pre>
 
 
Wait for the new partition to be created (i.e. status=Ready):
 
 
<pre><nowiki>
 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
 
+--------------------------------------+-----------------------------+------------+...
 
| uuid                                | device_path                | device_nod |...
 
|                                      |                            | e          |...
 
+--------------------------------------+-----------------------------+------------+...
 
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 | /dev/disk/by-path/pci-0000: | /dev/sdb1  |...
 
|                                      | 00:03.0-ata-2.0-part1      |            |...
 
|                                      |                            |            |...
 
+--------------------------------------+-----------------------------+------------+...
 
</nowiki></pre>
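The wait can also be scripted. This is a sketch, assuming the partition status column (shown as 'Creating' in host-disk-partition-add above, truncated in this listing) is present in the full host-disk-partition-list output:

<pre><nowiki>
$ while ! system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215 | grep -q Ready; do
    sleep 5   # poll every 5 seconds until the partition status reaches Ready
  done
</nowiki></pre>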
 
 
Add the partition to the volume group:
 
 
<pre><nowiki>
 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
 
+--------------------------+--------------------------------------------------+
 
| Property                | Value                                            |
 
+--------------------------+--------------------------------------------------+
 
| uuid                    | 060dc47e-bc17-40f4-8f09-5326ef0e86a5            |
 
| pv_state                | adding                                          |
 
| pv_type                  | partition                                        |
 
| disk_or_part_uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80            |
 
| disk_or_part_device_node | /dev/sdb1                                        |
 
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
 
| lvm_pv_name              | /dev/sdb1                                        |
 
| lvm_vg_name              | cinder-volumes                                  |
 
| lvm_pv_uuid              | None                                            |
 
| lvm_pv_size_gib          | 0.0                                              |
 
| lvm_pe_total            | 0                                                |
 
| lvm_pe_alloced          | 0                                                |
 
| ihost_uuid              | 150284e2-fb60-4169-ae75-7f444b8ca9bf            |
 
| created_at              | 2018-08-22T04:06:54.008632+00:00                |
 
| updated_at              | None                                            |
 
+--------------------------+--------------------------------------------------+
 
 
</nowiki></pre>
 
  
 
===Adding a Ceph Storage Backend at Installation===
 
  
Add the CEPH storage backend:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova

WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED. 

By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.
</nowiki></pre>
 
Add CEPH storage, setting the 'confirmed' field:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed

System configuration has changed.
Please follow the administrator guide to complete configuring the system.

+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| uuid                                 | name       | backend | state       | task               | services |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configuring | applying-manifests | cinder,  |...
|                                      |            |         |             |                    | glance,  |...
|                                      |            |         |             |                    | swift,   |...
|                                      |            |         |             |                    | nova     |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured  | None               | glance   |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
</nowiki></pre>
  
 
Confirm the CEPH storage backend is configured:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| uuid                                 | name       | backend | state      | task              | services  |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configured | provision-storage | cinder,   |...
|                                      |            |         |            |                   | glance,   |...
|                                      |            |         |            |                   | swift,    |...
|                                      |            |         |            |                   | nova      |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured | None              | glance    |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
</nowiki></pre>
  
 
===Unlocking Controller-0===
 
  
You must unlock controller-0 so that you can use it to install the remaining hosts. Use the system host-unlock command:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
</nowiki></pre>
  
Verify that the StarlingX controller services are running:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id  | service_name                  | hostname     | state          |
+-----+-------------------------------+--------------+----------------+
...
|     | oam-ip                        | controller-0 | enabled-active |
|     | management-ip                 | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
</nowiki></pre>
  
Modify the sizes (in GiB) of the controller filesystems if required:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
</nowiki></pre>
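To review the resulting sizes, the controller filesystems can be listed. The controllerfs-list subcommand is an assumption based on the same 'system controllerfs' command family used above; check 'system help' if it is not available:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
</nowiki></pre>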
  
 
==Controller-1 / Storage Hosts / Compute Hosts Installation==
 
  
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. For each host do the following:
 
 
===Initializing Host===

Power on the host. In the host console you will see:
  
 
<pre><nowiki>
...
</nowiki></pre>
  
===Updating Host Name and Personality===

On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
  
Use the system host-add command to update the host personality attribute:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>
</nowiki></pre>
  
'''REMARK:''' use the MAC address of the specific network interface through which the node will be connected, e.g. the OAM network interface for the "Controller-1" node, and the Management network interface for "Compute" and "Storage" nodes.

Check the '''NIC''' MAC address in the Virtual Machine Manager GUI, under ''"Show virtual hardware details ('''i''')" --> NIC: --> the specific "Bridge name:"'', in the MAC Address text field.

===Monitoring Host===
  
 
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output      | text                                 |
| install_state       | booting                              |
...
</nowiki></pre>
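For scripting, the install_state value can be extracted from the host-show output. This helper is a sketch, not part of StarlingX; it only assumes the two-column "| install_state | <value> |" table layout shown above:

<pre><nowiki>
# Extract the install_state value from "system host-show <host>" output.
parse_install_state() {
  awk -F'|' '$2 ~ /^ *install_state *$/ { gsub(/ /, "", $3); print $3 }'
}
# Usage: system host-show <host> | parse_install_state
</nowiki></pre>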
  
Wait while the host is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, the host is reported as Locked, Disabled, and Online.
  
===Listing Hosts===
  
 
Once all Nodes have been installed, configured and rebooted, on Controller-0 list the hosts:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
...
| 7  | storage-2    | storage     | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
</nowiki></pre>
  
  
 
===Provisioning Network Interfaces on Controller-1===
 
  
To list the hardware port names, types, and pci-addresses that have been discovered:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
</nowiki></pre>
  
Provision the OAM interface for Controller-1.

'''Temporary''' changes to the host-if-modify command: check the help with 'system help host-if-modify'. If the help text lists the '-c <class>' option, execute the following command; otherwise use the form with '-nt' listed below:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
</nowiki></pre>
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -nt oam controller-1 <oam interface>
</nowiki></pre>
  
Line 473: Line 398:
  
 
Wait while the Controller-1 is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware.
 
Wait while the Controller-1 is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware.
 +
 +
'''REMARK:''' Controller-1 will remain in 'degraded' state until data-syncing is complete. The duration is dependant on the virtualization host's configuration - i.e., the number and configuration of physical disks used to host the nodes' virtual disks. Also, the management network is expected to have link capacity of 10000 (1000 is not supported due to excessive data-sync time). Use 'fm alarm-list' to confirm status.
  
 
<pre><nowiki>
 
<pre><nowiki>
Line 482: Line 409:
 
| 2  | controller-1 | controller  | unlocked      | enabled    | available    |
 
| 2  | controller-1 | controller  | unlocked      | enabled    | available    |
 
...
 
...
+----+--------------+-------------+----------------+-------------+--------------+
 
[wrsroot@controller-0 ~(keystone_admin)]$
 
 
</nowiki></pre>
 
</nowiki></pre>
  
 
==Storage Host Provisioning==

===Provisioning Network Interfaces on a Storage Host===

None

===Provisioning Storage on a Storage Host===
Available physical disks in Storage-N:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm          |...
|                                      | de        | num     | type    | gib   | gib        |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda  | 2048    | HDD     | 292.  | 0.0        | Undetermined |...
|                                      |           |         |         | 968   |            |              |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     | Undetermined |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      | Undetermined |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
  
 
Available storage tiers in Storage-N:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid                                 | name    | status | backend_using                        |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
  
Line 556: Line 467:
 
| updated_at      | 2018-08-16T00:40:07.626762+00:00                |
 
| updated_at      | 2018-08-16T00:40:07.626762+00:00                |
 
+------------------+--------------------------------------------------+
 
+------------------+--------------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$
 
 
</nowiki></pre>
 
</nowiki></pre>
  
Create the remaining available storage functions (OSDs) in Storage-N based on the number of available physical disks.

List the OSDs:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid                                 | function | osdid | capabilities | idisk_uuid                           |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
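The OSD listing can also be checked programmatically, e.g. to confirm one OSD exists per data disk before repeating the steps on the next storage host. A minimal sketch; the heredoc reuses the sample row above and would be replaced by the live command output:

```shell
# Sketch: count configured OSDs from `system host-stor-list` output.
# On a live system, replace the heredoc with: system host-stor-list storage-0
stor_list=$(cat <<'EOF'
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
EOF
)
# Count rows whose function column is 'osd'.
osd_count=$(printf '%s\n' "$stor_list" | grep -c '| osd ')
echo "storage-0 has $osd_count OSD(s)"
```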
  
'''REMARK:''' Before you continue, repeat the Provisioning Storage steps on the remaining storage nodes.

== Compute Host Provision ==

You must configure the network interfaces and the storage disks on a host before you can unlock it. For each Compute Host do the following:

On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>

===Provisioning Network Interfaces on a Compute Host===
On Controller-0, list the hardware port names, types and pci-addresses that have been discovered:

* '''Only in Virtual Environment''': Ensure that the interface used is one of those attached to a host bridge with model type "virtio" (i.e. eth1000 and eth1001). The emulated devices with model type "e1000" will not work for provider networks.

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
</nowiki></pre>

Provision the data interface for the Compute node. '''Temporary''' changes to the host-if-modify command: check the help with 'system help host-if-modify'. If the help text lists the '-c <class>' option, execute the following command; otherwise use the form with '-nt' listed below:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
</nowiki></pre>

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 eth1000
</nowiki></pre>
=== VSwitch Virtual Environment ===

'''Only in Virtual Environment'''. If the compute has more than 4 cpus, the system will auto-configure the vswitch to use 2 cores. However, some virtual environments do not properly support the multi-queue feature required in a multi-cpu environment. Therefore, run the following command to reduce the vswitch cores to 1:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid                                 | log_c | processor | phy_c | thread |...
|                                      | ore   |           | ore   |        |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0     | 0         | 0     | 0      |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1     | 0         | 1     | 0      |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2     | 0         | 2     | 0      |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3     | 0         | 3     | 0      |...
+--------------------------------------+-------+-----------+-------+--------+...
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
  
===Provisioning Storage on a Compute Host===

Review the available disk space and capacity and obtain the uuid(s) of the physical disk(s) to be used for nova-local:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ |...
|                                      | de        | num     | type    | gib   | gib        |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda  | 2048    | HDD     | 292.  | 265.132    |...
|                                      |           |         |         | 968   |            |...
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     |...
| a50995d0-7048-4e91-852e-1e1fb113996b | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
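The uuid needed by the next step can be pulled out of the table with a little awk. A minimal sketch using the sample rows above; on a live system, feed it the real host-disk-list output:

```shell
# Sketch: extract the uuid of the disk intended for nova-local (here /dev/sdb)
# from `system host-disk-list` output.
# On a live system, replace the heredoc with: system host-disk-list compute-0
disk_list=$(cat <<'EOF'
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda  | 2048    | HDD     | 292.  | 265.132    |
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     |
EOF
)
# Field 2 is the uuid, field 3 the device node (fields are '|'-separated).
disk_uuid=$(printf '%s\n' "$disk_list" | awk -F'|' '$3 ~ /\/dev\/sdb/ {gsub(/ /, "", $2); print $2}')
echo "$disk_uuid"
```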
  
Create the 'nova-local' local volume group:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
...
</nowiki></pre>

Create a disk partition to add to the volume group, based on the uuid of the physical disk:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add compute-0 <disk uuid> <size in GiB> -t lvm_phys_vol
...
</nowiki></pre>
  
Remote RAW Ceph storage backing will be used to back the nova-local ephemeral volumes:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
</nowiki></pre>

Unlock the Compute host:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
</nowiki></pre>
  
Wait while the Compute-N is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware. The host is rebooted, and its Availability State is reported as In-Test, followed by unlocked/enabled.

==System Health Check==

===Listing StarlingX Nodes===

On Controller-0, after a few minutes, all nodes shall be reported as Unlocked, Enabled, and Available:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
...
+----+--------------+-------------+----------------+-------------+--------------+
</nowiki></pre>

===System Alarm List===

When all nodes are Unlocked, Enabled and Available: check 'fm alarm-list' for issues.

Your StarlingX deployment is now up and running with 2x HA Controllers with Cinder Storage, 1x Compute, 3x Storages and all OpenStack services up and running. You can now proceed with standard OpenStack APIs, CLIs and/or Horizon to load Glance Images, configure Nova Flavors, configure Neutron networks and launch Nova Virtual Machines.
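The final health check can be automated by parsing the host-list table. A minimal sketch using sample rows from this guide; on a live system, feed it the real 'system host-list' output:

```shell
# Sketch: verify every host reports unlocked/enabled/available.
# On a live system, replace the heredoc with: system host-list
host_list=$(cat <<'EOF'
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
EOF
)
# Count table rows that are NOT fully ready.
not_ready=$(printf '%s\n' "$host_list" | grep -cv 'unlocked .*| enabled .*| available' || true)
if [ "$not_ready" -eq 0 ]; then
  echo "all hosts ready"
else
  echo "$not_ready host(s) not ready"
fi
```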

Latest revision as of 18:27, 19 February 2020

Warning - Deprecated

This wiki page is out of date and now deprecated. For the current version of the StarlingX documentation please see the documentation website (https://docs.starlingx.io/).

Preparing Servers

Bare Metal

Required Servers:

  • Controllers: 2
  • Storage
    • Replication factor of 2: 2 - 8
    • Replication factor of 3: 3 - 9
  • Computes: 2 - 100

Hardware Requirements

The recommended minimum requirements for the physical servers where StarlingX Dedicated Storage will be deployed include:

  • ‘Minimum’ Processor:
    • Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
  • Memory:
    • 64 GB Controller, Storage
    • 32 GB Compute
  • BIOS:
    • Hyper-Threading Tech: Enabled
    • Virtualization Technology: Enabled
    • VT for Directed I/O: Enabled
    • CPU Power and Performance Policy: Performance
    • CPU C State Control: Disabled
    • Plug & Play BMC Detection: Disabled
  • Primary Disk:
    • 500 GB SSD or NVMe Controller
    • 120 GB (min. 10K RPM) Compute, Storage
  • Additional Disks:
    • 1 or more 500 GB disks (min. 10K RPM) Storage, Compute
  • Network Ports*
    • Management: 10GE Controller, Storage, Compute
    • OAM: 10GE Controller
    • Data: n x 10GE Compute

Virtual Environment

Run the libvirt qemu setup scripts. Set up the virtualized OAM and Management networks:

$ bash setup_network.sh

Build the XML definitions of the virtual servers:

$ bash setup_standard_controller.sh -i <starlingx iso image>

Accessing Virtual Server Consoles

The XML definitions for the virtual servers, in the stx-tools repo under deployment/libvirt, provide both graphical and text consoles.

Access the graphical console in virt-manager by right-clicking the domain (the server) and selecting "Open".

Access the textual console with the command "virsh console $DOMAIN", where DOMAIN is the name of the server shown in virsh.

When booting controller-0 for the first time, both the serial and graphical consoles present the initial configuration menu for the cluster. You can select the serial or graphical console for controller-0; for the other nodes, however, only the serial console is used, regardless of which option is selected.

Open the graphical console on all servers before powering them on, to observe the boot device selection and PXE boot progress. Run the "virsh console $DOMAIN" command promptly after power-on to see the initial boot sequence that follows the boot device selection; you have only a few seconds to do this.

Controller-0 Host Installation

Installing controller-0 involves initializing a host with software and then applying a bootstrap configuration from the command line. The configured bootstrapped host becomes Controller-0.
Procedure:

  1. Power on the server that will be controller-0 with the StarlingX ISO on a USB in a bootable USB slot.
  2. Configure the controller using the config_controller script.

Initializing Controller-0

This section describes how to initialize StarlingX on the host Controller-0. Except where noted, all the commands must be executed from a console of the host.

Power on the host to be configured as Controller-0, with the StarlingX ISO on a USB in a bootable USB slot. Wait for the console to show the StarlingX ISO booting options:

  • Standard Controller Configuration
    • When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".
  • Graphical Console
    • Select the "Graphical Console" as the console to use during installation.
  • Standard Security Boot Profile
    • Select "Standard Security Boot Profile" as the Security Profile.


Monitor the initialization. When it is complete, a reboot is initiated on the Controller-0 host; it briefly displays a GNU GRUB screen and then boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

Changing password for wrsroot.
(current) UNIX Password:

Enter a new password for the wrsroot account:

New password:

Enter the new password again to confirm it:

Retype new password:

Controller-0 is initialized with StarlingX, and is ready for configuration.

Configuring Controller-0

This section describes how to perform the Controller-0 configuration interactively, just to bootstrap the system with the minimum critical data. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).

When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX:

  • For the Virtual Environment, you can accept all the default values immediately after ‘system date and time’.
  • For a Physical Deployment, answer the bootstrap configuration questions with answers applicable to your particular physical setup.


The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters:

controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...

Accept all the default values immediately after ‘system date and time’

...
Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.

After config_controller bootstrap configuration, REST API, CLI and Horizon interfaces are enabled on the controller-0 OAM IP Address. The remaining installation instructions will use the CLI.

Controller-0 and System Provision

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Configuring Provider Networks at Installation

You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.

Set up one provider network of the vlan type, named providernet-a:

[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
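A range passed to providernet-range-create should fall inside the valid VLAN ID space. A minimal sketch of that sanity check, using the 100-400 range from the command above:

```shell
# Sketch: sanity-check a VLAN segmentation range before creating it.
# Valid IEEE 802.1Q VLAN IDs are 1-4094.
range_min=100
range_max=400
vlan_ok=0
if [ "$range_min" -ge 1 ] && [ "$range_max" -le 4094 ] && [ "$range_min" -le "$range_max" ]; then
  vlan_ok=1
fi
echo "vlan_ok=$vlan_ok"
```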

Adding a Ceph Storage Backend at Installation

Add CEPH Storage backend:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova

WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED. 

By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed

System configuration has changed.
Please follow the administrator guide to complete configuring the system.

+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| uuid                                 | name       | backend | state       | task               | services |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configuring | applying-manifests | cinder,  |...
|                                      |            |         |             |                    | glance,  |...
|                                      |            |         |             |                    | swift    |...
|                                      |            |         |             |                    | nova     |...
|                                      |            |         |             |                    |          |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured  | None               | glance   |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...

Confirm CEPH storage is configured

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| uuid                                 | name       | backend | state      | task              | services  |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configured | provision-storage | cinder,   |...
|                                      |            |         |            |                   | glance,   |...
|                                      |            |         |            |                   | swift     |...
|                                      |            |         |            |                   | nova      |...
|                                      |            |         |            |                   |           |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured | None              | glance    |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
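Scripts that continue only after Ceph is ready can parse the state column of the storage-backend-list table. A minimal sketch using the sample row above; on a live system, the heredoc would be replaced by the real command output:

```shell
# Sketch: read the ceph backend state from `system storage-backend-list` output.
# On a live system, replace the heredoc with: system storage-backend-list
backend_list=$(cat <<'EOF'
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configured | provision-storage | cinder,   |
EOF
)
# Field 5 is the state column (fields are '|'-separated).
state=$(printf '%s\n' "$backend_list" | awk -F'|' '/ceph-store/ {gsub(/ /, "", $5); print $5}')
echo "ceph-store state: $state"
```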

Unlocking Controller-0

You must unlock controller-0 so that you can use it to install the remaining hosts. Use the system host-unlock command:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0

The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.

Verifying the Controller-0 Configuration

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Verify that the StarlingX controller services are running:

[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id  | service_name                  | hostname     | state          |
+-----+-------------------------------+--------------+----------------+
...
| 1   | oam-ip                        | controller-0 | enabled-active |
| 2   | management-ip                 | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+

Verify that controller-0 is unlocked, enabled, and available:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

Provisioning Filesystem Storage

List the controller filesystems with status and current sizes

[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| UUID                                 | FS Name         | Size | Logical Volume     | Replicated | State |
|                                      |                 | in   |                    |            |       |
|                                      |                 | GiB  |                    |            |       |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup          | 5    | backup-lv          | False      | None  |
| 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database        | 5    | pgsql-lv           | True       | None  |
| 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension       | 1    | extension-lv       | True       | None  |
| 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance          | 8    | cgcs-lv            | True       | None  |
| 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi         | 5    | gnocchi-lv         | False      | None  |
| 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8    | img-conversions-lv | False      | None  |
| 5811713f-def2-420b-9edf-6680446cd379 | scratch         | 8    | scratch-lv         | False      | None  |
+--------------------------------------+-----------------+------+--------------------+------------+-------+

Modify filesystem sizes

[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12

Controller-1 / Storage Hosts / Compute Hosts Installation

After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. For each host do the following:

Initializing Host

Power on Host. In host console you will see:

Waiting for this node to be configured.

Please configure the personality for this node from the
controller node in order to proceed.

Updating Host Name and Personality

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Wait for Controller-0 to discover the new host; list the hosts until the new UNKNOWN host shows up in the table:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

Use the system host-add command to update the host personality attribute:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>

REMARK: Use the MAC address of the network interface through which the node is connected, e.g. the OAM network interface for the "Controller-1" node, and the Management network interface for the "Compute" and "Storage" nodes.

Check the NIC MAC address in the Virtual Machine Manager GUI: open "Show virtual hardware details" (i) in the main banner, select the NIC attached to the specific "Bridge name:", and read the MAC Address text field.
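Since host-add matches the booting node by MAC address, it is worth validating the address format before running the command. A minimal sketch; the example address is the interface MAC shown elsewhere in this guide:

```shell
# Sketch: validate a MAC address before passing it to `system host-add -m`.
mac="08:00:27:f8:46:7e"
mac_ok=0
if printf '%s' "$mac" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'; then
  mac_ok=1
fi
echo "mac_ok=$mac_ok"
```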

Monitoring Host

On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.

[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output      | text                                 |
| install_state       | booting                              |
| install_state_info  | None                                 |

Wait while the host is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, the host is reported as Locked, Disabled, and Online.
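The install_state field shown above lends itself to a polling loop. A minimal sketch of the parsing step, using the sample output above; on a live system, the heredoc would be replaced by the host-show pipeline and polled in a loop:

```shell
# Sketch: extract install_state from `system host-show <host> | grep install`.
# On a live system, replace the heredoc with that pipeline, and poll it
# (e.g. every 30 s) until installation completes.
host_show=$(cat <<'EOF'
| install_output      | text                                 |
| install_state       | booting                              |
| install_state_info  | None                                 |
EOF
)
# Match the install_state row (not install_state_info); field 3 is the value.
install_state=$(printf '%s\n' "$host_show" | awk -F'|' '$2 ~ /install_state / {gsub(/ /, "", $3); print $3}')
echo "install_state: $install_state"
```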

Listing Hosts

Once all Nodes have been installed, configured and rebooted, on Controller-0 list the hosts:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | locked         | disabled    | online       |
| 4  | compute-0    | compute     | locked         | disabled    | online      |
| 5  | storage-0    | storage     | locked         | disabled    | online      |
| 6  | storage-1    | storage     | locked         | disabled    | online      |
| 7  | storage-2    | storage     | locked         | disabled    | online      |
+----+--------------+-------------+----------------+-------------+--------------+
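At this point every new node should be locked/disabled/online. The hostnames still awaiting provisioning can be pulled from the table. A minimal sketch using a few sample rows from above:

```shell
# Sketch: list hosts still locked (awaiting provisioning) from `system host-list` output.
# On a live system, replace the heredoc with: system host-list
host_list=$(cat <<'EOF'
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | locked         | disabled    | online       |
| 4  | compute-0    | compute     | locked         | disabled    | online       |
EOF
)
# Field 3 is the hostname, field 5 the administrative state.
locked=$(printf '%s\n' "$host_list" | awk -F'|' '$5 ~ /locked/ && $5 !~ /unlocked/ {gsub(/ /, "", $3); print $3}')
echo "$locked"
```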

Controller-1 Provisioning

On Controller-0, list hosts

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2  | controller-1 | controller  | locked         | disabled    | online       |
...
+----+--------------+-------------+----------------+-------------+--------------+

Provisioning Network Interfaces on Controller-1

In order to list out hardware port names, types, pci-addresses that have been discovered:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1

Provision the oam interface for Controller-1.

Temporary changes to the host-if-modify command: check the help with 'system help host-if-modify'. If the help text lists the '-c <class>' option, execute the following command; otherwise use the form with '-nt' listed below:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -nt oam controller-1 <oam interface>

Unlocking Controller-1

Unlock Controller-1

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1

Wait while the Controller-1 is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware.

REMARK: Controller-1 will remain in the 'degraded' state until data syncing is complete. The duration depends on the virtualization host's configuration, i.e. the number and configuration of the physical disks used to host the nodes' virtual disks. Also, the management network is expected to have a link capacity of 10000 (1000 is not supported due to excessive data-sync time). Use 'fm alarm-list' to confirm status.

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
...

Storage Host Provisioning

Provisioning Storage on a Storage Host

List the available physical disks on Storage-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm          |...
|                                      | de        | num     | type    | gib   | gib        |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda  | 2048    | HDD     | 292.  | 0.0        | Undetermined |...
|                                      |           |         |         | 968   |            |              |...
|                                      |           |         |         |       |            |              |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     | Undetermined |...
|                                      |           |         |         |       |            |              |...
|                                      |           |         |         |       |            |              |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      |...
|                                      |           |         |         |       |            |              |...
|                                      |           |         |         |       |            |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
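
The uuids of the disks that still have usable space can be pulled out of such a listing with a short pipeline. This is a sketch against an abbreviated copy of the sample output above; on a live controller the 'system host-disk-list' output would be piped directly.

```shell
# Sketch: extract the uuids of disks that still have usable space from a
# captured 'system host-disk-list' run. 'sample' is abbreviated from the
# output above; on a live controller, pipe the command output instead.
sample='| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda | 292.968 | 0.0    |
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb | 100.0   | 99.997 |
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc | 4.0     | 3.997  |'

# Columns here: uuid, device node, size_gib, available_gib.
# Keep rows whose available_gib is non-zero and print the uuid column.
uuids="$(printf '%s\n' "$sample" | awk -F'|' '$5 + 0 > 0 { gsub(/ /, "", $2); print $2 }')"
echo "$uuids"
```

Here /dev/sda is skipped because its available space is 0.0 (it holds the rootfs); the remaining disks are OSD candidates.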

List the available storage tiers in the Ceph cluster:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid                                 | name    | status | backend_using                        |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+

Create a storage function (an OSD) on Storage-N, using the uuid of an available physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| ihost_uuid       | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994             |
| idisk_uuid       | c7cc08e6-ff18-4229-a79d-a04187de7b8d             |
| tier_uuid        | 4398d910-75e4-4e99-a57f-fc147fb87bdb             |
| tier_name        | storage                                          |
| created_at       | 2018-08-16T00:39:44.409448+00:00                 |
| updated_at       | 2018-08-16T00:40:07.626762+00:00                 |
+------------------+--------------------------------------------------+

Create the remaining storage functions (OSDs) on Storage-N, one per remaining available physical disk.
List the OSDs:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid                                 | function | osdid | capabilities | idisk_uuid                           |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+

Unlock Storage-N

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0

REMARK: Before you continue, repeat the Provisioning Storage steps on the remaining storage nodes.
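
The repetition can be sketched as a loop. The commands below are echoed for review rather than executed; on a live controller the 'echo' would be dropped and each node's real disk uuid substituted (found with 'system host-disk-list <node>').

```shell
# Sketch: repeat the storage provisioning sequence on the remaining nodes.
# Commands are echoed, not executed; '<disk uuid>' is a per-node placeholder.
plan="$(for node in storage-1 storage-2; do
  echo "system host-disk-list $node"             # find the unused disk's uuid
  echo "system host-stor-add $node <disk uuid>"  # create an OSD on that disk
  echo "system host-stor-list $node"             # confirm the OSD exists
  echo "system host-unlock $node"                # unlock the node
done)"
printf '%s\n' "$plan"
```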

Compute Host Provisioning

You must configure the network interfaces and the storage disks on a host before you can unlock it. For each compute host, do the following:

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Provisioning Network Interfaces on a Compute Host

On Controller-0, list the hardware port names, types, and PCI addresses that have been discovered:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0

  • Only in Virtual Environment: Ensure that the interface used is one of those attached to a host bridge with model type "virtio" (i.e., eth1000 and eth1001). Devices emulated with model type "e1000" will not work for provider networks.

Provision the data interface for the compute host. Note: the host-if-modify syntax is in transition. Check the help text with 'system help host-if-modify': if it lists a '-c <class>' option, execute the first command below; otherwise use the '-nt' form:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 eth1000

VSwitch Virtual Environment

Only in Virtual Environment. If the compute host has more than 4 CPUs, the system will auto-configure the vswitch to use 2 cores. However, some virtual environments do not properly support the multi-queue feature required in a multi-CPU environment. Therefore, run the following command to reduce the vswitch cores to 1:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid                                 | log_c | processor | phy_c | thread |...
|                                      | ore   |           | ore   |        |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0     | 0         | 0     | 0      |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1     | 0         | 1     | 0      |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2     | 0         | 2     | 0      |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3     | 0         | 3     | 0      |...
+--------------------------------------+-------+-----------+-------+--------+...

Provisioning Storage on a Compute Host

Review the available disk space and capacity, and obtain the uuid(s) of the physical disk(s) to be used for nova-local:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ |...
|                                      | de        | num     | type    | gib   | gib        |...
+--------------------------------------+-----------+---------+---------+-------+------------+
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda  | 2048    | HDD     | 292.  | 265.132    |...
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     |...
+--------------------------------------+-----------+---------+---------+-------+------------+...

Create the 'nova-local' local volume group:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property        | Value                                                             |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name     | nova-local                                                        |
| vg_state        | adding                                                            |
| uuid            | 37f4c178-f0fe-422d-b66e-24ae057da674                              |
| ihost_uuid      | f56921a6-8784-45ac-bd72-c0372cd95964                              |
| lvm_vg_access   | None                                                              |
| lvm_max_lv      | 0                                                                 |
| lvm_cur_lv      | 0                                                                 |
| lvm_max_pv      | 0                                                                 |
| lvm_cur_pv      | 0                                                                 |
| lvm_vg_size_gib | 0.00                                                              |
| lvm_vg_total_pe | 0                                                                 |
| lvm_vg_free_pe  | 0                                                                 |
| created_at      | 2018-08-16T00:57:46.340454+00:00                                  |
| updated_at      | None                                                              |
| parameters      | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+

Add the physical disk to the 'nova-local' volume group, using the uuid of the physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
+--------------------------+--------------------------------------------+
| Property                 | Value                                      |
+--------------------------+--------------------------------------------+
| uuid                     | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc       |
| pv_state                 | adding                                     |
| pv_type                  | disk                                       |
| disk_or_part_uuid        | a639914b-23a9-4071-9f25-a5f1960846cc       |
| disk_or_part_device_node | /dev/sdb                                   |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| lvm_pv_name              | /dev/sdb                                   |
| lvm_vg_name              | nova-local                                 |
| lvm_pv_uuid              | None                                       |
| lvm_pv_size_gib          | 0.0                                        |
| lvm_pe_total             | 0                                          |
| lvm_pe_alloced           | 0                                          |
| ihost_uuid               | f56921a6-8784-45ac-bd72-c0372cd95964       |
| created_at               | 2018-08-16T01:05:59.013257+00:00           |
| updated_at               | None                                       |
+--------------------------+--------------------------------------------+

Configure remote RAW Ceph storage to back the nova-local ephemeral volumes:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
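
If the deployment has additional compute hosts, the same nova-local sequence applies to each. A sketch (compute-1 and compute-2 are hypothetical host names; commands are echoed rather than executed, and the backing-disk uuid differs per host):

```shell
# Sketch: the nova-local storage sequence for additional compute hosts.
# Commands are echoed, not executed; '<disk uuid>' is a per-host placeholder.
plan="$(for node in compute-1 compute-2; do
  echo "system host-lvg-add $node nova-local"               # create the volume group
  echo "system host-pv-add $node nova-local <disk uuid>"    # add the backing disk
  echo "system host-lvg-modify -b remote $node nova-local"  # remote Ceph-backed ephemeral
done)"
printf '%s\n' "$plan"
```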

Unlocking a Compute Host

On Controller-0, use the system host-unlock command to unlock the Compute-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0

Wait while Compute-N reboots. Up to 10 minutes may be required, depending on hardware. The host reboots, and its availability state is reported as in-test, followed by unlocked/enabled.

System Health Check

Listing StarlingX Nodes

On Controller-0, after a few minutes, all nodes should be reported as unlocked, enabled, and available:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | unlocked       | enabled     | available    |
| 4  | compute-0    | compute     | unlocked       | enabled     | available    |
| 5  | storage-0    | storage     | unlocked       | enabled     | available    |
| 6  | storage-1    | storage     | unlocked       | enabled     | available    |
| 7  | storage-2    | storage     | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Checking StarlingX CEPH Health

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
     health HEALTH_OK
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
     osdmap e84: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
            87444 kB used, 197 GB / 197 GB avail
                1600 active+clean
controller-0:~$ 
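
The overall health field can also be checked in a script. The sketch below runs against a captured sample of the output above so no live cluster is needed; on a controller, 'sample' would be replaced with "$(ceph -s)".

```shell
# Sketch: pull the health field out of 'ceph -s' output.
# 'sample' is a captured (abbreviated) copy of the output above.
sample='    cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
     health HEALTH_OK
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0}'
status="$(printf '%s\n' "$sample" | awk '/health/ { print $2; exit }')"
echo "$status"
```

Anything other than HEALTH_OK (e.g., HEALTH_WARN) warrants a closer look with 'ceph health detail' and 'fm alarm-list'.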

System Alarm List

When all nodes are unlocked, enabled, and available, check 'fm alarm-list' for issues.

Your StarlingX deployment is now up and running with 2x HA controllers with Cinder storage, 1x compute, 3x storage nodes, and all OpenStack services running. You can now proceed with the standard OpenStack APIs, CLIs, and/or Horizon to load Glance images, configure Nova flavors, configure Neutron networks, and launch Nova virtual machines.