StarlingX/Installation Guide Virtual Environment/Dedicated Storage


Configure Virtual Servers

Run the libvirt/QEMU setup script:

$ bash setup_standard_controller.sh
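To confirm that the virtual servers were defined, you can list the libvirt domains; the controller, compute, and storage virtual machines created by the script should appear (a quick verification, not part of the original procedure):

$ sudo virsh list --all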

Standard Dedicated Storage Virtual Environment

Controller-0 Host Installation

Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.
Procedure:

  1. Initialize the host using a StarlingX ISO image.
  2. Configure the controller using the config_controller script.

Initializing Controller-0

This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation. Make sure the Virtual Machine Manager is open.

From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:

  • When the installer is loaded and the installer welcome screen appears on the Controller-0 host, select the installation type "Standard Controller Configuration".
  • Select "Graphical Console" as the console to use during installation.
  • Select "Standard Security Boot Profile" as the Security Profile.
  • Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays the GNU GRUB screen, and then boots automatically into the StarlingX image.

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

Changing password for wrsroot.
(current) UNIX Password:

Enter a new password for the wrsroot account:

New password:

Enter the new password again to confirm it:

Retype new password:

Controller-0 is initialized with StarlingX, and is ready for configuration.

Configuring Controller-0

This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).

When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters:

controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...

Accept all the default values immediately after 'system date and time', and enter y at the Distributed Cloud Configuration prompt:

...
Distributed Cloud Configuration:
-----------------------------------------

Configure Distributed Cloud System Controller [y/N]: y
...
Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
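As a sketch of a non-interactive alternative, config_controller can also read its answers from a configuration file; the option name and file format are assumptions that may vary by release, so check config_controller --help on your build:

controller-0:~$ sudo config_controller --config-file /home/wrsroot/system_config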

Controller-0 and System Provisioning

Configuring Provider Networks at Installation

You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Set up one provider network of the vlan type, named providernet-a:

[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
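To verify the provider network and its segmentation range, list them; these listing commands assume the same providernet extensions used by the create commands above:

[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-list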

Provisioning Cinder Storage on Controller-0

Review the available disk space and capacity, and obtain the UUID of the physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm          |...
|                                      | de        | num     | type    | gib   | gib        |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda  | 2048    | HDD     | 200.0 | 0.0        |              |...
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb  | 2064    | HDD     | 200.0 | 199.997    |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
[wrsroot@controller-0 ~(keystone_admin)]$ 

Create the 'cinder-volumes' local volume group:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| lvm_vg_name     | cinder-volumes                       |
| vg_state        | adding                               |
| uuid            | ece4c755-241c-4363-958e-85e9e3d12917 |
| ihost_uuid      | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| lvm_vg_access   | None                                 |
| lvm_max_lv      | 0                                    |
| lvm_cur_lv      | 0                                    |
| lvm_max_pv      | 0                                    |
| lvm_cur_pv      | 0                                    |
| lvm_vg_size_gib | 0.00                                 |
| lvm_vg_total_pe | 0                                    |
| lvm_vg_free_pe  | 0                                    |
| created_at      | 2018-08-22T03:59:30.685718+00:00     |
| updated_at      | None                                 |
| parameters      | {u'lvm_type': u'thin'}               |
+-----------------+--------------------------------------+

Create a disk partition to add to the volume group, based on the UUID of the physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property    | Value                                            |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1                                        |
| type_guid   | ba5eba11-0000-1111-2222-000000000001             |
| type_name   | None                                             |
| start_mib   | None                                             |
| end_mib     | None                                             |
| size_mib    | 203776                                           |
| uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| ihost_uuid  | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| idisk_uuid  | 89694799-0dd8-4532-8636-c0d8aabfe215             |
| ipv_uuid    | None                                             |
| status      | Creating                                         |
| created_at  | 2018-08-22T04:03:40.761221+00:00                 |
| updated_at  | None                                             |
+-------------+--------------------------------------------------+

Wait for the new partition to be created (i.e., status=Ready):

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
+--------------------------------------+-----------------------------+------------+...
| uuid                                 | device_path                 | device_nod |...
|                                      |                             | e          |...
+--------------------------------------+-----------------------------+------------+...
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 | /dev/disk/by-path/pci-0000: | /dev/sdb1  |...
|                                      | 00:03.0-ata-2.0-part1       |            |...
|                                      |                             |            |...
+--------------------------------------+-----------------------------+------------+...

Add the partition to the volume group:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
+--------------------------+--------------------------------------------------+
| Property                 | Value                                            |
+--------------------------+--------------------------------------------------+
| uuid                     | 060dc47e-bc17-40f4-8f09-5326ef0e86a5             |
| pv_state                 | adding                                           |
| pv_type                  | partition                                        |
| disk_or_part_uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| disk_or_part_device_node | /dev/sdb1                                        |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name              | /dev/sdb1                                        |
| lvm_vg_name              | cinder-volumes                                   |
| lvm_pv_uuid              | None                                             |
| lvm_pv_size_gib          | 0.0                                              |
| lvm_pe_total             | 0                                                |
| lvm_pe_alloced           | 0                                                |
| ihost_uuid               | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| created_at               | 2018-08-22T04:06:54.008632+00:00                 |
| updated_at               | None                                             |
+--------------------------+--------------------------------------------------+
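To confirm the physical volume was added to the volume group, list the physical volumes on the host; a verification step added here for convenience:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0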

Adding a Ceph Storage Backend at Installation

Verify the requirements by running the command without the --confirmed flag; the -s service list is optional:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph [-s cinder[,glance[,swift]]]

WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED. 

By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.

Add the Ceph storage backend with the --confirmed flag:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph [-s cinder[,glance[,swift]]] --confirmed

System configuration has changed.
Please follow the administrator guide to complete configuring the system.

+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
| uuid                                 | name       | backend | state      | task | services | capabilities                                     |
+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
| 5131a848-25ea-4cd8-bbce-0d65c84183df | ceph-store | ceph    | configured | None | None     | {u'min_replication': u'1', u'replication': u'2'} |
| d63b05b2-5b61-408c-ac46-48ec48f4e4f0 | file-store | file    | configured | None | glance   | {}                                               |
+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Confirm that the Ceph storage backend is configured:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
| uuid                                 | name       | backend | state      | task | services | capabilities                                     |
+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
| 5131a848-25ea-4cd8-bbce-0d65c84183df | ceph-store | ceph    | configured | None | None     | {u'min_replication': u'1', u'replication': u'2'} |
| d63b05b2-5b61-408c-ac46-48ec48f4e4f0 | file-store | file    | configured | None | glance   | {}                                               |
+--------------------------------------+------------+---------+------------+------+----------+--------------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Unlocking Controller-0

You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Use the system host-unlock command:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0

The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.

Verifying the Controller-0 Configuration

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Verify that the StarlingX controller services are running:

[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| Id                                   | Binary           | Host         | Zone     | Status  | State | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    | ...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    | ...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...

Verify that controller-0 is unlocked, enabled, and available:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

Provisioning Filesystem Storage

List the controller filesystems with status and current sizes:

[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| UUID                                 | FS Name         | Size | Logical Volume     | Replicated | State |
|                                      |                 | in   |                    |            |       |
|                                      |                 | GiB  |                    |            |       |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup          | 5    | backup-lv          | False      | None  |
| 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database        | 5    | pgsql-lv           | True       | None  |
| 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension       | 1    | extension-lv       | True       | None  |
| 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance          | 8    | cgcs-lv            | True       | None  |
| 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi         | 5    | gnocchi-lv         | False      | None  |
| 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8    | img-conversions-lv | False      | None  |
| 5811713f-def2-420b-9edf-6680446cd379 | scratch         | 8    | scratch-lv         | False      | None  |
+--------------------------------------+-----------------+------+--------------------+------------+-------+

Modify the filesystem sizes:

[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
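Re-running the listing shows the updated sizes; a resize in progress is reflected in the State column (a verification step, not part of the original procedure):

[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list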

Controller-1 / Storage Hosts / Compute Hosts Installation

After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. For each Node-N host, do the following:

Initializing Node-N Host

On the workstation, start Node-N:

$ sudo virsh start Node-N

In the Node-N console, you will see:

Waiting for this node to be configured.

Please configure the personality for this node from the
controller node in order to proceed.

Updating Node-N Host Name and Personality

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Wait for Controller-0 to discover the new host. List the hosts until the new host appears in the table with no hostname set:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

Use the system host-update command to set the Node-N host name and personality:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 2 personality=controller hostname=controller-1
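The same command assigns the other personalities. For example, for the compute and storage hosts discovered later in this guide (the host IDs are illustrative and must match your own system host-list output; storage hosts are assigned hostnames such as storage-0 automatically):

[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 4 personality=storage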

Monitoring Node-N Host

On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.

[wrsroot@controller-0 ~(keystone_admin)]$ system host-show Node-N | grep install
| install_output      | text                                 |
| install_state       | booting                              |
| install_state_info  | None                                 |
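To poll the progress automatically, a simple watch loop can be used instead of re-running the command by hand (illustrative; controller-1 stands in for Node-N and the 30-second interval is arbitrary):

[wrsroot@controller-0 ~(keystone_admin)]$ watch -n 30 'system host-show controller-1 | grep install'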

Wait while Node-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, Node-N is reported as Locked, Disabled, and Online.

Listing Node-N Hosts

Once all nodes have been installed, configured, and rebooted, list the hosts on Controller-0:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | locked         | disabled    | online       |
| 4  | compute-0    | compute     | locked         | disabled    | online       |
| 5  | storage-0    | storage     | locked         | disabled    | online       |
| 6  | storage-1    | storage     | locked         | disabled    | online       |
| 7  | storage-2    | storage     | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Controller-1 Provisioning

On Controller-0, list the hosts:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2  | controller-1 | controller  | locked         | disabled    | online       |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Provisioning Network Interfaces on Controller-1

Provision the Controller-1 OAM interface:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n ens6 -nt oam controller-1 ens6
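The interface settings can be confirmed by listing the host interfaces; the -a flag, which also shows unconfigured interfaces, is assumed to be available on this build:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-1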

Provisioning Storage on Controller-1

Review the available disk space and capacity, and obtain the UUID of the physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-1

Create the 'cinder-volumes' local volume group for Cinder storage:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-1 cinder-volumes

Create a disk partition to add to the volume group, based on the UUID of the physical disk:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-1 70b83394-968e-4f0d-8a99-7985cd282a21 199 -t lvm_phys_vol

Wait for the new partition to be created (i.e., status=Ready):

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-1 --disk 70b83394-968e-4f0d-8a99-7985cd282a21

Add the partition to the volume group:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-1 cinder-volumes 16a1c5cb-620c-47a3-be4b-022eafd122ee

Unlocking Controller-1

Unlock Controller-1:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1

Wait while Controller-1 is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware.

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Storage Host Provisioning

Provisioning Network Interfaces on a Storage Host

No additional network interface provisioning is required on storage hosts; they use only the internal management network configured during installation.

Provisioning Storage on a Storage Host

List the available physical disks in Storage-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm          |...
|                                      | de        | num     | type    | gib   | gib        |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda  | 2048    | HDD     | 292.  | 0.0        | Undetermined |...
|                                      |           |         |         | 968   |            |              |...
|                                      |           |         |         |       |            |              |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     | Undetermined |...
|                                      |           |         |         |       |            |              |...
|                                      |           |         |         |       |            |              |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      | Undetermined |...
|                                      |           |         |         |       |            |              |...
|                                      |           |         |         |       |            |              |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
[wrsroot@controller-0 ~(keystone_admin)]$ 

List the available storage tiers:

[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid                                 | name    | status | backend_using                        |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Create a storage function (an OSD) in Storage-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| ihost_uuid       | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994             |
| idisk_uuid       | c7cc08e6-ff18-4229-a79d-a04187de7b8d             |
| tier_uuid        | 4398d910-75e4-4e99-a57f-fc147fb87bdb             |
| tier_name        | storage                                          |
| created_at       | 2018-08-16T00:39:44.409448+00:00                 |
| updated_at       | 2018-08-16T00:40:07.626762+00:00                 |
+------------------+--------------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

List the OSDs and their UUIDs:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid                                 | function | osdid | capabilities | idisk_uuid                           |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Unlock Storage-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0
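The remaining storage hosts follow the same provisioning and unlock sequence. As a sketch, the unlock step for the other storage hosts in this example cluster:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-1
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-2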

Compute Host Provisioning

You must configure the network interfaces and the storage disks on a host before you can unlock it.

On Controller-0, acquire Keystone administrative privileges:

controller-0:~$ source /etc/nova/openrc

Provisioning Network Interfaces on a Compute-N Host

On Controller-0, provision the Compute-N data interfaces:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| ifname           | ens6                                 |
| networktype      | data                                 |
| iftype           | ethernet                             |
| ports            | [u'ens6']                            |
| providernetworks | providernet-a                        |
| imac             | 08:00:27:f8:46:7e                    |
| imtu             | 1500                                 |
| aemode           | None                                 |
| schedpolicy      | None                                 |
| txhashpolicy     | None                                 |
| uuid             | f3158e60-a6ef-44eb-b902-1586fb79c362 |
| ihost_uuid       | f56921a6-8784-45ac-bd72-c0372cd95964 |
| vlan_id          | None                                 |
| uses             | []                                   |
| used_by          | []                                   |
| created_at       | 2018-08-16T00:52:30.698025+00:00     |
| updated_at       | 2018-08-16T00:53:35.602066+00:00     |
| sriov_numvfs     | 0                                    |
| ipv4_mode        | disabled                             |
| ipv6_mode        | disabled                             |
| accelerated      | [False]                              |
+------------------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Provisioning Storage on a Compute-N Host

On Controller-0, review the available physical disks, create the 'nova-local' local volume group, and add a physical volume to it:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+
| uuid                                 | device_no | device_ | device_ | size_ | available_ |
|                                      | de        | num     | type    | gib   | gib        |
+--------------------------------------+-----------+---------+---------+-------+------------+
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda  | 2048    | HDD     | 292.  | 265.132    |
|                                      |           |         |         | 968   |            |
|                                      |           |         |         |       |            |
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     |
|                                      |           |         |         |       |            |
|                                      |           |         |         |       |            |
| a50995d0-7048-4e91-852e-1e1fb113996b | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      |
|                                      |           |         |         |       |            |
|                                      |           |         |         |       |            |
+--------------------------------------+-----------+---------+---------+-------+------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property        | Value                                                             |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name     | nova-local                                                        |
| vg_state        | adding                                                            |
| uuid            | 37f4c178-f0fe-422d-b66e-24ae057da674                              |
| ihost_uuid      | f56921a6-8784-45ac-bd72-c0372cd95964                              |
| lvm_vg_access   | None                                                              |
| lvm_max_lv      | 0                                                                 |
| lvm_cur_lv      | 0                                                                 |
| lvm_max_pv      | 0                                                                 |
| lvm_cur_pv      | 0                                                                 |
| lvm_vg_size_gib | 0.00                                                              |
| lvm_vg_total_pe | 0                                                                 |
| lvm_vg_free_pe  | 0                                                                 |
| created_at      | 2018-08-16T00:57:46.340454+00:00                                  |
| updated_at      | None                                                              |
| parameters      | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list compute-0 --disk a639914b-23a9-4071-9f25-a5f1960846cc
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
+--------------------------+--------------------------------------------+
| Property                 | Value                                      |
+--------------------------+--------------------------------------------+
| uuid                     | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc       |
| pv_state                 | adding                                     |
| pv_type                  | disk                                       |
| disk_or_part_uuid        | a639914b-23a9-4071-9f25-a5f1960846cc       |
| disk_or_part_device_node | /dev/sdb                                   |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| lvm_pv_name              | /dev/sdb                                   |
| lvm_vg_name              | nova-local                                 |
| lvm_pv_uuid              | None                                       |
| lvm_pv_size_gib          | 0.0                                        |
| lvm_pe_total             | 0                                          |
| lvm_pe_alloced           | 0                                          |
| ihost_uuid               | f56921a6-8784-45ac-bd72-c0372cd95964       |
| created_at               | 2018-08-16T01:05:59.013257+00:00           |
| updated_at               | None                                       |
+--------------------------+--------------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Set remote RAW Ceph storage backing for nova-local (note: at the time of writing, this command returned the error shown below):

[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote -s 2048 compute-0 nova-local
argument of type 'NoneType' is not iterable
[wrsroot@controller-0 ~(keystone_admin)]$ 

Unlocking a Compute Host

On Controller-0, use the system host-unlock command to unlock the Compute-N:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0

Wait while Compute-N reboots; up to 10 minutes may be required, depending on hardware. After the reboot, the host's Availability State is reported as In-Test.

System Health Check

Listing StarlingX Node-N Hosts

After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:

On Controller-0:

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | unlocked       | enabled     | available    |
| 4  | compute-0    | compute     | unlocked       | enabled     | available    |
| 5  | storage-0    | storage     | unlocked       | enabled     | available    |
| 6  | storage-1    | storage     | unlocked       | enabled     | available    |
| 7  | storage-2    | storage     | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$ 

Checking StarlingX Ceph Health

[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
     health HEALTH_OK
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
     osdmap e84: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
            87444 kB used, 197 GB / 197 GB avail
                1600 active+clean
controller-0:~$
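For a per-OSD view of the cluster, the standard Ceph tree listing can be used as an additional check:

controller-0:~$ ceph osd tree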