StarlingX/Installation Guide Virtual Environment/Dedicated Storage
Nodes Configuration
Bare Metal
Minimum Quantity of Servers
- Controllers: 2
- Storage
- Replication factor of 2: 2 - 8
- Replication factor of 3: 3 - 9
- Computes: 2 - 100
Hardware Requirements
The recommended minimum requirements for the physical servers where StarlingX Dedicated Storage will be deployed include:
- Processor: Dual-CPU Intel® Xeon®
- Memory:
- 64 GB Controller
- 32 GB Compute
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Primary Disk:
- 500 GB SSD or NVMe
- Additional Disks:
- 500 GB SSD or NVMe (Controller / Storage / Compute)
- Network Ports
- Management: 1 x 10GE
- OAM: 1 x 10GE
- Data: 1 x 10GE
Virtual Environment
Get the scripts
$ git clone https://git.starlingx.io/stx-tools
Run the libvirt qemu setup scripts:
$ bash setup_network.sh
$ bash setup_standard_controller.sh -i <starlingx iso image>
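Before continuing, you can confirm that the scripts created the expected virtual networks and defined the domains; a quick sanity check with standard libvirt commands (the exact network and domain names depend on what the scripts define on your workstation):
$ virsh net-list --all    # virtual networks created by setup_network.sh
$ virsh list --all        # defined domains (servers) and their state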
Accessing Server Consoles
The XML definitions for the domains in the stx-tools repo, under deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking the domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN", where DOMAIN is the name of the server shown in virsh.
When booting controller-0 for the first time, both the serial and graphical consoles present the initial configuration menu for the cluster. You can select either the serial or the graphical console for controller-0. For the other nodes, however, only the serial console is used, regardless of which option is selected.
Open the graphical console on all servers before powering them on to observe the boot device selection and PXE boot progress. Run the "virsh console $DOMAIN" command promptly after power-on to see the initial boot sequence that follows the boot device selection; you have only a few seconds to do this.
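For example, to power on controller-0 and attach to its serial console in one step (assuming the domain is named controller-0 on your workstation):
$ sudo virsh start controller-0 && sudo virsh console controller-0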
Controller-0 Host Installation
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.
Procedure:
- Initialize the host using a StarlingX ISO image.
- Configure the controller using the config_controller script.
Initializing Controller-0
This section describes how to initialize StarlingX on host Controller-0. Except where noted, all the commands must be executed from a console on the workstation.
Power on the host to be configured as Controller-0 and wait for the console to show the StarlingX ISO booting options:
- Standard Controller Configuration
- When the installer is loaded and the installer welcome screen appears on the Controller-0 host, select the type of installation "Standard Controller Configuration".
- Graphical Console
- Select the "Graphical Console" as the console to use during installation.
- Standard Security Boot Profile
- Select "Standard Security Boot Profile" as the Security Profile.
- Initialization Complete
- Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
New password:
Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for configuration.
Configuring Controller-0
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters:
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept all the default values immediately after the 'system date and time' prompt.
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
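For repeatable installs, config_controller also supported a non-interactive mode driven by a pre-populated answer file; treat the exact option name as an assumption and verify it against your release (e.g. via sudo config_controller --help):
controller-0:~$ sudo config_controller --config-file <config-file>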
Controller-0 and System Provisioning
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Configuring Provider Networks at Installation
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.
Set up one provider network of the vlan type, named providernet-a:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
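You can list the provider network back to double-check its type and VLAN range before attaching compute data interfaces (this uses the same neutron providernet extension as the commands above):
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list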
Provisioning Cinder Storage on Controller-0
Review the available disk space and capacity, and obtain the UUID of the physical disk:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda    | 2048       | HDD         | 200.0    | 0.0           |     |...
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb    | 2064       | HDD         | 200.0    | 199.997       |     |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
[wrsroot@controller-0 ~(keystone_admin)]$
Create the 'cinder-volumes' local volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| lvm_vg_name     | cinder-volumes                       |
| vg_state        | adding                               |
| uuid            | ece4c755-241c-4363-958e-85e9e3d12917 |
| ihost_uuid      | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| lvm_vg_access   | None                                 |
| lvm_max_lv      | 0                                    |
| lvm_cur_lv      | 0                                    |
| lvm_max_pv      | 0                                    |
| lvm_cur_pv      | 0                                    |
| lvm_vg_size_gib | 0.00                                 |
| lvm_vg_total_pe | 0                                    |
| lvm_vg_free_pe  | 0                                    |
| created_at      | 2018-08-22T03:59:30.685718+00:00     |
| updated_at      | None                                 |
| parameters      | {u'lvm_type': u'thin'}               |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group, based on the UUID of the physical disk:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property    | Value                                            |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1                                        |
| type_guid   | ba5eba11-0000-1111-2222-000000000001             |
| type_name   | None                                             |
| start_mib   | None                                             |
| end_mib     | None                                             |
| size_mib    | 203776                                           |
| uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| ihost_uuid  | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| idisk_uuid  | 89694799-0dd8-4532-8636-c0d8aabfe215             |
| ipv_uuid    | None                                             |
| status      | Creating                                         |
| created_at  | 2018-08-22T04:03:40.761221+00:00                 |
| updated_at  | None                                             |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
+--------------------------------------+...+-------------+...+---------------------+----------+--------+
| uuid                                 |...| device_node |...| type_name           | size_mib | status |
+--------------------------------------+...+-------------+...+---------------------+----------+--------+
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |...| /dev/sdb1   |...| LVM Physical Volume | 199.0    | Ready  |
+--------------------------------------+...+-------------+...+---------------------+----------+--------+
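Instead of re-running the list command by hand, a small shell loop can poll until the partition reports Ready; a convenience sketch built only from the command shown above:
[wrsroot@controller-0 ~(keystone_admin)]$ while ! system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215 | grep -q Ready; do sleep 10; done   # poll every 10s until status=Ready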
Add the partition to the volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
+--------------------------+--------------------------------------------------+
| Property                 | Value                                            |
+--------------------------+--------------------------------------------------+
| uuid                     | 060dc47e-bc17-40f4-8f09-5326ef0e86a5             |
| pv_state                 | adding                                           |
| pv_type                  | partition                                        |
| disk_or_part_uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| disk_or_part_device_node | /dev/sdb1                                        |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name              | /dev/sdb1                                        |
| lvm_vg_name              | cinder-volumes                                   |
| lvm_pv_uuid              | None                                             |
| lvm_pv_size_gib          | 0.0                                              |
| lvm_pe_total             | 0                                                |
| lvm_pe_alloced           | 0                                                |
| ihost_uuid               | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| created_at               | 2018-08-22T04:06:54.008632+00:00                 |
| updated_at               | None                                             |
+--------------------------+--------------------------------------------------+
Adding a Ceph Storage Backend at Installation
Verify requirements
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova

WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.

By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.
Add CEPH storage
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed

System configuration has changed.
Please follow the administrator guide to complete configuring the system.

+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| uuid                                 | name       | backend | state       | task               | services |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configuring | applying-manifests | cinder,  |...
|                                      |            |         |             |                    | glance,  |...
|                                      |            |         |             |                    | swift,   |...
|                                      |            |         |             |                    | nova     |...
|                                      |            |         |             |                    |          |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured  | None               | glance   |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
Confirm CEPH storage is configured
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| uuid                                 | name       | backend | state      | task              | services  |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configured | provision-storage | cinder,   |...
|                                      |            |         |            |                   | glance,   |...
|                                      |            |         |            |                   | swift,    |...
|                                      |            |         |            |                   | nova      |...
|                                      |            |         |            |                   |           |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured | None              | glance    |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
Unlocking Controller-0
You must unlock controller-0 so that you can use it to install the remaining hosts. Use the system host-unlock command:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.
Verifying the Controller-0 Configuration
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+...
| Id                                   | Binary           | Host         | Zone     | Status  | State |...
+--------------------------------------+------------------+--------------+----------+---------+-------+...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    |...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    |...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    |...
+--------------------------------------+------------------+--------------+----------+---------+-------+...
Verify that controller-0 is unlocked, enabled, and available:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
Provisioning Filesystem Storage
List the controller filesystems with status and current sizes
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
+--------------------------------------+-----------------+-------------+--------------------+------------+-------+
| UUID                                 | FS Name         | Size in GiB | Logical Volume     | Replicated | State |
+--------------------------------------+-----------------+-------------+--------------------+------------+-------+
| 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup          | 5           | backup-lv          | False      | None  |
| 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database        | 5           | pgsql-lv           | True       | None  |
| 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension       | 1           | extension-lv       | True       | None  |
| 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance          | 8           | cgcs-lv            | True       | None  |
| 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi         | 5           | gnocchi-lv         | False      | None  |
| 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8           | img-conversions-lv | False      | None  |
| 5811713f-def2-420b-9edf-6680446cd379 | scratch         | 8           | scratch-lv         | False      | None  |
+--------------------------------------+-----------------+-------------+--------------------+------------+-------+
Modify filesystem sizes
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
Controller-1 / Storage Hosts / Compute Hosts Installation
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. For each Node-N host, do the following:
Initializing Node-N Host
Bare Metal
Power on Node-N
Virtual Environment
Start Node-N on the workstation:
$ sudo virsh start Node-N
In the Node-N console you will see:
Waiting for this node to be configured.
Please configure the personality for this node from the controller node in order to proceed.
Updating Node-N Host Name and Personality
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover the new host; list the hosts until the new host (with hostname None) shows up in the table:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-add command to set the Node-N host name and personality attributes:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>
REMARK: Use the MAC address of the specific network interface the node is connected through, e.g. the OAM network interface for the Controller-1 node, and the Management network interface for compute and storage nodes.
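For example, assuming the new host is Controller-1 and its OAM interface reports the placeholder MAC address 08:00:27:aa:bb:cc (substitute the real value from the node's console or BMC):
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n controller-1 -p controller -m 08:00:27:aa:bb:cc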
Monitoring Node-N Host
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show Node-N | grep install
| install_output     | text    |
| install_state      | booting |
| install_state_info | None    |
Wait while the Node-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, the Node-N is reported as Locked, Disabled, and Online.
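To follow the installation without re-running system host-show by hand, you can wrap it in watch (replace Node-N with the actual host name):
[wrsroot@controller-0 ~(keystone_admin)]$ watch -n 60 'system host-show Node-N | grep install'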
Listing Node-N Hosts
Once all Nodes have been installed, configured and rebooted, on Controller-0 list the hosts:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | locked         | disabled    | online       |
| 4  | compute-0    | compute     | locked         | disabled    | online       |
| 5  | storage-0    | storage     | locked         | disabled    | online       |
| 6  | storage-1    | storage     | locked         | disabled    | online       |
| 7  | storage-2    | storage     | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Controller-1 Provisioning
On Controller-0, list the hosts:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2  | controller-1 | controller  | locked         | disabled    | online       |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Provisioning Network Interfaces on Controller-1
List all platform networks
[wrsroot@controller-0 ~(keystone_admin)]$ system network-list
+----+--------------------------------------+-----------+-----------+---------+--------------------------------------+
| id | uuid                                 | name      | type      | dynamic | pool_uuid                            |
+----+--------------------------------------+-----------+-----------+---------+--------------------------------------+
| 1  | 28dc7fe6-0f5e-426a-94d8-b027817d337e | mgmt      | mgmt      | True    | 4d7055b8-a730-49d6-b118-eb94fd65e535 |
| 4  | 2f66010e-18e4-4170-a3be-6c0bdeb79566 | oam       | oam       | False   | 87c4e34c-47f5-460d-a1a8-86528ef4586a |
| 5  | 3b27b5fb-9a60-4cba-93ae-b6401ed6a158 | multicast | multicast | False   | fd1ec52a-0ca9-4f58-b8d5-30c13eb80d3e |
| 2  | 4fef09d2-03fb-4fd1-91b2-d54d3c59b281 | pxeboot   | pxeboot   | True    | 46c9de62-af14-4d92-8d96-5bde1f03de97 |
| 3  | 99a40b51-5843-4eec-92d7-0d1c49840f14 | infra     | infra     | True    | a4ad705c-dc31-4e15-960f-d8b2c843d4b9 |
+----+--------------------------------------+-----------+-----------+---------+--------------------------------------+
Provision the Controller-1 OAM interface:
REMARK: Before you run the command below, log in to Controller-1 and make sure its OAM interface is up, has an IP address assigned, and can ping Controller-0.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n ens6 -c platform --networks oam controller-1 ens6
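You can confirm the interface class and network assignment by listing all of Controller-1's interfaces (ens6 is the interface name from this example and may differ on your hardware):
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-1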
Provisioning Storage on Controller-1
Review the available disk space and capacity, and obtain the UUID of the physical disk:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
| 67053c9e-cdae-4429-9443-d31bd5dbc7d2 | /dev/sda    | 2048       | HDD         | 200.0    | 0.0           | Undetermined |...
| 942900fb-15b6-411d-9721-ede8c1922729 | /dev/sdb    | 2064       | HDD         | 200.0    | 199.997       | Undetermined |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
Assign Cinder storage to the physical disk
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-1 cinder-volumes
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| lvm_vg_name     | cinder-volumes                       |
| vg_state        | adding                               |
| uuid            | a2700ead-6250-4dd4-be5f-4a843cae87de |
| ihost_uuid      | 1724e9b1-c794-44a8-a366-5f16a4276a4d |
| lvm_vg_access   | None                                 |
| lvm_max_lv      | 0                                    |
| lvm_cur_lv      | 0                                    |
| lvm_max_pv      | 0                                    |
| lvm_cur_pv      | 0                                    |
| lvm_vg_size_gib | 0.00                                 |
| lvm_vg_total_pe | 0                                    |
| lvm_vg_free_pe  | 0                                    |
| created_at      | 2018-08-27T09:44:12.300129+00:00     |
| updated_at      | None                                 |
| parameters      | {u'lvm_type': u'thin'}               |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the physical disk
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-1 942900fb-15b6-411d-9721-ede8c1922729 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property    | Value                                            |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1                                        |
| type_guid   | ba5eba11-0000-1111-2222-000000000001             |
| type_name   | None                                             |
| start_mib   | None                                             |
| end_mib     | None                                             |
| size_mib    | 203776                                           |
| uuid        | f5a7ecc3-8f4b-4e66-9b28-4b4a1744e9cf             |
| ihost_uuid  | 1724e9b1-c794-44a8-a366-5f16a4276a4d             |
| idisk_uuid  | 942900fb-15b6-411d-9721-ede8c1922729             |
| ipv_uuid    | None                                             |
| status      | Creating (on unlock)                             |
| created_at  | 2018-08-27T09:46:49.894462+00:00                 |
| updated_at  | None                                             |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready). Note that on Controller-1 the partition remains in the "Creating (on unlock)" state until the host is unlocked:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-1 --disk 942900fb-15b6-411d-9721-ede8c1922729
+--------------------------------------+...+-------------+...+-----------+----------+----------------------+
| uuid                                 |...| device_node |...| type_name | size_gib | status               |
+--------------------------------------+...+-------------+...+-----------+----------+----------------------+
| f5a7ecc3-8f4b-4e66-9b28-4b4a1744e9cf |...| /dev/sdb1   |...| None      | 199.0    | Creating (on unlock) |
+--------------------------------------+...+-------------+...+-----------+----------+----------------------+
Add the partition to the volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-1 cinder-volumes f5a7ecc3-8f4b-4e66-9b28-4b4a1744e9cf
Unlocking Controller-1
Unlock Controller-1
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while the Controller-1 is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
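From Controller-0, which remains available while Controller-1 reboots, a simple poll saves re-typing the list command:
[wrsroot@controller-0 ~(keystone_admin)]$ watch -n 30 system host-list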
Storage Host Provisioning
Provisioning Network Interfaces on a Storage Host
None required.
Provisioning Storage on a Storage Host
Available physical disks in Storage-N
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb    | 2064       | HDD         | 100.0    | 99.997        | Undetermined |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc    | 2080       | HDD         | 4.0      | 3.997         |              |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+...
Available storage tiers in Storage-N
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid                                 | name    | status | backend_using                        |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
Create a storage function (an OSD) in Storage-N
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
| ihost_uuid       | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994             |
| idisk_uuid       | c7cc08e6-ff18-4229-a79d-a04187de7b8d             |
| tier_uuid        | 4398d910-75e4-4e99-a57f-fc147fb87bdb             |
| tier_name        | storage                                          |
| created_at       | 2018-08-16T00:39:44.409448+00:00                 |
| updated_at       | 2018-08-16T00:40:07.626762+00:00                 |
+------------------+--------------------------------------------------+
Create the remaining storage functions (OSDs) in Storage-N based on the number of available physical disks; the sketch below loops over them.
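This sketch assumes that every remaining data disk appears as /dev/sdc or later in the host-disk-list table and that each disk prints as a single row; review the output above before running anything like it:
[wrsroot@controller-0 ~(keystone_admin)]$ for uuid in $(system host-disk-list storage-0 | awk '/\/dev\/sd[c-z]/ {print $2}'); do system host-stor-add storage-0 $uuid; done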
List the UUIDs
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid                                 | function | osdid | capabilities | idisk_uuid                           |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
Unlock Storage-N
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0
Compute Host Provisioning
You must configure the network interfaces and the storage disks on a host before you can unlock it.
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Provisioning Network Interfaces on a Compute-N Host
On Controller-0, provision the Compute-N data interfaces
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 ens6
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| ifname           | ens6                                 |
| iftype           | ethernet                             |
| ports            | [u'ens6']                            |
| providernetworks | providernet-a                        |
| imac             | 08:00:27:f8:46:7e                    |
| imtu             | 1500                                 |
| ifclass          | data                                 |
| aemode           | None                                 |
| schedpolicy      | None                                 |
| txhashpolicy     | None                                 |
| uuid             | f3158e60-a6ef-44eb-b902-1586fb79c362 |
| ihost_uuid       | f56921a6-8784-45ac-bd72-c0372cd95964 |
| vlan_id          | None                                 |
| uses             | []                                   |
| used_by          | []                                   |
| created_at       | 2018-08-16T00:52:30.698025+00:00     |
| updated_at       | 2018-08-16T00:53:35.602066+00:00     |
| sriov_numvfs     | 0                                    |
| ipv4_mode        | disabled                             |
| ipv6_mode        | disabled                             |
| accelerated      | [False]                              |
+------------------+--------------------------------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Provisioning Storage on a Compute-N Host
Review the available disk space and capacity, and obtain the UUID of the physical disk:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib |
+--------------------------------------+-------------+------------+-------------+----------+---------------+
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda    | 2048       | HDD         | 292.968  | 265.132       |
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb    | 2064       | HDD         | 100.0    | 99.997        |
| a50995d0-7048-4e91-852e-1e1fb113996b | /dev/sdc    | 2080       | HDD         | 4.0      | 3.997         |
+--------------------------------------+-------------+------------+-------------+----------+---------------+
Create the 'nova-local' local volume group:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property        | Value                                                             |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name     | nova-local                                                        |
| vg_state        | adding                                                            |
| uuid            | 37f4c178-f0fe-422d-b66e-24ae057da674                              |
| ihost_uuid      | f56921a6-8784-45ac-bd72-c0372cd95964                              |
| lvm_vg_access   | None                                                              |
| lvm_max_lv      | 0                                                                 |
| lvm_cur_lv      | 0                                                                 |
| lvm_max_pv      | 0                                                                 |
| lvm_cur_pv      | 0                                                                 |
| lvm_vg_size_gib | 0.00                                                              |
| lvm_vg_total_pe | 0                                                                 |
| lvm_vg_free_pe  | 0                                                                 |
| created_at      | 2018-08-16T00:57:46.340454+00:00                                  |
| updated_at      | None                                                              |
| parameters      | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Add the physical disk, identified by its UUID, to the 'nova-local' volume group:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
+--------------------------+---------------------------------------------+
| Property                 | Value                                       |
+--------------------------+---------------------------------------------+
| uuid                     | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc        |
| pv_state                 | adding                                      |
| pv_type                  | disk                                        |
| disk_or_part_uuid        | a639914b-23a9-4071-9f25-a5f1960846cc        |
| disk_or_part_device_node | /dev/sdb                                    |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0  |
| lvm_pv_name              | /dev/sdb                                    |
| lvm_vg_name              | nova-local                                  |
| lvm_pv_uuid              | None                                        |
| lvm_pv_size_gib          | 0.0                                         |
| lvm_pe_total             | 0                                           |
| lvm_pe_alloced           | 0                                           |
| ihost_uuid               | f56921a6-8784-45ac-bd72-c0372cd95964        |
| created_at               | 2018-08-16T01:05:59.013257+00:00            |
| updated_at               | None                                        |
+--------------------------+---------------------------------------------+
Configure remote RAW Ceph storage backing for the nova-local volume group:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
Unlocking a Compute Host
On Controller-0, use the system host-unlock command to unlock the Compute-N:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the Compute-N is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware. The host is rebooted, and its Availability State is reported as In-Test.
System Health Check
Listing StarlingX Node-N Hosts
After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:
On Controller-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 3  | controller-1 | controller  | unlocked       | enabled     | available    |
| 4  | compute-0    | compute     | unlocked       | enabled     | available    |
| 5  | storage-0    | storage     | unlocked       | enabled     | available    |
| 6  | storage-1    | storage     | unlocked       | enabled     | available    |
| 7  | storage-2    | storage     | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Checking StarlingX CEPH Health
[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
    cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
     health HEALTH_OK
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
     osdmap e84: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
            87444 kB used, 197 GB / 197 GB avail
                1600 active+clean
controller-0:~$
Your StarlingX deployment is now up and running with 2x HA controllers with Cinder storage, 1x compute node, 3x storage nodes, and all OpenStack services running. You can now proceed with the standard OpenStack APIs, CLIs, and/or Horizon to load Glance images, configure Nova flavors, configure Neutron networks, and launch Nova virtual machines.
System Alarm List
When all nodes are Unlocked, Enabled, and Available, check 'fm alarm-list' for issues:
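For example (run with Keystone admin credentials sourced; an empty table means there are no active alarms):
[wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list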