StarlingX/Installation Guide Virtual Environment/Controller Storage
Configure Virtual Servers
Run the libvirt qemu setup script:
$ bash setup_standard_controller.sh
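To confirm that the virtual machines were defined by the script, you can list the libvirt domains (an optional check; the domain names depend on how the setup script defined them):
$ sudo virsh list --all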
Standard Controller Storage Virtual Environment
Controller-0 Host Installation
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.
Procedure:
- Install the host using a StarlingX ISO image.
- Configure the controller using the config_controller script.
Initializing Controller-0
This section describes how to initialize StarlingX on the Controller-0 host. Except where noted, all commands must be executed from a console on the workstation. Make sure Virtual Machine Manager is open.
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:
- When the installer is loaded and the installer welcome screen appears on the Controller-0 host, select the type of installation "Standard Controller Configuration".
- Select the "Graphical Console" as the console to use during installation.
- Select "Standard Security Boot Profile" as the Security Profile.
- Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
New password:
Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for configuration.
Configuring Controller-0
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all commands must be executed from the console of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters:
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept the default values for all prompts after ‘system date and time’.
...
Distributed Cloud Configuration:
-----------------------------------------

Configure Distributed Cloud System Controller [y/N]:
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
Controller-0 and System Provision
Configuring Provider Networks at Installation
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Set up one provider network of the vlan type, named providernet-a:
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
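Optionally, confirm that the provider network and its range were created (this assumes the providernet listing extensions are available in this neutron build):
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-list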
OPTIONAL: Configuring Cinder on Controller Disk
Review the available disk space and capacity and obtain the uuid of the physical disk
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda    | 2048       | HDD         | 200.0    | 0.0           |     |...
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb    | 2064       | HDD         | 200.0    | 199.997       |     |...
+--------------------------------------+-------------+------------+-------------+----------+---------------+-----+...
[wrsroot@controller-0 ~(keystone_admin)]$
Create the 'cinder-volumes' local volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| lvm_vg_name     | cinder-volumes                       |
| vg_state        | adding                               |
| uuid            | ece4c755-241c-4363-958e-85e9e3d12917 |
| ihost_uuid      | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| lvm_vg_access   | None                                 |
| lvm_max_lv      | 0                                    |
| lvm_cur_lv      | 0                                    |
| lvm_max_pv      | 0                                    |
| lvm_cur_pv      | 0                                    |
| lvm_vg_size_gib | 0.00                                 |
| lvm_vg_total_pe | 0                                    |
| lvm_vg_free_pe  | 0                                    |
| created_at      | 2018-08-22T03:59:30.685718+00:00     |
| updated_at      | None                                 |
| parameters      | {u'lvm_type': u'thin'}               |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property    | Value                                            |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1                                        |
| type_guid   | ba5eba11-0000-1111-2222-000000000001             |
| type_name   | None                                             |
| start_mib   | None                                             |
| end_mib     | None                                             |
| size_mib    | 203776                                           |
| uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| ihost_uuid  | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| idisk_uuid  | 89694799-0dd8-4532-8636-c0d8aabfe215             |
| ipv_uuid    | None                                             |
| status      | Creating                                         |
| created_at  | 2018-08-22T04:03:40.761221+00:00                 |
| updated_at  | None                                             |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
+--------------------------------------+--------------------------------------------------+-------------+...
| uuid                                 | device_path                                      | device_node |...
+--------------------------------------+--------------------------------------------------+-------------+...
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 | /dev/sdb1   |...
+--------------------------------------+--------------------------------------------------+-------------+...
Add the partition to the volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
+--------------------------+--------------------------------------------------+
| Property                 | Value                                            |
+--------------------------+--------------------------------------------------+
| uuid                     | 060dc47e-bc17-40f4-8f09-5326ef0e86a5             |
| pv_state                 | adding                                           |
| pv_type                  | partition                                        |
| disk_or_part_uuid        | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80             |
| disk_or_part_device_node | /dev/sdb1                                        |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name              | /dev/sdb1                                        |
| lvm_vg_name              | cinder-volumes                                   |
| lvm_pv_uuid              | None                                             |
| lvm_pv_size_gib          | 0.0                                              |
| lvm_pe_total             | 0                                                |
| lvm_pe_alloced           | 0                                                |
| ihost_uuid               | 150284e2-fb60-4169-ae75-7f444b8ca9bf             |
| created_at               | 2018-08-22T04:06:54.008632+00:00                 |
| updated_at               | None                                             |
+--------------------------+--------------------------------------------------+
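Optionally, confirm that the volume group and physical volume were registered before proceeding:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-list controller-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0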
Unlocking Controller-0
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Use the system host-unlock command:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.
Verifying the Controller-0 Configuration
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+...
| Id                                   | Binary           | Host         | Zone     | Status  | State |...
+--------------------------------------+------------------+--------------+----------+---------+-------+...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    |...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    |...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    |...
+--------------------------------------+------------------+--------------+----------+---------+-------+...
Verify that controller-0 is unlocked, enabled, and available:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
Controller-1 / Compute Hosts Installation
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. For each Node-N Host do the following:
Initializing Node-N Host
On the workstation, start Node-N:
$ sudo virsh start Node-N
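To follow the boot from the workstation without opening Virtual Machine Manager, you can attach to the domain's serial console (optional; this assumes the domain was defined with a console device, and you exit with Ctrl+]):
$ sudo virsh console Node-N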
In the Node-N console you will see:
Waiting for this node to be configured. Please configure the personality for this node from the controller node in order to proceed.
Updating Node-N Host Name and Personality
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover the new host; list the hosts until the new UNKNOWN host shows up in the table:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-update command to set the Node-N host's hostname and personality attributes:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 2 personality=controller hostname=controller-1
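The other discovered hosts are assigned their personalities the same way; for example, the compute hosts (using the IDs they receive as they are discovered, matching the host list shown later in this guide):
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 4 personality=compute hostname=compute-1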
Monitoring Node-N Host
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show Node-N | grep install
| install_output     | text    |
| install_state      | booting |
| install_state_info | None    |
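To refresh this view periodically instead of re-running the command by hand, you can wrap it in watch (a convenience sketch, not part of the official procedure):
[wrsroot@controller-0 ~(keystone_admin)]$ watch -n 30 "system host-show Node-N | grep install"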
Wait while Node-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, Node-N is reported as Locked, Disabled, and Online.
Listing Node-N Hosts
Once all Nodes have been installed, configured and rebooted, on Controller-0 list the hosts:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | compute     | locked         | disabled    | online       |
| 4  | compute-1    | compute     | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Controller-1 Provisioning
On Controller-0, list the hosts:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2  | controller-1 | controller  | locked         | disabled    | online       |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Provisioning Network Interfaces on Controller-1
Provision the Controller-1 OAM interface:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n ens6 -nt oam controller-1 ens6
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| ifname           | ens6                                 |
| networktype      | oam                                  |
| iftype           | ethernet                             |
| ports            | []                                   |
| providernetworks | None                                 |
| imac             | 52:54:00:da:8c:ad                    |
| imtu             | 1500                                 |
| aemode           | None                                 |
| schedpolicy      | None                                 |
| txhashpolicy     | None                                 |
| uuid             | c0696cc4-ab3d-41b7-9a32-4b599d72050f |
| ihost_uuid       | 06827025-eacb-45e6-bb88-1a649f7404ec |
| vlan_id          | None                                 |
| uses             | []                                   |
| used_by          | []                                   |
| created_at       | 2018-08-22T05:17:57.330642+00:00     |
| updated_at       | 2018-08-22T05:28:10.289533+00:00     |
| sriov_numvfs     | 0                                    |
| ipv4_mode        | static                               |
| ipv6_mode        | disabled                             |
| accelerated      | []                                   |
+------------------+--------------------------------------+
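Optionally, list the interfaces on controller-1 to confirm the change (the -a flag includes interfaces that have no network type assigned):
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-1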
Provisioning Storage on Controller-1
Review the available disk space and capacity and obtain the uuid of the physical disk
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib |
+--------------------------------------+-------------+------------+-------------+----------+---------------+
| f7ce53db-7843-457e-8422-3c8f9970b4f2 | /dev/sda    | 2048       | HDD         | 200.0    | 0.0           |
| 70b83394-968e-4f0d-8a99-7985cd282a21 | /dev/sdb    | 2064       | HDD         | 200.0    | 199.997       |
+--------------------------------------+-------------+------------+-------------+----------+---------------+
Create the 'cinder-volumes' local volume group:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-1 cinder-volumes
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| lvm_vg_name     | cinder-volumes                       |
| vg_state        | adding                               |
| uuid            | 22d8b94a-200a-4fd5-b1f5-7015ddf10d0b |
| ihost_uuid      | 06827025-eacb-45e6-bb88-1a649f7404ec |
| lvm_vg_access   | None                                 |
| lvm_max_lv      | 0                                    |
| lvm_cur_lv      | 0                                    |
| lvm_max_pv      | 0                                    |
| lvm_cur_pv      | 0                                    |
| lvm_vg_size_gib | 0.00                                 |
| lvm_vg_total_pe | 0                                    |
| lvm_vg_free_pe  | 0                                    |
| created_at      | 2018-08-22T05:33:44.608913+00:00     |
| updated_at      | None                                 |
| parameters      | {u'lvm_type': u'thin'}               |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the physical disk
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-1 70b83394-968e-4f0d-8a99-7985cd282a21 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property    | Value                                            |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1                                        |
| type_guid   | ba5eba11-0000-1111-2222-000000000001             |
| type_name   | None                                             |
| start_mib   | None                                             |
| end_mib     | None                                             |
| size_mib    | 203776                                           |
| uuid        | 16a1c5cb-620c-47a3-be4b-022eafd122ee             |
| ihost_uuid  | 06827025-eacb-45e6-bb88-1a649f7404ec             |
| idisk_uuid  | 70b83394-968e-4f0d-8a99-7985cd282a21             |
| ipv_uuid    | None                                             |
| status      | Creating (on unlock)                             |
| created_at  | 2018-08-22T05:36:42.123770+00:00                 |
| updated_at  | None                                             |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-1 --disk 70b83394-968e-4f0d-8a99-7985cd282a21
+--------------------------------------+--------------------------------------------------+-------------+--------------------------------------+-----------+----------+----------------------+
| uuid                                 | device_path                                      | device_node | type_guid                            | type_name | size_gib | status               |
+--------------------------------------+--------------------------------------------------+-------------+--------------------------------------+-----------+----------+----------------------+
| 16a1c5cb-620c-47a3-be4b-022eafd122ee | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 | /dev/sdb1   | ba5eba11-0000-1111-2222-000000000001 | None      | 199.0    | Creating (on unlock) |
+--------------------------------------+--------------------------------------------------+-------------+--------------------------------------+-----------+----------+----------------------+
Add the partition to the volume group
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-1 cinder-volumes 16a1c5cb-620c-47a3-be4b-022eafd122ee
+--------------------------+--------------------------------------------------+
| Property                 | Value                                            |
+--------------------------+--------------------------------------------------+
| uuid                     | 01d79ed2-717f-428e-b9bc-23894203b35b             |
| pv_state                 | adding                                           |
| pv_type                  | partition                                        |
| disk_or_part_uuid        | 16a1c5cb-620c-47a3-be4b-022eafd122ee             |
| disk_or_part_device_node | /dev/sdb1                                        |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name              | /dev/sdb1                                        |
| lvm_vg_name              | cinder-volumes                                   |
| lvm_pv_uuid              | None                                             |
| lvm_pv_size_gib          | 0.0                                              |
| lvm_pe_total             | 0                                                |
| lvm_pe_alloced           | 0                                                |
| ihost_uuid               | 06827025-eacb-45e6-bb88-1a649f7404ec             |
| created_at               | 2018-08-22T05:44:34.715289+00:00                 |
| updated_at               | None                                             |
+--------------------------+--------------------------------------------------+
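As with controller-0, you can optionally confirm the physical volume registration before unlocking:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-list controller-1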
Unlocking Controller-1
Unlock Controller-1
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while Controller-1 reboots. Up to 10 minutes may be required, depending on hardware.
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
...
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Compute Host Provision
You must configure the network interfaces and the storage disks on a host before you can unlock it.
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Provisioning Network Interfaces on a Compute Host
On Controller-0, provision the data interfaces
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6
Provisioning Storage on a Compute Host
On Controller-0, provision local storage on each compute host. The following commands assign a vswitch core, create the nova-local volume group, add the /dev/sdb disk as its physical volume, and set image-backed instance storage:
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"
ALL_COMPUTE=`system host-list $NOWRAP | grep compute- | cut -d '|' -f 3`
# for each compute node, run the following
for compute in $ALL_COMPUTE; do
    system host-cpu-modify ${compute} -f vswitch -p0 1
    system host-lvg-add ${compute} nova-local
    system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')
    system host-lvg-modify -b image -s 10240 ${compute} nova-local
done
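After the loop completes, you can spot-check one compute host to confirm the nova-local volume group was created (an optional check):
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-list compute-0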
Unlocking a Compute Host
On Controller-0, use the system host-unlock command to unlock the node:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1
Wait while each compute host reboots. Up to 10 minutes may be required, depending on hardware. After the reboot, each host's availability state is reported as In-Test.
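Rather than re-running system host-list by hand, a small polling loop on Controller-0 can wait for both computes to come up (a convenience sketch, not part of the official procedure):
# Optional: poll until both compute hosts report available
for n in compute-0 compute-1; do
    while ! system host-list | grep " ${n} " | grep -q available; do
        sleep 30
    done
done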
System Health Check
After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:
On Controller-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | compute-0    | compute     | unlocked       | enabled     | available    |
| 3  | compute-1    | compute     | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
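Beyond host states, it is worth confirming that no alarms are outstanding (this assumes the alarm listing is available through the system CLI in this release):
[wrsroot@controller-0 ~(keystone_admin)]$ system alarm-list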