StarlingX/Installation Guide Virtual Environment/Duplex

==In Process==

==Simplex==

==Controller-0 Host Installation==

Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.

Procedure:

# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.
# Configure the controller using the config_controller script.

===Initializing Controller-0===

This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation. Make sure Virtual Machine Manager is open.

Run the libvirt/qemu setup script:

<pre><nowiki>
$ bash setup_all_in_one.sh
</nowiki></pre>
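
To confirm that the setup script defined the virtual machines, you can list them from the workstation (a quick check; the VM names depend on the setup script):

<pre><nowiki>
$ virsh list --all
</nowiki></pre>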

From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:

* When the installer is loaded and the installer welcome screen appears on the Controller-0 host, select the installation type "All-in-one Controller Configuration".
* Select "Graphical Console" as the console to use during installation.
* Select "Standard Security Boot Profile" as the Security Profile.
* Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays the GNU GRUB screen, and then boots automatically into the StarlingX image.

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

<pre><nowiki>
Changing password for wrsroot.
(current) UNIX Password:
</nowiki></pre>

Enter a new password for the wrsroot account:

<pre><nowiki>
New password:
</nowiki></pre>

Enter the new password again to confirm it:

<pre><nowiki>
Retype new password:
</nowiki></pre>

Controller-0 is initialized with StarlingX, and is ready for configuration.

===Configuring Controller-0===

This section describes how to perform the Controller-0 configuration interactively. Except where noted, all commands must be executed from the console of the active controller (here assumed to be controller-0).

When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The prompts are grouped by configuration area. To start the script interactively, use the following command. At the System mode prompt, select simplex as the configuration option, and accept the default values for the remaining options:

<pre><nowiki>
controller-0:~$ sudo config_controller
</nowiki></pre>

The output when the config_controller script is run interactively is:

<pre><nowiki>
...
System mode. Available options are:

1) duplex-direct: two node-redundant configuration. Management and
infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration
3) simplex - single node non-redundant configuration
System mode [duplex-direct]: 3
...
WARNING: Command should only be run from the console. Continuing with this
terminal may cause loss of connectivity and configuration failure
...
Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
</nowiki></pre>

==Controller-0 Host Provisioning==

On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
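
Once the credentials are loaded, the shell prompt gains a keystone_admin suffix, and any system command serves as a quick check that administrative access works:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
</nowiki></pre>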

On Controller-0, create a new shell script file, copy in the content below, and run it:

<source lang="sh">
#!/usr/bin/env bash

system host-disk-list controller-0

NODE=controller-0
DEVICE=/dev/sdb
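# Note: the awk field numbers below assume the default table layout of
# "system host-disk-list" output (field 2 = disk UUID, field 12 = size);
# verify these column positions on your build before running.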
SIZE=$(system host-disk-list $NODE | grep $DEVICE | awk '{print $12}')
DISK=$(system host-disk-list $NODE | grep $DEVICE | awk '{print $2}')
# Create a partition for Cinder
system host-disk-partition-add $NODE $DISK $SIZE -t lvm_phys_vol
# Create the volume group
system host-lvg-add $NODE cinder-volumes
# Wait for the partition to be created
while true; do
    system host-disk-partition-list $NODE --nowrap | grep $DEVICE | grep Ready;
    if [ $? -eq 0 ]; then
        break;
    fi;
    sleep 1;
    echo "Waiting for Disk Partition for $DEVICE:$NODE"
done

PARTITION=$(system host-disk-partition-list $NODE --disk $DISK --nowrap | grep part1 | awk '{print $2}')
# Create the physical volume
sleep 1
system host-pv-add $NODE cinder-volumes $PARTITION
sleep 1

# Enable the LVM backend
system storage-backend-add lvm -s cinder --confirmed

# Wait for the backend to be configured
echo "This can take a few minutes..."
while true; do
    system storage-backend-list | grep lvm | grep configured;
    if [ $? -eq 0 ]; then
        break;
    else sleep 10;
    fi;
    echo "Waiting for backend to be configured"
done
system storage-backend-list

# Add provider networks and assign segmentation ranges
PHYSNET0='providernet-a'
PHYSNET1='providernet-b'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599

# Create data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
COMPUTE='controller-0'
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
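# Snapshot the port and interface tables once, then parse the snapshots
# with grep/awk to map the port names (eth1000/eth1001) to interface UUIDs.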
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -nt data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -nt data ${COMPUTE} ${DATA1IFUUID}

# Add the nova-local backend
system host-lvg-add ${COMPUTE} nova-local
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=$(($(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $12;}')/2))
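# Half of the root disk size reported in field 12 is used for each of the
# two partitions created below (one for cgts-vg, one for nova-local).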
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})

while true; do
    system host-disk-partition-list ${COMPUTE} | grep /dev/sda5 | grep Ready
    if [ $? -eq 0 ]; then
        break;
    else sleep 2;
    fi;
    echo "Waiting to add disk partition"
done
system host-disk-partition-list ${COMPUTE}

CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
sleep 1
system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 1
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})

while true; do
    system host-disk-partition-list ${COMPUTE} | grep /dev/sda6 | grep Ready
    if [ $? -eq 0 ]; then
        break;
    else sleep 2;
    fi;
    echo "Waiting to add disk partition"
done
system host-disk-partition-list ${COMPUTE}

NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 1
system host-lvg-modify -b image -s 10240 ${COMPUTE} nova-local
sleep 10

### This will result in a reboot.
system host-unlock controller-0
echo " Watch CONSOLE to see progress. You will see things like "
echo "    Applying manifest 127.168.204.3_patching.pp..."
echo "    [DONE]"
echo " Tailing /var/log/platform.log until reboot..."
tail -f /var/log/platform.log
</source>
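
The script blocks while it waits for partitions and the storage backend, then unlocks the host. For example, with the content saved as provision_controller.sh (a hypothetical name), run it as wrsroot:

<pre><nowiki>
controller-0:~$ bash provision_controller.sh
</nowiki></pre>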
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.
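
If you are watching from the workstation instead, a small loop can poll until the host responds again (a minimal sketch; 10.10.10.3 is an assumed OAM address for controller-0, so substitute the address configured during config_controller):

<source lang="sh">
# Poll until controller-0 answers pings again after the unlock reboot.
# 10.10.10.3 is an assumed OAM IP; replace it with your configured address.
until ping -c 1 -W 2 10.10.10.3 &> /dev/null; do
    echo "Waiting for controller-0 to come back..."
    sleep 5
done
</source>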

===Verifying the Controller-0 Configuration===

On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>

Verify that the StarlingX controller services are running:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| Id                                   | Binary           | Host         | Zone     | Status  | State | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    | ...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    | ...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
</nowiki></pre>

If any of the listed services is down, re-enable it with the following script:

<source lang="sh">
#!/usr/bin/env bash

source /etc/nova/openrc

ALL_DISABLED_SERVICES=$(nova service-list | grep disabled | awk -F "|" '{print $2}')
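# The awk call above extracts the Id column of each service marked disabled;
# each id is then cleared of its forced-down flag and re-enabled.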
for service in $ALL_DISABLED_SERVICES; do
    nova service-force-down --unset ${service}
    nova service-enable ${service}
done
</source>
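
After the script completes, run nova service-list again and confirm that every service reports enabled and up.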

Verify that controller-0 is unlocked, enabled, and available:

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
</nowiki></pre>
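
For unattended provisioning, the same verification can be scripted with the commands shown above (a minimal sketch):

<source lang="sh">
# Succeeds only when controller-0 reports unlocked/enabled/available.
if system host-list | grep controller-0 | grep unlocked | grep enabled | grep -q available; then
    echo "controller-0 is ready"
else
    echo "controller-0 is not ready yet"
fi
</source>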
