StarlingX/Installation Guide Virtual Environment/Simplex
Simplex
Requirements
Different use cases require different configurations. For a general StarlingX Simplex deployment, the recommended minimum requirements are:
Hardware Requirements
A workstation computer with:
- Processor: x86_64 (the only supported architecture), with hardware virtualization extensions enabled (a quick check is shown after this list)
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Two network adapters with an active Internet connection
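If you are unsure whether the processor exposes hardware virtualization extensions, one quick check (not part of the original procedure) is to count the vmx/svm flags in /proc/cpuinfo; any value greater than zero means the extensions are available:
$ egrep -c '(vmx|svm)' /proc/cpuinfo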
Software Requirements
A workstation computer with:
- Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit
- Proxy settings configured (if applicable)
- Git
- KVM/VirtManager
- Libvirt Library
- QEMU Full System Emulation Binaries
- stx-tools project
- StarlingX ISO Image
Deployment Environment Setup
This section describes how to set up a StarlingX Simplex system on a workstation computer. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:
- Ubuntu 16.04 LTS 64-bit
Updating Your Operating System
Before proceeding with the build, ensure your OS is up to date. First, update the local database of available packages:
$ sudo apt-get update
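Note that apt-get update only refreshes the package index. To actually apply the available upgrades, which is what keeping the OS up to date implies (this step is not shown in the original procedure), you can optionally follow with:
$ sudo apt-get upgrade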
Install stx-tools project
Clone the stx-tools project. Usually you’ll want to clone it under your user’s home directory.
$ cd $HOME
$ git clone git://git.openstack.org/openstack/stx-tools
Installing Requirements and Dependencies
Navigate to the stx-tools installation libvirt directory:
$ cd $HOME/stx-tools/installation/libvirt/
Install the required packages:
$ bash install_packages.sh
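As a quick sanity check that the virtualization packages are in place (the command names below are the standard Ubuntu 16.04 binaries, not taken from the script itself), you can query the installed versions:
$ virsh --version
$ qemu-system-x86_64 --version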
Getting the StarlingX ISO Image
Follow the instructions in StarlingX/Developer_Guide to build a StarlingX ISO image. Copy the StarlingX ISO image to the stx-tools libvirt installation directory, naming it bootimage.iso:
$ cp <starlingx iso image> $HOME/stx-tools/installation/libvirt/bootimage.iso
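Optionally, confirm the image landed where the setup scripts expect it:
$ ls -lh $HOME/stx-tools/installation/libvirt/bootimage.iso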
Controller-0 Host Installation
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.
Procedure:
- Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.
- Configure the controller using the config_controller script.
Initializing Controller-0
This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation.
Navigate to the stx-tools installation libvirt directory:
$ cd $HOME/stx-tools/installation/libvirt/
Run the install packages script:
$ bash install_packages.sh
Run the libvirt qemu setup script:
$ bash setup_tic.sh
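After the setup script completes, you can confirm that the virtual machine was defined in libvirt (the exact domain name depends on the setup script, so treat the output as authoritative):
$ sudo virsh list --all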
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:
- When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "All-in-one Controller Configuration".
- Select the "Graphical Console" as the console to use during installation.
- Select "Standard Security Boot Profile" as the Security Profile.
- Monitor the initialization until it is complete. When initialization finishes, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
Changing password for wrsroot. (current) UNIX Password:
Enter a new password for the wrsroot account:
New password:
Enter the new password again to confirm it:
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for configuration.
Configuring Controller-0
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all commands must be executed from the console of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series of prompts for the initial configuration of StarlingX, grouped by configuration area. Start the script with the following command; at the System mode prompt, select simplex, and accept the default values for the remaining options:
controller-0:~$ sudo config_controller
The output when config_controller script is run interactively is:
...
System mode. Available options are:

1) duplex-direct: two node-redundant configuration. Management and
   infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration
3) simplex - single node non-redundant configuration

System mode [duplex-direct]: 3
...
WARNING: Command should only be run from the console. Continuing with this
terminal may cause loss of connectivity and configuration failure.
...
Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
Controller-0 Host Provision
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
On Controller-0, create a new shell script file, copy and paste the content below into it, and run it:
#!/usr/bin/env bash
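# Provision Controller-0: create an LVM storage backend for Cinder, configure
# the data interfaces, set up the nova-local volume group, and unlock the host.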
system host-disk-list controller-0
NODE=controller-0
DEVICE=/dev/sdb
SIZE=$(system host-disk-list $NODE | grep $DEVICE | awk '{print $12}')
DISK=$(system host-disk-list $NODE | grep $DEVICE | awk '{print $2}')
# Create a partition for Cinder
system host-disk-partition-add $NODE $DISK $SIZE -t lvm_phys_vol
# Create the Volume Group
system host-lvg-add $NODE cinder-volumes
# Wait for partition to be created
while true; do
    system host-disk-partition-list $NODE --nowrap | grep $DEVICE | grep Ready
    if [ $? -eq 0 ]; then
        break
    fi
    sleep 1
    echo "Waiting for Disk Partition for $DEVICE:$NODE"
done
PARTITION=$(system host-disk-partition-list $NODE --disk $DISK --nowrap | grep part1 | awk '{print $2}')
# Create the PV
sleep 1
system host-pv-add $NODE cinder-volumes $PARTITION
sleep 1
# Enable LVM Backend.
system storage-backend-add lvm -s cinder --confirmed
# Wait for backend to be configured:
echo " This can take a few minutes..."
while true; do
    system storage-backend-list | grep lvm | grep configured
    if [ $? -eq 0 ]; then
        break
    else
        sleep 10
    fi
    echo "Waiting for backend to be configured"
done
system storage-backend-list
# Add provider networks and assign segmentation ranges
PHYSNET0='providernet-a'
PHYSNET1='providernet-b'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
# Create data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
COMPUTE='controller-0'
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}
# Look up each data interface's PCI address, port UUID, and port name, then
# map the port name to the corresponding interface UUID.
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -nt data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -nt data ${COMPUTE} ${DATA1IFUUID}
# Add nova local backend
system host-lvg-add ${COMPUTE} nova-local
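# Carve two lvm_phys_vol partitions out of the root disk, each sized to half
# of the disk size reported by host-disk-list: one for cgts-vg (below) and
# one for nova-local (further down).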
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=$(($(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $12;}')/2))
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
while true; do
    system host-disk-partition-list ${COMPUTE} | grep /dev/sda5 | grep Ready
    if [ $? -eq 0 ]; then
        break
    else
        sleep 2
    fi
    echo "Waiting to add disk partition"
done
system host-disk-partition-list ${COMPUTE}
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
sleep 1
system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 1
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
while true; do
    system host-disk-partition-list ${COMPUTE} | grep /dev/sda6 | grep Ready
    if [ $? -eq 0 ]; then
        break
    else
        sleep 2
    fi
    echo "Waiting to add disk partition"
done
system host-disk-partition-list ${COMPUTE}
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 1
system host-lvg-modify -b image -s 10240 ${COMPUTE} nova-local
sleep 10
### This will result in a reboot.
system host-unlock controller-0
echo " Watch CONSOLE to see progress. You will see things like "
echo " Applying manifest 127.168.204.3_patching.pp..."
echo " [DONE]"
echo " Tailing /var/log/platform.log until reboot..."
tail -f /var/log/platform.log
The host reboots. During the reboot, the command line is unavailable and any SSH connections are dropped. To monitor the progress of the reboot, use the Controller-0 console.
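If you are not attached to the virtual machine's graphical console, you can also follow the reboot from the workstation with virsh (assuming the libvirt domain for this host is named controller-0; check the actual name with sudo virsh list --all):
$ sudo virsh console controller-0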
Verifying the Controller-0 Configuration
On Controller-0, acquire Keystone administrative privileges:
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| Id                                   | Binary           | Host         | Zone     | Status  | State | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    | ...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    | ...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
If any of the listed services is down, re-enable it with a script such as the following:
#!/usr/bin/env bash
source /etc/nova/openrc
ALL_DISABLED_SERVICES=$(nova service-list | grep disabled | awk -F "|" '{print $2}')
for service in $ALL_DISABLED_SERVICES; do
    nova service-force-down --unset ${service}
    nova service-enable ${service}
done
Verify that controller-0 is unlocked, enabled, and available:
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+