Installing StarlingX with containers: All in One Duplex configuration

WARNING: DO NOT EDIT THIS WIKI CONTENT.

The information on this wiki page is in the process of transitioning to "Deploy/Install" guides that are being created as part of the StarlingX documentation. Consequently, do not make edits to the content in this wiki page. If you have changes that need to be made to the installation process described on this page of the wiki, contact the StarlingX Documentation Team.

History

  • January 24, 2019: Initial draft
  • January 28, 2019: Configure datanetworks in sysinv prior to referencing them in the 'system host-if-modify'/'host-if-add' commands. Needed on loads built January 25, 2019 or later.
  • January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.

Introduction

These instructions are for an All-in-One Duplex system (AIO-DX) in VirtualBox. Other configurations are in development.

Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 25, 2019 or later.

Building the Software

Refer to these instructions on the AIO SX page Building the Software

Setup the VirtualBox VM

Refer to these instructions on the AIO SX page Setup the VirtualBox VM

Remember to set up TWO VMs.

VirtualBox Nat Networking

Refer to these instructions on the AIO SX page VirtualBox Nat Networking

Setup Controller-0

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • All-in-one Controller
  • Graphical Console
  • Standard Security Profile

Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Bootstrap the controller

To run the playbook, you need to first set up external connectivity.

ip address add 10.10.10.3/24 dev enp0s3
ip link set up dev enp0s3
ip route add default via 10.10.10.1 dev enp0s3
ping 8.8.8.8

For a dual-controller system, system_mode (whose default value is simplex) must be set to "duplex". See the description of the Ansible bootstrap playbook on the AIO SX page for details: Bootstrap the controller

Sample /home/wrsroot/localhost.yml override file. Add or remove parameters, or change values to your custom values, as needed. The only mandatory parameter that must be overridden from its default is system_mode. You can also update the defaults for the OAM IP addresses if you use a non-default configuration:

# Mandatory
system_mode: duplex

# Optional
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.3
external_oam_node_0_address: 10.10.10.4
external_oam_node_1_address: 10.10.10.5
management_subnet: 192.168.204.0/24
dns_servers:
  - 8.8.4.4
admin_password: St8rlingX*
ansible_become_pass: St8rlingX*

After setting up external connectivity and creating the localhost.yml:

  • Run the local playbook with the override file:
 ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml 

OR

  • Run the local playbook with custom wrsroot and admin passwords and duplex system_mode, specified at the command line:
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml -e "ansible_become_pass=<custom-wrsroot-password> admin_password=<custom-admin-password> system_mode=duplex" 

Provisioning the platform

Configure OAM, Management and Cluster interfaces

source /etc/platform/openrc
OAM_IF=enp0s3
MGMT_IF=enp0s8
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6 =="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
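
Optionally, verify the resulting assignments before moving on; both query commands below are already used elsewhere on this page:

system host-if-list controller-0
system interface-network-list controller-0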

Set the ntp server

Refer to these instructions on the AIO SX page Set the ntp server
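
For reference, that step boils down to pointing the platform at one or more NTP servers. A minimal sketch (the pool hostnames below are placeholders; substitute your own servers):

system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org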

Configure the vswitch type (optional)

Refer to these instructions on the AIO SX page Configure the vswitch type
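
As a rough sketch of what that page covers, the vswitch type is selected with a single system modify command; the ovs-dpdk value shown here is an example, and the supported values and caveats are described on the SX page:

system modify --vswitch_type ovs-dpdk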

Configure data interfaces

Refer to these instructions on the AIO SX page Configure data interfaces

Prepare the host for running the containerized services

Refer to these instructions on the AIO SX page Prepare the host for running the containerized services

Setup partitions for Controller-0

Refer to these instructions on the AIO SX page Setup partitions for Controller-0

Configure Ceph for Controller-0

Refer to these instructions on the AIO SX page Configure Ceph for Controller-0

Unlock Controller-0

source /etc/platform/openrc
system host-unlock controller-0

Boot the second AIO controller

Boot the second VM (without ISO media mounted). Hit F12 immediately when the VM starts to select a different boot option, and select the "lan" option to force a network boot.

At the controller-1 console, you will see a message instructing you to configure the personality of the node. Do this from a shell on controller-0 as follows:

source /etc/platform/openrc
system host-list

The results indicate that ID 2 is the unprovisioned controller:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
system host-update 2 personality=controller

The packages will install and the controller will reboot.
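
The install can be monitored from controller-0 by repeating the host listing; controller-1 should eventually report an availability of online (it stays locked until the unlock step later in this guide):

source /etc/platform/openrc
system host-list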

Provisioning the second AIO controller

Configure Data Interfaces for Controller-1

source /etc/platform/openrc
export COMPUTE='controller-1' 
PHYSNET0='physnet0' 
PHYSNET1='physnet1' 
OAM_IF=enp0s3
DATA0IF=eth1000 
DATA1IF=eth1001 
NOWRAP="--nowrap"

echo ">>> Configuring OAM Network"
system host-if-modify -n oam0 -c platform ${COMPUTE} $(system host-if-list -a $COMPUTE  $NOWRAP | awk -v OAM_IF=$OAM_IF '{if ($4 == OAM_IF) { print $2;}}')
system interface-network-assign controller-1 oam0 oam

echo ">>> Configuring Cluster Host Interface"
system interface-network-assign controller-1 mgmt0 cluster-host

echo ">>> Configuring Data Networks"
SPL=/tmp/tmp-system-port-list  
SPIL=/tmp/tmp-system-host-if-list  
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}  
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}  
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')  
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')  
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')  
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')  
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')  
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')  
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')  
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')  
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}  
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

Prepare Controller-1 for running the containerized services

source /etc/platform/openrc

system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled
system host-label-assign controller-1 sriov=enabled
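
Optionally, list the labels back to confirm they were applied (this assumes the host-label-list subcommand is available in this load):

system host-label-list controller-1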

Setup Partitions for Controller-1

These disks are added on unlock, using the same sizes as controller-0: 24G for nova-local and 6G for cgts-vg.

source /etc/platform/openrc
export COMPUTE=controller-1

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_SIZE=24
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=6
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
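
Before unlocking, the partition and volume group state can be checked with read-only queries (optional; these follow the same sysinv CLI used above):

system host-disk-partition-list ${COMPUTE}
system host-lvg-list ${COMPUTE}
system host-pv-list ${COMPUTE}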

Setup Ceph for Controller-1

source /etc/platform/openrc

echo ">>> Get disk & tier info"
HOST="controller-1"
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
echo "Disks:"
echo "$DISKS"
echo "Tiers:"
echo "$TIERS"

echo ">>> Add OSDs to primary tier"
system host-stor-add ${HOST} $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')

echo ">>> system host-stor-list ${HOST}"
system host-stor-list ${HOST}
echo ">>> ceph osd tree"
ceph osd tree

Unlock Controller-1

source /etc/platform/openrc
system host-unlock controller-1

Wait for controller-1 to reboot before proceeding.
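
One way to confirm the reboot is complete is to poll controller-1 from controller-0 until it reports unlocked/enabled/available, for example:

source /etc/platform/openrc
system host-show controller-1 | grep -E 'administrative|operational|availability'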

Using sysinv to bring up/down the containerized services

Generate the stx-openstack application tarball

Refer to these instructions on the AIO SX page Generate the stx-openstack application tarball

Stage application for deployment

Refer to these instructions on the AIO SX page Stage application for deployment

Bring Up Services

Refer to these instructions on the AIO SX page Bring Up Services

Verify the cluster endpoints

Refer to these instructions on the AIO SX page here

Provider/tenant networking setup

Refer to these instructions on the AIO SX page here

Additional Setup Instructions

Refer to these instructions on the AIO SX page Additional Setup Instructions

Horizon access

Refer to these instructions on the AIO SX page here

Known Issues and Troubleshooting

None