Installing StarlingX with containers: All in One Duplex configuration

History

January 24, 2019: Initial draft

Introduction

These instructions are for an All-in-one duplex system in VirtualBox. Other configurations are in development.

Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 24, 2019 or later.

Building the Software

Follow the standard build process in the StarlingX Developer Guide.

Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.

Setup the VirtualBox VM

Refer to these instructions on the AIO SX page Setup_the_VirtualBox_VM

Remember to set up TWO VMs.

VirtualBox NAT Networking

Refer to these instructions on the AIO SX page VirtualBox_Nat_Networking

Setup Controller-0

Install StarlingX ISO

Boot the VM from the ISO media. Select the following options for installation:

  • All-in-one Controller
  • Graphical Console
  • Standard Security Profile

Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.


Docker Proxy Configuration

Refer to these instructions on the AIO SX page: Docker_Proxy_Configuration. Note that this step will change once a change to config_controller is merged.

Run config_controller

sudo config_controller --kubernetes

Use the default settings during config_controller, except for the following: System mode: duplex

If you do not have direct access to the Google DNS nameservers (8.8.8.8, 8.8.4.4), you will need to configure alternatives when prompted. Press Enter to choose the default, or type a new entry.

If you do not have direct access to the public Docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, you will need to add proxy information when prompted.

config_controller then displays the configuration and prompts you to apply it:

The following configuration will be applied:

System Configuration
--------------------
Time Zone: UTC
System mode: duplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

Kubernetes Cluster Network Configuration
----------------------------------------
Cluster pod network subnet: 172.16.0.0/16
Cluster service network subnet: 10.96.0.0/12
Cluster host interface name: enp0s8
Cluster host interface: enp0s8
Cluster host interface MTU: 1500
Cluster host subnet: 192.168.206.0/24

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.2
External OAM 0 address: 10.10.10.3
External OAM 1 address: 10.10.10.4

DNS Configuration
-----------------
Nameserver 1: 8.8.8.8

Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and
unlock controller to proceed.

In this example, only one default nameserver (8.8.8.8) was configured.

Provisioning the platform

Set the NTP server

source /etc/platform/openrc
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
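
To confirm the change, you can read the NTP configuration back; a quick check using sysinv's query command for the NTP settings:

system ntp-show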

Configure data interfaces

source /etc/platform/openrc
export COMPUTE=controller-0
DATA0IF=eth1000 
DATA1IF=eth1001 
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list  
SPIL=/tmp/tmp-system-host-if-list  
NOWRAP="--nowrap"
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')  
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')  
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')  
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')  
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')  
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')  
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')  
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')  
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}  
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
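
To verify the result, list the interfaces again and confirm that data0 and data1 now show the data class (this reuses the same sysinv command as above):

system host-if-list -a ${COMPUTE}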

Setup partitions for Controller-0

This creates a 10G partition for the nova-local volume group and a 30G partition to extend cgts-vg.

export COMPUTE=controller-0
source /etc/platform/openrc

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2

echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=30
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')

echo ">>> Wait for partition $CGTS_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $CGTS_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 2

echo ">>> Waiting for cgts-vg to be ready"
while true; do system host-pv-list ${COMPUTE} | grep cgts-vg | grep adding; if [ $? -ne 0 ]; then break; fi; sleep 1; done
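
As a final check, the new partitions and physical volumes can be listed (the same sysinv listing commands used by the wait loops above):

system host-disk-partition-list ${COMPUTE}
system host-pv-list ${COMPUTE}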

Configure Ceph for Controller-0

source /etc/platform/openrc
echo ">>> Enable primary Ceph backend"
system storage-backend-add ceph --confirmed

echo ">>> Wait for primary ceph backend to be configured"
echo ">>> This step really takes a long time"
while true; do system storage-backend-list | grep ceph-store | grep configured; if [ $? -eq 0 ]; then break; else sleep 10; fi; done

echo ">>> Ceph health"
ceph -s

DISKS=$(system host-disk-list 1)
TIERS=$(system storage-tier-list ceph_cluster)

echo ">>> Add OSDs to primary tier"
system host-stor-add 1 $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')

echo ">>> system host-stor-list 1"
system host-stor-list 1

echo ">>> ceph osd tree"
ceph osd tree

Unlock Controller-0

source /etc/platform/openrc
system host-unlock controller-0

Boot the second AIO controller

Boot the second VM (without ISO media mounted). Hit F12 immediately when the VM starts to select a different boot option; select the "lan" option to force a network boot.
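
If hitting F12 in time is awkward, the boot order can instead be set from the host before powering on the VM. A minimal sketch, assuming the second VM is named "controller-1" in VirtualBox (adjust to your VM name):

VBoxManage modifyvm "controller-1" --boot1 net --boot2 disk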

At the controller-1 console, you will see a message instructing you to configure the personality of the node. Do this from a shell on controller-0 as follows:

source /etc/platform/openrc
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]# system host-update 2 personality=controller

+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| action              | none                                 |
| administrative      | locked                               |
| availability        | offline                              |
| bm_ip               | None                                 |
| bm_type             | None                                 |
| bm_username         | None                                 |
| boot_device         | sda                                  |
| capabilities        | {}                                   |
| config_applied      | None                                 |
| config_status       | None                                 |
| config_target       | None                                 |
| console             | ttyS0,115200                         |
| created_at          | 2019-01-24T21:02:56.679215+00:00     |
| hostname            | controller-1                         |
| id                  | 2                                    |
| install_output      | text                                 |
| install_state       | None                                 |
| install_state_info  | None                                 |
| invprovision        | None                                 |
| location            | {}                                   |
| mgmt_ip             | 192.168.204.4                        |
| mgmt_mac            | 08:00:27:27:64:df                    |
| operational         | disabled                             |
| personality         | controller                           |
| reserved            | False                                |
| rootfs_device       | sda                                  |
| serialid            | None                                 |
| software_load       | 19.01                                |
| subfunction_avail   | not-installed                        |
| subfunction_oper    | disabled                             |
| subfunctions        | controller,worker                    |
| task                | None                                 |
| tboot               | false                                |
| ttys_dcd            | None                                 |
| updated_at          | None                                 |
| uptime              | 0                                    |
| uuid                | affb8a7a-f1b7-4a95-b531-2b399df05376 |
| vim_progress_status | None                                 |
+---------------------+--------------------------------------+

The packages will install and the controller will reboot.
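
From controller-0, you can watch for controller-1 to finish installing (the same watch pattern is used later for monitoring application status). Proceed once its availability shows online:

watch -n 5 system host-list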

Provisioning the second AIO controller

Configure Data Interfaces for Controller-1

source /etc/platform/openrc
export COMPUTE='controller-1' 
PHYSNET0='physnet0' 
PHYSNET1='physnet1' 
OAM_IF=enp0s3
DATA0IF=eth1000 
DATA1IF=eth1001 
NOWRAP="--nowrap"

echo ">>> Configuring OAM Network"
system host-if-modify -n oam0 -c platform --networks oam ${COMPUTE} $(system host-if-list -a $COMPUTE  $NOWRAP | awk -v OAM_IF=$OAM_IF '{if ($4 == OAM_IF) { print $2;}}')

echo ">>> Configuring Cluster Host Interface"
system host-if-modify $COMPUTE mgmt0 --networks cluster-host

echo ">>> Configuring Data Networks"
SPL=/tmp/tmp-system-port-list  
SPIL=/tmp/tmp-system-host-if-list  
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}  
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}  
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')  
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')  
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')  
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')  
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')  
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')  
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')  
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')  
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}  
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}

Setup Partitions for Controller-1

These disks are added on unlock, using the same sizes as controller-0: 10G for nova-local, 30G for cgts-vg.

source /etc/platform/openrc
export COMPUTE=controller-1

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=30
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
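
Unlike the controller-0 steps, no readiness waits are shown here. If a step is rejected because a partition is still being created, the same wait loops from the controller-0 section can be reused:

while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $CGTS_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done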

Setup Ceph for Controller-1

source /etc/platform/openrc

echo ">>> Get disk & tier info"
HOST="controller-1"
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
echo "Disks:"
echo "$DISKS"
echo "Tiers:"
echo "$TIERS"

echo ">>> Add OSDs to primary tier"
system host-stor-add ${HOST} $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')

echo ">>> system host-stor-list ${HOST}"
system host-stor-list ${HOST}
echo ">>> ceph osd tree"
ceph osd tree

Unlock Controller-1

source /etc/platform/openrc
system host-unlock controller-1

Wait for controller-1 to reboot before proceeding.

Prepare the hosts for running the containerized services

source /etc/platform/openrc

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled

system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled

kubectl get nodes --show-labels

Using sysinv to bring up/down the containerized services

These steps are similar to the AIO SX instructions.

Generate the stx-openstack application tarball

In a development environment, run the following command to construct the application tarballs.

$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh

The resulting tarballs can be found under $MY_WORKSPACE/std/build-helm/stx.

Currently it produces two application tarballs: one with tests enabled and one without. Transfer the selected tarball to your lab or virtual machine.

Alternatively, the stx-openstack application tarballs are generated with each build on the CENGN mirror. They are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
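
For example, a tarball could be fetched directly onto the controller with wget. This is a hypothetical illustration: the mirror base URL is assumed to be http://mirror.starlingx.cengn.ca/mirror/starlingx/, and <build> stands for a specific build directory that you must substitute:

wget http://mirror.starlingx.cengn.ca/mirror/starlingx/<build>/outputs/helm-charts/helm-charts-manifest-no-tests.tgz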

Stage application for deployment

Transfer the application tarball onto your active controller.

Use sysinv to upload the application tarball.

system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
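
The upload is asynchronous; before applying, wait until application-list reports the application as uploaded (the same watch pattern as the apply step below):

watch -n 1 system application-list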

Bring Up Services

Use sysinv to apply the application.

You can monitor the progress by watching system application-list:

system application-apply stx-openstack
watch -n 1 system application-list

Refer to the AIO SX instructions for how to delete, remove, and troubleshoot stx-openstack.
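
For reference, the corresponding sysinv commands take this general form; see the AIO SX page for the full procedure and troubleshooting notes:

system application-remove stx-openstack    # terminate the application services
system application-delete stx-openstack    # remove the uploaded application definition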

Verify the cluster endpoints

Refer to these instructions on the AIO SX page here

Provider/tenant networking setup

Refer to these instructions on the AIO SX page here

Horizon access

Refer to these instructions on the AIO SX page here

Known Issues and Troubleshooting

None