= Installing StarlingX with containers: All in One Duplex configuration =
{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}
  
== History ==
  
* '''January 24, 2019:'''  Initial draft
* '''January 28, 2019:'''  Configure datanetworks in sysinv prior to referencing them in the 'system host-if-modify/host-if-add' commands. Needed on loads built January 25, 2019 or later.
* '''January 29, 2019:''' Removed obsolete neutron host/interface configuration and updated DNS instructions.
 
  
== Introduction ==
  
These instructions are for an All-in-one duplex system in VirtualBox. Other configurations are in development.
  
Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.
  
'''Note''': These instructions are valid for a load built on '''January 25, 2019''' or later.
 
== Building the Software ==
 
 
 
Follow the standard build process in the [https://docs.starlingx.io/developer_guide/index.html StarlingX Developer Guide].
 
 
 
Alternatively, a prebuilt ISO can be used; all required packages are provided by the [http://mirror.starlingx.cengn.ca/mirror/starlingx/ StarlingX CENGN mirror].
 
 
 
== Setup the VirtualBox VM ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Setup_the_VirtualBox_VM|  Setup_the_VirtualBox_VM]]
 
 
 
Remember to set up TWO VMs.
 
 
 
=== VirtualBox Nat Networking ===
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#VirtualBox_Nat_Networking|  VirtualBox_Nat_Networking]]
 
 
 
== Setup Controller-0 ==
 
=== Install StarlingX  ISO ===
 
 
 
Boot the VM from the ISO media. Select the following options for installation:
 
*All-in-one Controller
 
*Graphical Console
 
*Standard Security Profile
 
 
 
Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
 
 
 
<pre>
Changing password for wrsroot.
(current) UNIX Password: wrsroot
</pre>
 
 
 
Enter a new password for the wrsroot account and confirm it.
 
 
 
 
 
=== Docker Proxy Configuration ===
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Docker_Proxy_Configuration|  Docker_Proxy_Configuration]]
 
This step will change once a pending update to config_controller is merged.
 
 
 
=== Run config_controller ===
 
 
 
<code>sudo config_controller --kubernetes</code>
 
 
 
Use default settings during config_controller, except for the following:
 
* System mode: '''duplex'''
 
* If you do not have direct access to the Google DNS nameservers (8.8.8.8, 8.8.4.4), you will need to configure alternatives when prompted. Press Enter to choose the default, or type a new entry.
 
* If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, you will need to add the proxy information when prompted.
 
 
 
After you apply the configuration, output like the following is displayed:
 
<pre>
The following configuration will be applied:

System Configuration
--------------------
Time Zone: UTC
System mode: duplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

Kubernetes Cluster Network Configuration
----------------------------------------
Cluster pod network subnet: 172.16.0.0/16
Cluster service network subnet: 10.96.0.0/12
Cluster host interface name: enp0s8
Cluster host interface: enp0s8
Cluster host interface MTU: 1500
Cluster host subnet: 192.168.206.0/24

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.2
External OAM 0 address: 10.10.10.3
External OAM 1 address: 10.10.10.4

DNS Configuration
-----------------
Nameserver 1: 8.8.8.8

Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and
unlock controller to proceed.
</pre>
 
 
 
In this example, only one default nameserver (8.8.8.8) was configured.
 
 
 
=== Provisioning the platform ===
 
 
 
 
 
==== Set the ntp server ====
 
<pre>
source /etc/platform/openrc
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
</pre>
 
 
 
==== Configure data interfaces ====
 
<pre>
source /etc/platform/openrc
export COMPUTE=controller-0
DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
NOWRAP="--nowrap"
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
# Configure the datanetworks in sysinv, prior to referencing them in the 'system host-if-modify' command
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# The host-if-modify '-p' flag is deprecated in favor of the '-d' flag for assignment of datanetworks
system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
</pre>
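The UUID lookups above depend on fixed column positions in the whitespace-split CLI table rows. A self-contained sketch, run against mocked <code>system host-port-list</code> output, shows how the extraction works (the UUIDs and PCI addresses below are invented for illustration; on a real controller the table comes from sysinv):

```shell
# Mocked 'system host-port-list --nowrap' rows (values invented for illustration)
SPL=$(mktemp)
cat > "$SPL" <<'EOF'
| 5a6b7c8d-0000-4000-8000-000000000001 | eth1000 | ethernet | 0000:02:03.0 |
| 5a6b7c8d-0000-4000-8000-000000000002 | eth1001 | ethernet | 0000:02:04.0 |
EOF
DATA0IF=eth1000
# Splitting a row on whitespace, field 8 is the PCI address and field 2 the UUID
DATA0PCIADDR=$(grep "$DATA0IF" "$SPL" | awk '{print $8}')
DATA0PORTUUID=$(grep "$DATA0PCIADDR" "$SPL" | awk '{print $2}')
echo "$DATA0PCIADDR $DATA0PORTUUID"
rm -f "$SPL"
```

Because the pipe characters count as fields, any change to the table layout (extra columns, wrapped lines) silently breaks the field numbers, which is why the commands pass <code>--nowrap</code>.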
 
 
 
==== Setup partitions for Controller-0 ====
 
This creates a 10G partition for nova-local and a 30G partition to extend cgts-vg.
 
<pre>
export COMPUTE=controller-0
source /etc/platform/openrc

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2

echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=30
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')

echo ">>> Wait for partition $CGTS_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $CGTS_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 2

echo ">>> Waiting for cgts-vg to be ready"
while true; do system host-pv-list ${COMPUTE} | grep cgts-vg | grep adding; if [ $? -ne 0 ]; then break; fi; sleep 1; done
</pre>
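The partition UUID extraction above pulls the <code>uuid</code> row out of the table that <code>system host-disk-partition-add</code> prints. A minimal sketch against a mocked output row (the UUID is invented for illustration) shows the mechanics:

```shell
# Mocked single-line form of the 'system host-disk-partition-add' table output
# (the UUID below is invented for illustration)
NOVA_PARTITION='| status | Creating | | uuid | 1b2c3d4e-5f60-4a8b-9c0d-e1f203040506 | | size | 10 |'
# grep -ow isolates the '| uuid | <value> |' cell pair; awk field 4 is the value
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
echo "$NOVA_PARTITION_UUID"
```

The unquoted <code>echo ${NOVA_PARTITION}</code> deliberately collapses the multi-line table onto one line so the <code>grep -o</code> match spans the whole cell pair.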
 
 
 
==== Configure Ceph for Controller-0 ====
 
<pre>
source /etc/platform/openrc

echo ">>> Enable primary Ceph backend"
system storage-backend-add ceph --confirmed

echo ">>> Wait for primary ceph backend to be configured"
echo ">>> This step really takes a long time"
while true; do system storage-backend-list | grep ceph-store | grep configured; if [ $? -eq 0 ]; then break; else sleep 10; fi; done

echo ">>> Ceph health"
ceph -s

DISKS=$(system host-disk-list 1)
TIERS=$(system storage-tier-list ceph_cluster)

echo ">>> Add OSDs to primary tier"
system host-stor-add 1 $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')

echo ">>> system host-stor-list 1"
system host-stor-list 1

echo ">>> ceph osd tree"
ceph osd tree
</pre>
 
 
 
==== Unlock Controller-0 ====
 
<pre>
source /etc/platform/openrc
system host-unlock controller-0
</pre>
 
 
 
== Boot the second AIO controller ==
 
 
 
Boot the second VM (without ISO media mounted). Press F12 immediately when the VM starts to select a different boot option, and select the "lan" option to force a network boot.
 
 
 
At the controller-1 console, you will see a message instructing you to configure the personality of the node. Do this from a shell on controller-0 as follows:
 
 
 
<pre>
source /etc/platform/openrc
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]# system host-update 2 personality=controller
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| action              | none                                 |
| administrative      | locked                               |
| availability        | offline                              |
| bm_ip               | None                                 |
| bm_type             | None                                 |
| bm_username         | None                                 |
| boot_device         | sda                                  |
| capabilities        | {}                                   |
| config_applied      | None                                 |
| config_status       | None                                 |
| config_target       | None                                 |
| console             | ttyS0,115200                         |
| created_at          | 2019-01-24T21:02:56.679215+00:00     |
| hostname            | controller-1                         |
| id                  | 2                                    |
| install_output      | text                                 |
| install_state       | None                                 |
| install_state_info  | None                                 |
| invprovision        | None                                 |
| location            | {}                                   |
| mgmt_ip             | 192.168.204.4                        |
| mgmt_mac            | 08:00:27:27:64:df                    |
| operational         | disabled                             |
| personality         | controller                           |
| reserved            | False                                |
| rootfs_device       | sda                                  |
| serialid            | None                                 |
| software_load       | 19.01                                |
| subfunction_avail   | not-installed                        |
| subfunction_oper    | disabled                             |
| subfunctions        | controller,worker                    |
| task                | None                                 |
| tboot               | false                                |
| ttys_dcd            | None                                 |
| updated_at          | None                                 |
| uptime              | 0                                    |
| uuid                | affb8a7a-f1b7-4a95-b531-2b399df05376 |
| vim_progress_status | None                                 |
+---------------------+--------------------------------------+
</pre>
 
 
 
The packages will install and the controller will reboot.
 
 
 
== Provisioning the second AIO controller  ==
 
 
 
=== Configure Data Interfaces for Controller-1 ===
 
<pre>
source /etc/platform/openrc
export COMPUTE='controller-1'
PHYSNET0='physnet0'
PHYSNET1='physnet1'
OAM_IF=enp0s3
DATA0IF=eth1000
DATA1IF=eth1001
NOWRAP="--nowrap"

echo ">>> Configuring OAM Network"
system host-if-modify -n oam0 -c platform --networks oam ${COMPUTE} $(system host-if-list -a $COMPUTE $NOWRAP | awk -v OAM_IF=$OAM_IF '{if ($4 == OAM_IF) { print $2;}}')

echo ">>> Configuring Cluster Host Interface"
system host-if-modify $COMPUTE mgmt0 --networks cluster-host

echo ">>> Configuring Data Networks"
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} $NOWRAP > ${SPL}
system host-if-list -a ${COMPUTE} $NOWRAP > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
# The host-if-modify '-p' flag is deprecated in favor of the '-d' flag for assignment of datanetworks
system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
</pre>
 
 
 
=== Setup Partitions for Controller-1 ===
 
These partitions are added on unlock, using the same sizes as controller-0: 10G for nova-local and 30G for cgts-vg.
 
<pre>
source /etc/platform/openrc
export COMPUTE=controller-1

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=30
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
</pre>
 
 
 
=== Setup Ceph for Controller-1 ===
 
<pre>
source /etc/platform/openrc

echo ">>> Get disk & tier info"
HOST="controller-1"
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
echo "Disks:"
echo "$DISKS"
echo "Tiers:"
echo "$TIERS"

echo ">>> Add OSDs to primary tier"
system host-stor-add ${HOST} $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')

echo ">>> system host-stor-list ${HOST}"
system host-stor-list ${HOST}

echo ">>> ceph osd tree"
ceph osd tree
</pre>
 
 
 
=== Unlock Controller-1 ===
 
<pre>
source /etc/platform/openrc
system host-unlock controller-1
</pre>
 
 
 
Wait for controller-1 to reboot before proceeding.
 
 
 
== Prepare the hosts for running the containerized services ==
 
<pre>
source /etc/platform/openrc

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled

system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled

kubectl get nodes --show-labels
</pre>
 
 
 
== Using sysinv to bring up/down the containerized services ==
 
 
 
Similar to the AIO SX instructions.
 
 
 
=== Generate the stx-openstack application tarball ===
 
In a development environment, run the following command to construct the application tarballs.
 
 
 
<pre>
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
</pre>
 
 
 
The resulting tarballs can be found under $MY_WORKSPACE/std/build-helm/stx.
 
 
 
Currently this produces two application tarballs: one with tests enabled and one without. Transfer the selected tarball to your lab or VirtualBox VM.
 
 
 
Alternatively the [http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ stx-openstack application tarballs] are generated with each build on the CENGN mirror.
 
 
 
=== Stage application for deployment ===
 
Transfer the application tarball onto your active controller.
 
 
 
Use sysinv to upload the application tarball.
 
 
 
<pre>
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
</pre>
 
 
 
=== Bring Up Services ===
 
Use sysinv to apply the application.
 
 
 
You can monitor the progress by watching <code>system application-list</code>:
 
 
 
<pre>
system application-apply stx-openstack
watch -n 1 system application-list
</pre>
 
 
 
Refer to the AIO SX instructions for how to delete, remove, and troubleshoot stx-openstack.
 
 
 
== Verify the cluster endpoints ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Verify_the_cluster_endpoints|  here ]]
 
 
 
== Provider/tenant networking setup ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Provider.2Ftenant_networking_setup|  here ]]
 
 
 
== Horizon access ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Horizon_access|  here ]]
 
 
 
== Known Issues and Troubleshooting ==
 
 
 
None
 
