StarlingX/Containers/InstallationOnStandard

Warning - Deprecated

This wiki page is out of date and now deprecated. For the current and upcoming versions, see the StarlingX Installation and Deployment guides: https://docs.starlingx.io/deploy_install_guides/index.html


Installing StarlingX with containers: Standard configuration

WARNING: DO NOT EDIT THIS WIKI CONTENT.

The information on this wiki page is being transitioned to the "Deploy/Install" guides that are being created as part of the StarlingX documentation. Consequently, do not edit the content of this wiki page. If changes need to be made to the installation process described here, contact the StarlingX Documentation Team.

History

  • January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.
  • January 29, 2019: Configure datanetworks in sysinv before referencing them in the 'system host-if-modify'/'system host-if-add' commands.

Introduction

These instructions are for a Standard configuration with 2 controllers and 2 computes (2+2) in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 25, 2019 or later.

Building the Software

Refer to these instructions on the AIO SX page Building the Software

Setup the VirtualBox VM

Create a virtual machine for the system with the following options:

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size:
        * Controller nodes: 16384 MB
        * Compute nodes: 10240 MB
     * Storage: 
        * VDI with dynamically allocated disks is recommended
        * Controller nodes; at least two disks are required:
             * 240GB disk for a root disk 
             * 50GB for an OSD
        * Compute nodes; at least one disk is required:
             * 240GB disk for a root disk 
     * System->Processors: 
        * Controller nodes: 4 cpu
        * Compute nodes: 3 cpu
     * Network:
        * Controller nodes:
           * OAM network:
              The OAM interface must have external connectivity; for now we will use a NatNetwork.
              * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at #VirtualBox Nat Networking
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
        * Compute nodes:
           * Unused network 
              * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All (Optional - if infrastructure network will be used then set "Name" to "intnet-infra")
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
           * Data Network
              * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
              * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
     * Serial Ports: Select this to use a serial console.
        * Windows: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
        * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial" which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0

Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage)

# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# Or do them all with a for loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: On Windows, you need to specify the full path to the VBoxManage executable - for example:
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network; the "Nat Networks" tab should be selected
   * Click the plus icon to add a network; this creates a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
Name                  Protocol  Host IP  Host Port  Guest IP    Guest Port
controller-ssh        TCP                22         10.10.10.3  22
controller-http       TCP                80         10.10.10.3  8080
controller-https      TCP                443        10.10.10.3  8443
controller-ostk-http  TCP                31000      10.10.10.3  31000
controller-0-ssh      TCP                23         10.10.10.4  22
controller-1-ssh      TCP                24         10.10.10.4  22
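
If you prefer to script this step, the same NAT network and port-forwarding rules can also be created with VBoxManage. The following is only a sketch of the equivalent commands; verify the flag syntax against your VirtualBox version before relying on it (the rule format is "name:proto:[hostip]:hostport:[guestip]:guestport").

# Sketch: create the NAT network and a few of the port-forwarding rules from the command line
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:8080"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:8443"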

Setup Controller-0

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • Standard Controller Configuration
  • Graphical Console
  • STANDARD Security Boot Profile

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Bootstrap the controller

Refer to these instructions on the AIO DX page Bootstrap the controller

Provisioning controller-0

Configure OAM, Management and Cluster interfaces

Refer to these instructions on the AIO DX page Configure OAM, Management and Cluster interfaces

(Hardware lab only) Set the ntp server

Refer to these instructions on the AIO SX page Set the ntp server

Configure the vswitch type (optional)

Refer to these instructions on the AIO SX page Configure the vswitch type

Prepare the host for running the containerized services

  • On the controller node, apply the node label for controller functions
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
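
Optionally, confirm the label before unlocking the host. This assumes the 'system host-label-list' command is available in your load:

# Optional check: list the labels currently assigned to controller-0
system host-label-list controller-0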

Unlock controller-0

source /etc/platform/openrc
system host-unlock controller-0

Install remaining hosts

PXE boot hosts

Power on the remaining hosts; they should PXE boot from the controller.

Press F-12 for network boot if they do not.

Once booted from PXE, the hosts should be visible with 'system host-list':

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

Configure host personalities

source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1

At this point hosts should start installing.

Wait for hosts to become online

Once all nodes have been installed and rebooted, list the hosts on controller-0:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

Prepare the remaining hosts for running the containerized services

  • On the controller node, apply the node labels for the remaining controller and for the compute functions
source /etc/platform/openrc
system host-label-assign controller-1 openstack-control-plane=enabled
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE  openstack-compute-node=enabled
  system host-label-assign $NODE  openvswitch=enabled
  system host-label-assign $NODE  sriov=enabled
done

Provisioning controller-1

Add interfaces on Controller-1

  • Add the OAM Interface on Controller-1
  • Add the Cluster-Host Interface on Controller-1
source /etc/platform/openrc
system host-if-modify -n oam0 -c platform controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
system interface-network-assign controller-1 oam0 oam
system interface-network-assign controller-1 mgmt0 cluster-host
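
Optionally, review the resulting network assignments before unlocking controller-1. This assumes the 'system interface-network-list' command is available in your load:

# Optional check: show which platform networks are assigned to controller-1 interfaces
system interface-network-list controller-1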

Unlock Controller-1

source /etc/platform/openrc
system host-unlock controller-1

Wait for node to be available:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
  • Verify that the Ceph cluster shows a quorum with controller-0 and controller-1:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 6, quorum 0,1 controller-0,controller-1
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v3: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Provisioning computes

Add the third Ceph monitor to a compute node (Standard Only)

[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-add compute-0
+--------------+------------------------------------------------------------------+
| Property     | Value                                                            |
+--------------+------------------------------------------------------------------+
| uuid         | f76bc385-190c-4d9a-aa0f-107346a9907b                             |
| ceph_mon_gib | 20                                                               |
| created_at   | 2019-01-17T12:32:33.372098+00:00                                 |
| updated_at   | None                                                             |
| state        | configuring                                                      |
| task         | {u'controller-1': 'configuring', u'controller-0': 'configuring'} |
+--------------+------------------------------------------------------------------+

Wait for compute monitor to be configured:

[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid                                 | ceph_ | hostname     | state      | task |
|                                      | mon_g |              |            |      |
|                                      | ib    |              |            |      |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+--------------------------------------+-------+--------------+------------+------+

Create the volume group for nova

for COMPUTE in compute-0 compute-1; do
  echo "Configuring nova local for: $COMPUTE"
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
  PARTITION_SIZE=10
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
done
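
Optionally, verify the volume group and physical volume on each compute before moving on. This is a sketch that assumes the 'system host-lvg-list' and 'system host-pv-list' commands are available in your load:

# Optional check: confirm nova-local exists and includes the new partition as a physical volume
for COMPUTE in compute-0 compute-1; do
  system host-lvg-list ${COMPUTE}
  system host-pv-list ${COMPUTE}
done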

Configure data interfaces for computes

DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list

# configure the datanetworks in sysinv, prior to referencing it in the 'system host-if-modify command'.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
  echo "Configuring interface for: $COMPUTE"
  set -ex
  system host-port-list ${COMPUTE} --nowrap > ${SPL}
  system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
  set +ex
done
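
Optionally, confirm the data interfaces and their data network assignments. This is a sketch that assumes the 'system interface-datanetwork-list' command is available in your load:

# Optional check: review the data interfaces and the data networks assigned to them
for COMPUTE in compute-0 compute-1; do
  system host-if-list -a ${COMPUTE}
  system interface-datanetwork-list ${COMPUTE}
done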

Set up the cluster-host interfaces on the computes using the management interface (enp0s8)

for COMPUTE in compute-0 compute-1; do
   system interface-network-assign $COMPUTE mgmt0 cluster-host
done

Unlock compute nodes

for COMPUTE in compute-0 compute-1; do
   system host-unlock $COMPUTE
done
  • After the hosts are available, verify that the Ceph cluster is operational and that all 3 monitors (controller-0, controller-1 and compute-0) have joined the monitor quorum:
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 14, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e11: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v12: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Add Ceph OSDs to controllers

HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done

system host-stor-list $HOST

HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done

system host-stor-list $HOST

At this point Ceph should report HEALTH_OK and two OSDs configured, one for each controller:

[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_OK
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v73: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
            90044 kB used, 17842 MB / 17929 MB avail
                 384 active+clean
[root@controller-1 wrsroot(keystone_admin)]# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY                                  
-1 0.01700 root storage-tier                                                
-2 0.01700     chassis group-0                                              
-4 0.00850         host controller-0                                        
 1 0.00850             osd.1                   up  1.00000          1.00000 
-3 0.00850         host controller-1                                        
 0 0.00850             osd.0                   up  1.00000          1.00000 

Using sysinv to bring up/down the containerized services

Generate the stx-openstack application tarball

Refer to these instructions on the AIO SX page Generate the stx-openstack application tarball

Stage application for deployment

Refer to these instructions on the AIO SX page Stage application for deployment

Bring Up Services

Refer to these instructions on the AIO SX page Bring Up Services

Verify the cluster endpoints

Refer to these instructions on the AIO SX page: Verify the cluster endpoints

Provider/tenant networking setup

Refer to these instructions on the AIO SX page: Provider/tenant networking setup

Additional Setup Instructions

Refer to these instructions on the AIO SX page Additional Setup Instructions

Horizon access

Refer to these instructions on the AIO SX page: Horizon access

Known Issues and Troubleshooting

None