= StarlingX/Containers/InstallationOnStandard =

{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}

= Installing StarlingX with containers: Standard configuration =

'''WARNING: DO NOT EDIT THIS WIKI CONTENT.'''

The information on this wiki page is in the process of transitioning to "Deploy/Install" guides that are being created as part of the StarlingX documentation. Consequently, do not make edits to the content in this wiki page. If you have changes that need to be made to the installation process described on this page of the wiki, contact the StarlingX Documentation Team.

== History ==

* January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.
* January 29, 2019: Configure datanetworks in sysinv, prior to referencing them in the 'system host-if-modify/host-if-add' command.

== Introduction ==

These instructions are for a Standard configuration with 2 controllers and 2 computes (2+2), in VirtualBox. Other configurations are in development. Installing on bare metal is also possible, but the process would have to be adapted for the specific hardware configuration.

'''Note:''' These instructions are valid for a load built on January 25, 2019 or later.

== Building the Software ==

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Building_the_Software| Building the Software]]

== Setup the VirtualBox VM ==

Create a virtual machine for the system with the following options:

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size:
        * Controller nodes: 16384 MB
        * Compute nodes: 10240 MB
     * Storage:
        * Recommend to use VDI and dynamically allocated disks
        * Controller nodes; at least two disks are required:
             * 240GB disk for a root disk
             * 50GB for an OSD
        * Compute nodes; at least one disk is required:
             * 240GB disk for a root disk
     * System->Processors:
        * Controller nodes: 4 cpu
        * Compute nodes: 3 cpu
     * Network:
        * Controller nodes:
           * OAM network:
              OAM interface must have external connectivity; for now we will use a NatNetwork
              * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at [[#VirtualBox Nat Networking]]
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All
        * Compute nodes:
           * Unused network
              * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All (Optional - if the infrastructure network will be used, set "Name" to "intnet-infra")
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All
           * Data network:
              * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
              * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
     * Serial Ports: Select this to use a serial console.
        * Windows: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
        * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial", which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0

Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage).
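The boot priority can be set from the host shell before powering on the VMs; a sketch using VBoxManage, where the VM names are assumptions that should match the machines created above:

<pre>
# List the VMs to confirm their names
VBoxManage list vms

# Keep disk first in the boot order, with network boot as fallback, and give
# adapter 2 (eth1, the internal management network) PXE boot priority.
for VM in controller-0 controller-1 compute-0 compute-1; do
    VBoxManage modifyvm "$VM" --boot1 disk --boot2 net --boot3 none --boot4 none
    VBoxManage modifyvm "$VM" --nicbootprio2 1
done
</pre>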
{| class="wikitable"
|-
! Name !! Protocol !! Host IP !! Host Port !! Guest IP !! Guest Port
|-
| controller-ssh || TCP || || 22 || 10.10.10.3 || 22
|-
| controller-http || TCP || || 80 || 10.10.10.3 || 8080
|-
| controller-https || TCP || || 443 || 10.10.10.3 || 8443
|-
| controller-ostk-http || TCP || || 31000 || 10.10.10.3 || 31000
|}
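The rules in the table above can also be added from the host command line; a sketch using VBoxManage, assuming the NAT network is the "NatNetwork" created earlier (rule names and ports are taken from the table):

<pre>
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:8080"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:8443"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ostk-http:tcp:[]:31000:[10.10.10.3]:31000"
</pre>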
  
== Setup Controller-0 ==

=== Install StarlingX ===

Boot the VM from the ISO media. Select the following options for installation:

Enter a new password for the wrsroot account and confirm it.
  
=== Bootstrap the controller ===

Refer to these instructions on the AIO DX page [[StarlingX/Containers/InstallationOnAIODX#Bootstrap_the_controller| Bootstrap the controller]]

=== Provisioning controller-0 ===

==== Configure OAM, Management and Cluster interfaces ====

Refer to these instructions on the AIO DX page [[StarlingX/Containers/InstallationOnAIODX#Configure_OAM.2C_Management_and_Cluster_interfaces| Configure OAM, Management and Cluster interfaces]]
  
==== (Hardware lab only) Set the ntp server ====

Refer to these instructions on the AIO SX page [https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#.28Hardware_lab_only.29_Set_the_ntp_server Set the ntp server]

==== Configure the vswitch type (optional) ====

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Configure_the_vswitch_type_.28optional.29| Configure the vswitch type]]
 
  
==== Prepare the host for running the containerized services ====

* On the controller node, apply the node label for controller functions

<pre>
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
</pre>

==== Unlock controller-0 ====

<pre>
source /etc/platform/openrc
system host-unlock controller-0
</pre>
  
== Install remaining hosts ==

=== PXE boot hosts ===

Power on the remaining hosts; they should PXE boot from the controller. Press F-12 for network boot if they do not. Once booted from PXE, the hosts should be visible with 'system host-list':

<pre>
system host-list
</pre>
  
=== Configure host personalities ===
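The personality-assignment commands are elided in this revision; a sketch of the typical sequence, where the host IDs (2-4) and the 'worker' personality keyword are assumptions inferred from the host-list output later on this page:

<pre>
system host-update 2 personality=controller hostname=controller-1
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1
</pre>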
 
At this point the hosts should start installing.
  
=== Wait for hosts to become online ===

Once all nodes have been installed and rebooted, list the hosts on controller-0:

<pre>
system host-list
</pre>
  
== Prepare the remaining hosts for running the containerized services ==

* On the controller node, apply all the node labels for each controller and compute function

<pre>
source /etc/platform/openrc
system host-label-assign controller-1 openstack-control-plane=enabled
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE  openstack-compute-node=enabled
  system host-label-assign $NODE  openvswitch=enabled
  system host-label-assign $NODE  sriov=enabled
done
</pre>
  
== Provisioning controller-1 ==

=== Add interfaces on Controller-1 ===

* Add the OAM Interface on Controller-1
* Add the Cluster-Host Interface on Controller-1

<pre>
source /etc/platform/openrc
system host-if-modify -n oam0 -c platform controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
system interface-network-assign controller-1 oam0 oam
system interface-network-assign controller-1 mgmt0 cluster-host
</pre>

=== Unlock Controller-1 ===

<pre>
source /etc/platform/openrc
system host-unlock controller-1
</pre>
  
 
== Provisioning computes ==

=== Add the third Ceph monitor to a compute node (Standard Only) ===
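The monitor-assignment commands are elided in this revision; a sketch, assuming the sysinv 'system ceph-mon-add' command of this release and compute-0 as the chosen host:

<pre>
system ceph-mon-add compute-0
system ceph-mon-list
</pre>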
  
=== Create the volume group for nova ===

<pre>
for COMPUTE in compute-0 compute-1; do
   echo "Configuring nova local for: $COMPUTE"
   ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
   ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
   PARTITION_SIZE=10
   NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
   system host-lvg-add ${COMPUTE} nova-local
   system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
done
</pre>
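The derivation of NOVA_PARTITION_UUID from the 'system host-disk-partition-add' output is elided in this revision. One plausible derivation, shown against a mocked property table (the UUID value is invented), pulls the 'uuid' row with awk:

```shell
#!/usr/bin/env bash
# Mocked 'system host-disk-partition-add' property table (values invented).
NOVA_PARTITION='+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| uuid        | 11111111-2222-3333-4444-555555555555 |
| device_node | /dev/sdb2                            |
+-------------+--------------------------------------+'

# In rows of this table, awk field $2 is the property name and $4 its value.
NOVA_PARTITION_UUID=$(echo "$NOVA_PARTITION" | awk '$2 == "uuid" {print $4}')
echo "$NOVA_PARTITION_UUID"
```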
  
=== Configure data interfaces for computes ===

<pre>
   DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
   DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
   system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
   system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
   system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
   system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
   set +ex
done
</pre>
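The awk filters above select the row whose 12th whitespace-separated field matches the port name and print the 2nd field, the interface UUID. A self-contained sketch against mocked 'system host-port-list' style output (UUIDs, names and the exact column layout are invented for illustration):

```shell
#!/usr/bin/env bash
# Mocked port table; rows are shaped so the port name lands in awk field $12,
# as the wiki script expects. All values here are invented.
SPIL=$(mktemp)
cat > "$SPIL" <<'EOF'
| aaaaaaaa-0000-0000-0000-000000000001 | a | b | 0000:02:03.0 | 0 | eth1000 |
| aaaaaaaa-0000-0000-0000-000000000002 | a | b | 0000:02:04.0 | 0 | eth1001 |
EOF

DATA0PORTNAME=eth1000
DATA1PORTNAME=eth1001
# Match the row whose field $12 is the port name; print field $2 (the UUID).
DATA0IFUUID=$(cat "$SPIL" | awk -v P=$DATA0PORTNAME '($12 ~ P) {print $2}')
DATA1IFUUID=$(cat "$SPIL" | awk -v P=$DATA1PORTNAME '($12 ~ P) {print $2}')
echo "$DATA0IFUUID"
echo "$DATA1IFUUID"
rm -f "$SPIL"
```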
  
=== Setup the cluster-host interfaces on the computes to the management network (enp0s8) ===

<pre>
for COMPUTE in compute-0 compute-1; do
   system interface-network-assign $COMPUTE mgmt0 cluster-host
done
</pre>
  
=== Unlock compute nodes ===
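The unlock commands are elided in this revision; a sketch following the same pattern used to unlock the controllers:

<pre>
for COMPUTE in compute-0 compute-1; do
   system host-unlock $COMPUTE
done
</pre>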
  
 
== Add Ceph OSDs to controllers ==

<pre>
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST

HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
</pre>

At this point ceph should report HEALTH_OK and two OSDs configured, one for each controller:

<pre>
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
-3 0.00850        host controller-1
  0 0.00850            osd.0                  up  1.00000          1.00000
</pre>
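The while/grep loops above simply poll until the OSD leaves the 'configuring' state. The pattern can be demonstrated with a mocked state query (the three-poll sequence is invented; the real loop sleeps 1s between polls):

```shell
#!/usr/bin/env bash
# Mocked stand-in for 'system host-stor-list ... | grep configuring':
# the OSD reports "configuring" for two polls, then "configured".
osd_state() {
  if [ "$1" -le 2 ]; then echo configuring; else echo configured; fi
}

attempts=0
state=configuring
while [ "$state" = configuring ]; do
  attempts=$((attempts + 1))
  state=$(osd_state "$attempts")   # the real loop sleeps 1s between polls
done
echo "OSD settled after $attempts polls"
```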
 
 
== Using sysinv to bring up/down the containerized services ==
  
=== Generate the stx-openstack application tarball ===

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Generate_the_stx-openstack_application_tarball| Generate the stx-openstack application tarball]]

=== Stage application for deployment ===

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Stage_application_for_deployment| Stage application for deployment]]

=== Bring Up Services ===

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Bring_Up_Services| Bring Up Services]]

=== Verify the cluster endpoints ===

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Verify_the_cluster_endpoints| here ]]
== Provider/tenant networking setup ==

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Provider.2Ftenant_networking_setup| here ]]

== Additional Setup Instructions ==

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Additional_Setup_Instructions| Additional Setup Instructions]]

== Horizon access ==

Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Horizon_access| here ]]

== Known Issues and Troubleshooting ==

None

Revision as of 17:31, 16 July 2019

Warning icon.svg Warning - Deprecated

This wiki page is out of date and now deprecated. For the current and upcoming versions, see StarlingX Installation and Deployment guides

Contents

Installing StarlingX with containers: Standard configuration

WARNING: DO NOT EDIT THIS WIKI CONTENT.

The information on this wiki page is in the process of transitioning to "Deploy/Install" guides that are being created as part of the StarlingX documentation. Consequently, do not make edits to the content in this wiki page. If you have changes that need to be made to the installation process described on this page of the wiki, contact StarlingX Documentation Team.

History

  • January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.
  • January 29, 2019: Configure datanetworks in sysinv, prior to referencing it in the 'system host-if-modify/host-if-add command'.

Introduction

These instructions are for a Standard, 2 controllers and 2 computes (2+2) configuration, in VirtualBox. Other configurations are in development. Installing on bare metal is also possible, however the the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 25, 2019 or later.

Building the Software

Refer to these instructions on the AIO SX page Building the Software

Setup the VirtualBox VM

Create a virtual machine for the system with the following options:

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size:
        * Controller nodes: 16384 MB
        * Compute nodes: 10240 MB
     * Storage: 
        * Recommend to use VDI and dynamically allocated disks
        * Controller nodes; at least two disks are required:
             * 240GB disk for a root disk 
             * 50GB for an OSD
        * Compute nodes; at least one disk is required:
             * 240GB disk for a root disk 
     * System->Processors: 
        * Controller nodes: 4 cpu
        * Compute nodes: 3 cpu
     * Network:
        * Controller nodes:
           * OAM network:
               The OAM interface must have external connectivity; for now we will use a NAT Network.
               * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at #VirtualBox Nat Networking
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
        * Compute nodes:
           * Unused network 
              * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All (Optional - if infrastructure network will be used then set "Name" to "intnet-infra")
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
           * Data Network
              * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
              * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
     * Serial Ports: Select this to use a serial console.
        * Windows: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path such as "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
        * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial" which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
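
The serial console can also be configured with VBoxManage instead of the GUI; a sketch for a Linux host (the VM name and pipe path are examples, matching those used elsewhere on this page):

```shell
# Expose COM1 (I/O base 0x3F8, IRQ 4) of the VM as a host-side pipe/socket.
VBoxManage modifyvm "controller-0" --uart1 0x3F8 4
# "server" mode creates the pipe; on a Windows host use a path
# like \\.\pipe\controller-0 instead.
VBoxManage modifyvm "controller-0" --uartmode1 server /tmp/controller_serial
```

You can then attach to the console with the socat command shown above.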

Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage)

# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

#OR do them all with a foreach loop in linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: In windows, you need to specify the full path to the VBoxManage executable - for example:
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"
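
The cut/sed pipeline in the loop above simply strips the quoted VM name out of each line of `VBoxManage list vms` output; a standalone sketch with sample output (the uuids are copied from the listing above):

```shell
# Sample lines in the format printed by `VBoxManage list vms`
SAMPLE='"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}'

# Field 1 (space-delimited) is the quoted name; sed drops the quotes.
# Prints the bare VM names, one per line.
echo "$SAMPLE" | cut -f 1 -d " " | sed 's/"//g'
```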

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network, "Nat Networks" tab should be selected
   * Click the plus icon to add a network; this creates a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
 Name                  Protocol  Host IP  Host Port  Guest IP     Guest Port
 controller-ssh        TCP                22         10.10.10.3   22
 controller-http       TCP                80         10.10.10.3   8080
 controller-https      TCP                443        10.10.10.3   8443
 controller-ostk-http  TCP                31000      10.10.10.3   31000
 controller-0-ssh      TCP                23         10.10.10.4   22
 controller-1-ssh      TCP                24         10.10.10.5   22
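
If you prefer the command line, the same NAT network and forwarding rules can be created with `VBoxManage natnetwork`; a sketch (the rule name and addresses mirror the first row of the table above):

```shell
# Create the NAT network with DHCP disabled and a CIDR matching the
# OAM network specified in config_controller.
VBoxManage natnetwork add --netname NatNetwork \
    --network "10.10.10.0/24" --enable --dhcp off

# Add one port-forwarding rule per table row, e.g. controller-ssh.
# Rule format: <name>:<proto>:[<host ip>]:<host port>:[<guest ip>]:<guest port>
VBoxManage natnetwork modify --netname NatNetwork \
    --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
```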

Setup Controller-0

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • Standard Controller Configuration
  • Graphical Console
  • STANDARD Security Boot Profile

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Bootstrap the controller

Refer to these instructions on the AIO DX page Bootstrap the controller

Provisioning controller-0

Configure OAM, Management and Cluster interfaces

Refer to these instructions on the AIO DX page Configure OAM, Management and Cluster interfaces

(Hardware lab only) Set the ntp server

Refer to these instructions on the AIO SX page Set the ntp server

Configure the vswitch type (optional)

Refer to these instructions on the AIO SX page Configure the vswitch type

Prepare the host for running the containerized services

  • On the controller node, apply the node label for controller functions
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled

Unlock controller-0

source /etc/platform/openrc
system host-unlock controller-0

Install remaining hosts

PXE boot hosts

Power on the remaining hosts; they should PXE boot from the controller.

Press F12 for network boot if they do not.

Once they have booted from PXE, the hosts should be visible with 'system host-list':

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

Configure host personalities

source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1

At this point hosts should start installing.

Wait for hosts to become online

Once all nodes have been installed and rebooted, list the hosts on controller-0:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

Prepare the remaining hosts for running the containerized services

  • On the controller node, apply the node labels for the remaining controller and compute functions
source /etc/platform/openrc
system host-label-assign controller-1 openstack-control-plane=enabled
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE  openstack-compute-node=enabled
  system host-label-assign $NODE  openvswitch=enabled
  system host-label-assign $NODE  sriov=enabled
done

Provisioning controller-1

Add interfaces on Controller-1

  • Add the OAM Interface on Controller-1
  • Add the Cluster-Host Interface on Controller-1
source /etc/platform/openrc
system host-if-modify -n oam0 -c platform controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
system interface-network-assign controller-1 oam0 oam
system interface-network-assign controller-1 mgmt0 cluster-host

Unlock Controller-1

source /etc/platform/openrc
system host-unlock controller-1

Wait for node to be available:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
  • Verify that the Ceph cluster shows a quorum with controller-0 and controller-1:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 6, quorum 0,1 controller-0,controller-1
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v3: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Provisioning computes

Add the third Ceph monitor to a compute node (Standard Only)

[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-add compute-0
+--------------+------------------------------------------------------------------+
| Property     | Value                                                            |
+--------------+------------------------------------------------------------------+
| uuid         | f76bc385-190c-4d9a-aa0f-107346a9907b                             |
| ceph_mon_gib | 20                                                               |
| created_at   | 2019-01-17T12:32:33.372098+00:00                                 |
| updated_at   | None                                                             |
| state        | configuring                                                      |
| task         | {u'controller-1': 'configuring', u'controller-0': 'configuring'} |
+--------------+------------------------------------------------------------------+

Wait for compute monitor to be configured:

[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid                                 | ceph_ | hostname     | state      | task |
|                                      | mon_g |              |            |      |
|                                      | ib    |              |            |      |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+--------------------------------------+-------+--------------+------------+------+

Create the volume group for nova

for COMPUTE in compute-0 compute-1; do
  echo "Configuring nova local for: $COMPUTE"
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
  PARTITION_SIZE=10  # partition size in GiB
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
done
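
The grep/awk pipeline in the loop above pulls the uuid row out of the table that 'system host-disk-partition-add' prints. The unquoted `echo ${NOVA_PARTITION}` flattens the multi-line table into a single line, which the anchored grep pattern then matches. A standalone sketch with made-up sample output:

```shell
# Two rows in the format of the `system host-disk-partition-add` output table
SAMPLE='| uuid        | f76bc385-190c-4d9a-aa0f-107346a9907b |
| device_node | /dev/sdb1                            |'

# Unquoted expansion collapses the table to one line; grep -ow extracts the
# "| uuid | <value> |" cell group, and awk field 4 is the value itself.
echo ${SAMPLE} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}'
```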

Configure data interfaces for computes

DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list

# Configure the datanetworks in sysinv before referencing them in the 'system host-if-modify' commands.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
  echo "Configuring interface for: $COMPUTE"
  set -ex
  system host-port-list ${COMPUTE} --nowrap > ${SPL}
  system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
  set +ex
done
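
The awk field numbers in the loop above index the pipe-separated columns of the --nowrap table output: with each '|' counted as its own field, $2 is the uuid, $4 the port name, and $8 the PCI address. A standalone sketch on a single made-up row:

```shell
# One data row in the format of `system host-port-list <host> --nowrap`
ROW='| 41e80183-2497-4e31-bffd-2d8ec5bcb397 | eth1000 | ethernet | 0000:02:04.0 | 0 | 0 | True |'

echo "$ROW" | awk '{print $2}'   # uuid
echo "$ROW" | awk '{print $4}'   # port name
echo "$ROW" | awk '{print $8}'   # PCI address
```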

Set up the cluster-host interfaces on the computes using the management network interface (enp0s8)

for COMPUTE in compute-0 compute-1; do
   system interface-network-assign $COMPUTE mgmt0 cluster-host
done

Unlock compute nodes

for COMPUTE in compute-0 compute-1; do
   system host-unlock $COMPUTE
done
  • After the hosts are available, verify that the Ceph cluster is operational and that all 3 monitors (controller-0, controller-1 and compute-0) have joined the monitor quorum:
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 14, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e11: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v12: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Add Ceph OSDs to controllers

HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done

system host-stor-list $HOST

HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done

system host-stor-list $HOST
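
The inner command substitution in the host-stor-add line above resolves a device node such as /dev/sdb to its sysinv disk uuid by grepping the host-disk-list table; a standalone sketch with a made-up row (note that grepping for /dev/sdb would also match /dev/sdb1, so partitions of the OSD disk should not appear in the list):

```shell
# One data row in the format of `system host-disk-list <host>`
DISKS='| 7eddce9e-b814-4c40-94ce-2cde1fd2d168 | /dev/sdb | 2064 | HDD | 50.0 |'
OSD="/dev/sdb"

# Select the row for the device node; field 2 is the disk uuid.
echo "$DISKS" | grep "$OSD" | awk '{print $2}'
```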

At this point, ceph should report HEALTH_OK and two OSDs configured, one on each controller:

[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_OK
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v73: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
            90044 kB used, 17842 MB / 17929 MB avail
                 384 active+clean
[root@controller-1 wrsroot(keystone_admin)]# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY                                  
-1 0.01700 root storage-tier                                                
-2 0.01700     chassis group-0                                              
-4 0.00850         host controller-0                                        
 1 0.00850             osd.1                   up  1.00000          1.00000 
-3 0.00850         host controller-1                                        
 0 0.00850             osd.0                   up  1.00000          1.00000 

Using sysinv to bring up/down the containerized services

Generate the stx-openstack application tarball

Refer to these instructions on the AIO SX page Generate the stx-openstack application tarball

Stage application for deployment

Refer to these instructions on the AIO SX page Stage application for deployment

Bring Up Services

Refer to these instructions on the AIO SX page Bring Up Services

Verify the cluster endpoints

Refer to these instructions on the AIO SX page Verify the cluster endpoints

Provider/tenant networking setup

Refer to these instructions on the AIO SX page Provider/tenant networking setup

Additional Setup Instructions

Refer to these instructions on the AIO SX page Additional Setup Instructions

Horizon access

Refer to these instructions on the AIO SX page Horizon access

Known Issues and Troubleshooting

None