Installing StarlingX with containers: Standard configuration

History

  • January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.
  • January 29, 2019: Configure datanetworks in sysinv, prior to referencing it in the 'system host-if-modify/host-if-add command'.

Introduction

These instructions are for a Standard configuration with 2 controllers and 2 computes (2+2), in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 25, 2019 or later.

Building the Software

Refer to these instructions on the AIO SX page Building the Software
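
Once a build environment has been prepared per those instructions, the build itself typically comes down to the following (a sketch only; see the referenced page for the authoritative steps and options):

build-pkgs
build-iso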

Setup the VirtualBox VM

Create a virtual machine for the system with the following options:

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size:
        * Controller nodes: 16384 MB
        * Compute nodes: 4096 MB
     * Storage: 
           * Recommended: use VDI and dynamically allocated disks
        * Controller nodes; at least two disks are required:
             * 240GB disk for a root disk 
             * 50GB for an OSD
        * Compute nodes; at least one disk is required:
             * 240GB disk for a root disk 
        * System->Processors: 
           * Controller nodes: 4 cpu
           * Compute nodes: 3 cpu
        * Network:
           * Controller nodes:
              * OAM network:
                 The OAM interface must have external connectivity; for now we will use a NatNetwork.
                 * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at #VirtualBox Nat Networking
              * Internal management network:
                 * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
           * Compute nodes:
              * Unused network:
                 * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All (Optional - if the infrastructure network will be used, set "Name" to "intnet-infra")
              * Internal management network:
                 * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All;
              * Data Network
                 * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
                 * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
        * Serial Ports: Select this to use a serial console.
           * Windows: Select "Enable Serial Port", port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "\\.\pipe\controller-0" or "\\.\pipe\compute-1" which you can later use in PuTTY to connect to the console. Choose speed of 9600 or 38400.
           * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial" which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0

Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage)

# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# Or do them all with a for loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: On Windows, you need to specify the full path to the VBoxManage executable - for example:
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network, "Nat Networks" tab should be selected
   * Click on plus icon to add a network, which will add a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
 Name                  Protocol  Host IP  Host Port  Guest IP    Guest Port
 controller-ssh        TCP                22         10.10.10.3  22
 controller-http       TCP                80         10.10.10.3  80
 controller-https      TCP                443        10.10.10.3  443
 controller-ostk-http  TCP                31000      10.10.10.3  31000
 controller-0-ssh      TCP                23         10.10.10.4  22
 controller-1-ssh      TCP                24         10.10.10.5  22

(The Host IP column can be left empty to listen on all host addresses.)
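
If you prefer to script this step, an equivalent NAT network and port-forwarding rules can be created from the host with VBoxManage (a sketch; the rule names and addresses mirror the example table above, and option spellings may vary slightly between VirtualBox versions):

VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:80"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:443"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ostk-http:tcp:[]:31000:[10.10.10.3]:31000"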


Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • Standard Controller Configuration
  • Graphical Console
  • STANDARD Security Boot Profile

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Initial Configuration

Note: If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until this StoryBoard is implemented: https://storyboard.openstack.org/#!/story/2004710

Add proxy for docker

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf

Add the following lines, with your proxy information, to http-proxy.conf:

[Service]
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=localhost,127.0.0.1,192.168.204.2,<your_no_proxy_ip>"

Do NOT use wildcards in the NO_PROXY variable.
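
If the docker service is already running at this point, the drop-in is typically only picked up after a daemon reload and a docker restart (a sketch; whether docker is already active at this stage depends on your load):

sudo systemctl daemon-reload
sudo systemctl restart docker
# confirm the proxy environment was applied
sudo systemctl show --property=Environment docker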

Run config_controller

sudo config_controller --kubernetes

Use default settings during config_controller, except for the following:

  • External OAM floating address: 10.10.10.3
  • External OAM address for first controller node: 10.10.10.4
  • External OAM address for second controller node: 10.10.10.5
  • If you do not have direct access to the Google DNS nameservers (8.8.8.8, 8.8.4.4), you will need to configure your own nameserver(s) when prompted. Press Enter to accept the default, or type a new entry.


The system configuration should look like this:

System Configuration
--------------------
Time Zone: UTC
System mode: duplex
Distributed Cloud System Controller: no

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.3
External OAM 0 address: 10.10.10.4
External OAM 1 address: 10.10.10.5

DNS Configuration
-----------------
Nameserver 1: 8.8.8.8

Provisioning controller-0

Set the ntp server

Refer to these instructions on the AIO SX page Set the ntp server
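
In short, from that page (the pool servers below are examples; substitute your own NTP servers if needed):

source /etc/platform/openrc
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org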

  • Enable the Ceph backend
system storage-backend-add ceph --confirmed
  • Wait for the 'applying-manifests' task to complete (the ceph backend state will become 'configured')
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
system storage-backend-list
  • Unlock controller-0
system host-unlock controller-0

Install remaining hosts

  • PXE boot hosts

Power on the remaining hosts; they should PXE boot from the controller (for VirtualBox, a VBoxManage power-on sketch is shown below). Press F12 for network boot if they do not. Once booted from PXE, the hosts should be visible with 'system host-list'.
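
A sketch for powering on the remaining VMs from the VirtualBox host (the VM names are assumed to match the earlier 'VBoxManage list vms' output):

for VM in controller-1 compute-0 compute-1; do
   VBoxManage startvm $VM --type headless   # or start the VMs from the VirtualBox GUI
done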

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
  • Configure host personalities
source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1

At this point hosts should start installing.

  • Wait for hosts to become online

Once all nodes have been installed and rebooted, list the hosts on controller-0:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

Provisioning controller-1

  • Add the OAM interface on controller-1
system host-if-modify -n oam0 -c platform --networks oam controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
  • Add the Cluster-host interface on controller-1
system host-if-modify controller-1 mgmt0 --networks cluster-host
  • Unlock controller-1
system host-unlock controller-1

Wait for node to be available:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
  • Verify that the Ceph cluster shows a quorum with controller-0 and controller-1:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 6, quorum 0,1 controller-0,controller-1
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v3: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Provisioning computes

  • Add the third Ceph monitor to a compute node
[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-add compute-0
+--------------+------------------------------------------------------------------+
| Property     | Value                                                            |
+--------------+------------------------------------------------------------------+
| uuid         | f76bc385-190c-4d9a-aa0f-107346a9907b                             |
| ceph_mon_gib | 20                                                               |
| created_at   | 2019-01-17T12:32:33.372098+00:00                                 |
| updated_at   | None                                                             |
| state        | configuring                                                      |
| task         | {u'controller-1': 'configuring', u'controller-0': 'configuring'} |
+--------------+------------------------------------------------------------------+

Wait for compute monitor to be configured:

[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid                                 | ceph_ | hostname     | state      | task |
|                                      | mon_g |              |            |      |
|                                      | ib    |              |            |      |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+--------------------------------------+-------+--------------+------------+------+
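To block until the monitor is ready, a polling loop similar to the storage backend one can be used (a sketch; the awk column position assumes the table layout shown above):

while [ "$(system ceph-mon-list | awk '/compute-0/{print $8}')" != "configured" ]; do echo 'Waiting for the ceph monitor on compute-0'; sleep 5; done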
  • Create the volume group for nova.
for COMPUTE in compute-0 compute-1; do
  echo "Configuring nova local for: $COMPUTE"
  set -ex
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | awk /${ROOT_DISK}/'{print $2}')
  PARTITION_SIZE=10
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
  system host-lvg-modify -b image ${COMPUTE} nova-local
  set +ex
done
  • Configure data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list

# configure the datanetworks in sysinv, prior to referencing it in the 'system host-if-modify command'.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
  echo "Configuring interface for: $COMPUTE"
  set -ex
  system host-port-list ${COMPUTE} --nowrap > ${SPL}
  system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
  set +ex
done
  • Set up the cluster-host interfaces on the computes on the management network interface (enp0s8)
for COMPUTE in compute-0 compute-1; do
   system host-if-modify -n clusterhst -c platform --networks cluster-host $COMPUTE $(system host-if-list -a $COMPUTE | awk '/enp0s8/{print $2}')
done
  • Unlock compute nodes
for COMPUTE in compute-0 compute-1; do
   system host-unlock $COMPUTE
done
  • After the hosts are available, verify that the Ceph cluster is operational and that all 3 monitors (controller-0, controller-1 and compute-0) have joined the monitor quorum:
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 14, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e11: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v12: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating

Add Ceph OSDs to controllers

  • Lock controller-1
system host-lock controller-1
  • Wait for node to be locked.
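A polling loop can be used to wait for the locked state (a sketch; the awk field position assumes the standard 'system host-show' table layout):
while [ "$(system host-show controller-1 | awk '/administrative/{print $4}')" != "locked" ]; do echo 'Waiting for controller-1 to lock'; sleep 5; done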
  • Add OSD(s) to controller-1

HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done

  • Unlock controller-1
system host-unlock controller-1
  • Wait for controller-1 to be available
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
  • Swact controllers
system host-swact controller-0

Wait for the swact to complete and services to stabilize (approximately 30s). You may get disconnected if you are connected over the OAM floating IP. Reconnect, or connect to controller-1.

controller-1:/home/wrsroot# source /etc/platform/openrc
[root@controller-1 wrsroot(keystone_admin)]# system host-show controller-1 | grep Controller-Active
| capabilities        | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
  • Lock controller-0
system host-lock controller-0
  • Wait for controller-0 to be locked
  • Add OSD(s) to controller-0
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep /dev/sdb | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done
  • Unlock controller-0
system host-unlock controller-0
  • Wait for controller-0 to be available. At this point ceph should report HEALTH_OK and two OSDs configured, one for each controller:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_OK
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v73: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
            90044 kB used, 17842 MB / 17929 MB avail
                 384 active+clean
[root@controller-1 wrsroot(keystone_admin)]# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY                                  
-1 0.01700 root storage-tier                                                
-2 0.01700     chassis group-0                                              
-4 0.00850         host controller-0                                        
 1 0.00850             osd.1                   up  1.00000          1.00000 
-3 0.00850         host controller-1                                        
 0 0.00850             osd.0                   up  1.00000          1.00000 

Prepare the hosts for running the containerized services

  • On the active controller, apply the node labels for the controller and compute functions:
source /etc/platform/openrc
for NODE in controller-0 controller-1; do
  system host-label-assign $NODE openstack-control-plane=enabled
done
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE  openstack-compute-node=enabled
  system host-label-assign $NODE  openvswitch=enabled
done
kubectl get nodes --show-labels

Using sysinv to bring up/down the containerized services

  • Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs. The tarballs can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces two application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab or VirtualBox machine.
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
  • Alternatively the stx-openstack application tarballs are generated with each build on the CENGN mirror. These are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
  • Download helm charts to active controller
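For example, copy the tarball from the build machine to the active controller over the OAM floating IP (a sketch; the file name matches the upload command below and 10.10.10.3 is the OAM floating address configured earlier):
scp helm-charts-manifest-no-tests.tgz wrsroot@10.10.10.3:~/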
  • Stage application for deployment: Use sysinv to upload the application tarball.
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
  • Bring Up Services: Use sysinv to apply the application. You can monitor the progress either by watching 'system application-list' or by tailing the Armada execution log, as shown below.
system application-apply stx-openstack
system application-list
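The monitoring commands referenced above, for convenience:
watch -n 1.0 system application-list
# or tail the Armada execution log
sudo docker exec armada_service tailf stx-openstack-apply.log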

Skip to #Verify the cluster endpoints to continue the setup.

The following commands are for reference.


  • Bring Down Services: Use sysinv to uninstall the application.
system application-remove stx-openstack
system application-list
  • Delete Services: Use sysinv to delete the application definition.
system application-delete stx-openstack
system application-list
  • Bring Down Services: Clean up any stragglers (volumes and pods)
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them manually.
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Cleanup all PVCs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to cleanup the mariadb grastate data.
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the ceph pools. I have seen orphaned contents here that take up space.
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done

Verify the cluster endpoints

Refer to these instructions on the AIO SX page here
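
As a quick sanity check once CLI access has been set up per that page (a sketch; the OS_CLOUD value is an assumption about the clouds.yaml entry described there):

export OS_CLOUD=openstack_helm   # assumption: cloud name from the referenced instructions
openstack endpoint list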

Provider/tenant networking setup

  • Create the providernets
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
  • Set up tenant networking (adapt based on your lab config)
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}

Horizon access

Refer to these instructions on the AIO SX page here
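
As a quick reachability check from the VirtualBox host, assuming the NAT port-forwarding rules from the example table above (which forwarded port serves the Horizon instance you want depends on the platform vs. containerized setup described on that page):

curl -I http://localhost:80      # controller-http rule
curl -I http://localhost:31000   # controller-ostk-http rule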