StarlingX/Containers/InstallationOnStandard
Contents
- 1 Installing StarlingX with containers: Standard configuration
- 1.1 Introduction
- 1.2 Building the Software
- 1.3 Setup the VirtualBox VM
- 1.4 Install StarlingX
- 1.5 Initial Configuration
- 1.6 Provisioning controller-0
- 1.7 Install remaining hosts
- 1.8 Provisioning controller-1
- 1.9 Provisioning computes
- 1.10 Add Ceph OSDs to controllers
- 1.11 Prepare the host for running the containerized services
- 1.12 Using sysinv to bring up/down the containerized services
- 1.13 Verify the cluster endpoints
- 1.14 Provider/tenant networking setup
- 1.15 Horizon access
- 1.16 After controller node reboot
- 1.17 VirtualBox Nat Networking
Installing StarlingX with containers: Standard configuration
Introduction
These instructions are for a Standard configuration with 2 controllers and 2 computes (2+2), in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.
Note: These instructions are valid for a load built on January 21, 2019 or later.
Building the Software
Follow the standard build process in the StarlingX Developer Guide. Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.
Setup the VirtualBox VM
Create a virtual machine for the system with the following options:
* Type: Linux
* Version: Other Linux (64-bit)
* Memory size:
  * Controller nodes: 16384 MB
  * Compute nodes: 4096 MB
* Storage:
  * Recommended: VDI and dynamically allocated disks
  * Controller nodes; at least two disks are required:
    * 240GB disk for a root disk
    * 50GB for an OSD
  * Compute nodes; at least one disk is required:
    * 240GB disk for a root disk
* System->Processor:
  * Controller nodes: 4 cpu
  * Compute nodes: 3 cpu
* Network:
  * Controller nodes:
    * OAM network: the OAM interface must have external connectivity; for now we will use a NatNetwork
      * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at #VirtualBox Nat Networking
    * Internal management network:
      * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All
  * Compute nodes:
    * Unused network:
      * Adapter 1: Internal Network, Name: intnet-unused; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All (Optional - if an infrastructure network will be used, set "Name" to "intnet-infra")
    * Internal management network:
      * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT Desktop, Advanced: Promiscuous Mode: Allow All
    * Data networks:
      * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
      * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
* Serial Ports: select this to use a serial console.
  * Windows: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
  * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial", which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage):
# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# Or do them all with a for loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: On Windows, you need to specify the full path to the VBoxManage executable - for example:
# "\Program Files\Oracle\VirtualBox\VBoxManage.exe"
Install StarlingX
Boot the VM from the ISO media. Select the following options for installation:
- Standard Controller Configuration
- Graphical Console
- STANDARD Security Boot Profile
Initial Configuration
Note: If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until this StoryBoard is implemented: https://storyboard.openstack.org/#!/story/2004710
Add proxy for docker
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
Add the following lines, with your proxy information, to http-proxy.conf:
[Service]
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=<your_no_proxy_ip>"
Do NOT use wildcards in the NO_PROXY variable.
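systemd does not pick up drop-in changes automatically; after saving http-proxy.conf, reload the unit files and restart docker. A minimal sketch (the verification line simply prints whatever Environment values systemd loaded for the service):

```shell
# Reload unit files so the new drop-in is read, then restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify the proxy environment was applied to the docker service
sudo systemctl show --property=Environment docker
```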
Run config_controller
sudo config_controller --kubernetes
Use the default settings during config_controller, except for the following:
External OAM floating address: 10.10.10.3
External OAM address for first controller node: 10.10.10.4
External OAM address for second controller node: 10.10.10.5
The system configuration should look like this:
System Configuration
--------------------
Time Zone: UTC
System mode: duplex
Distributed Cloud System Controller: no

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.3
External OAM 0 address: 10.10.10.4
External OAM 1 address: 10.10.10.5
Provisioning controller-0
- Set the DNS server (so the NTP server names can be resolved)
source /etc/platform/openrc
system dns-modify nameservers=8.8.8.8 action=apply
- Set the ntp server
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
- Enable the Ceph backend
system storage-backend-add ceph -s glance,cinder,swift,nova,rbd-provisioner --confirmed
- Wait for 'applying-manifests' task to complete
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
system storage-backend-list
- Unlock controller-0
system host-unlock controller-0
Install remaining hosts
- PXE boot hosts
Power on the remaining hosts; they should PXE boot from the controller. Press F12 for network boot if they do not. Once booted from PXE, the hosts should be visible with 'system host-list':
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
- Configure host personalities
source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1
At this point hosts should start installing.
- Wait for hosts to become online
Once all nodes have been installed and rebooted, list the hosts on controller-0:
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
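Instead of re-running host-list by hand, the transition can be watched with a small polling loop. A sketch (it simply greps the availability column of the output shown above):

```shell
# Poll 'system host-list' until no host reports 'offline' availability,
# then print the final host table
while system host-list | grep -q offline; do
    echo "Waiting for all hosts to come online..."
    sleep 10
done
system host-list
```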
Provisioning controller-1
- Add the OAM interface on controller-1
system host-if-modify -n oam0 -c platform --networks oam controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
- Add the Cluster-host interface on controller-1
system host-if-modify controller-1 mgmt0 --networks cluster-host
- Unlock controller-1
system host-unlock controller-1
Wait for node to be available:
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
- Verify that the Ceph cluster shows a quorum with controller-0 and controller-1:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 6, quorum 0,1 controller-0,controller-1
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v3: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating
Provisioning computes
- Add the third Ceph monitor to a compute node
[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-add compute-0
+--------------+------------------------------------------------------------------+
| Property     | Value                                                            |
+--------------+------------------------------------------------------------------+
| uuid         | f76bc385-190c-4d9a-aa0f-107346a9907b                             |
| ceph_mon_gib | 20                                                               |
| created_at   | 2019-01-17T12:32:33.372098+00:00                                 |
| updated_at   | None                                                             |
| state        | configuring                                                      |
| task         | {u'controller-1': 'configuring', u'controller-0': 'configuring'} |
+--------------+------------------------------------------------------------------+
Wait for the compute monitor to be configured:
[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid                                 | ceph_ | hostname     | state      | task |
|                                      | mon_g |              |            |      |
|                                      | ib    |              |            |      |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+--------------------------------------+-------+--------------+------------+------+
- Create the volume group for nova.
for COMPUTE in compute-0 compute-1; do
  echo "Configuring nova local for: $COMPUTE"
  set -ex
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | awk /${ROOT_DISK}/'{print $2}')
  PARTITION_SIZE=10
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
  system host-lvg-modify -b image ${COMPUTE} nova-local
  set +ex
done
- Configure data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
for COMPUTE in compute-0 compute-1; do
  echo "Configuring interface for: $COMPUTE"
  set -ex
  system host-port-list ${COMPUTE} --nowrap > ${SPL}
  system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
  set +ex
done
- Setup the cluster-host interfaces on the computes
system host-if-modify -n clusterhost0 -c platform --networks cluster-host compute-0 $(system host-if-list -a compute-0 | awk '/enp0s3/{print $2}')
system host-if-modify -n clusterhost0 -c platform --networks cluster-host compute-1 $(system host-if-list -a compute-1 | awk '/enp0s3/{print $2}')
- Unlock compute nodes
for COMPUTE in compute-0 compute-1; do
  system host-unlock $COMPUTE
done
- After the hosts are available, verify that the Ceph cluster is operational and that all 3 monitors (controller-0, controller-1 and compute-0) have joined the monitor quorum:
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 14, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e11: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v12: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating
Add Ceph OSDs to controllers
- Lock controller-1
system host-lock controller-1
- Wait for node to be locked.
- Add OSD(s) to controller-1
HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
  system host-stor-add ${HOST} $(echo "$DISKS" | grep ${OSD} | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done
# Verify with: system host-stor-list ${HOST}
- Unlock controller-1
system host-unlock controller-1
- Wait for controller-1 to be available:
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
- Swact controllers
system host-swact controller-0
Wait for the swact to complete and services to stabilize (approximately 30s). You may get disconnected if you are connected over the OAM floating IP; reconnect, or connect to controller-1.
controller-1:/home/wrsroot# source /etc/platform/openrc
[root@controller-1 wrsroot(keystone_admin)]# system host-show controller-1 | grep Controller-Active
| capabilities | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
- Lock controller-0
system host-lock controller-0
- Add OSD(s) to controller-0
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
  system host-stor-add ${HOST} $(echo "$DISKS" | grep ${OSD} | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done
- Unlock controller-0
system host-unlock controller-0
- Wait for controller-0 to be available. At this point ceph should report HEALTH_OK and two OSDs configured, one on each controller:
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_OK
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v73: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
            90044 kB used, 17842 MB / 17929 MB avail
                 384 active+clean

[root@controller-1 wrsroot(keystone_admin)]# ceph osd tree
ID WEIGHT  TYPE NAME                  UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01700 root storage-tier
-2 0.01700     chassis group-0
-4 0.00850         host controller-0
 1 0.00850             osd.1               up  1.00000          1.00000
-3 0.00850         host controller-1
 0 0.00850             osd.0               up  1.00000          1.00000
Prepare the host for running the containerized services
- On the controller node, apply the node labels for the controller and compute functions:
source /etc/platform/openrc
for NODE in controller-0 controller-1; do
  system host-label-assign $NODE openstack-control-plane=enabled
  system host-label-assign $NODE openvswitch=enabled
done
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE openstack-compute-node=enabled
  system host-label-assign $NODE openvswitch=enabled
done
kubectl get nodes --show-labels
Using sysinv to bring up/down the containerized services
- Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs. The tarballs can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces 2 application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab/virtual box.
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
- Alternatively the stx-openstack application tarballs are generated with each build on the CENGN mirror. These are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
- Workaround: Need to create the controller-1 helm_chart directory
ssh -t wrsroot@controller-1 sudo install -d -o www -g root -m 755 /www/pages/helm_charts
- Download helm charts to active controller
- Stage application for deployment: Use sysinv to upload the application tarball.
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
- Bring Up Services: Use sysinv to apply the application. You can monitor the progress either by watching system application-list (watch -n 1.0 system application-list) or by tailing the Armada execution log (sudo docker exec armada_service tailf stx-openstack-apply.log).
system application-apply stx-openstack
system application-list
With the application applied, the containerized OpenStack services are now running. You must now set the Ceph pool replication for the new pools created when the application was applied:
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
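To confirm the replication change took effect, each pool's size can be read back with the same pool-iteration pattern. A sketch (in this small lab configuration, every pool should report the size set above):

```shell
# Read back the replication size of every pool
for p in $(ceph osd pool ls); do
    echo -n "$p: "
    ceph osd pool get $p size
done
```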
Skip to #Verify the cluster endpoints to continue the setup.
The following commands are for reference.
- Bring Down Services: Use sysinv to uninstall the application.
system application-remove stx-openstack
system application-list
- Delete Services: Use sysinv to delete the application definition.
system application-delete stx-openstack
system application-list
- Bring Down Services: Clean up any stragglers (volumes and pods)
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them manually
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Clean up all PVCs and PVs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to clean up the mariadb grastate data
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the ceph pools; orphaned contents here can take up space
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done
Verify the cluster endpoints
# Note: Do this from a new shell as a root user (do not source /etc/platform/openrc in that shell).
# The 'password' should be set to the admin password that was configured during config_controller.
mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list
Provider/tenant networking setup
- Create the providernets
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
- Create host and bind interfaces
# Query the sysinv db directly instead of switching credentials
neutron host-create controller-0 --id $(sudo -u postgres psql -qt -d sysinv -c "select uuid from i_host where hostname='controller-0';") --availability up
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet0';") --providernets physnet0 --mtu 1500 controller-0
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet1';") --providernets physnet1 --mtu 1500 controller-0
# Alternatively, source /etc/platform/openrc and then query using the sysinv api.
- Setup tenant networking (adapt based on lab config)
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'

neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599

neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}

PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`

neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24

neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
Horizon access
# After a successful armada manifest apply the following should be seen
kubectl get services -n openstack | grep horizon
horizon       ClusterIP  10.104.34.245   <none>  80/TCP,443/TCP  13h
horizon-int   NodePort   10.101.103.238  <none>  80:31000/TCP    13h

The platform horizon UI is available at http://<external OAM IP>

$ curl -L http://10.10.10.3:80 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];

The containerized horizon UI is available at http://<external OAM IP>:31000

$ curl -L http://10.10.10.3:31000 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
After controller node reboot
- If the keystone-api pod is stuck in a CrashLoopBackOff, delete the pod and it will be re-created.
# List the pods to get the name of the keystone-api pod
kubectl -n openstack get pods

# Delete the keystone-api pod
kubectl -n openstack delete pod <name of keystone-api pod>
- If you are seeing DNS failures for cluster addresses, restart dnsmasq on the controller after puppet has completed its initialization.
sudo sm-restart service dnsmasq
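A quick way to confirm DNS is healthy again is to resolve one of the cluster-local service names from the controller. A sketch (the keystone name matches the auth_url used in clouds.yaml earlier in this guide):

```shell
# Resolve a cluster-local service name through the local resolver;
# NXDOMAIN or a timeout here suggests dnsmasq is still unhealthy
nslookup keystone.openstack.svc.cluster.local
```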
VirtualBox Nat Networking
First add a NAT Network in VirtualBox:
* Select the File -> Preferences menu
* Choose Network; the "Nat Networks" tab should be selected
* Click the plus icon to add a network, which will add a network named NatNetwork
* Edit the NatNetwork (gear or screwdriver icon)
  * Network CIDR: 10.10.10.0/24 (to match the OAM network specified in config_controller)
  * Disable "Supports DHCP"
  * Enable "Supports IPv6"
  * Select "Port Forwarding" and add any rules you desire. Some examples:
| Name             | Protocol | Host IP | Host Port | Guest IP   | Guest Port |
|------------------|----------|---------|-----------|------------|------------|
| controller-ssh   | TCP      |         | 22        | 10.10.10.3 | 22         |
| controller-http  | TCP      |         | 80        | 10.10.10.3 | 80         |
| controller-https | TCP      |         | 443       | 10.10.10.3 | 443        |
| controller-0-ssh | TCP      |         | 23        | 10.10.10.4 | 22         |
| controller-1-ssh | TCP      |         | 24        | 10.10.10.5 | 22         |
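The same forwarding rules can be added from the command line instead of the GUI. A sketch using VBoxManage's IPv4 rule format, <name>:<proto>:[<host ip>]:<host port>:[<guest ip>]:<guest port> (the network name must match the NatNetwork created above):

```shell
# Add SSH forwarding rules to the NAT network non-interactively
VBoxManage natnetwork modify --netname NatNetwork \
    --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork \
    --port-forward-4 "controller-0-ssh:tcp:[]:23:[10.10.10.4]:22"
```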