
Installing StarlingX with containers: One node configuration


  • January 18, 2019: Removed Nova Cell DB Workaround - no longer required on loads built January 15th or later.
  • January 25, 2019: Configure datanetworks in sysinv prior to referencing them in the 'system host-if-modify/host-if-add' commands. Needed on loads built January 25, 2019 or later.
  • January 29, 2019: Removed obsolete neutron host/interface configuration and updated DNS instructions.


These instructions are for an All-in-one simplex (AIO-SX) system in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 25, 2019 or later.

Building the Software

Follow the standard build process in the StarlingX Developer Guide.

Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.
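
For example, to fetch a recent prebuilt ISO (the exact mirror path below is an assumption; browse the mirror for the build you want):

# Mirror path is an example only -- browse mirror.starlingx.cengn.ca for the desired build
wget http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/bootimage.iso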

Setup the VirtualBox VM

Create a virtual machine for the system with the following options:

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size: 16384 MB
     * Storage: 
        * VDI with dynamically allocated storage is recommended
        * At least two disks are required
          * 240GB disk for the root disk
          * 50GB disk for an OSD
     * System->Processors: 
        * 4 CPUs
     * Network:
        * OAM network:
           The OAM interface must have external connectivity; for now we will use a NAT Network.
           * Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at #VirtualBox Nat Networking.
        * Data Network
           * Adapter 2: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
           * Adapter 3: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
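
If you prefer to script the VM setup, the same settings can be applied with VBoxManage (a sketch, assuming a VM named controller-0 already exists):

# Sketch only -- assumes a VM named controller-0 already exists
VBoxManage modifyvm controller-0 --memory 16384 --cpus 4
VBoxManage modifyvm controller-0 --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm controller-0 --nic2 intnet --intnet2 intnet-data1 --nictype2 virtio --nicpromisc2 allow-all
VBoxManage modifyvm controller-0 --nic3 intnet --intnet3 intnet-data2 --nictype3 virtio --nicpromisc3 allow-all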

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network; the "NAT Networks" tab should be selected
   * Click the plus icon to add a network; this adds a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
Name              Protocol  Host IP  Host Port  Guest IP  Guest Port
controller-ssh    TCP                22                   22
controller-http   TCP                80                   8080
controller-https  TCP                443                  8443
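
The NAT network and port-forwarding rules can also be created from the command line (a sketch; the network CIDR and guest IP below are placeholders):

# CIDR and guest IP are placeholders -- match the OAM network given to config_controller
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"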

Setup Controller-0

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • All-in-one Controller
  • Graphical Console
  • Standard Security Profile

Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Run config_controller

sudo config_controller --kubernetes

Use default settings during config_controller, except for the following:

  • System mode: simplex
  • External OAM address:
  • If you do not have direct access to the Google DNS nameservers, you will need to configure alternate nameservers when prompted. Press Enter to choose the default, or type a new entry.
  • If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, you will need to add the proxy information when prompted. (Storyboard 2004710 was merged on Jan 30, 2019.)

The system configuration should look like this:

System Configuration
Time Zone: UTC
System mode: simplex

PXEBoot Network Configuration
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
Management interface name: lo
Management interface: lo
Management interface MTU: 1500
Management subnet:
Controller floating address:
Controller 0 address:
Controller 1 address:
NFS Management Address 1:
NFS Management Address 2:
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet:

Infrastructure Network Configuration
Infrastructure interface not configured

Kubernetes Cluster Network Configuration
Cluster pod network subnet:
Cluster service network subnet:
Cluster host interface name: lo
Cluster host interface: lo
Cluster host interface MTU: 1500
Cluster host subnet:

External OAM Network Configuration
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet:
External OAM gateway address:
External OAM address:

DNS Configuration
Nameserver 1:
Nameserver 2:

Provisioning the platform

Set the NTP server

source /etc/platform/openrc
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
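
To confirm the NTP configuration (optional):

system ntp-show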

Configure data interfaces

export COMPUTE=controller-0
# Example values -- adjust to your environment. In a VirtualBox VM the data
# interfaces typically appear as eth1000 and eth1001.
export DATA0IF=eth1000
export DATA1IF=eth1001
export PHYSNET0='physnet0'
export PHYSNET1='physnet1'
SPL=/tmp/tmp_sysinv_port_list
SPIL=/tmp/tmp_sysinv_if_list
source /etc/platform/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
# Interface UUIDs, looked up from the port names
DATA0IFUUID=$(cat $SPIL | grep ${DATA0PORTNAME} | awk '{print $2}')
DATA1IFUUID=$(cat $SPIL | grep ${DATA1PORTNAME} | awk '{print $2}')

# Configure the datanetworks in sysinv before referencing them in the 'system host-if-modify' command
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

# The host-if-modify '-p' flag is deprecated in favor of the '-d' flag for assigning datanetworks.
system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
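
To verify the datanetworks and interface assignments (optional):

system datanetwork-list
system host-if-list ${COMPUTE}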

Prepare the host for running the containerized services

  • On the controller node, assign the node labels for the controller and compute functions:
source /etc/platform/openrc

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 sriov=enabled
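
To verify the assigned labels (optional):

system host-label-list controller-0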

Setup partitions for Controller-0

  • Create partitions on the root disk and wait for them to be ready
    • 34G for nova-local (mandatory)
    • 6G for the cgts-vg (optional; this extends the existing cgts-vg volume group, which should have sufficient space by default)
export COMPUTE=controller-0
# Partition sizes in GiB, matching the list above
export NOVA_SIZE=34
export PARTITION_SIZE=6
source /etc/platform/openrc

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2

echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

echo ">>>> Extending cgts-vg"
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')

echo ">>> Wait for partition $CGTS_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $CGTS_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 2

echo ">>> Waiting for cgts-vg to be ready"
while true; do system host-pv-list ${COMPUTE} | grep cgts-vg | grep adding; if [ $? -ne 0 ]; then break; fi; sleep 1; done

system host-pv-list ${COMPUTE} 

Configure Ceph for Controller-0

source /etc/platform/openrc
echo ">>> Enable primary Ceph backend"
system storage-backend-add ceph --confirmed

echo ">>> Wait for primary ceph backend to be configured"
echo ">>> This step really takes a long time"
while [ "$(system storage-backend-list | awk '/ceph-store/{print $8}')" != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done

echo ">>> Ceph health"
ceph -s

echo ">>> Add OSDs to primary tier"

system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0

echo ">>> ceph osd tree"
ceph osd tree

Set Ceph pool replication (AIO-SX only)

ceph osd pool ls | xargs -i ceph osd pool set {} size 1

Unlock the controller

system host-unlock controller-0
  • After the host unlocks, test that the ceph cluster is operational
ceph -s
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
     health HEALTH_OK
     monmap e1: 1 mons at {controller-0=}
            election epoch 4, quorum 0 controller-0
     osdmap e32: 1 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
            39180 kB used, 50112 MB / 50150 MB avail
                1728 active+clean

Using sysinv to bring up/down the containerized services

Generate the stx-openstack application tarball

There are currently 2 application tarballs, one with tests enabled and one without.

The stx-openstack application tarballs are generated with each build on the CENGN mirror.

Alternatively, in a development environment, run the following command to construct the application tarballs.
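
In a typical development workspace the script lives under the build-tools directory (the path below is an assumption; adjust it to your workspace layout):

# Path assumes the standard StarlingX build layout
$MY_REPO/build-tools/build-helm-charts.sh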


The resulting tarballs can be found under $MY_WORKSPACE/std/build-helm/stx.

If the build-helm-charts.sh command is unable to find the charts, run "build-pkgs" to build the chart rpms and re-run the build-helm-charts.sh command.

Stage application for deployment

Transfer the helm-charts-manifest-no-tests.tgz application tarball onto your active controller.

Use sysinv to upload the application tarball.

source /etc/platform/openrc
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list

Bring Up Services

Use sysinv to apply the application.

system application-apply stx-openstack

You can monitor the progress by watching 'system application-list':

watch -n 5 system application-list

or by tailing the Armada execution log:

sudo docker exec armada_service tailf stx-openstack-apply.log
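
You can also watch the pods come up directly with kubectl:

watch -n 5 kubectl get pods -n openstack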

Update Ceph pool replication (AIO-SX only)

With the application applied, the containerized OpenStack services are now running.

In an AIO-SX environment, you must now set the Ceph pool replication for the new pools created when the application was applied:

ceph osd pool ls | xargs -i ceph osd pool set {} size 1
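
To confirm the pools are now set to a replication size of 1 (optional):

ceph osd pool ls detail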

Verify the cluster endpoints

Note: Do this from a new shell as a root user (do not source /etc/platform/openrc in that shell).

Set 'password' to the admin password that was configured during config_controller.

mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list
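
With the cloud definition in place, the containerized services can be queried the same way, for example:

openstack compute service list
openstack network agent list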

The remaining networking steps are done using this root user.

Provider/tenant networking setup

  • Create the providernets
# Example datanetwork names -- must match the datanetworks created earlier in sysinv
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
  • Set up tenant networking (adapt based on your lab config)
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
# Example network, subnet, and router names -- adjust to your lab configuration
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET}
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET}
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET}
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway --disable-dhcp ${EXTERNALNET}
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
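
To verify the resulting networks and routers (optional):

neutron net-list
neutron router-list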

Additional Setup Instructions

The following commands are for reference.

  • Bring Down Services: Use sysinv to uninstall the application.
system application-remove stx-openstack
system application-list
  • Delete Services: Use sysinv to delete the application definition.
system application-delete stx-openstack
system application-list
  • Bring Down Services: Clean up any stragglers (volumes and pods)
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them manually.
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Cleanup all PVCs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to clean up the mariadb grastate data.
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the ceph pools. I have seen orphaned contents here that take up space.
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done

Horizon access

# After a successful Armada manifest apply, the following services should be present

kubectl get services -n openstack | grep horizon
horizon                       ClusterIP   <none>        80/TCP,443/TCP                 13h
horizon-int                   NodePort    <none>        80:31000/TCP                   13h

The platform horizon UI is available at http://<external OAM IP>

 $ curl -L -so - http://<external OAM IP> | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];

The containerized horizon UI is available at http://<external OAM IP>:31000

$ curl -L -so - http://<external OAM IP>:31000 | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];

Known Issues and Troubleshooting