
Installing StarlingX with containers: Standard configuration

Introduction

These instructions are for a Standard configuration with two controllers and two computes (2+2) in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 21, 2019 or later.

Building the Software

Follow the standard build process in the StarlingX Developer Guide. Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.

Setup the VirtualBox VM

Create a virtual machine for the system with the following options (a VBoxManage command-line sketch of an equivalent setup is shown after this list):

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size:
        * Controller nodes: 12288 MB
        * Compute nodes: 4096 MB
     * Storage: 
        * It is recommended to use VDI and dynamically allocated disks
        * Controller nodes; at least two disks are required:
             * 240GB disk for a root disk 
             * 50GB for an OSD
        * Compute nodes; at least one disk is required:
             * 240GB disk for a root disk 
     * System->Processors:
        * Controller nodes: 4 cpu
        * Compute nodes: 3 cpu
     * Network:
        * Controller nodes:
           * OAM network:
              The OAM interface must have external connectivity; for now we will use a NAT Network.
              * Adapter 1: NAT Network; Name: NatNetwork (follow the instructions at #VirtualBox Nat Networking)
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All
        * Compute nodes:
           * Unused network:
              * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All (Optional: if the infrastructure network will be used, set "Name" to "intnet-infra")
           * Internal management network:
              * Adapter 2: Internal Network, Name: intnet-management; Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All
           * Data network:
              * Adapter 3: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
              * Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
     * Serial Ports: Select this to use a serial console.
        * Windows: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
        * Linux: Select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path of something like "/tmp/controller_serial", which you can later use with socat - for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
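
For reference, roughly equivalent VMs can also be created from the command line. The following is a minimal VBoxManage sketch for controller-0 only, assuming VirtualBox 5.2 or later; the VM name and disk file names are illustrative and the GUI settings above remain the reference.

# Create and register the VM (Other Linux, 64-bit)
VBoxManage createvm --name controller-0 --ostype Linux_64 --register
VBoxManage modifyvm controller-0 --memory 12288 --cpus 4
# Create and attach a dynamically allocated 240GB root disk and a 50GB OSD disk
VBoxManage createmedium disk --filename controller-0-root.vdi --size 245760 --format VDI --variant Standard
VBoxManage createmedium disk --filename controller-0-osd.vdi --size 51200 --format VDI --variant Standard
VBoxManage storagectl controller-0 --name SATA --add sata
VBoxManage storageattach controller-0 --storagectl SATA --port 0 --device 0 --type hdd --medium controller-0-root.vdi
VBoxManage storageattach controller-0 --storagectl SATA --port 1 --device 0 --type hdd --medium controller-0-osd.vdi
# Adapter 1: OAM on the NAT Network; Adapter 2: internal management network
VBoxManage modifyvm controller-0 --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm controller-0 --nic2 intnet --intnet2 intnet-management --nictype2 82540EM --nicpromisc2 allow-all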

Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage)

# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# Or set them all with a loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: In windows, you need to specify the full path to the VBoxManage executable - for example:
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • Standard Controller Configuration
  • Graphical Console
  • STANDARD Security Boot Profile

Initial Configuration

Note: If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until this StoryBoard is implemented: https://storyboard.openstack.org/#!/story/2004710

Add proxy for docker

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf

Add the following lines, with your proxy information, to http-proxy.conf:

[Service]
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=<your_no_proxy_ip>"

Do NOT use wildcards in the NO_PROXY variable.
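
After saving http-proxy.conf, reload systemd and restart Docker so the proxy settings take effect (standard systemd drop-in handling; not StarlingX specific):

sudo systemctl daemon-reload
sudo systemctl restart docker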


Run config_controller

sudo config_controller --kubernetes

Use the default settings during config_controller, except for the following:

External OAM floating address: 10.10.10.3
External OAM address for first controller node: 10.10.10.4
External OAM address for second controller node: 10.10.10.5

The system configuration should look like this:

System Configuration
--------------------
Time Zone: UTC
System mode: duplex
Distributed Cloud System Controller: no

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.3
External OAM 0 address: 10.10.10.4
External OAM 1 address: 10.10.10.5

Provisioning controller-0

  • Set the DNS server (so that the NTP server names can be resolved)
source /etc/platform/openrc
system dns-modify nameservers=8.8.8.8 action=apply
  • Set the NTP servers
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
  • Enable the Ceph backend

system storage-backend-add ceph -s glance,cinder,swift,nova,rbd-provisioner --confirmed
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
system storage-backend-list

system host-unlock controller-0

Install remaining hosts

  • PXE boot hosts

Power on the remaining hosts; they should PXE boot from the controller. Press F12 for network boot if they do not. Once booted from PXE, the hosts should be visible with 'system host-list':

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
  • Configure host personalities
source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1

At this point, the hosts should start installing.

  • Wait for the hosts to become online

Once all nodes have been installed and rebooted, list the hosts on controller-0:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
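
While waiting, an easy way to follow progress is to poll the host list from controller-0 (optional; any interval will do):

watch -n 5 system host-list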

Provisioning controller-1

  • Add the OAM interface on controller-1
system host-if-modify -n oam0 -c platform --networks oam controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
  • Add the Cluster-host interface on controller-1
system host-if-modify controller-1 mgmt0 --networks cluster-host
  • Unlock controller-1
system host-unlock controller-1

Wait for the node to become available:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
  • Verify that the Ceph cluster shows a quorum with controller-0 and controller-1:
ceph -s

Provisioning computes

  • Add the third Ceph monitor to a compute node
system ceph-mon-add compute-0
  • Create the volume group for nova.
system host-lvg-add controller-0 nova-local
  • Create physical volumes from the partitions.
system host-pv-add controller-0 cgts-vg $(system host-disk-partition-list controller-0 | awk '/sda5/{print $2}')
system host-pv-add controller-0 nova-local $(system host-disk-partition-list controller-0 | awk '/sda6/{print $2}')
system host-pv-list controller-0
  • Add the ceph storage backend (note: you may have to wait a few minutes before this command will succeed)
system storage-backend-add ceph --confirmed
  • Wait for 'applying-manifests' task to complete
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
  • Add an OSD (/dev/sdb)
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
  • Set Ceph pool replication
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
  • Configure data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
source /etc/platform/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
  • Unlock the controller
system host-unlock controller-0
  • After the host unlocks, test that the ceph cluster is operational
ceph -s
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
     health HEALTH_OK
     monmap e1: 1 mons at {controller-0=127.168.204.3:6789/0}
            election epoch 4, quorum 0 controller-0
     osdmap e32: 1 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
            39180 kB used, 50112 MB / 50150 MB avail
                1728 active+clean

Prepare the host for running the containerized services

  • On the controller node, assign the node labels for the controller and compute functions:
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
kubectl get nodes --show-labels
  • Add the DNS for the cluster
DNS_EP=$(kubectl describe svc -n kube-system kube-dns | awk /IP:/'{print $2}')
system dns-modify nameservers="$DNS_EP,8.8.8.8"
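# Optional sanity check (not part of the original procedure): cluster service
# addresses should now resolve from the controller, for example:
getent hosts kubernetes.default.svc.cluster.local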
  • Perform the following platform tweak.
sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
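# Optional: confirm the setting took effect; the value read back should be 1
cat /proc/sys/net/bridge/bridge-nf-call-arptables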

Using sysinv to bring up/down the containerized services

  • Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs. The tarballs can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces two application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab or VirtualBox VM.
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
  • Alternatively the stx-openstack application tarballs are generated with each build on the CENGN mirror. These are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
  • Stage application for deployment: Use sysinv to upload the application tarball.
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
  • Bring Up Services: Use sysinv to apply the application. You can monitor the progress either by watching system application-list (watch -n 1.0 system application-list) or by tailing the Armada execution log (sudo docker exec armada_service tailf stx-openstack-apply.log), as shown below.
system application-apply stx-openstack
system application-list
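# For example, in a second shell, use either of the two monitoring commands mentioned above:
watch -n 1.0 system application-list
sudo docker exec armada_service tailf stx-openstack-apply.log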

With the application applied, the containerized OpenStack services are now running. You must now set the Ceph pool replication for the new pools created when the application was applied:

ceph osd pool ls | xargs -i ceph osd pool set {} size 1

Skip to #Verify the cluster endpoints to continue the setup.

The following commands are for reference.


  • Bring Down Services: Use sysinv to uninstall the application.
system application-remove stx-openstack
system application-list
  • Delete Services: Use sysinv to delete the application definition.
system application-delete stx-openstack
system application-list
  • Bring Down Services: Clean up any stragglers (volumes and pods)
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them manually.
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Cleanup all PVCs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful for cleaning up the mariadb grastate data.
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the Ceph pools. Orphaned contents here can take up space.
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done

Verify the cluster endpoints

# Note: Do this from a new shell as a root user (do not source /etc/platform/openrc in that shell).
#       The 'password' should be set to the admin password that was configured during config_controller.

mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list

Nova related items

  • Execute the nova cell DB workaround
# Remove the && from the end of the line if you are executing one command at a time
openstack hypervisor list
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'select host,mapped from compute_nodes'" &&
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'update compute_nodes set mapped=0'" &&
kubectl exec -it -n openstack $(kubectl get pods  -n openstack | grep nova-conductor | awk '{print $1}') -- nova-manage cell_v2 discover_hosts --verbose &&
openstack hypervisor list

Provider/tenant networking setup

  • Create the providernets
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
  • Create host and bind interfaces
#Query sysinv db directly instead of switching credentials
neutron host-create controller-0 --id $(sudo -u postgres psql -qt -d sysinv -c "select uuid from i_host where hostname='controller-0';") --availability up
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet0';") --providernets physnet0 --mtu 1500 controller-0
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet1';") --providernets physnet1 --mtu 1500 controller-0
#Alternatively, can source /etc/platform/openrc and then query using sysinv api.
  • Setup tenant networking (adapt based on lab config)
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}

Horizon access

# After a successful Armada manifest apply, the following services should be present:

kubectl get services -n openstack | grep horizon
horizon                       ClusterIP   10.104.34.245    <none>        80/TCP,443/TCP                 13h
horizon-int                   NodePort    10.101.103.238   <none>        80:31000/TCP                   13h

The platform horizon UI is available at http://<external OAM IP>

 $ curl -L http://10.10.10.3:80 -so - | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];

The containerized horizon UI is available at http://<external OAM IP>:31000

$ curl -L http://10.10.10.3:31000 -so - | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
 

After controller node reboot

Until they are integrated into node startup, a number of the above items need to be re-done after every controller node reboot:

sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
  • If the keystone-api pod is stuck in a CrashLoopBackOff, delete the pod and it will be re-created.
# List the pods to get the name of the keystone-api pod
kubectl -n openstack get pods
# Delete the keystone-api pod
kubectl -n openstack delete pod <name of keystone-api pod>
  • If you are seeing DNS failures for cluster addresses, restart dnsmasq on the controller after puppet has completed its initialization.
sudo sm-restart service dnsmasq

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network; the "NAT Networks" tab should be selected
   * Click the plus icon to add a network, which will add a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
 Name              Protocol  Host IP  Host Port  Guest IP    Guest Port
 controller-ssh    TCP                22         10.10.10.3  22
 controller-http   TCP                80         10.10.10.3  80
 controller-https  TCP                443        10.10.10.3  443
 controller-0-ssh  TCP                23         10.10.10.4  22
 controller-1-ssh  TCP                24         10.10.10.5  22
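
The NAT network and its port-forwarding rules can also be created from the command line. Below is a sketch; the port-forward rule format is name:protocol:[host ip]:hostport:[guest ip]:guestport, and the exact options may vary slightly between VirtualBox versions:

# Create the NAT network to match the OAM subnet (DHCP disabled; enable IPv6 via the GUI if desired)
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off
# Add port-forwarding rules, for example SSH and HTTP to the OAM floating address
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:80"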