
Installing StarlingX with containers: One node configuration

History

January 18, 2019: Removed Nova Cell DB Workaround - no longer required on loads built January 15th or later.

January 25, 2019: Configure datanetworks in sysinv, prior to referencing it in the 'system host-if-modify/host-if-add command'. Needed on loads Jan 25, 2019 or later.

Introduction

These instructions are for an All-in-one simplex system in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on December 22, 2018 or later.

Building the Software

Follow the standard build process in the StarlingX Developer Guide. Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.

Setup the VirtualBox VM

Create a virtual machine for the system with the following options (an equivalent VBoxManage command-line sketch follows the list):

     * Type: Linux
     * Version: Other Linux (64-bit)
     * Memory size: 16384 MB
     * Storage: 
        * VDI and dynamically allocated disks are recommended
        * At least two disks are required:
          * 240GB disk for the root disk 
          * 50GB disk for an OSD
     * System->Processors: 
        * 4 cpu
     * Network:
        * OAM network:
           The OAM interface must have external connectivity; for now we will use a NAT Network.
           * Adapter 1: NAT Network; Name: NatNetwork (follow the instructions at #VirtualBox Nat Networking)
        * Data Network
           * Adapter 2: Internal Network, Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
           * Adapter 3: Internal Network, Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
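If you prefer to script the VM creation instead of using the GUI, the following VBoxManage sketch mirrors the settings above. The VM name 'starlingx-aio' and the disk file names are placeholder assumptions, not part of the original procedure.

# Hypothetical sketch: create an equivalent VM from the command line
VM=starlingx-aio
VBoxManage createvm --name $VM --ostype Linux_64 --register
VBoxManage modifyvm $VM --memory 16384 --cpus 4
# 240GB root disk and 50GB OSD disk, dynamically allocated VDI
VBoxManage createmedium disk --filename $VM-root.vdi --size 245760 --variant Standard
VBoxManage createmedium disk --filename $VM-osd.vdi --size 51200 --variant Standard
VBoxManage storagectl $VM --name SATA --add sata
VBoxManage storageattach $VM --storagectl SATA --port 0 --device 0 --type hdd --medium $VM-root.vdi
VBoxManage storageattach $VM --storagectl SATA --port 1 --device 0 --type hdd --medium $VM-osd.vdi
# Adapter 1: OAM on the NAT Network; adapters 2 and 3: data networks (virtio, promiscuous allow-all)
VBoxManage modifyvm $VM --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm $VM --nic2 intnet --intnet2 intnet-data1 --nictype2 virtio --nicpromisc2 allow-all
VBoxManage modifyvm $VM --nic3 intnet --intnet3 intnet-data2 --nictype3 virtio --nicpromisc3 allow-all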

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • All-in-one Controller
  • Graphical Console
  • Standard Security Profile

Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Docker Proxy Configuration

Note: If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until the following StoryBoard story is implemented: https://storyboard.openstack.org/#!/story/2004710

Add proxy for docker:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf

Add following lines with your proxy information to http-proxy.conf:

[Service]
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=<your_no_proxy_ip>"

Do NOT use wildcard in NO_PROXY variable.
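As an illustration only, a filled-in http-proxy.conf and the follow-up restart might look like this. The proxy address and NO_PROXY list are placeholder assumptions, and the systemd reload/restart step is standard practice for docker under systemd rather than part of the original note.

# Example contents of /etc/systemd/system/docker.service.d/http-proxy.conf (placeholder values):
#   [Service]
#   Environment="HTTP_PROXY=http://myproxy.example.com:8080" "HTTPS_PROXY=http://myproxy.example.com:8080" "NO_PROXY=localhost,127.0.0.1,10.10.10.3"

# Reload systemd and restart docker so the proxy settings take effect
sudo systemctl daemon-reload
sudo systemctl restart docker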

Initial Configuration

Run config_controller

sudo config_controller --kubernetes

Use default settings during config_controller, except for the following

System mode: simplex
External OAM address: 10.10.10.3

The system configuration should look like this:

System Configuration
--------------------
Time Zone: UTC
System mode: simplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: lo
Management interface: lo
Management interface MTU: 1500
Management interface link capacity Mbps: 10000
Management subnet: 127.168.204.0/24
Controller floating address: 127.168.204.2
Controller 0 address: 127.168.204.3
Controller 1 address: 127.168.204.4
NFS Management Address 1: 127.168.204.5
NFS Management Address 2: 127.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM address: 10.10.10.3

Provisioning the platform

  • Set DNS server (so we can set the ntp servers)
source /etc/platform/openrc
system dns-modify nameservers=8.8.8.8 action=apply
  • Set the NTP servers
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
  • Create partitions on the root disk (6G for cgts-vg, 34G for nova) and wait for them to be ready. The first partition extends the existing cgts-vg volume group; this step is optional, as there should be sufficient space by default. The second partition, for nova-local, is mandatory.
system host-disk-list controller-0
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 6
while true; do system host-disk-partition-list controller-0 --nowrap | grep 6\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 34
while true; do system host-disk-partition-list controller-0 --nowrap | grep 34\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
system host-disk-partition-list controller-0
  • Create the volume group for nova.
system host-lvg-add controller-0 nova-local
  • Create physical volumes from the partitions.
system host-pv-add controller-0 cgts-vg $(system host-disk-partition-list controller-0 | awk '/sda5/{print $2}')
system host-pv-add controller-0 nova-local $(system host-disk-partition-list controller-0 | awk '/sda6/{print $2}')
system host-pv-list controller-0
  • Add the ceph storage backend (note: you may have to wait a few minutes before this command will succeed)
system storage-backend-add ceph --confirmed
  • Wait for 'applying-manifests' task to complete
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
  • Add an OSD (/dev/sdb)
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
  • Set Ceph pool replication
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
  • Configure data interfaces
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
source /etc/platform/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

# Configure the datanetworks in sysinv prior to referencing them in the 'system host-if-modify' command
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

# The host-if-modify '-p' flag is deprecated in favor of the '-d' flag for assignment of datanetworks.
system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
  • Unlock the controller
system host-unlock controller-0
  • After the host unlocks and comes back online (a polling sketch follows the output below), test that the ceph cluster is operational
ceph -s
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
     health HEALTH_OK
     monmap e1: 1 mons at {controller-0=127.168.204.3:6789/0}
            election epoch 4, quorum 0 controller-0
     osdmap e32: 1 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
            39180 kB used, 50112 MB / 50150 MB avail
                1728 active+clean
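The unlock triggers a reboot, so it can help to wait until controller-0 reports available again before checking ceph. A minimal polling sketch; the 'available' status string in the system host-list output is an assumption.

source /etc/platform/openrc
# Wait for controller-0 to report 'available' after the unlock reboot
until system host-list | grep controller-0 | grep -q available; do
  echo 'Waiting for controller-0 to become available'
  sleep 10
done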

Prepare the host for running the containerized services

  • On the controller node, apply the node labels for the controller and compute functions
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
kubectl get nodes --show-labels
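As an optional cross-check, kubectl label selectors can confirm the labels were applied. These are generic kubectl commands added here for convenience, not steps from the original guide.

# List only the nodes that carry the expected labels
kubectl get nodes -l openstack-control-plane=enabled
kubectl get nodes -l openstack-compute-node=enabled,openvswitch=enabled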

Using sysinv to bring up/down the containerized services

  • Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs. The tarballs can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces two application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab or VirtualBox VM.
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
  • Alternatively, the stx-openstack application tarballs are generated with each build on the CENGN mirror. These are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
  • Stage application for deployment: Use sysinv to upload the application tarball.
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
  • Bring Up Services: Use sysinv to apply the application. You can monitor the progress either by watching system application-list (watch -n 1.0 system application-list) or by tailing the Armada execution log (sudo docker exec armada_service tailf stx-openstack-apply.log); a polling sketch is shown below.
system application-apply stx-openstack
system application-list
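For unattended waiting, a minimal polling sketch; the 'applied' status string and the table layout of system application-list are assumptions based on the commands above.

# Poll until the stx-openstack application reports 'applied'
until system application-list | grep stx-openstack | grep -q applied; do
  echo 'Waiting for stx-openstack apply to complete'
  sleep 30
done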

With the application applied, the containerized OpenStack services are now running. You must now set the Ceph pool replication for the new pools created when the application was applied:

ceph osd pool ls | xargs -i ceph osd pool set {} size 1

Skip to #Verify the cluster endpoints to continue the setup.

The following commands are for reference.


  • Bring Down Services: Use sysinv to uninstall the application.
system application-remove stx-openstack
system application-list
  • Delete Services: Use sysinv to delete the application definition.
system application-delete stx-openstack
system application-list
  • Bring Down Services: Clean up any stragglers (volumes and pods)
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them manually.
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Cleanup all PVCs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to cleanup the mariadb grastate data.
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the ceph pools. I have seen orphaned contents here that take up space.
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done
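To confirm the cleanup removed everything, the pool contents and cluster usage can be inspected. A small sketch using standard ceph/rbd commands, added here for convenience.

# Verify the pools no longer contain images/volumes and check overall usage
for p in cinder-volumes images kube-rbd; do echo "pool $p:"; rbd -p $p ls; done
ceph df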

Verify the cluster endpoints

# Note: Do this from a new shell as a root user (do not source /etc/platform/openrc in that shell).
#       The 'password' should be set to the admin password which was configured during config_controller.

mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list
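Beyond listing endpoints, a few generic OpenStack client commands can spot-check that the core containerized services respond. These are standard openstackclient commands added here for convenience, not steps from the original guide.

# Spot-check the core services behind the listed endpoints
openstack compute service list
openstack network agent list
openstack volume service list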

Provider/tenant networking setup

  • Create the providernets
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
  • Create host and bind interfaces
#Query sysinv db directly instead of switching credentials
neutron host-create controller-0 --id $(sudo -u postgres psql -qt -d sysinv -c "select uuid from i_host where hostname='controller-0';") --availability up
 neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where interfaces.ifname='data0';") --providernets physnet0 --mtu 1500 controller-0
 neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where interfaces.ifname='data1';") --providernets physnet1 --mtu 1500 controller-0
#Alternatively, can source /etc/platform/openrc and then query using sysinv api.
  • Setup tenant networking (adapt based on lab config)
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
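To review what was created, the standard neutron listing commands can be used (added for convenience, not part of the original setup).

# Review the created networks, subnets and routers
neutron net-list
neutron subnet-list
neutron router-list
neutron router-port-list ${PUBLICROUTER}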

Horizon access

# After a successful Armada manifest apply, the following horizon services should be seen

kubectl get services -n openstack | grep horizon
horizon                       ClusterIP   10.104.34.245    <none>        80/TCP,443/TCP                 13h
horizon-int                   NodePort    10.101.103.238   <none>        80:31000/TCP                   13h

The platform horizon UI is available at http://<external OAM IP>

 $ curl -L http://10.10.10.3:80 -so - | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];

The containerized horizon UI is available at http://<external OAM IP>:31000

$ curl -L http://10.10.10.3:31000 -so - | egrep '(PlugIn|<title>)'
    <title>Login - StarlingX</title>
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
 

After controller node reboot

  • If the keystone-api pod is stuck in a CrashLoopBackOff, delete the pod and it will be re-created.
# List the pods to get the name of the keystone-api pod
kubectl -n openstack get pods
# Delete the keystone-api pod
kubectl -n openstack delete pod <name of keystone-api pod>
  • If you are seeing DNS failures for cluster addresses, restart dnsmasq on the controller after puppet has completed its initialization.
sudo sm-restart service dnsmasq
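To confirm DNS is healthy after the restart, resolving a cluster address such as the keystone endpoint used earlier should succeed. This nslookup check is an added suggestion, not part of the original instructions.

# The cluster keystone address from clouds.yaml should resolve again
nslookup keystone.openstack.svc.cluster.local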

VirtualBox Nat Networking

First add a NAT Network in VirtualBox:

 * Select File -> Preferences menu
 * Choose Network, "Nat Networks" tab should be selected
   * Click on plus icon to add a network, which will add a network named NatNetwork
   * Edit the NatNetwork (gear or screwdriver icon)
     * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
     * Disable "Supports DHCP"
     * Enable "Supports IPv6"
     * Select "Port Forwarding" and add any rules you desire. Some examples:
 Name              Protocol  Host IP  Host Port  Guest IP    Guest Port
 controller-ssh    TCP                22         10.10.10.3  22
 controller-http   TCP                80         10.10.10.3  80
 controller-https  TCP                443        10.10.10.3  443
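The same NAT Network and port-forwarding rules can also be created from the command line. A sketch using VBoxManage, with rule names mirroring the table above.

# Create the NAT Network matching the OAM subnet, with DHCP disabled and IPv6 enabled
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on
# Add the port-forwarding rules from the table above
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:80"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:443"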