{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}

= Documentation Contribution =

You might consider contributing to the StarlingX documentation if you find a bug or have a suggestion for improvement.
To get started:

* Use the "[https://docs.starlingx.io/contributor/index.html Contribute]" guides.
* Launch a bug in [https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs StarlingX Launchpad] with the tag ''stx.docs''.

= History =

Go to the [https://wiki.openstack.org/w/index.php?title=StarlingX/Containers/Installation&action=history Page > History] link if you want to:

* See the old content of this page
* Compare revisions

= Installing and configuring StarlingX with containers =

== Introduction ==

'''Warning''': These instructions are not functional at this time. There is outstanding work to add the StarlingX docker images to a public repository. This work is tracked under [https://storyboard.openstack.org/#!/story/2003907 Story 2003907].

These instructions are for an All-in-one simplex system in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

'''Note''': These instructions are valid for a load built on '''December 12, 2018''' or later.

== Building the Software ==

Follow the standard build process in the [https://docs.starlingx.io/developer_guide/index.html StarlingX Developer Guide]. A prebuilt iso can be used, but additional packages are needed after the initial installation; a developer environment is required to build those packages.

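The Developer Guide covers the build environment setup in detail; as a reminder, the in-container build boils down to the following. This is a minimal sketch, assuming the standard StarlingX build container and the environment variables described in the Developer Guide:

<pre>
# Inside the StarlingX build container (see the Developer Guide for full setup)
build-pkgs    # build the StarlingX packages
build-iso     # assemble the bootable ISO under $MY_WORKSPACE
</pre>
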
== Setup the VirtualBox VM ==

Create a virtual machine for the system with the following options (a scripted VBoxManage equivalent is sketched after the list):
* Type: Linux
* Version: Other Linux (64-bit)
* Memory size: 16384 MB
* Storage:
** VDI with dynamically allocated disks is recommended
** Two disks are required:
*** 240GB disk for a root disk
*** 50GB disk for an OSD
* System -> Processors:
** 4 cpus
* Network:
** OAM network: the OAM interface must have external connectivity; for now we will use a NAT Network.
*** Adapter 1: NAT Network; Name: NatNetwork (follow the instructions at [[#VirtualBox Nat Networking]])
** Data network:
*** Adapter 2: Internal Network; Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
*** Adapter 3: Internal Network; Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All

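The same VM can also be created from the host command line. The following is a minimal sketch, assuming a recent VirtualBox release; the VM name <code>stx-aio</code> and the disk file names are illustrative:

<pre>
# Create and register the VM ("Other Linux (64-bit)" corresponds to the Linux_64 OS type)
VBoxManage createvm --name stx-aio --ostype Linux_64 --register
VBoxManage modifyvm stx-aio --memory 16384 --cpus 4

# Two dynamically allocated VDI disks: 240GB root disk and 50GB OSD disk
VBoxManage createmedium disk --filename stx-aio-root.vdi --size 245760 --format VDI
VBoxManage createmedium disk --filename stx-aio-osd.vdi --size 51200 --format VDI
VBoxManage storagectl stx-aio --name SATA --add sata --controller IntelAhci
VBoxManage storageattach stx-aio --storagectl SATA --port 0 --device 0 --type hdd --medium stx-aio-root.vdi
VBoxManage storageattach stx-aio --storagectl SATA --port 1 --device 0 --type hdd --medium stx-aio-osd.vdi

# Adapter 1: OAM on the NatNetwork; Adapters 2/3: internal data networks (virtio, promiscuous mode)
VBoxManage modifyvm stx-aio --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm stx-aio --nic2 intnet --intnet2 intnet-data1 --nictype2 virtio --nicpromisc2 allow-all
VBoxManage modifyvm stx-aio --nic3 intnet --intnet3 intnet-data2 --nictype3 virtio --nicpromisc3 allow-all
</pre>
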
== Install StarlingX ==

Boot the VM from the ISO media. Select the following options for installation:
* All-in-one Controller
* Graphical Console
* Standard Security Profile

== Initial Configuration ==

Run config_controller:

<code>sudo config_controller --kubernetes</code>

Use the default settings during config_controller, except for the following:

System mode: '''simplex'''

External OAM address: '''10.10.10.3'''

The system configuration should look like this:
<pre>
System Configuration
--------------------
Time Zone: UTC
System mode: simplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: lo
Management interface: lo
Management interface MTU: 1500
Management interface link capacity Mbps: 10000
Management subnet: 127.168.204.0/24
Controller floating address: 127.168.204.2
Controller 0 address: 127.168.204.3
Controller 1 address: 127.168.204.4
NFS Management Address 1: 127.168.204.5
NFS Management Address 2: 127.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM address: 10.10.10.3
</pre>

== Provisioning the platform ==

* Set the DNS server (so we can set the NTP servers):

<pre>
source /etc/nova/openrc
system dns-modify nameservers=8.8.8.8 action=apply
</pre>

* Set the NTP servers:

<pre>
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
</pre>

* Create partitions on the root disk (50G for the cgts-vg, 40G for nova) and wait for them to be ready. The first partition extends the existing cgts volume group; this step is optional, as there should be sufficient space by default. The second partition, for nova-local, is mandatory.

<pre>
system host-disk-list controller-0
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 49
while true; do system host-disk-partition-list controller-0 --nowrap | grep 49\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 40
while true; do system host-disk-partition-list controller-0 --nowrap | grep 40\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
system host-disk-partition-list controller-0
</pre>

* Create the volume group for nova:

<pre>
system host-lvg-add controller-0 nova-local
</pre>

* Create physical volumes from the partitions:

<pre>
system host-pv-add controller-0 cgts-vg $(system host-disk-partition-list controller-0 | awk '/sda5/{print $2}')
system host-pv-add controller-0 nova-local $(system host-disk-partition-list controller-0 | awk '/sda6/{print $2}')
system host-pv-list controller-0
</pre>

* Add the ceph storage backend (note: you may have to wait a few minutes before this command succeeds):

<pre>
system storage-backend-add ceph --confirmed
</pre>

* Wait for the 'applying-manifests' task to complete:

<pre>
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
</pre>

* Enable the rbd-provisioner:

<pre>
system storage-backend-modify ceph-store -s rbd-provisioner
</pre>

* Configure the rbd-provisioner secrets and storage class. If the namespace creation fails, try logging out and back in, then retry.

<pre>
kubectl create namespace openstack
system storage-backend-modify ceph-store rbd_provisioner_namespaces=kube-system,default,openstack rbd_storageclass_name=general
system storage-backend-list
</pre>

* Add an OSD (/dev/sdb):

<pre>
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
</pre>

* Set the Ceph pool replication:

<pre>
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
</pre>

* Configure the data interfaces:

<pre>
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
source /etc/nova/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
</pre>

* Unlock the controller:

<pre>
system host-unlock controller-0
</pre>

* After the host unlocks, test that the ceph cluster is operational:

<pre>
ceph -s
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
     health HEALTH_OK
     monmap e1: 1 mons at {controller-0=127.168.204.3:6789/0}
            election epoch 4, quorum 0 controller-0
     osdmap e32: 1 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
            39180 kB used, 50112 MB / 50150 MB avail
                1728 active+clean
</pre>

== Prepare the host for running the containerized services ==

* On the controller node, apply the node labels for the controller and compute functions:

<pre>
source /etc/nova/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
kubectl get nodes --show-labels
</pre>

* Add the cluster DNS endpoint to the platform DNS servers:

<pre>
DNS_EP=$(kubectl describe svc -n kube-system kube-dns | awk /IP:/'{print $2}')
system dns-modify nameservers="$DNS_EP,8.8.8.8"
</pre>

* Create the config map referenced by some of the charts:

<pre>
kubectl create configmap ceph-etc --from-file /etc/ceph/ceph.conf -n openstack
</pre>

* Perform the following platform tweak:

<pre>
sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
</pre>

== Using sysinv to bring up/down the containerized services ==

* Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs; they can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces two application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab/virtual box (see the sketch below).

<pre>
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
</pre>

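Any transfer method works; the following is a minimal sketch using scp, assuming the controller is reachable at the OAM address configured earlier (or via the SSH port-forwarding rule from [[#VirtualBox Nat Networking]]). The <code>&lt;user&gt;</code> placeholder is your StarlingX login user:

<pre>
# Run from the build environment; adjust the address/port to match your setup
scp $MY_WORKSPACE/containers/build-helm/stx/helm-charts-manifest-no-tests.tgz <user>@10.10.10.3:~
</pre>
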
* Stage the application for deployment: use sysinv to upload the application tarball:

<pre>
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
</pre>

* Bring up services: use sysinv to apply the application. You can monitor the progress either by watching <code>system application-list</code> or by tailing the Armada execution log (see the monitoring commands below).

'''Note''': This step requires access to the StarlingX docker images. There is outstanding work to add the StarlingX docker images to a public repository. This work is tracked under [https://storyboard.openstack.org/#!/story/2003907 Story 2003907].

<pre>
system application-apply stx-openstack
system application-list
</pre>
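
To monitor the apply, use either of the commands mentioned above:

<pre>
# Watch the application status
watch -n 1.0 system application-list
# Tail the Armada execution log
sudo docker exec armada_service tailf stx-openstack-apply.log
</pre>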

With the application applied, the containerized OpenStack services are now running. You must now set the Ceph pool replication for the new pools created when the application was applied:
<pre>
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
</pre>

Skip to [[#Verify the cluster endpoints]] to continue the setup.

The following commands are for reference.

----

* Bring down services: use sysinv to uninstall the application:

<pre>
system application-remove stx-openstack
system application-list
</pre>

* Delete services: use sysinv to delete the application definition:

<pre>
system application-delete stx-openstack
system application-list
</pre>

* Bring down services: clean up any stragglers (volumes and pods):

<pre>
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: delete does not clean up the old test pods, so delete them explicitly
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Clean up all PVCs and PVs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to clean up the mariadb grastate data
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all the contents of the ceph pools; orphaned contents here can take up space
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done
</pre>

== Verify the cluster endpoints ==

<pre>
# Note: Do this from a new shell as the root user (do not source /etc/nova/openrc in that shell)

mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list
</pre>

== Nova related items ==

* Execute the nova cell DB workaround:

<pre>
# Remove the && from the end of the line if you are executing one command at a time
openstack hypervisor list
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'select host,mapped from compute_nodes'" &&
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'update compute_nodes set mapped=0'" &&
kubectl exec -it -n openstack $(kubectl get pods -n openstack | grep nova-conductor | awk '{print $1}') -- nova-manage cell_v2 discover_hosts --verbose &&
openstack hypervisor list
</pre>

== Provider/tenant networking setup ==

* Create the provider networks:

<pre>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
</pre>

* Create the neutron host and bind the data interfaces:

<pre>
# Query the sysinv db directly instead of switching credentials
neutron host-create controller-0 --id $(sudo -u postgres psql -qt -d sysinv -c "select uuid from i_host where hostname='controller-0';") --availability up
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet0';") --providernets physnet0 --mtu 1500 controller-0
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet1';") --providernets physnet1 --mtu 1500 controller-0
# Alternatively, source /etc/nova/openrc and then query using the sysinv api
</pre>

* Set up the tenant networking (adapt based on your lab configuration):

<pre>
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
</pre>

== Horizon access ==

After a successful Armada manifest apply, the following services should be present:

<pre>
kubectl get services -n openstack | grep horizon
horizon       ClusterIP   10.104.34.245    <none>   80/TCP,443/TCP   13h
horizon-int   NodePort    10.101.103.238   <none>   80:31000/TCP     13h
</pre>

The platform horizon UI is available at http://<external OAM IP>

<pre>
$ curl -L http://10.10.10.2:80 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
      global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];
</pre>

The containerized horizon UI is available at http://<external OAM IP>:31000

<pre>
$ curl -L http://10.10.10.2:31000 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
      global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
</pre>

== After controller node reboot ==

Until these steps are integrated into node startup, a number of the above items need to be re-done after every controller node reboot:

<pre>
sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
</pre>

* If the keystone-api pod is stuck in a CrashLoopBackOff, delete the pod and it will be re-created:

<pre>
# List the pods to get the name of the keystone-api pod
kubectl -n openstack get pods
# Delete the keystone-api pod
kubectl -n openstack delete pod <name of keystone-api pod>
</pre>

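A one-line variant of the two commands above, in the same awk/xargs style used elsewhere in this guide (a sketch; it deletes every pod whose name matches keystone-api):

<pre>
kubectl -n openstack get pods | awk '/keystone-api/{print $1}' | xargs -i kubectl -n openstack delete pod {}
</pre>
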
* If you are seeing DNS failures for cluster addresses, restart dnsmasq on the controller after puppet has completed its initialization:

<pre>
sudo sm-restart service dnsmasq
</pre>

== VirtualBox Nat Networking ==

First add a NAT Network in VirtualBox:
* Select the File -> Preferences menu
* Choose Network; the "NAT Networks" tab should be selected
* Click the plus icon to add a network, which will add a network named NatNetwork
* Edit the NatNetwork (gear or screwdriver icon)
** Network CIDR: 10.10.10.0/24 (to match the OAM network specified in config_controller)
** Disable "Supports DHCP"
** Enable "Supports IPv6"
** Select "Port Forwarding" and add any rules you desire. Some examples:

{| class="wikitable"
! Name !! Protocol !! Host IP !! Host Port !! Guest IP !! Guest Port
|-
| controller-ssh || TCP || || 22 || 10.10.10.3 || 22
|-
| controller-http || TCP || || 80 || 10.10.10.3 || 80
|-
| controller-https || TCP || || 443 || 10.10.10.3 || 443
|}
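
The same NAT network and port-forwarding rules can also be created from the host command line; a minimal sketch (the rule names match the table above):

<pre>
# Create the NAT network with DHCP disabled and IPv6 enabled
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on

# Port-forwarding rules: <name>:<proto>:[host ip]:<host port>:[guest ip]:<guest port>
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:80"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:443"
</pre>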