
StarlingX/Containers/InstallationOnStandard

= Installing StarlingX with containers: One node configuration =
{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}
  
== Introduction ==
  
These instructions are for a Standard configuration with 2 controllers and 2 computes (2+2) in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted to the specific hardware configuration.
  
'''Note''': These instructions are valid for a load built on '''January 21, 2019''' or later.
  
== Building the Software ==
  
Follow the standard build process in the [https://docs.starlingx.io/developer_guide/index.html StarlingX Developer Guide]. Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.
  
== Setup the VirtualBox VM ==
Create a virtual machine for each node with the following options (a scripted VBoxManage equivalent for a controller is sketched below):
* Type: Linux
* Version: Other Linux (64-bit)
* Memory size:
** Controller nodes: 12288 MB
** Compute nodes: 4096 MB
* Storage:
** VDI with dynamically allocated disks is recommended
** Controller nodes; at least two disks are required:
*** 240GB disk for a root disk
*** 50GB disk for an OSD
** Compute nodes; at least one disk is required:
*** 240GB disk for a root disk
* System -> Processor:
** Controller nodes: 4 cpus
** Compute nodes: 3 cpus
* Network:
** Controller nodes:
*** OAM network: the OAM interface must have external connectivity; for now we will use a NAT Network
**** Adapter 1: NAT Network; Name: NatNetwork (follow the instructions at [[#VirtualBox Nat Networking]])
*** Internal management network:
**** Adapter 2: Internal Network; Name: intnet-management; Intel PRO/1000 MT Desktop; Advanced: Promiscuous Mode: Allow All
** Compute nodes:
*** Unused network:
**** Adapter 1: Internal Network; Name: intnet-unused; Intel PRO/1000 MT Desktop; Advanced: Promiscuous Mode: Allow All (optional: if an infrastructure network will be used, set "Name" to "intnet-infra")
*** Internal management network:
**** Adapter 2: Internal Network; Name: intnet-management; Intel PRO/1000 MT Desktop; Advanced: Promiscuous Mode: Allow All
*** Data networks:
**** Adapter 3: Internal Network; Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
**** Adapter 4: Internal Network; Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
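For reference, the VMs can also be created from the command line. The following is a minimal, illustrative VBoxManage sketch for a controller VM using the sizes and network names listed above; the VM name, disk file names and ISO path are placeholders, and the settings should be adapted per node type.

<pre>
# Illustrative sketch only - names, disk file locations and the ISO path are placeholders.
VM=controller-0
VBoxManage createvm --name $VM --ostype Linux_64 --register
VBoxManage modifyvm $VM --memory 12288 --cpus 4
# Adapter 1: NAT Network (OAM); Adapter 2: internal management network
VBoxManage modifyvm $VM --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm $VM --nic2 intnet --intnet2 intnet-management --nictype2 82540EM --nicpromisc2 allow-all
# Disks: 240GB root + 50GB OSD (sizes in MB), dynamically allocated VDI by default
VBoxManage createmedium disk --filename $VM-root.vdi --size 245760
VBoxManage createmedium disk --filename $VM-osd.vdi --size 51200
VBoxManage storagectl $VM --name SATA --add sata --controller IntelAhci
VBoxManage storageattach $VM --storagectl SATA --port 0 --device 0 --type hdd --medium $VM-root.vdi
VBoxManage storageattach $VM --storagectl SATA --port 1 --device 0 --type hdd --medium $VM-osd.vdi
# Attach the StarlingX installation ISO
VBoxManage storagectl $VM --name IDE --add ide
VBoxManage storageattach $VM --storagectl IDE --port 0 --device 0 --type dvddrive --medium /path/to/bootimage.iso
</pre>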
 
 
 
Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute and storage)
 
 
 
<pre>
# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# Or do them all with a for loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: On Windows, you need to specify the full path to the VBoxManage executable - for example:
# "\Program Files\Oracle\VirtualBox\VBoxManage.exe"
</pre>
 
 
 
== Install StarlingX ==
 
 
 
Boot the VM from the ISO media. Select the following options for installation:
 
*All-in-one Controller
 
*Graphical Console
 
*Standard Security Profile
 
 
 
== Initial Configuration ==
 
 
 
'''Note:''' If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until this StoryBoard is implemented: https://storyboard.openstack.org/#!/story/2004710 
 
 
 
Add proxy for docker
 
 
 
<pre>
 
sudo mkdir -p /etc/systemd/system/docker.service.d
 
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
 
</pre>
 
 
 
Add the following lines, with your proxy information, to http-proxy.conf:
 
<pre>
 
[Service]
 
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=<your_no_proxy_ip>"
 
</pre>
 
Do '''NOT''' use wildcards in the NO_PROXY variable.
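After creating or editing http-proxy.conf, docker must pick up the new configuration. Assuming the standard systemd workflow, a minimal sketch:

<pre>
# Reload systemd unit files and restart docker so the proxy settings take effect
sudo systemctl daemon-reload
sudo systemctl restart docker
</pre>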
 
 
 
 
 
Run config_controller
 
 
 
<code>sudo config_controller --kubernetes</code>
 
 
 
Use default settings during config_controller, except for the following
 
 
 
System mode: '''simplex'''
 
External OAM address: '''10.10.10.3'''
 
 
 
The system configuration should look like this:
 
<pre>
 
System Configuration
 
--------------------
 
Time Zone: UTC
 
System mode: simplex
 
 
 
PXEBoot Network Configuration
 
-----------------------------
 
Separate PXEBoot network not configured
 
PXEBoot Controller floating hostname: pxecontroller
 
 
 
Management Network Configuration
 
--------------------------------
 
Management interface name: lo
 
Management interface: lo
 
Management interface MTU: 1500
 
Management interface link capacity Mbps: 10000
 
Management subnet: 127.168.204.0/24
 
Controller floating address: 127.168.204.2
 
Controller 0 address: 127.168.204.3
 
Controller 1 address: 127.168.204.4
 
NFS Management Address 1: 127.168.204.5
 
NFS Management Address 2: 127.168.204.6
 
Controller floating hostname: controller
 
Controller hostname prefix: controller-
 
OAM Controller floating hostname: oamcontroller
 
Dynamic IP address allocation is selected
 
Management multicast subnet: 239.1.1.0/28
 
 
 
Infrastructure Network Configuration
 
------------------------------------
 
Infrastructure interface not configured
 
 
 
External OAM Network Configuration
 
----------------------------------
 
External OAM interface name: enp0s3
 
External OAM interface: enp0s3
 
External OAM interface MTU: 1500
 
External OAM subnet: 10.10.10.0/24
 
External OAM gateway address: 10.10.10.1
 
External OAM address: 10.10.10.3
 
</pre>
 
 
 
== Provisioning the platform ==
 
 
 
* Set the DNS server (needed so the NTP pool server names can be resolved)
 
 
 
<pre>
 
source /etc/platform/openrc
 
system dns-modify nameservers=8.8.8.8 action=apply
 
</pre>
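To confirm the change, the DNS configuration can be queried (a quick check, assuming the standard sysinv CLI):

<pre>
system dns-show
</pre>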
 
 
 
* Set the NTP servers
 
 
 
<pre>
 
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
 
</pre>
 
 
 
* Create partitions on the root disk (6G for the cgts-vg, 34G for nova) and wait for them to be ready. The first partition extends the existing cgts volume group; this step is optional, as there should be sufficient space by default. The second partition, for nova-local, is mandatory.
 
 
 
<pre>
 
system host-disk-list controller-0
 
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 6
 
while true; do system host-disk-partition-list controller-0 --nowrap | grep 6\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
 
system host-disk-partition-add -t lvm_phys_vol controller-0 $(system host-disk-list controller-0 | awk '/sda/{print $2}') 34
 
while true; do system host-disk-partition-list controller-0 --nowrap | grep 34\.0 | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
 
system host-disk-partition-list controller-0
 
</pre>
 
 
 
* Create the volume group for nova.
 
 
 
<pre>
 
system host-lvg-add controller-0 nova-local
 
</pre>
 
 
 
* Create physical volumes from the partitions.
 
 
 
<pre>
 
system host-pv-add controller-0 cgts-vg $(system host-disk-partition-list controller-0 | awk '/sda5/{print $2}')
 
system host-pv-add controller-0 nova-local $(system host-disk-partition-list controller-0 | awk '/sda6/{print $2}')
 
system host-pv-list controller-0
 
</pre>
 
 
 
* Add the ceph storage backend (note: you may have to wait a few minutes before this command will succeed)
 
 
 
<pre>
 
system storage-backend-add ceph --confirmed
 
</pre>
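Since sysinv may still be initializing shortly after config_controller, the command above can be retried until it is accepted. A bounded retry sketch (illustrative only):

<pre>
# Retry every 30s, up to 10 times, until the ceph backend add is accepted
for i in $(seq 1 10); do
    system storage-backend-add ceph --confirmed && break
    echo 'storage-backend-add not accepted yet, retrying...'
    sleep 30
done
</pre>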
 
 
 
* Wait for 'applying-manifests' task to complete
 
 
 
<pre>
 
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph to be configured'; sleep 5; done
 
</pre>
 
 
 
* Add an OSD (/dev/sdb)
 
 
 
<pre>
 
system host-disk-list controller-0
 
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
 
system host-stor-list controller-0
 
</pre>
 
 
 
* Set Ceph pool replication
 
 
 
<pre>
 
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
 
</pre>
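To verify that every pool now has a replication size of 1 (appropriate for the single OSD), list the pool details:

<pre>
ceph osd pool ls detail | grep size
</pre>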
 
 
 
* Configure data interfaces
 
 
 
<pre>
 
DATA0IF=eth1000
 
DATA1IF=eth1001
 
export COMPUTE=controller-0
 
PHYSNET0='physnet0'
 
PHYSNET1='physnet1'
 
SPL=/tmp/tmp-system-port-list
 
SPIL=/tmp/tmp-system-host-if-list
 
source /etc/platform/openrc
 
system host-port-list ${COMPUTE} --nowrap > ${SPL}
 
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
 
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
 
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
 
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
 
DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
 
system host-if-modify -m 1500 -n data0 -p ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
 
system host-if-modify -m 1500 -n data1 -p ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
 
</pre>
 
 
 
* Unlock the controller
 
 
 
<pre>
 
system host-unlock controller-0
 
</pre>
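The unlock triggers a reboot of controller-0. Once you can log back in, one way to wait for the node to come back (a sketch) is to watch the host list until controller-0 reports unlocked/enabled/available:

<pre>
source /etc/platform/openrc
watch -n 5 system host-list
</pre>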
 
 
 
* After the host unlocks, test that the ceph cluster is operational
 
 
 
<pre>
 
ceph -s
 
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
 
    health HEALTH_OK
 
    monmap e1: 1 mons at {controller-0=127.168.204.3:6789/0}
 
            election epoch 4, quorum 0 controller-0
 
    osdmap e32: 1 osds: 1 up, 1 in
 
            flags sortbitwise,require_jewel_osds
 
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
 
            39180 kB used, 50112 MB / 50150 MB avail
 
                1728 active+clean
 
</pre>
 
 
 
== Prepare the host for running the containerized services ==
 
 
 
* On the controller node, assign the node labels for the controller and compute functions:
 
 
 
<pre>
 
source /etc/platform/openrc
 
system host-label-assign controller-0 openstack-control-plane=enabled
 
system host-label-assign controller-0  openstack-compute-node=enabled
 
system host-label-assign controller-0  openvswitch=enabled
 
kubectl get nodes --show-labels
 
</pre>
 
 
 
* Add the cluster DNS (kube-dns) endpoint to the platform DNS servers
 
 
 
<pre>
 
DNS_EP=$(kubectl describe svc -n kube-system kube-dns | awk /IP:/'{print $2}')
 
system dns-modify nameservers="$DNS_EP,8.8.8.8"
 
</pre>
 
 
 
* Perform the following platform tweak.
 
 
 
<pre>
 
sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
 
</pre>
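This is equivalent to setting the corresponding sysctl key, which may be easier to script (a sketch of the same tweak):

<pre>
sudo sysctl -w net.bridge.bridge-nf-call-arptables=1
</pre>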
 
 
 
== Using sysinv to bring up/down the containerized services ==
 
 
 
* Generate the stx-openstack application tarball. In a development environment, run the following command to construct the application tarballs; they can be found under $MY_WORKSPACE/containers/build-helm/stx. Currently it produces two application tarballs, one with tests enabled and one without. Transfer the selected tarball to your lab or VirtualBox VM.
 
<pre>
 
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
 
</pre>
 
 
 
* Alternatively, the stx-openstack application tarballs are generated with each build on the CENGN mirror; they are present in builds after 2018-12-12 and can be found under <build>/outputs/helm-charts/.
 
 
 
* Stage application for deployment: Use sysinv to upload the application tarball.
 
 
 
<pre>
 
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
 
system application-list
 
</pre>
 
 
 
* Bring Up Services: Use sysinv to apply the application. You can monitor the progress either by watching <code>system application-list</code> or by tailing the Armada execution log; both commands are shown for reference after the apply step below.
 
 
 
<pre>
 
system application-apply stx-openstack
 
system application-list
 
</pre>
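The monitoring commands mentioned above, for reference (run either one in a separate shell while the apply is in progress):

<pre>
watch -n 1.0 system application-list
sudo docker exec armada_service tailf stx-openstack-apply.log
</pre>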
 
 
 
With the application applied, the containerized OpenStack services are now running. You must now set Ceph pool replication for the new pools created when the application was applied:
 
<pre>
 
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
 
</pre>
 
 
 
Skip to [[#Verify the cluster endpoints]] to continue the setup.
 
 
 
The following commands are for reference.
 
 
 
----
 
 
 
* Bring Down Services: Use sysinv to uninstall the application.
 
 
 
<pre>
 
system application-remove stx-openstack
 
system application-list
 
</pre>
 
 
 
* Delete Services: Use sysinv to delete the application definition.
 
 
 
<pre>
 
system application-delete stx-openstack
 
system application-list
 
</pre>
 
 
 
* Bring Down Services: Clean up any stragglers (volumes and pods)
 
 
 
<pre>
 
# Watch and wait for the pods to terminate
 
kubectl get pods -n openstack -o wide -w
 
 
 
# Armada workaround: delete does not clean up the old test pods, so delete them manually.
 
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}
 
 
 
# Cleanup all PVCs
 
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
 
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
 
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
 
 
 
# Useful for cleaning up the mariadb grastate data.
 
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}
 
 
 
# Remove all the contents of the ceph pools; orphaned content here can take up space.
 
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
 
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
 
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done
 
</pre>
 
 
 
== Verify the cluster endpoints ==
 
 
 
<pre>
 
# Note: Do this from a new shell as a root user (do not source /etc/platform/openrc in that shell).
 
#       The 'password' should be set to the admin password that was configured during config_controller.
 
 
 
mkdir -p /etc/openstack
 
tee /etc/openstack/clouds.yaml << EOF
 
clouds:
 
  openstack_helm:
 
    region_name: RegionOne
 
    identity_api_version: 3
 
    auth:
 
      username: 'admin'
 
      password: 'Li69nux*'
 
      project_name: 'admin'
 
      project_domain_name: 'default'
 
      user_domain_name: 'default'
 
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
 
EOF
 
 
 
export OS_CLOUD=openstack_helm
 
openstack endpoint list
 
</pre>
 
 
 
== Nova related items ==
 
 
 
* Execute the nova cell DB workaround
 
 
 
<pre>
 
# Remove the && from the end of each line if you are executing one command at a time
 
openstack hypervisor list
 
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'select host,mapped from compute_nodes'" &&
 
kubectl exec -it -n openstack mariadb-server-0 -- bash -c "mysql --password=\$MYSQL_ROOT_PASSWORD --user=root nova -e 'update compute_nodes set mapped=0'" &&
 
kubectl exec -it -n openstack $(kubectl get pods  -n openstack | grep nova-conductor | awk '{print $1}') -- nova-manage cell_v2 discover_hosts --verbose &&
 
openstack hypervisor list
 
</pre>
 
 
 
== Provider/tenant networking setup ==
 
 
 
* Create the providernets
 
 
 
<pre>
 
PHYSNET0='physnet0'
 
PHYSNET1='physnet1'
 
neutron providernet-create ${PHYSNET0} --type vlan
 
neutron providernet-create ${PHYSNET1} --type vlan
 
</pre>
 
 
 
* Create host and bind interfaces
 
<pre>
 
#Query sysinv db directly instead of switching credentials
 
neutron host-create controller-0 --id $(sudo -u postgres psql -qt -d sysinv -c "select uuid from i_host where hostname='controller-0';") --availability up
 
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet0';") --providernets physnet0 --mtu 1500 controller-0
 
neutron host-bind-interface --interface $(sudo -u postgres psql -qt -d sysinv -c "select uuid from ethernet_interfaces join interfaces on ethernet_interfaces.id=interfaces.id where providernetworks='physnet1';") --providernets physnet1 --mtu 1500 controller-0
 
# Alternatively, you can source /etc/platform/openrc and then query using the sysinv API.
 
</pre>
 
     
 
* Set up tenant networking (adapt based on your lab configuration)
 
 
 
<pre>
 
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
 
PHYSNET0='physnet0'
 
PHYSNET1='physnet1'
 
PUBLICNET='public-net0'
 
PRIVATENET='private-net0'
 
INTERNALNET='internal-net0'
 
EXTERNALNET='external-net0'
 
PUBLICSUBNET='public-subnet0'
 
PRIVATESUBNET='private-subnet0'
 
INTERNALSUBNET='internal-subnet0'
 
EXTERNALSUBNET='external-subnet0'
 
PUBLICROUTER='public-router0'
 
PRIVATEROUTER='private-router0'
 
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
 
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
 
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
 
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
 
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
 
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
 
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
 
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
 
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
 
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
 
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
 
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
 
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
 
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET} 10.10.0.0/24
 
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
 
neutron router-create ${PUBLICROUTER}
 
neutron router-create ${PRIVATEROUTER}
 
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
 
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
 
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
 
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
 
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
 
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
 
</pre>
 
 
 
== Horizon access ==
 
 
 
<pre>
 
# After a successful Armada manifest apply, the following should be seen:
 
 
 
kubectl get services -n openstack | grep horizon
 
horizon                      ClusterIP  10.104.34.245    <none>        80/TCP,443/TCP                13h
 
horizon-int                  NodePort    10.101.103.238  <none>        80:31000/TCP                  13h
 
 
 
The platform horizon UI is available at http://<external OAM IP>
 
 
 
$ curl -L http://10.10.10.3:80 -so - | egrep '(PlugIn|<title>)'
 
    <title>Login - StarlingX</title>
 
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];
 
 
 
The containerized horizon UI is available at http://<external OAM IP>:31000
 
 
 
$ curl -L http://10.10.10.3:31000 -so - | egrep '(PlugIn|<title>)'
 
    <title>Login - StarlingX</title>
 
    global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
 
 
</pre>
 
 
 
== After controller node reboot ==
 
 
 
Until these are integrated into node startup, a number of the above items need to be re-done after every controller node reboot:
 
 
 
<pre>
 
sudo sh -c "echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables"
 
</pre>
 
 
 
* If the keystone-api pod is stuck in a CrashLoopBackOff, delete the pod and it will be re-created.
 
 
 
<pre>
 
# List the pods to get the name of the keystone-api pod
 
kubectl -n openstack get pods
 
# Delete the keystone-api pod
 
kubectl -n openstack delete pod <name of keystone-api pod>
 
</pre>
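A one-liner sketch that combines both steps, using the same awk/xargs pattern as elsewhere in this guide (illustrative only):

<pre>
kubectl -n openstack get pods | awk '/keystone-api/ && /CrashLoopBackOff/{print $1}' | xargs -r -i kubectl -n openstack delete pod {}
</pre>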
 
 
 
* If you are seeing DNS failures for cluster addresses, restart dnsmasq on the controller after puppet has completed its initialization.
 
 
 
<pre>
 
sudo sm-restart service dnsmasq
 
</pre>
 
 
 
== VirtualBox Nat Networking ==
 
 
 
First add a NAT Network in VirtualBox:
 
  * Select File -> Preferences menu
 
  * Choose Network, "Nat Networks" tab should be selected
 
    * Click on plus icon to add a network, which will add a network named NatNetwork
 
    * Edit the NatNetwork (gear or screwdriver icon)
 
      * Network CIDR: 10.10.10.0/24 (to match OAM network specified in config_controller)
 
      * Disable "Supports DHCP"
 
      * Enable "Supports IPv6"
 
      * Select "Port Forwarding" and add any rules you desire. Some examples:
 
{| class="wikitable"
! Name !! Protocol !! Host IP !! Host Port !! Guest IP !! Guest Port
|-
| controller-ssh || TCP || || 22 || 10.10.10.3 || 22
|-
| controller-http || TCP || || 80 || 10.10.10.3 || 80
|-
| controller-https || TCP || || 443 || 10.10.10.3 || 443
|}
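The same NAT network can also be created from the command line. A minimal VBoxManage sketch matching the settings and port-forwarding rules above (illustrative; adjust names and rules as needed):

<pre>
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --dhcp off --ipv6 on --enable
# Port-forwarding rule format: <name>:<proto>:[<host ip>]:<host port>:[<guest ip>]:<guest port>
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-http:tcp:[]:80:[10.10.10.3]:80"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-https:tcp:[]:443:[10.10.10.3]:443"
</pre>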
 
