= Installing StarlingX with containers: One node configuration =

{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}

== Documentation Contribution ==

Consider contributing to the StarlingX documentation if you find a bug or have a suggestion for improvement. To get started:

* Use the "[https://docs.starlingx.io/contributor/index.html Contribute]" guides.
* File a bug in [https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs StarlingX Launchpad] with the tag ''stx.docs''.

Older revisions of this page and revision comparisons are available in the [https://wiki.openstack.org/w/index.php?title=StarlingX/Containers/Installation&action=history page history]. The original guide content is preserved below for reference.

== History ==

* '''January 18, 2019:''' Removed the Nova Cell DB workaround - no longer required on loads built January 15th or later.
* '''January 25, 2019:''' Configure datanetworks in sysinv prior to referencing them in the 'system host-if-modify/host-if-add' commands. Needed on loads built January 25, 2019 or later.
* '''January 29, 2019:''' Removed the obsolete neutron host/interface configuration and updated the DNS instructions.

== Introduction ==

These instructions are for an All-in-one simplex (AIO-SX) system in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

'''Note''': These instructions are valid for a load built on '''January 25, 2019''' or later.

== Building the Software ==

Follow the standard build process in the [https://docs.starlingx.io/developer_guide/index.html StarlingX Developer Guide]. Alternatively, a prebuilt ISO can be used; all required packages are provided by the StarlingX CENGN mirror.
== Setup the VirtualBox VM ==

Create a virtual machine for the system with the following options (a scripted alternative is sketched after this list):
* Type: Linux
* Version: Other Linux (64-bit)
* Memory size: 16384 MB
* Storage:
** VDI and dynamically allocated disks are recommended
** At least two disks are required:
*** 240GB disk for a root disk
*** 50GB disk for an OSD
* System -> Processor:
** 4 CPUs
* Network:
** OAM network: the OAM interface must have external connectivity; for now we will use a NAT Network
*** Adapter 1: NAT Network; Name: NatNetwork (follow the instructions in [[#VirtualBox NAT Networking]])
** Data network:
*** Adapter 2: Internal Network; Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
*** Adapter 3: Internal Network; Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
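If you prefer to script the VM creation, the following VBoxManage sketch covers the same settings. This is a minimal, untested outline; the VM name (controller-0) and disk file names are illustrative, and on older VirtualBox releases 'createmedium' may be named 'createhd'.

<pre>
# Create and register the VM
VBoxManage createvm --name controller-0 --ostype Linux_64 --register
VBoxManage modifyvm controller-0 --memory 16384 --cpus 4

# Create the two dynamically allocated VDI disks (sizes in MB) and attach them
VBoxManage createmedium disk --filename controller-0-root.vdi --size 245760 --format VDI
VBoxManage createmedium disk --filename controller-0-osd.vdi --size 51200 --format VDI
VBoxManage storagectl controller-0 --name SATA --add sata
VBoxManage storageattach controller-0 --storagectl SATA --port 0 --device 0 --type hdd --medium controller-0-root.vdi
VBoxManage storageattach controller-0 --storagectl SATA --port 1 --device 0 --type hdd --medium controller-0-osd.vdi

# Adapter 1: NAT Network; adapters 2 and 3: internal data networks (virtio, promiscuous)
VBoxManage modifyvm controller-0 --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm controller-0 --nic2 intnet --intnet2 intnet-data1 --nictype2 virtio --nicpromisc2 allow-all
VBoxManage modifyvm controller-0 --nic3 intnet --intnet3 intnet-data2 --nictype3 virtio --nicpromisc3 allow-all
</pre>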
=== VirtualBox NAT Networking ===

First add a NAT Network in VirtualBox:
* Select the File -> Preferences menu
* Choose Network; the "NAT Networks" tab should be selected
* Click the plus icon to add a network, which will add a network named NatNetwork
* Edit the NatNetwork (gear or screwdriver icon):
** Network CIDR: 10.10.10.0/24 (to match the OAM network specified in config_controller)
** Disable "Supports DHCP"
** Enable "Supports IPv6"
** Select "Port Forwarding" and add any rules you desire. Some examples:

{| class="wikitable"
! Name !! Protocol !! Host IP !! Host Port !! Guest IP !! Guest Port
|-
| controller-ssh || TCP || || 22 || 10.10.10.3 || 22
|-
| controller-http || TCP || || 80 || 10.10.10.3 || 80
|-
| controller-https || TCP || || 443 || 10.10.10.3 || 443
|}
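The NAT network can also be created from the command line. A sketch, assuming the same CIDR and the controller-ssh rule from the table above:

<pre>
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
</pre>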
== Setup Controller-0 ==

=== Install StarlingX ===

Boot the VM from the ISO media. Select the following options for installation:
* All-in-one Controller
* Graphical Console
* Standard Security Profile

Once booted, log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

<pre>
Changing password for wrsroot.
(current) UNIX Password: wrsroot
</pre>

Enter a new password for the wrsroot account and confirm it.
=== Docker Proxy Configuration ===

'''Note:''' If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, a workaround is required until this StoryBoard is implemented: https://storyboard.openstack.org/#!/story/2004710 (the StoryBoard change was merged on January 30, 2019).

Add a proxy for docker:
<pre>
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
</pre>

Add the following lines, with your proxy information, to http-proxy.conf:
<pre>
[Service]
Environment="HTTP_PROXY=<your_proxy>" "HTTPS_PROXY=<your_proxy>" "NO_PROXY=localhost,127.0.0.1,192.168.204.2,<your_no_proxy_ip>"
</pre>

Do '''NOT''' use a wildcard in the NO_PROXY variable.
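For the new proxy settings to take effect, reload systemd and restart docker:

<pre>
sudo systemctl daemon-reload
sudo systemctl restart docker

# Confirm the proxy environment was picked up
sudo systemctl show --property=Environment docker
</pre>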
=== Run config_controller ===

<code>sudo config_controller --kubernetes</code>

Use the default settings during config_controller, except for the following:
* System mode: '''simplex'''
* External OAM address: '''10.10.10.3'''
* If you do not have direct access to the Google DNS nameservers (8.8.8.8, 8.8.4.4), configure your nameservers when prompted. Press Enter to choose the default, or type a new entry.

The system configuration should look like this:
<pre>
System Configuration
--------------------
Time Zone: UTC
System mode: simplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: lo
Management interface: lo
Management interface MTU: 1500
Management interface link capacity Mbps: 10000
Management subnet: 127.168.204.0/24
Controller floating address: 127.168.204.2
Controller 0 address: 127.168.204.3
Controller 1 address: 127.168.204.4
NFS Management Address 1: 127.168.204.5
NFS Management Address 2: 127.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM address: 10.10.10.3

DNS Configuration
-----------------
Nameserver 1: 8.8.8.8
</pre>
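Once config_controller completes, a quick sanity check is to source the platform credentials and confirm that controller-0 is visible (it remains locked until the provisioning below is done):

<pre>
source /etc/platform/openrc
system host-list
</pre>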
=== Provisioning the platform ===

==== Set the NTP server ====

<pre>
source /etc/platform/openrc
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
</pre>
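To confirm the setting:

<pre>
system ntp-show
</pre>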
==== Configure data interfaces ====

<pre>
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
source /etc/platform/openrc
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

# Configure the datanetworks in sysinv, prior to referencing them in the 'system host-if-modify' command
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

# The host-if-modify '-p' flag is deprecated in favor of the '-d' flag for assignment of datanetworks
system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
</pre>
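To verify that the datanetworks were created and the interfaces assigned:

<pre>
system datanetwork-list
system host-if-list ${COMPUTE}
</pre>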
==== Setup partitions for Controller-0 ====

* Create partitions on the root disk and wait for them to be ready:
** 34G for nova-local (mandatory)
** 6G for cgts-vg (optional; this extends the existing cgts volume group, which should have sufficient space by default)

<pre>
export COMPUTE=controller-0
source /etc/platform/openrc

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2

echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

echo ">>>> Extending cgts-vg"
PARTITION_SIZE=6
CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')

echo ">>> Wait for partition $CGTS_PARTITION_UUID to be ready"
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $CGTS_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done

system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
sleep 2

echo ">>> Waiting for cgts-vg to be ready"
while true; do system host-pv-list ${COMPUTE} | grep cgts-vg | grep adding; if [ $? -ne 0 ]; then break; fi; sleep 1; done

system host-pv-list ${COMPUTE}
</pre>
==== Configure Ceph for Controller-0 ====

<pre>
source /etc/platform/openrc
echo ">>> Enable primary Ceph backend"
system storage-backend-add ceph --confirmed

echo ">>> Wait for the primary ceph backend to be configured"
echo ">>> This step really takes a long time"
while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done

echo ">>> Ceph health"
ceph -s

echo ">>> Add OSDs to primary tier"
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0

echo ">>> ceph osd tree"
ceph osd tree
</pre>
==== Set Ceph pool replication (AIO-SX only) ====

With only a single OSD, each pool must use a replication factor of 1:
<pre>
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
</pre>
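To confirm, each pool should now report a replicated size of 1:

<pre>
ceph osd pool ls detail
</pre>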
==== Unlock the controller ====

<pre>
system host-unlock controller-0
</pre>

* After the host unlocks, test that the ceph cluster is operational:

<pre>
ceph -s
    cluster 6cb8fd30-622a-4a15-a039-b9e945628133
     health HEALTH_OK
     monmap e1: 1 mons at {controller-0=127.168.204.3:6789/0}
            election epoch 4, quorum 0 controller-0
     osdmap e32: 1 osds: 1 up, 1 in
            flags sortbitwise,require_jewel_osds
      pgmap v35: 1728 pgs, 6 pools, 0 bytes data, 0 objects
            39180 kB used, 50112 MB / 50150 MB avail
                1728 active+clean
</pre>
== Prepare the host for running the containerized services ==

* On the controller node, assign the node labels for the controller and compute functions:

<pre>
source /etc/platform/openrc

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled

kubectl get nodes --show-labels
</pre>
== Using sysinv to bring up/down the containerized services ==

=== Generate the stx-openstack application tarball ===

There are currently two application tarballs: one with tests enabled and one without.

The [http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ stx-openstack application tarballs] are generated with each build on the CENGN mirror.

Alternatively, in a development environment, run the following command to construct the application tarballs:
<pre>
$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh
</pre>
The resulting tarballs can be found under $MY_WORKSPACE/std/build-helm/stx.
=== Stage application for deployment ===

Transfer the helm-charts-manifest-no-tests.tgz application tarball onto your active controller.

Use sysinv to upload the application tarball:

<pre>
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
</pre>
=== Bring Up Services ===

Use sysinv to apply the application:
<pre>
system application-apply stx-openstack
</pre>

You can monitor the progress by watching system application-list:
<pre>
watch -n 1 system application-list
</pre>

or by tailing the Armada execution log:
<pre>
sudo docker exec armada_service tailf stx-openstack-apply.log
</pre>
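You can also watch the openstack namespace pods come up as the charts are applied:

<pre>
kubectl get pods -n openstack -o wide -w
</pre>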
=== Update Ceph pool replication (AIO-SX only) ===

With the application applied, the containerized OpenStack services are now running.

In an AIO-SX environment, you must now set Ceph pool replication for the new pools created when the application was applied:
<pre>
ceph osd pool ls | xargs -i ceph osd pool set {} size 1
</pre>
=== Additional Setup Instructions ===

Skip to [[#Verify the cluster endpoints]] to continue the setup.

The following commands are for reference.

----

* Bring Down Services: Use sysinv to uninstall the application.

<pre>
system application-remove stx-openstack
system application-list
</pre>

* Delete Services: Use sysinv to delete the application definition.

<pre>
system application-delete stx-openstack
system application-list
</pre>

* Bring Down Services: Clean up any stragglers (volumes and pods).

<pre>
# Watch and wait for the pods to terminate
kubectl get pods -n openstack -o wide -w

# Armada workaround: remove does not clean up the old test pods, so delete them manually
kubectl get pods -n openstack | awk '/osh-.*-test/{print $1}' | xargs -i kubectl delete pods -n openstack --force --grace-period=0 {}

# Clean up all PVCs and PVs
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces
kubectl delete pvc --all --namespace openstack; kubectl delete pv --all --namespace openstack
kubectl get pvc --all-namespaces; kubectl get pv --all-namespaces

# Useful to clean up the mariadb grastate data
kubectl get configmaps -n openstack | awk '/osh-/{print $1}' | xargs -i kubectl delete configmaps -n openstack {}

# Remove all contents of the ceph pools; orphaned contents here can take up space
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap unprotect {}@snap; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p snap purge {}; done
for p in cinder-volumes images kube-rbd; do rbd -p $p ls | xargs -i rbd -p $p rm {}; done
</pre>
== Verify the cluster endpoints ==

Do this from a new shell as the root user (do not source /etc/platform/openrc in that shell). The password should be set to the admin password configured during config_controller.

<pre>
mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'Li69nux*'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

export OS_CLOUD=openstack_helm
openstack endpoint list
</pre>
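If the endpoint list looks correct, a few additional read-only commands can be used to sanity-check the containerized services:

<pre>
openstack service list
openstack compute service list
openstack network agent list
</pre>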
== Provider/tenant networking setup ==

* Create the providernets:

<pre>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
neutron providernet-create ${PHYSNET0} --type vlan
neutron providernet-create ${PHYSNET1} --type vlan
</pre>

* Setup tenant networking (adapt based on your lab configuration):

<pre>
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
PHYSNET0='physnet0'
PHYSNET1='physnet1'
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-a --range 400-499
neutron providernet-range-create ${PHYSNET0} --name ${PHYSNET0}-b --range 10-10 --shared
neutron providernet-range-create ${PHYSNET1} --name ${PHYSNET1}-a --range 500-599
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=10 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET1} --provider:segmentation_id=500 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}
</pre>
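To verify the resulting topology:

<pre>
openstack network list
openstack subnet list
openstack router list
</pre>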
== Horizon access ==

After a successful armada manifest apply, the following services should be seen:

<pre>
kubectl get services -n openstack | grep horizon
horizon        ClusterIP   10.104.34.245    <none>   80/TCP,443/TCP   13h
horizon-int    NodePort    10.101.103.238   <none>   80:31000/TCP     13h
</pre>

The platform horizon UI is available at http://&lt;external OAM IP&gt;:

<pre>
$ curl -L http://10.10.10.3:80 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
      global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.container-infra', 'horizon.dashboard.dc_admin', 'horizon.dashboard.identity', 'horizon.app.murano'];
</pre>

The containerized horizon UI is available at http://&lt;external OAM IP&gt;:31000:

<pre>
$ curl -L http://10.10.10.3:31000 -so - | egrep '(PlugIn|<title>)'
<title>Login - StarlingX</title>
      global.horizonPlugInModules = ['horizon.dashboard.project', 'horizon.dashboard.identity'];
</pre>