= Installing StarlingX with containers: Standard configuration =

{{Warning|header='''Warning - Deprecated'''|body='''This wiki page is out of date and now deprecated. For the current and upcoming versions, see the [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Installation and Deployment guides]'''}}

== Documentation Contribution ==

Consider contributing to the StarlingX documentation if you find a bug or have a suggestion for improvement. To get started:

* Use the "[https://docs.starlingx.io/contributor/index.html Contribute]" guides.
* Launch a bug in [https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs StarlingX Launchpad] with the tag ''stx.docs''.

== History ==

To see older content of this page or to compare revisions, use the [https://wiki.openstack.org/w/index.php?title=StarlingX/Containers/InstallationOnStandard&action=history Page > History] link. Recent changes:

* '''January 29, 2019:''' Removed obsolete neutron host/interface configuration and updated DNS instructions.
* '''January 29, 2019:''' Configure datanetworks in sysinv prior to referencing them in the 'system host-if-modify/host-if-add' commands.

== Introduction ==

These instructions cover a Standard configuration with 2 controllers and 2 computes (2+2) in VirtualBox. Other configurations are in development. Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

'''Note''': These instructions are valid for a load built on '''January 25, 2019''' or later.

== Building the Software ==
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Building_the_Software|  Building the Software]]
 
 
 
== Setup the VirtualBox VM ==
 
 
 
Create a virtual machine for the system with the following options:

* Type: Linux
* Version: Other Linux (64-bit)
* Memory size:
** Controller nodes: 16384 MB
** Compute nodes: 10240 MB
* Storage:
** VDI with dynamically allocated disks is recommended
** Controller nodes; at least two disks are required:
*** 240GB disk for a root disk
*** 50GB disk for an OSD
** Compute nodes; at least one disk is required:
*** 240GB disk for a root disk
* System -> Processor:
** Controller nodes: 4 CPUs
** Compute nodes: 3 CPUs
* Network:
** Controller nodes:
*** OAM network: the OAM interface must have external connectivity; for now we will use a NAT Network.
**** Adapter 1: NAT Network; Name: NatNetwork. Follow the instructions at [[#VirtualBox Nat Networking]].
*** Internal management network:
**** Adapter 2: Internal Network; Name: intnet-management; Intel PRO/1000MT Desktop; Advanced: Promiscuous Mode: Allow All
** Compute nodes:
*** Unused network:
**** Adapter 1: Internal Network; Name: intnet-unused; Intel PRO/1000MT Desktop; Advanced: Promiscuous Mode: Allow All (optional: if an infrastructure network will be used, set "Name" to "intnet-infra")
*** Internal management network:
**** Adapter 2: Internal Network; Name: intnet-management; Intel PRO/1000MT Desktop; Advanced: Promiscuous Mode: Allow All
*** Data networks:
**** Adapter 3: Internal Network; Name: intnet-data1; Advanced: Paravirtualized Network (virtio-net); Promiscuous Mode: Allow All
**** Adapter 4: Internal Network; Name: intnet-data2; Advanced: Paravirtualized Network (virtio-net); Promiscuous Mode: Allow All
* Serial Ports: select this to use a serial console.
** Windows: select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path such as "\\.\pipe\controller-0" or "\\.\pipe\compute-1", which you can later use in PuTTY to connect to the console. Choose a speed of 9600 or 38400.
** Linux: select "Enable Serial Port" and set the port mode to "Host Pipe". Select "Create Pipe" (or deselect "Connect to existing pipe/socket") and then give a Port/File Path such as "/tmp/controller_serial", which you can later use with socat, for example: socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
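
The same VM definition can be scripted with VBoxManage instead of the GUI. The following is a minimal sketch for one controller node; the VM name, disk file names, and storage controller name are illustrative, so verify the option names against your VirtualBox version. Repeat with the compute values above for the other nodes.

<pre>
# Create and register the VM (names and paths are examples)
VBoxManage createvm --name controller-0 --ostype Linux_64 --register

# Memory and CPUs for a controller node
VBoxManage modifyvm controller-0 --memory 16384 --cpus 4

# Two dynamically allocated VDI disks: 240GB root, 50GB OSD (sizes are in MB)
VBoxManage createmedium disk --filename controller-0-root.vdi --size 245760 --variant Standard
VBoxManage createmedium disk --filename controller-0-osd.vdi --size 51200 --variant Standard
VBoxManage storagectl controller-0 --name SATA --add sata
VBoxManage storageattach controller-0 --storagectl SATA --port 0 --device 0 --type hdd --medium controller-0-root.vdi
VBoxManage storageattach controller-0 --storagectl SATA --port 1 --device 0 --type hdd --medium controller-0-osd.vdi

# Adapter 1: NAT Network (OAM); Adapter 2: internal management network,
# Intel PRO/1000 MT Desktop (82540EM) with promiscuous mode allowed
VBoxManage modifyvm controller-0 --nic1 natnetwork --nat-network1 NatNetwork
VBoxManage modifyvm controller-0 --nic2 intnet --intnet2 intnet-management --nictype2 82540EM --nicpromisc2 allow-all
</pre>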
 
 
 
Set the boot priority for interface 2 (eth1) on ALL VMs (controller, compute, and storage):
 
 
 
<pre>
# First list the VMs
abc@server:~$ VBoxManage list vms
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"compute-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"compute-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
abc@server:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

# OR do them all with a for loop in Linux
abc@server:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

# NOTE: On Windows, you need to specify the full path to the VBoxManage executable, for example:
# "\Program Files\Oracle\VirtualBox\VBoxManage.exe"
</pre>
 
 
 
=== VirtualBox Nat Networking ===
 
 
 
First add a NAT Network in VirtualBox:

* Select the File -> Preferences menu.
* Choose Network; the "Nat Networks" tab should be selected.
** Click the plus icon to add a network; this adds a network named NatNetwork.
** Edit the NatNetwork (gear or screwdriver icon):
*** Network CIDR: 10.10.10.0/24 (to match the OAM network specified in config_controller)
*** Disable "Supports DHCP"
*** Enable "Supports IPv6"
*** Select "Port Forwarding" and add any rules you desire. Some examples:
 
{| class="wikitable"
! Name !! Protocol !! Host IP !! Host Port !! Guest IP !! Guest Port
|-
| controller-ssh || TCP || || 22 || 10.10.10.3 || 22
|-
| controller-http || TCP || || 80 || 10.10.10.3 || 8080
|-
| controller-https || TCP || || 443 || 10.10.10.3 || 8443
|-
| controller-ostk-http || TCP || || 31000 || 10.10.10.3 || 31000
|-
| controller-0-ssh || TCP || || 23 || 10.10.10.4 || 22
|-
| controller-1-ssh || TCP || || 24 || 10.10.10.5 || 22
|}
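
The same NAT network can also be created from the command line. A minimal sketch using VBoxManage (the forwarding rules mirror the table above; adjust rule names to taste):

<pre>
# Create the NAT network: 10.10.10.0/24, DHCP off, IPv6 on
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable --dhcp off --ipv6 on

# Port-forwarding rules use the format <name>:<proto>:[<host ip>]:<host port>:[<guest ip>]:<guest port>
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-ssh:tcp:[]:22:[10.10.10.3]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-0-ssh:tcp:[]:23:[10.10.10.4]:22"
VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "controller-1-ssh:tcp:[]:24:[10.10.10.5]:22"
</pre>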
 
 
 
== Setup Controller-0 ==
 
 
 
=== Install StarlingX ===
 
 
 
Boot the VM from the ISO media. Select the following options for installation:
 
*Standard Controller Configuration
 
*Graphical Console
 
*STANDARD Security Boot Profile
 
 
 
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
 
 
 
<pre>
Changing password for wrsroot.
(current) UNIX Password: wrsroot
</pre>
 
 
 
Enter a new password for the wrsroot account and confirm it.
 
 
 
=== Run config_controller ===
 
 
 
<code>sudo config_controller --kubernetes</code>
 
 
 
Use the default settings during config_controller, except for the following:
* External OAM floating address: 10.10.10.3
* External OAM address for first controller node: 10.10.10.4
* External OAM address for second controller node: 10.10.10.5
* If you do not have direct access to the Google DNS nameservers (8.8.8.8, 8.8.4.4), you will need to configure alternatives when prompted. Press Enter to choose the default, or type a new entry.
* If you do not have direct access to the public docker registry (https://hub.docker.com/u/starlingx) and instead use a proxy for internet access, you will need to add the proxy information when prompted. (Storyboard 2004710 was merged on Jan 30, 2019.)
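
Before proceeding, it is worth confirming that the controller can actually reach its gateway, external DNS, and the docker registry by name. A quick hedged check (assumes the default 8.8.8.8 nameserver and no proxy):

<pre>
# Verify the OAM gateway, external DNS, and name resolution from controller-0
ping -c 2 10.10.10.1
ping -c 2 8.8.8.8
nslookup hub.docker.com
</pre>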
 
 
 
The system configuration should look like this:
 
<pre>
System Configuration
--------------------
Time Zone: UTC
System mode: duplex
Distributed Cloud System Controller: no

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp0s8
Management interface: enp0s8
Management interface MTU: 1500
Management subnet: 192.168.204.0/24
Controller floating address: 192.168.204.2
Controller 0 address: 192.168.204.3
Controller 1 address: 192.168.204.4
NFS Management Address 1: 192.168.204.5
NFS Management Address 2: 192.168.204.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: enp0s3
External OAM interface: enp0s3
External OAM interface MTU: 1500
External OAM subnet: 10.10.10.0/24
External OAM gateway address: 10.10.10.1
External OAM floating address: 10.10.10.3
External OAM 0 address: 10.10.10.4
External OAM 1 address: 10.10.10.5

DNS Configuration
-----------------
Nameserver 1: 8.8.8.8
</pre>
 
 
 
=== Provisioning controller-0 ===
 
 
 
==== Set the ntp server ====
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Set_the_ntp_server|  Set the ntp server]]
 
 
 
==== Prepare the host for running the containerized services ====
 
 
 
* On the controller node, apply the node label for controller functions
 
 
 
<pre>
source /etc/platform/openrc
system host-label-assign controller-0 openstack-control-plane=enabled
</pre>
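
To confirm the label was applied, it can be listed back. A hedged check, assuming the host-label-list subcommand of the same sysinv CLI used above:

<pre>
system host-label-list controller-0
</pre>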
 
 
 
==== Enable the Ceph backend ====
 
 
 
Enable the Ceph backend and wait for the 'applying-manifests' task to complete:
<pre>
source /etc/platform/openrc
system storage-backend-add ceph --confirmed

while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph.'; sleep 5; done
system storage-backend-list
</pre>
 
 
 
====  Unlock controller-0 ====
 
 
 
<pre>
source /etc/platform/openrc
system host-unlock controller-0
</pre>
 
 
 
== Install remaining hosts ==
 
 
 
=== PXE boot hosts ===
 
Power on the remaining hosts; they should PXE boot from the controller.


Press F12 for network boot if they do not.


Once booted from PXE, the hosts should be visible with 'system host-list':
 
 
 
<pre>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
</pre>
 
 
 
=== Configure host personalities ===
 
 
 
<pre>
source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1
</pre>
 
 
 
At this point hosts should start installing.
 
 
 
=== Wait for hosts to become online ===
 
 
 
Once all nodes have been installed and rebooted, list the hosts on controller-0:

<pre>
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
</pre>
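
Rather than polling by hand, a hedged wait loop along these lines can watch for the hosts to come online (illustrative only; the grep pattern assumes the table layout shown above):

<pre>
# Re-run host-list until no host still reports offline
while system host-list | grep -q offline; do echo 'Waiting for hosts to come online.'; sleep 30; done
system host-list
</pre>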
 
 
 
== Prepare the remaining hosts for running the containerized services ==
 
 
 
* On the controller node, apply the node labels for the controller and compute functions:

<pre>
source /etc/platform/openrc
system host-label-assign controller-1 openstack-control-plane=enabled
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE openstack-compute-node=enabled
  system host-label-assign $NODE openvswitch=enabled
  system host-label-assign $NODE sriov=enabled
done
</pre>
 
 
 
== Provisioning controller-1 ==
 
 
 
=== Add interfaces on Controller-1 ===
 
* Add the OAM Interface on Controller-1
 
* Add the Cluster-Host Interface on Controller-1
 
 
 
<pre>
source /etc/platform/openrc
system host-if-modify -n oam0 -c platform --networks oam controller-1 $(system host-if-list -a controller-1 | awk '/enp0s3/{print $2}')
system host-if-modify controller-1 mgmt0 --networks cluster-host
</pre>
 
 
 
=== Unlock Controller-1 ===
 
<pre>
source /etc/platform/openrc
system host-unlock controller-1
</pre>
 
 
 
Wait for the node to become available:
 
 
 
<pre>
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | locked         | disabled    | online       |
| 4  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
</pre>
 
 
 
* The Ceph cluster should show a quorum with controller-0 and controller-1 (HEALTH_ERR is expected at this stage, since no OSDs have been added yet):
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 6, quorum 0,1 controller-0,controller-1
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v3: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating
</pre>
 
 
 
== Provisioning computes ==
 
 
 
=== Add the third Ceph monitor to a compute node (Standard Only) ===
 
 
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-add compute-0
+--------------+------------------------------------------------------------------+
| Property     | Value                                                            |
+--------------+------------------------------------------------------------------+
| uuid         | f76bc385-190c-4d9a-aa0f-107346a9907b                             |
| ceph_mon_gib | 20                                                               |
| created_at   | 2019-01-17T12:32:33.372098+00:00                                 |
| updated_at   | None                                                             |
| state        | configuring                                                      |
| task         | {u'controller-1': 'configuring', u'controller-0': 'configuring'} |
+--------------+------------------------------------------------------------------+
</pre>
 
 
 
Wait for the compute monitor to be configured:
 
 
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid                                 | ceph_ | hostname     | state      | task |
|                                      | mon_g |              |            |      |
|                                      | ib    |              |            |      |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+--------------------------------------+-------+--------------+------------+------+
</pre>
 
 
 
=== Create the volume group for nova ===
 
 
 
<pre>
for COMPUTE in compute-0 compute-1; do
  echo "Configuring nova local for: $COMPUTE"
  set -ex
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | awk /${ROOT_DISK}/'{print $2}')
  PARTITION_SIZE=10
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
  system host-lvg-modify -b image ${COMPUTE} nova-local
  set +ex
done
</pre>
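
Once the loop finishes, the new volume group can be verified per compute. A hedged check, assuming the sysinv CLI's host-lvg-list subcommand:

<pre>
for COMPUTE in compute-0 compute-1; do
  system host-lvg-list $COMPUTE
done
</pre>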
 
 
 
=== Configure data interfaces for computes ===
 
 
 
<pre>
DATA0IF=eth1000
DATA1IF=eth1001
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list

# Configure the datanetworks in sysinv, prior to referencing them in the 'system host-if-modify' command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
  echo "Configuring interface for: $COMPUTE"
  set -ex
  system host-port-list ${COMPUTE} --nowrap > ${SPL}
  system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -d ${PHYSNET1} -c data ${COMPUTE} ${DATA1IFUUID}
  set +ex
done
</pre>
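
To sanity-check the result, the data networks and the per-host interface assignments can be listed back. A hedged check; the command names follow the same sysinv CLI family used above:

<pre>
system datanetwork-list
for COMPUTE in compute-0 compute-1; do
  system host-if-list -a $COMPUTE
done
</pre>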
 
 
 
=== Set up the cluster-host interfaces on the computes on the management network (enp0s8) ===
 
 
 
<pre>
for COMPUTE in compute-0 compute-1; do
  system host-if-modify $COMPUTE mgmt0 --networks cluster-host
done
</pre>
 
 
 
=== Unlock compute nodes ===
 
 
 
<pre>
for COMPUTE in compute-0 compute-1; do
  system host-unlock $COMPUTE
done
</pre>
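
As with the earlier install step, a hedged loop can wait for the hosts to work through their reboots and reach the available state (illustrative; the grep pattern assumes the table layout shown below):

<pre>
# Poll until all four hosts report available
while [ $(system host-list | grep -c ' available ') -lt 4 ]; do echo 'Waiting for hosts.'; sleep 30; done
system host-list
</pre>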
 
 
 
* After the hosts are available, verify that the Ceph cluster is operational and that all 3 monitors (controller-0, controller-1, and compute-0) have joined the monitor quorum:
 
 
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_ERR
            128 pgs are stuck inactive for more than 300 seconds
            128 pgs stuck inactive
            128 pgs stuck unclean
            no osds
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 14, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e11: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v12: 128 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 128 creating
</pre>
 
 
 
== Add Ceph OSDs to controllers ==
 
 
 
* Lock controller-1
 
 
 
<pre>
system host-lock controller-1
</pre>
 
 
 
* Wait for node to be locked.
 
 
 
* Add OSD(s) to controller-1
 
<pre>
HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done
</pre>
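
Before unlocking, the new OSD should appear in the host's storage list. A hedged check using the sysinv CLI:

<pre>
system host-stor-list controller-1
</pre>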
 
 
 
* Unlock controller-1
 
<pre>
system host-unlock controller-1
</pre>
 
 
 
* Wait for controller-1 to become available:
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
</pre>
 
 
 
* Swact controllers
 
<pre>
system host-swact controller-0
</pre>
 
 
 
Wait for the swact to complete and services to stabilize (approximately 30s). You may get disconnected if you are connected over the OAM floating IP; reconnect, or connect to controller-1.
 
<pre>
controller-1:/home/wrsroot# source /etc/platform/openrc
[root@controller-1 wrsroot(keystone_admin)]# system host-show controller-1 | grep Controller-Active
| capabilities         | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
</pre>
 
 
 
* Lock controller-0
 
<pre>
system host-lock controller-0
</pre>
 
 
 
* Wait for controller-0 to be locked.
 
 
 
* Add OSD(s) to controller-0
 
<pre>
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
done
</pre>
 
 
 
* Unlock controller-0
 
<pre>
system host-unlock controller-0
</pre>
 
 
 
* Wait for controller-0 to become available. At this point Ceph should report HEALTH_OK and two OSDs configured, one on each controller:
 
<pre>
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 93f79bcb-526f-4396-84a4-a29c93614d09
     health HEALTH_OK
     monmap e2: 3 mons at {compute-0=192.168.204.182:6789/0,controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0}
            election epoch 22, quorum 0,1,2 controller-0,controller-1,compute-0
     osdmap e31: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v73: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
            90044 kB used, 17842 MB / 17929 MB avail
                 384 active+clean
[root@controller-1 wrsroot(keystone_admin)]# ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01700 root storage-tier
-2 0.01700     chassis group-0
-4 0.00850         host controller-0
 1 0.00850             osd.1                   up  1.00000          1.00000
-3 0.00850         host controller-1
 0 0.00850             osd.0                   up  1.00000          1.00000
</pre>
 
 
 
== Using sysinv to bring up/down the containerized services ==
 
 
 
=== Generate the stx-openstack application tarball ===
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Generate_the_stx-openstack_application_tarball|  Generate the stx-openstack application tarball]]
 
 
 
=== Stage application for deployment ===
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Stage_application_for_deployment|  Stage application for deployment]]
 
 
 
=== Bring Up Services ===
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Bring_Up_Services|  Bring Up Services]]
 
 
 
=== Verify the cluster endpoints ===
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Verify_the_cluster_endpoints|  Verify the cluster endpoints]]
 
 
 
== Provider/tenant networking setup ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Provider.2Ftenant_networking_setup|  Provider/tenant networking setup]]
 
 
 
== Additional Setup Instructions ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Additional_Setup_Instructions|  Additional Setup Instructions]]
 
 
 
== Horizon access ==
 
 
 
Refer to these instructions on the AIO SX page [[StarlingX/Containers/Installation#Horizon_access|  Horizon access]]
 
 
 
== Known Issues and Troubleshooting ==
 
 
 
None
 
