Installing StarlingX with containers: Standard Storage configuration

WARNING: DO NOT EDIT THIS WIKI CONTENT.

The information on this wiki page is in the process of transitioning to "Deploy/Install" guides that are being created as part of the StarlingX documentation. Consequently, do not make edits to the content in this wiki page. If you have changes that need to be made to the installation process described on this page of the wiki, contact the StarlingX Documentation Team.

This page is still under construction.

History

  • January 30, 2019: Initial draft

Introduction

These instructions are for a Standard configuration with 2 controllers, 2 computes, and 2 storage nodes (2+2+2), running in VirtualBox.

Other configurations are in development.

Installing on bare metal is also possible; however, the process would have to be adapted for the specific hardware configuration.

Note: These instructions are valid for a load built on January 30, 2019 or later.

Building the Software

Refer to these instructions on the AIO SX page Building the Software

Setup the VirtualBox VM

Refer to these instructions on the Standard 2+2 page Setup the VirtualBox VM

Remember to set up 2 controllers, 2 computes, and 2 storage nodes.
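
If you prefer to script the VM creation, a minimal VBoxManage sketch along these lines can be adapted. The names, OS type, memory, CPU, and disk sizes here are illustrative assumptions, not values from the referenced page:

# Illustrative sizes only; use the memory/CPU/disk values from the Standard 2+2 page.
for NODE in controller-0 controller-1 storage-0 storage-1 compute-0 compute-1; do
  VBoxManage createvm --name $NODE --ostype Linux_64 --register
  VBoxManage modifyvm $NODE --memory 16384 --cpus 4
  VBoxManage createmedium disk --filename $NODE.vdi --size 240000
  VBoxManage storagectl $NODE --name SATA --add sata
  VBoxManage storageattach $NODE --storagectl SATA --port 0 --device 0 --type hdd --medium $NODE.vdi
done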

VirtualBox NAT Networking

Refer to these instructions on the Standard 2+2 page VirtualBox NAT Networking
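
For reference, a NAT network can be created from the host with VBoxManage. The network name and CIDR below are placeholders; use the values from the referenced page:

# Placeholder name and CIDR; substitute the values from the Standard 2+2 page.
VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable
VBoxManage natnetwork start --netname NatNetwork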

Setup Controller-0

Install StarlingX

Boot the VM from the ISO media. Select the following options for installation:

  • Standard Controller Configuration
  • Graphical Console
  • STANDARD Security Boot Profile

Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):

 
Changing password for wrsroot.
(current) UNIX Password: wrsroot

Enter a new password for the wrsroot account and confirm it.

Bootstrap the controller

Refer to these instructions on the AIO DX page Bootstrap the controller

Provisioning Controller-0

Configure OAM, Management and Cluster interfaces

Refer to these instructions on the AIO DX page Configure OAM, Management and Cluster interfaces
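
As a rough sketch of what those instructions do (the interface names are assumptions for a VirtualBox install; verify with 'system host-if-list -a controller-0' and follow the referenced page for the exact commands):

source /etc/platform/openrc
# Assumed NIC names for a VirtualBox VM.
OAM_IF=enp0s3
MGMT_IF=enp0s8
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host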

(Hardware lab only) Set the ntp server

Refer to these instructions on the AIO SX page Set the ntp server
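
In short, that step amounts to a single command (the server names here are examples):

source /etc/platform/openrc
# Example servers; use the NTP servers appropriate for your lab.
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org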

Configure the vswitch type (optional)

Refer to these instructions on the AIO SX page Configure the vswitch type
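
That step boils down to one command; 'ovs-dpdk' is shown only as an example value, and the default vswitch needs no change (see the referenced page for supported types):

source /etc/platform/openrc
# Example value only; check the referenced page for supported vswitch types.
system modify --vswitch_type=ovs-dpdk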

Prepare the host for running the containerized services

Refer to these instructions on the Standard 2+2 page Prepare the host for running the containerized services
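
The preparation is largely a matter of labeling the host for the containerized control plane; a minimal sketch, assuming the label used elsewhere in these wiki pages:

source /etc/platform/openrc
# Label assumed from the companion wiki pages; confirm on the Standard 2+2 page.
system host-label-assign controller-0 openstack-control-plane=enabled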

Unlock Controller-0

source /etc/platform/openrc
system host-unlock controller-0

Install remaining hosts

PXE boot hosts

Power on the remaining hosts; they should PXE boot from the controller.

Press F12 to select network boot if they do not.

Once booted from PXE, the hosts should be visible with 'system host-list':

[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | None         | None        | locked         | disabled    | offline      |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
| 5  | None         | None        | locked         | disabled    | offline      |
| 6  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+

Configure host personalities

source /etc/platform/openrc
system host-update 2 personality=controller
system host-update 3 personality=storage
system host-update 4 personality=storage
system host-update 5 personality=worker hostname=compute-0
system host-update 6 personality=worker hostname=compute-1

At this point, the hosts should start installing.

Wait for hosts to become online

Once all nodes have been installed and rebooted, list the hosts on Controller-0:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | storage-0    | storage     | locked         | disabled    | online       |
| 4  | storage-1    | storage     | locked         | disabled    | online       |
| 5  | compute-0    | worker      | locked         | disabled    | online       |
| 6  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

Prepare the remaining hosts for running the containerized services

Refer to these instructions on the Standard 2+2 page Prepare the remaining hosts for running the containerized services

(Optional) Setup Remote Storage

Nova local storage defaults to local image backing, but you can enable remote storage for root/ephemeral/swap disks in Standard Storage configurations by labeling the worker nodes:

source /etc/platform/openrc
for NODE in compute-0 compute-1; do
  system host-label-assign $NODE remote-storage=enabled
done

Provisioning controller-1

Add interfaces on Controller-1

Refer to these instructions on the Standard 2+2 page Add interfaces on Controller-1
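
A sketch of the equivalent commands (the OAM interface name is an assumption; verify with 'system host-if-list -a controller-1'):

source /etc/platform/openrc
# Assumed NIC name for a VirtualBox VM.
OAM_IF=enp0s3
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 mgmt0 cluster-host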

Unlock Controller-1

source /etc/platform/openrc
system host-unlock controller-1

Wait for the node to become available:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | storage-0    | storage     | locked         | disabled    | online       |
| 4  | storage-1    | storage     | locked         | disabled    | online       |
| 5  | compute-0    | worker      | locked         | disabled    | online       |
| 6  | compute-1    | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
  • Verify that the Ceph cluster shows a quorum with controller-0 and controller-1 (HEALTH_ERR and 'no osds' are expected until OSDs are added below):
[root@controller-0 wrsroot(keystone_admin)]# ceph -s
    cluster 03b577a9-b368-4e1a-bc2a-f508903e124d
     health HEALTH_ERR
            no osds
            1 mons down, quorum 0,1 controller-0,controller-1
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.243:6789/0}
            election epoch 4, quorum 0,1 controller-0,controller-1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Provision storage

Add the cluster-host interface on storage hosts

source /etc/platform/openrc
system interface-network-assign storage-0 $(system host-if-list -a storage-0 | awk '/mgmt0/{print $2}') cluster-host
system interface-network-assign storage-1 $(system host-if-list -a storage-1 | awk '/mgmt0/{print $2}') cluster-host

Add an OSD to the storage hosts

source /etc/platform/openrc
system host-stor-add storage-0 $(system host-disk-list storage-0 | awk '/sdb/{print $2}')
system host-stor-add storage-1 $(system host-disk-list storage-1 | awk '/sdb/{print $2}')

Unlock the storage hosts

source /etc/platform/openrc
system host-unlock storage-0
system host-unlock storage-1
  • Verify that the Ceph cluster now shows HEALTH_OK:
ceph -s
    cluster 03b577a9-b368-4e1a-bc2a-f508903e124d
     health HEALTH_OK
     monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.243:6789/0}
            election epoch 6, quorum 0,1,2 controller-0,controller-1,storage-0
     osdmap e15: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
            67584 kB used, 100234 MB / 100300 MB avail
                  64 active+clean

Provision Computes

Setup the cluster-host interfaces on the computes

source /etc/platform/openrc
for COMPUTE in compute-0 compute-1; do
   system interface-network-assign $COMPUTE mgmt0 cluster-host
done

Configure data interfaces for computes

Refer to these instructions on the Standard 2+2 page Configure data interfaces for computes
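
In outline, that page creates a data network and attaches each compute's data interface to it; the names and MTU below are illustrative:

source /etc/platform/openrc
# Illustrative names; the referenced page defines the actual data networks.
PHYSNET0=physnet0
system datanetwork-add ${PHYSNET0} vlan
for COMPUTE in compute-0 compute-1; do
  system host-if-modify -m 1500 -c data ${COMPUTE} eth1000
  system interface-datanetwork-assign ${COMPUTE} eth1000 ${PHYSNET0}
done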

Create volume groups for computes

TODO: Determine if we can reference this or if it makes sense for these to be different.

Create 2 partitions on the root disk (/dev/sda):

  • Assign /dev/sda6 (4 GB) to cgts-vg.
  • Assign /dev/sda7 (28 GB) to nova-local.

The script below derives the actual partition sizes from the space available on the root disk.

source /etc/platform/openrc
for COMPUTE in compute-0 compute-1; do
  # Identify the root disk and its UUID.
  ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | awk /${ROOT_DISK}/'{print $2}')
  # Split the available space: 4 GB for cgts-vg, the rest for nova-local.
  AVAIL_SIZE=$(system host-disk-list ${COMPUTE} | awk /${ROOT_DISK}/'{printf("%d",$12)}')
  CGTS_PARTITION_SIZE=4
  NOVA_PARTITION_SIZE=$(($AVAIL_SIZE - $CGTS_PARTITION_SIZE))
  # Create the two LVM partitions and capture their UUIDs.
  CGTS_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${CGTS_PARTITION_SIZE})
  CGTS_PARTITION_UUID=$(echo ${CGTS_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_PARTITION_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  # Create the volume groups and add the partitions as physical volumes.
  system host-lvg-add ${COMPUTE} cgts-vg
  system host-pv-add ${COMPUTE} cgts-vg ${CGTS_PARTITION_UUID}
  system host-lvg-add ${COMPUTE} nova-local
  system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
  # Back nova-local with local image storage.
  system host-lvg-modify -b image ${COMPUTE} nova-local
done

Unlock the Computes

source /etc/platform/openrc
for COMPUTE in compute-0 compute-1; do
    system host-unlock $COMPUTE
done

Using sysinv to bring up/down the containerized services

Generate the stx-openstack application tarball

Refer to these instructions on the AIO SX page Generate the stx-openstack application tarball

Stage application for deployment

Refer to these instructions on the AIO SX page Stage application for deployment
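
Staging is essentially an application upload; the tarball name below is an assumption, and the exact invocation may differ by load (see the referenced page):

source /etc/platform/openrc
# Tarball name assumed; use the tarball generated in the previous step.
system application-upload stx-openstack-1.0.tgz
system application-list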

Bring Up Services

Refer to these instructions on the AIO SX page Bring Up Services
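
Bringing up the services is an application-apply, which can be monitored until the status reports 'applied'; a minimal sketch:

source /etc/platform/openrc
system application-apply stx-openstack
# Monitor progress until the status shows 'applied'.
watch -n 5 system application-list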

Verify the cluster endpoints

Refer to these instructions on the AIO SX page Verify the cluster endpoints
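
Once applied, a quick sanity check is listing the Keystone endpoints; this assumes an openstack CLI configured against the containerized Keystone as described on the referenced page:

# Assumes client credentials are set up per the referenced AIO SX instructions.
openstack endpoint list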

Provider/tenant networking setup

Refer to these instructions on the AIO SX page Provider/tenant networking setup
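
Typical of what that page sets up (the names and ranges here are placeholders):

# Placeholder names and ranges; the referenced page has the full sequence.
openstack network create --provider-network-type vlan --provider-physical-network physnet0 net0
openstack subnet create --network net0 --subnet-range 192.168.101.0/24 subnet0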

Additional Setup Instructions

Refer to these instructions on the AIO SX page Additional Setup Instructions

Horizon access

Refer to these instructions on the AIO SX page Horizon access
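
As a hint, the containerized Horizon is exposed as a Kubernetes service; its node port can be found with kubectl (the 'openstack' namespace and service name are assumptions):

# Namespace/service name assumed; then browse to http://<OAM floating IP>:<node port>.
kubectl get service -n openstack horizon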

Known Issues and Troubleshooting

None