StarlingX/Installation Guide

See the [https://docs.starlingx.io/deploy_install_guides/index.html StarlingX Deploy and Installation Guide] for information on installing StarlingX. This wiki page has been deprecated.
== Intro ==

This section contains information about installing StarlingX. StarlingX may be installed in:

* '''Bare Metal''': Real deployments of StarlingX are only supported on physical servers.
* '''Virtual Environment''': This option should only be used for evaluation or development purposes.

StarlingX installed in a virtual environment has two options:

* [[Installation_libvirt_qemu|Libvirt/QEMU]]
* VirtualBox
 +
 
== Requirements ==

Different use cases require different configurations.
  
=== Bare Metal ===

The minimum requirements for the physical servers where StarlingX might be deployed include:
  
* '''Controller Hosts'''
** Minimum Processor:
*** Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge), 8 cores/socket
** Minimum Memory: 64 GB
** Hard Drives:
*** Primary Hard Drive, minimum 500 GB, for OS and system databases
*** Secondary Hard Drive, minimum 500 GB, for persistent VM storage
** 2 physical Ethernet interfaces: OAM and MGMT Network
** USB boot support
** PXE boot support
* '''Storage Hosts'''
** Minimum Processor:
*** Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge), 8 cores/socket
** Minimum Memory: 64 GB
** Hard Drives:
*** Primary Hard Drive, minimum 500 GB, for OS
*** 1 or more additional Hard Drives for CEPH OSD storage, and
*** Optionally, 1 or more SSD or NVMe Drives for CEPH Journals
** 1 physical Ethernet interface: MGMT Network
** PXE boot support
* '''Compute Hosts'''
** Minimum Processor:
*** Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge), 8 cores/socket
** Minimum Memory: 32 GB
** Hard Drives:
*** Primary Hard Drive, minimum 500 GB, for OS
*** 1 or more additional Hard Drives for ephemeral VM storage
** 2 or more physical Ethernet interfaces: MGMT Network and 1 or more Provider Networks
** PXE boot support

The recommended minimum requirements for the physical servers are described later in each StarlingX Deployment Options guide.
  
=== Virtual Environment ===

The recommended minimum requirements for the workstation hosting the Virtual Machine(s) where StarlingX will be deployed include:
  
==== Hardware Requirements ====

A workstation computer with:

* Processor: x86_64 is the only supported architecture, with hardware virtualization extensions enabled in the BIOS
* Cores: 8 (4 with careful monitoring of CPU load)
* Memory: At least 32 GB RAM
* Hard Disk: 500 GB HDD
* Network: Two network adapters with an active Internet connection
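If you want to confirm the hardware virtualization support before going further, one quick check on Linux is to count the relevant CPU flags (a sketch; vmx indicates Intel VT-x, svm indicates AMD-V, and the extensions must also be enabled in the BIOS):

<pre><nowiki>
# A result of 0 means no virtualization extensions are advertised.
$ egrep -c '(vmx|svm)' /proc/cpuinfo
</nowiki></pre>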
  
==== Software Requirements ====

A workstation computer with:

* StarlingX ISO Image
  
==== Deployment Environment Setup ====

This section describes how to set up the workstation computer which will host the Virtual Machine(s) where StarlingX will be deployed. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:

* Ubuntu 16.04 LTS 64-bit
  
===== Updating Your Operating System =====

Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:

<pre><nowiki>
$ sudo apt-get update
</nowiki></pre>
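After refreshing the package index, you will typically also want to apply any pending upgrades (a minimal sketch, assuming the Ubuntu host described above):

<pre><nowiki>
$ sudo apt-get -y upgrade
</nowiki></pre>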
  
===== Install stx-tools project =====

Clone the stx-tools project. Usually you’ll want to clone it under your user’s home directory.

<pre><nowiki>
$ cd $HOME
$ git clone git://git.openstack.org/openstack/stx-tools
</nowiki></pre>
 
 
 
===== Getting the StarlingX ISO Image =====

Follow the instructions from the [[StarlingX/Developer_Guide]] to build a StarlingX ISO image.

For '''Bare Metal''' installations, you need a bootable USB flash drive containing the StarlingX ISO image.

For a '''Virtual Environment''' installation, copy the StarlingX ISO image to the stx-tools deployment libvirt project directory, naming it bootimage.iso:

<pre><nowiki>
$ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/bootimage.iso
</nowiki></pre>
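For the Bare Metal case, one common way to create the bootable USB flash drive on Linux is with dd (a sketch; /dev/sdX is a placeholder for your USB device node, and everything on that device will be erased):

<pre><nowiki>
# Replace /dev/sdX with the whole-disk device node of the USB drive, not a partition.
$ sudo dd if=<starlingx iso image> of=/dev/sdX bs=1M status=progress
$ sync
</nowiki></pre>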
  
===== Installing Requirements and Dependencies =====
 
 
 
  
Navigate to the stx-tools installation libvirt directory:
 
<pre><nowiki>
$ cd $HOME/stx-tools/deployment/libvirt/
</nowiki></pre>
  
Install the required packages:
 
<pre><nowiki>
$ bash install_packages.sh
</nowiki></pre>
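As a quick sanity check that the libvirt/QEMU packages are functional before continuing (not part of the original script; an empty table simply means no virtual machines are defined yet):

<pre><nowiki>
$ sudo virsh list --all
</nowiki></pre>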
  
===== Disabling Firewall =====

Unload the firewall and disable it on boot:

<pre><nowiki>
$ sudo ufw disable
Firewall stopped and disabled on system startup
$ sudo ufw status
Status: inactive
</nowiki></pre>

==Controller-0 Host Installation==

Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0.

Procedure:

# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.
# Configure the controller using the config_controller script.

===Initializing Controller-0===

This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation.

Run the libvirt qemu setup script:

<pre><nowiki>
$ bash setup_tic.sh
</nowiki></pre>
 
 
 
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:
 
* When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".
 
* Select the "Graphical Console" as the console to use during installation.
 
* Select "Standard Security Boot Profile" as the Security Profile.
 
* Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host is rebooted, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.
 
 
 
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):
 
<pre><nowiki>
Changing password for wrsroot.
(current) UNIX Password:
</nowiki></pre>
 
 
 
Enter a new password for the wrsroot account:
 
<pre><nowiki>
New password:
</nowiki></pre>
 
 
 
Enter the new password again to confirm it:
 
<pre><nowiki>
Retype new password:
</nowiki></pre>
 
 
 
Controller-0 is initialized with StarlingX, and is ready for configuration.
 
 
 
===Configuring Controller-0===
 
 
 
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).
 
 
 
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters and accept all the default values:
 
 
 
<pre><nowiki>
controller-0:~$ sudo config_controller
</nowiki></pre>
 
 
 
The output when the config_controller script is run interactively is:
 
 
 
<pre><nowiki>
WARNING: Command should only be run from the console. Continuing with this
terminal may cause loss of connectivity and configuration failure
...
Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
</nowiki></pre>
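For repeatable deployments, config_controller can also be driven non-interactively from an answer file instead of the prompts shown above (a sketch; the file name is hypothetical, and the --config-file option is as described in StarlingX documentation of this era):

<pre><nowiki>
# Non-interactive configuration from a prepared answer file (hypothetical name).
controller-0:~$ sudo config_controller --config-file stx_config.ini
</nowiki></pre>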
 
 
 
==Controller-0 and System Provision==
 
 
 
===Configuring Provider Networks at Installation===
 
 
 
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.
 
 
 
On Controller-0, acquire Keystone administrative privileges:
 
 
 
<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
 
 
 
Set up one provider network of the vlan type, named providernet-a:
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
</nowiki></pre>
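You can confirm that the provider network was created (a sketch, using the same WRS neutron extension CLI as above):

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list
</nowiki></pre>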
 
 
 
===Unlocking Controller-0===
 
 
 
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:
 
 
 
<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
 
 
 
Use the system host-unlock command:
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
</nowiki></pre>
 
 
 
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.
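Once you can reconnect, the recovery can also be followed from an ssh session by polling the inventory (a sketch; credentials must be re-acquired after the connection drop):

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
[wrsroot@controller-0 ~(keystone_admin)]$ watch -n 30 system host-list
</nowiki></pre>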
 
 
 
===Verifying the Controller-0 Configuration===
 
 
 
On Controller-0, acquire Keystone administrative privileges:
 
 
 
<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
 
 
 
Verify that the StarlingX controller services are running:
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| Id                                   | Binary           | Host         | Zone     | Status  | State | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor   | controller-0 | internal | enabled | up    | ...
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler   | controller-0 | internal | enabled | up    | ...
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up    | ...
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...
</nowiki></pre>
 
 
 
Verify that controller-0 is unlocked, enabled, and available:
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
</nowiki></pre>
 
 
 
==Compute Host Installation==
 
 
 
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. Using the system host-add command, you add one or more host entries to the system inventory, assigning a personality, MAC address, IP address, and so on for each host. You then power on the hosts, causing them to be recognized and configured according to their system inventory entries.
 
 
 
===Initializing Compute Host===
 
 
 
On the workstation, print the information for the virbr2 virtual interface associated with each compute-N host:
 
 
 
<pre><nowiki>
$ sudo virsh domiflist compute-0 | grep virbr2
vnet5      bridge    virbr2    e1000      52:54:00:b6:1f:c7
$ sudo virsh domiflist compute-1 | grep virbr2
vnet9      bridge    virbr2    e1000      52:54:00:da:58:b4
</nowiki></pre>
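Since only the MAC address is needed for the next step, you can extract just that column (a small sketch with awk; the field positions match the domiflist output above):

<pre><nowiki>
# Print the MAC address of the virbr2 interface for each compute VM.
$ for i in 0 1; do sudo virsh domiflist compute-$i | awk '/virbr2/ {print $5}'; done
</nowiki></pre>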
 
 
 
On Controller-0, acquire Keystone administrative privileges:
 
 
 
<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
 
 
 
Use the system host-add command to add each compute-N host, specifying the compute personality and the MAC address of its associated virbr2 virtual interface:
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 52:54:00:15:7a:86
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-1 -p compute -m 52:54:00:aa:a2:46
</nowiki></pre>
 
 
 
On the workstation, start the compute-N hosts:
 
 
 
<pre><nowiki>
$ sudo virsh start compute-0
$ sudo virsh start compute-1
</nowiki></pre>
 
 
 
Once the message "Domain compute-N started" is displayed, open the KVM/VirtManager window, power on the host to be configured as compute-N, and show the virtual machine console and details. The node is assigned the personality specified in the system host-add parameters. A display device menu appears on the console, with text customized for the personality (Controller, Storage, or Compute Node). You can start the installation manually by pressing Enter. Otherwise, it starts automatically after a few seconds.
 
 
 
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep install
| install_output     | text    |
| install_state      | booting |
| install_state_info | None    |
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-1 | grep install
| install_output     | text    |
| install_state      | booting |
| install_state_info | None    |
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
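Rather than running host-show by hand for each node, a short loop can poll both computes in one pass (a sketch):

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ for i in 0 1; do system host-show compute-$i | grep install_state; done
</nowiki></pre>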
 
 
 
Wait while compute-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, compute-N is reported as Locked, Disabled, and Online.
 
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | compute-0    | compute     | locked         | disabled    | online       |
| 3  | compute-1    | compute     | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
</nowiki></pre>
 
 
 
==Compute Host Provision==
 
 
 
You must configure the network interfaces and the storage disks on a host before you can unlock it.
 
  
On Controller-0, acquire Keystone administrative privileges:

<pre><nowiki>
controller-0:~$ source /etc/nova/openrc
</nowiki></pre>
  
===Provisioning Network Interfaces on a Compute Host===

Provision the data interfaces:
  
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6
</nowiki></pre>
 
  
===Provisioning Storage on a Compute Host===

Ensure that provider networks are available for the data interfaces. Provision local storage on each compute node:

<pre><nowiki>
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"
ALL_COMPUTE=`system host-list $NOWRAP | grep compute- | cut -d '|' -f 3`
# Run the following for each compute node:
for compute in $ALL_COMPUTE; do
    # Assign one core on processor 0 to the vswitch function
    system host-cpu-modify ${compute} -f vswitch -p0 1
    # Create the nova-local local volume group, backed by /dev/sdb
    system host-lvg-add ${compute} nova-local
    system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')
    # Use image-backed instance storage (size given in MiB)
    system host-lvg-modify -b image -s 10240 ${compute} nova-local
done
</nowiki></pre>
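Before unlocking, you can verify the resulting local volume group on each compute (a sketch, assuming the host-lvg-list subcommand of the system CLI):

<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-list compute-0
</nowiki></pre>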
 
 
 
===Unlocking a Compute Host===
 
 
 
Use the system host-unlock command to unlock the node:
 
  
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1
</nowiki></pre>
  
Wait while compute-N is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware. The host is rebooted, and its Availability State is reported as In-Test.

==System Health Check==

After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:
 
 
<pre><nowiki>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | compute-0    | compute     | unlocked       | enabled     | available    |
| 3  | compute-1    | compute     | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
</nowiki></pre>
 

== Deployment Options ==

* Standard Controller
** [[StarlingX/Installation Guide/Dedicated Storage|StarlingX Cloud with Dedicated Storage]]
** [[StarlingX/Installation Guide/Controller Storage|StarlingX Cloud with Controller Storage]]
* All-in-one
** [[StarlingX/Installation Guide/Duplex|StarlingX Cloud Duplex]]
** [[StarlingX/Installation Guide/Simplex|StarlingX Cloud Simplex]]