StarlingX/Developer Guide

See the StarlingX Build Guide for the latest information regarding StarlingX development practices. This wiki page has been deprecated.

<!--

This section contains the steps for building a StarlingX ISO from Master branch.

Requirements
The recommended minimum requirements include:

Hardware Requirements
A workstation computer with:


 * Processor: x86_64 is the only supported architecture
 * Memory: At least 32GB RAM
 * Hard Disk: 500GB HDD
 * Network: Network adapter with active Internet connection

Software Requirements
A workstation computer with:


 * Operating System: Ubuntu 16.04 LTS 64-bit
 * Docker
 * Android Repo Tool
 * Proxy Settings Configured (If Required)
 * See http://lists.starlingx.io/pipermail/starlingx-discuss/2018-July/000136.html for more details
 * Public SSH Key

Development Environment Setup
This section describes how to set up a StarlingX development system on a workstation computer. After completing these steps, you will be able to build a StarlingX ISO image on the following Linux distribution:


 * Ubuntu 16.04 LTS 64-bit

Update Your Operating System
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:
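For Ubuntu 16.04, the update step can be sketched as follows (standard apt commands; run with appropriate privileges):

```shell
# Refresh the local package index, then apply available upgrades.
sudo apt-get update
sudo apt-get -y upgrade
```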

Git
Install the required packages on an Ubuntu host system with:

Make sure to set up your Git identity:
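A minimal sketch, assuming an Ubuntu host and example identity values:

```shell
# Install Git if it is not already present (assumed Ubuntu host):
#   sudo apt-get install -y git
# Configure the identity recorded in your commits (example values):
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
```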

Docker CE
Install the required Docker CE packages on an Ubuntu host system. See Get Docker CE for Ubuntu for more information.
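The install flow for Ubuntu 16.04 roughly follows Docker's official guide; the sketch below may drift from the current instructions, so verify against Get Docker CE for Ubuntu:

```shell
# Add Docker's apt repository and install docker-ce (per Docker's Ubuntu guide).
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
# Optionally allow your user to run docker without sudo:
sudo usermod -aG docker $USER
```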

Android Repo Tool
Install the required Android Repo Tool on an Ubuntu host system. Follow the two steps in the "Installing Repo" section from Installing Repo to install the Android Repo Tool.
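The two "Installing Repo" steps amount to downloading the repo launcher and making it executable; a sketch (the ~/bin install path is a common convention):

```shell
# Download the repo launcher into ~/bin and make it executable.
mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
# Make sure ~/bin is on PATH (add to ~/.bashrc to persist).
export PATH=~/bin:$PATH
```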

Install Public SSH Key

 * 1) Follow the instructions on GitHub to generate a public SSH key.
 * 2) Upload the public key to your GitHub account profile.
 * 3) Upload the public key to your Gerrit account profile.
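Key generation can be sketched as below; the key file name and email are examples, and GitHub's guide may recommend different options:

```shell
# Generate a key pair non-interactively (example file name, empty passphrase).
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "your.email@example.com" -f ~/.ssh/id_rsa_starlingx -N ""
# Print the public key to copy into your GitHub and Gerrit profiles.
cat ~/.ssh/id_rsa_starlingx.pub
```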

Install stx-tools project
Under your $HOME directory, clone the stx-tools project:
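A sketch of the clone step; the repository URL is an assumption based on StarlingX hosting of this era:

```shell
cd $HOME
git clone https://git.starlingx.io/stx-tools
```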



Create a Workspace Directory
Create a starlingx workspace directory on your workstation computer. Usually, you’ll want to create it somewhere under your user’s home directory.
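For example (any path works, as long as later steps reference the same one):

```shell
# Create the starlingx workspace root under the home directory.
mkdir -p $HOME/starlingx/
```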



Build the CentOS Mirror Repository
This section describes how to build the CentOS Mirror Repository.

Setup Repository Docker Container
Run the following commands under a terminal identified as "One".

Navigate to the $HOME/stx-tools/centos-mirror-tool project directory:



If necessary, set the http/https proxy in your Dockerfile before building the Docker image.
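For example, proxy settings in the Dockerfile might look like this (the proxy URL is a placeholder):

```dockerfile
# Hypothetical proxy settings; replace with your site's proxy.
ENV http_proxy  "http://your-proxy.example.com:8080"
ENV https_proxy "http://your-proxy.example.com:8080"
```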

Build your <user>:<tag> base container image, e.g. user:centos-mirror-repository.

Launch a <user> Docker container using the previously created base container image <user>:<tag>, e.g. user:centos-mirror-repository. As /localdisk is defined as the workdir of the container, the same folder name should be used to define the volume. The container will start and populate logs and output folders in this directory. The container must be run from the same directory where the other scripts are stored.

Note: the above command will create the container in the background, which means you need to attach to it manually. The advantage of this is that you can enter and exit the container as many times as you want.
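The build and launch steps can be sketched as follows; the image and container names are examples, and the exact flags may differ in your stx-tools version:

```shell
# From $HOME/stx-tools/centos-mirror-tool, build the base image.
docker build --tag $USER:centos-mirror-repository --file Dockerfile .
# Launch it in the background, mounting the current directory as /localdisk.
docker run -itd --name $USER-centos-mirror-repository \
    --volume $(pwd):/localdisk $USER:centos-mirror-repository
```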

Download Packages
Attach to the repository Docker container created previously. Inside the container, enter the following command to download the required packages to populate the CentOS Mirror Repository:
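A sketch of attaching and starting the download (the container name is an example; download_mirror.sh is the script shipped in centos-mirror-tool):

```shell
# Attach to the running mirror container, then start the download inside it.
docker exec -it $USER-centos-mirror-repository /bin/bash
# Inside the container:
bash download_mirror.sh
```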

Monitor the download of packages until it is complete. When the download is complete, the following message is displayed:


Verify Packages
Verify there are no missing or failed packages:

In case some packages are missing or failed to download due to network instability (or timeouts), download them manually to ensure you get all RPMs listed in rpms_3rdparties.lst, rpms_centos.lst, and rpms_centos3rdparties.lst.

Packages Structure
The following is a general overview of the package structure you will have after downloading the packages:

Create CentOS Mirror Repository
Outside your Repository Docker container, in another terminal identified as "Two", run the following commands:

From terminal identified as "Two", create a mirror/CentOS directory under your starlingx workspace directory:

Copy the CentOS Mirror Repository built under $HOME/stx-tools/centos-mirror-tool to the $HOME/starlingx/mirror/ workspace directory.
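A sketch of the copy, assuming the downloaded content landed under an output/ subdirectory of centos-mirror-tool:

```shell
mkdir -p $HOME/starlingx/mirror/CentOS/
# 'output' is an assumption about where the download scripts place their results.
cp -r $HOME/stx-tools/centos-mirror-tool/output/* $HOME/starlingx/mirror/CentOS/
```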


Setup Building Docker Container
From terminal identified as "Two", create the workspace folder

Navigate to the $HOME/stx-tools project directory:

Copy your Git configuration to the "toCopy" folder:

Create a localrc file:
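A sample localrc; the variable names are assumptions about what the stx-tools Dockerfile of this era consumed, so check the project README:

```shell
# Hypothetical localrc contents; adjust values to your environment.
MYUNAME=$USER
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror
```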


If necessary, set the http/https proxy in your Dockerfile.centos73 before building the Docker image.

Base container setup. If you are running on a Fedora system, you will see a ".makeenv:88: *** missing separator.  Stop." error; to continue:


 * delete the functions defined in .makeenv ( module { ... } )
 * delete line 19 in the Makefile ( NULL := $(shell bash -c "source buildrc ... ) )

Build container setup

Verify environment variables

Build container run

Execute the built container:

Download Source Code Repositories
From the terminal identified as "Two", now inside the building Docker container, set up the internal environment.

Repo init

Repo sync

Tarballs Repository

Alternatively, you can run the populate_downloads.sh script to copy the tarballs instead of using a symlink.
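The repo steps can be sketched as follows; the manifest URL and branch are assumptions based on StarlingX hosting of this era:

```shell
# Inside the build container: initialize and sync the source tree.
repo init -u https://git.starlingx.io/stx-manifest -m default.xml
repo sync -j$(nproc)
```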

Outside the container

From another terminal identified as "Three", mirror the binaries.

Build Packages
Back in the building Docker container, in the terminal identified as "Two".

Temporary: be prepared for some missing or corrupted RPM and tarball packages generated during the build, which will make the next step fail; if that happens, download those missing or corrupted packages manually.

Update the symbolic links.

Build-Pkgs

Optional: Generate-Cgcs-Tis-Repo. This step is optional but will improve performance on subsequent builds. The cgcs-tis-repo has the dependency information that sequences the build order. To generate or update this information, run the following command after building modified or new packages.
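The package build commands can be sketched as follows (build-pkgs and generate-cgcs-tis-repo are the stx-tools commands this guide refers to; the usage shown is an example):

```shell
# Inside the build container: build all packages.
build-pkgs
# Optional: refresh dependency info to speed up subsequent builds.
generate-cgcs-tis-repo
```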

Build StarlingX ISO
Build-Iso
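For example (the output location is an assumption):

```shell
# Inside the build container: assemble the ISO from the built packages.
build-iso
# The resulting bootimage.iso is typically found under $MY_WORKSPACE/export/.
```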

Build installer
To get your StarlingX ISO ready to use, you will need to create the init files that will be used to boot the ISO as well as to boot additional controllers and compute nodes. Note that this procedure is only needed for your first build and every time the kernel is upgraded.

Once you have run build-iso, run:

This will build rpm and anaconda packages. Then run:
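The two commands referenced above can be sketched as follows (the command names are assumed from the stx-tools of this era; verify against your tree):

```shell
# Build the rpm and anaconda packages needed by the installer.
build-pkgs --installer
# Create the three network-installer boot files.
update-pxe-network-installer
```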

The update-pxe-network-installer script covers the steps detailed in $MY_REPO/stx/stx-metal/installer/initrd/README. This script will create three files under /localdisk/loadbuild/ / /pxe-network-installer/output.

Then, rename them to:

There are two ways to use these files:


 * 1) Store the files in the /import/mirror/CentOS/tis-installer/ folder for future use.
 * 2) Store them in an arbitrary location and modify the $MY_REPO/stx/stx-metal/installer/pxe-network-installer/centos/build_srpm.data file to point to these files.

Now, the pxe-network-installer package needs to be recreated and the ISO regenerated.

Now your ISO should be able to boot.

Additional notes

 * In order to get the first boot working, this complete procedure needs to be done. However, once the init files are created, they can be stored in a shared location where different developers can make use of them. Updating these files is not a frequent task and should be done whenever the kernel is upgraded.
 * StarlingX is in active development, so it is possible that in the future the 0.2 version will change to a more generic solution.

Purpose
Greatly reduce build times after a repo sync for designers working within a regional office. Starting from a new workspace, build-pkgs typically requires 3+ hours; build avoidance typically reduces this step to ~20 minutes.

Limitations

 * Little or no benefit for designers who refresh a pre-existing workspace at least daily (download_mirror.sh, repo sync, generate-cgcs-centos-repo.sh, build-pkgs, build-iso). In these cases an incremental build (reuse of the same workspace without a 'build-pkgs --clean') is often just as efficient.
 * Not likely to be useful to solo designers, or teleworkers who wish to compile on their home computers. Build avoidance downloads build artifacts from a reference build, and WAN speeds are generally too slow.

Method (in brief)
Reference Builds
 * A server in the regional office performs regular (e.g. daily), automated builds using existing methods. Call these the reference builds.
 * The builds are timestamped and preserved for some time (a few weeks).
 * A build CONTEXT is captured. This is a file produced by build-pkgs at location '$MY_WORKSPACE/CONTEXT'. It is a bash script that can cd to each and every git and check out the SHA that contributed to the build.
 * For each package built, a file shall capture the md5sums of all the source code inputs to the build of that package. These files are also produced by build-pkgs at location '$MY_WORKSPACE/<build-type>/rpmbuild/SOURCES/<pkg-name>/srpm_reference.md5'.
 * All these build products are accessible locally (e.g. within a regional office) via rsync (other protocols can be added later).

Designers
 * Request a build avoidance build. Recommended after you have just done a repo sync, e.g.
 * Additional arguments, and/or environment variables, and/or a config file unique to the regional office, are used to specify a URL to the reference builds.
 * Using a config file to specify the location of your reference build
 * Using command line args to specify the location of your reference build
 * Prior to your build attempt, you need to accept the host key. This will prevent rsync failures on a yes/no prompt. (You should only have to do this once.)
 * build-pkgs will:
 * From newest to oldest, scan the CONTEXTs of the various reference builds. Select the first (most recent) context which satisfies the following requirement: for every git, the SHA specified in the CONTEXT is present.
 * The selected context might be slightly out of date, but not by more than a day (assuming daily reference builds).
 * If the context has not been previously downloaded, then download it now, meaning download select portions of the reference build workspace into the designer's workspace. This includes all the SRPMs, RPMs, MD5SUMs, and misc supporting files. (~10 min over an office LAN)
 * The designer may have additional commits not present in the reference build, or uncommitted changes. Affected packages will be identified by the differing md5sums, and those packages are rebuilt. (5+ min, depending on what packages have changed)

 * What if no valid reference build is found? Then build-pkgs will fall back to a regular build.

Reference builds

 * The regional office implements an automated build that pulls the latest StarlingX software and builds it on a regular basis, e.g. daily. Perhaps implemented by Jenkins, cron, or similar tools.
 * Each build is saved to a unique directory, and preserved for a time that reflects how long a designer might be expected to work on a private branch without synchronizing with the master branch, e.g. 2 weeks.
 * The MY_WORKSPACE directory for the build shall have a common root directory, and a leaf directory that is a sortable time stamp. Suggested format: YYYYMMDDThhmmss, e.g.
 * Designers can access all build products over the internal network of the regional office. The current prototype employs rsync. Other protocols that can efficiently share/copy/transfer large directories of content can be added as needed.
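For example, a sortable leaf directory name can be generated with date(1) (the root path is an example):

```shell
# Sortable timestamped workspace for one reference build.
BUILD_ROOT=$HOME/reference-builds
MY_WORKSPACE=$BUILD_ROOT/$(date +%Y%m%dT%H%M%S)
echo "$MY_WORKSPACE"
```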

Advanced usage
Can the reference build itself use build avoidance? Yes. Can it reference itself? Yes. In either case, we advise caution. To protect against any possible 'divergence from reality', you should limit how many steps removed a build avoidance build is from a full build. Suppose we want to implement a self-referencing daily build, except that a full build occurs every Saturday. To protect ourselves from a build failure on Saturday, we also want a limit of 7 days since the last full build. Your build script might look like this ...
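A sketch of such a script; the '--build-avoidance' flag spelling and the timestamp bookkeeping are assumptions to be checked against your stx-tools version:

```shell
#!/bin/bash
# Weekly-full / daily-avoidance policy sketch.

# Full build on Saturdays, or whenever the last full build is 7+ days old.
need_full_build() {
    local day=$1 age_days=$2
    [ "$day" = "Sat" ] || [ "$age_days" -ge 7 ]
}

day=$(date +%a)
age_days=3   # in a real script, compute from a saved timestamp of the last full build

if need_full_build "$day" "$age_days"; then
    build_cmd="build-pkgs"                    # full build
else
    build_cmd="build-pkgs --build-avoidance"  # avoidance build against the reference
fi
echo "$build_cmd"
```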

One final wrinkle: we can ask build avoidance to preferentially use the full-build day, rather than the most recent build, as the reference point of the next avoidance build, via '--build-avoidance-day <day-name>'; e.g. substitute this line into the above. The advantage is that our build is never more than one step removed from a full build (assuming the full build was successful). The disadvantage is that by the end of the week the reference build is getting rather old. During active weeks, build times might approach that of a full build.

-->