
StarlingX/DebianBuildEnvironment

Revision as of 19:16, 28 February 2022 by Mark.asselstine (talk | contribs) (Don't use of angle brackets around email address for project.gitemail. These are commonly used around email addresses and we don't want to encourage their use here as they cause issues in the parsing of the config file. Instead use double quotes.)

StarlingX Build Tools

The Debian build is completed using a set of containers designed to run in a Kubernetes environment. To facilitate this we are currently making use of Minikube and Helm, later on we will provide versions of the Helm Charts to allow for running builds directly on Kubernetes or StarlingX.

There are four containers (stx-builder | stx-pkgbuilder | stx-repomgr | stx-lat-tool) required to complete a build:

  • stx-builder: main developer build container.
  • stx-pkgbuilder: Debian package builder (uses sbuild).
  • stx-repomgr: Debian local repository archive (uses aptly).
  • stx-lat-tool: Debian image builder.

At a high level the StarlingX ISO image creation flow involves the following general steps (assuming you have already configured Docker on your system).

  1. Install Minikube and Helm.
  2. Build or download the StarlingX k8s development environment.
  3. Enter the stx-builder pod/container to trigger the build tasks.
  4. Build packages/ISO creation.


NOTE: the build system requires a Linux system with Docker and python 3.x installed. Building on Windows is not supported -- please use a Virtual Machine if necessary. The steps on this page have been tested on CentOS 7 and Ubuntu Focal.
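The host requirements above can be sanity-checked with a small script; a sketch only (the helper name is ours, not part of the StarlingX tools):

```shell
# Check that required host tools are installed.
# (Helper name is illustrative; the tool list reflects this page's
# stated requirements: Docker and Python 3.x.)
check_tools() {
    missing=0
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: found"
        else
            echo "$tool: MISSING"
            missing=1
        fi
    done
    return $missing
}

# e.g. check_tools docker python3
```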

Configure build environment

We need to create and start the build containers, which requires some additional configuration described below.

Install Minikube and Helm

Install Minikube to support the local k8s framework for building. Install Helm tools to manage the Helm Charts required to start/stop/upgrade the pods or the deployments for the StarlingX Building system. Before installing these components please make sure that Docker is available in your environment.

Install minikube (https://minikube.sigs.k8s.io/docs/start/):

   curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
   sudo install minikube-linux-amd64 /usr/local/bin/minikube

Note: as of this writing, minikube v1.22.0 is current.

Note: minikube requires at least 2 CPU cores.

Alternatively, you can use a third-party Minikube mirror:

  curl -LO http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.20.0/minikube-linux-amd64
  sudo install minikube-linux-amd64 /usr/local/bin/minikube

Install Helm -- you can select the version listed here or the latest released version:

   curl -LO https://get.helm.sh/helm-v3.6.2-linux-amd64.tar.gz
   tar xvf helm-v3.6.2-linux-amd64.tar.gz
   sudo mv linux-amd64/helm /usr/local/bin/

Add your user account to docker group:

 sudo usermod -aG docker $(id -un) && newgrp docker
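You can confirm that the group membership is active in your session; a small sketch (the helper name is ours, not part of the StarlingX tools):

```shell
# Check whether the current session belongs to a given group.
# (Helper name is illustrative.)
in_group() {
    id -nG | tr ' ' '\n' | grep -qx "$1"
}

# e.g.:
# in_group docker && echo "docker group active"
```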

Clone build tools and set up workspace

Clone build tools:

export TOOL_HOME=~/DebianBuild
mkdir -p $TOOL_HOME
cd $TOOL_HOME
git clone https://opendev.org/starlingx/tools

Create a workspace directory; it will be mapped into the build container.

 export WORKSPACE_HOME=~/DebianBuildWorkspace
 mkdir -p $WORKSPACE_HOME
 export PROJECT=stx-debian
 export STX_BUILD_HOME=/localdisk/designer/$(id -nu)/$PROJECT
 # Create the STX_BUILD_HOME and adjust any user permission required
 mkdir -p $STX_BUILD_HOME
 ln -sf $WORKSPACE_HOME $STX_BUILD_HOME
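Note that ``ln -sf`` with an existing directory as the target places the link inside ``$STX_BUILD_HOME``. A quick illustration with throwaway paths (not the real /localdisk tree):

```shell
# Illustrate the resulting layout using temporary paths.
demo=$(mktemp -d)
ws=$demo/DebianBuildWorkspace
build_home=$demo/designer/$(id -un)/stx-debian
mkdir -p "$ws" "$build_home"
ln -sf "$ws" "$build_home"
ls -l "$build_home"    # shows: DebianBuildWorkspace -> $demo/DebianBuildWorkspace
```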


Source the environment

The build tools come with a script, import-stx, which sets up your PATH and other environment variables as necessary. This script must be sourced before attempting to use any tools.

There are a number of environment variables you can set prior to sourcing this file; review the script and import-stx.README for a full list.

WARNING: minikube will not work if your $HOME directory points to an NFS location. In that case, point it to some other local file system by defining ``MINIKUBE_HOME`` in the environment before sourcing ``import-stx``:

# Necessary if your $HOME is on NFS
export MINIKUBE_HOME=/localdisk/designer/$(id -nu)
# Source the environment
cd $TOOL_HOME/tools
source import-stx

Configure build containers

The build expects a configuration file, ``stx.conf``, to exist at the root of the build tools working directory. It is a key/value file containing various build options. The ``stx config`` command may be used to add or modify entries in the file.

# source the environment
cd $TOOL_HOME/tools
source ./import-stx

# Align the builder container to use your user/UID
stx config --add builder.myuname $(id -un)
stx config --add builder.uid $(id -u)

# Embedded in ~/localrc of the build container
stx config --add project.gituser "First Last"              
stx config --add project.gitemail "your@email.address"

# This will be included in the name of your build container and the basename for $MY_REPO_ROOT_DIR  
stx config --add project.name $PROJECT                
stx config --add project.proxy false

# Show all the settings
stx config --show
# Show usage information
stx config --help
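The resulting file is plain key/value text; a hypothetical ``stx.conf`` after the commands above might look roughly like the following (keys per the commands on this page; values and exact layout are illustrative and may differ from what ``stx config`` actually writes):

```ini
# Hypothetical stx.conf contents (illustrative only)
builder.myuname = jdoe
builder.uid = 1000
project.gituser = First Last
project.gitemail = "your@email.address"
project.name = stx-debian
project.proxy = false
```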

Create build containers

The ``stx-init-env`` script will download or re-create build (docker) containers, and start them:

cd $TOOL_HOME/tools
bash stx-init-env

The script pulls build containers from DockerHub by default, where a new version is built once per day (i.e. default container images may be slightly out of date when you pull them). You can force a local re-build as follows:

cd $TOOL_HOME/tools
bash stx-init-env --rebuild

Once docker images are available locally, you can start & stop them using the ``stx`` tool:

stx control start     # start builder PODs if not running
stx control status    # display POD status
stx control stop      # stop PODs

WARNING: any changes to ``stx.conf`` (via ``stx config --add`` etc.) require that the PODs be re-started. If you want to make changes to the environment in the build container, use ``stx control stop``, then ``stx config`` to adjust the variables, and re-start the containers:

stx control stop
stx config add <...>
stx control start

Entering & controlling Pods

Once the containers are running, you can enter them (think ``docker exec <...> /bin/bash``). While there are 4 containers, most build tasks are driven from the "builder" container, which is the default when using the ``stx`` tool:

 # enter the "builder" container
 stx control enter

You can enter other containers as follows:

 stx control enter --dockername [builder|pkgbuilder|lat|repomgr]

Use the ``exit`` command to exit from the container back to the host environment.

You can use the ``stx control`` command to start/stop & monitor builder POD status:

 # control the Pods
 stx control start
 stx control stop
 stx control status
 # more info
 stx control --help

The ``status`` command includes Helm status, covering the deployments and the pods. You can use that information to manually enter or troubleshoot Pods using minikube or kubectl.

Every time you start/restart Pods

Execute these mandatory steps inside the builder:

 sudo apt-get update
 sudo apt-get install less
 git config --global user.name "First Last"
 git config --global user.email your@email.com

NOTE: you may see the following errors from apt. You can ignore them and continue.

 E: Failed to fetch http://stx-stx-repomgr:80/deb-local-source/dists/bullseye/main/source/Sources 404 Not Found [IP: 10.102.135.193 80]
 E: Some index files failed to download. They have been ignored, or old ones used instead.

Build packages/ISO creation

The stx-builder is the container where you will perform most of the actions, such as launching the tasks of building packages and images.

  stx control enter


Initialize the source tree

The StarlingX source tree consists of multiple git repositories. The 'repo' tool is used to sync these repositories locally; the configuration below is the minimum required for 'repo' to work:

 BUILD_BRANCH=master
 MANIFEST="default.xml"
 cd $MY_REPO_ROOT_DIR
 repo init -u https://opendev.org/starlingx/manifest -b $BUILD_BRANCH -m ${MANIFEST}
 repo sync

After 'repo sync' completes, check the directories below:

  $ ls $MY_REPO
  $ ls $MY_REPO/stx
  $ ls $MY_REPO_ROOT_DIR/stx-tools


Before running 'build-pkgs':

Run the command below to download the sources of all buildable packages by scanning the repo root $MY_REPO/stx. The download directory is $STX_MIRROR/sources:

  $ downloader -s

The lists below, one per build type, will be scanned in the repo root $MY_REPO/stx:

 debian_pkg_dirs
 debian_pkg_dirs_rt
 debian_pkg_dirs_installer

Verify that the local repos are created

 repo_manage.py list
 INFO:repo_manage:No remote repo
 INFO:repo_manage:3 local repos:
 INFO:repo_manage:deb-local-build : bullseye : main
 INFO:repo_manage:deb-local-binary : bullseye : main
 INFO:repo_manage:deb-local-source : bullseye : main

NOTE: all 3 repos will be present only after build-pkgs [-p <package>] has been run at a later time.

Download 3rd-party tar & deb files

Run the command below to download the Debian binary packages (distribution: bullseye) into the directory $STX_MIRROR/binaries:

  $ downloader -b

The binary packages in the lists below will be downloaded:

 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/common/base-bullseye.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-std.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-rt.lst


To download both sources and binaries in one step, run:

  $ downloader -b -s


  $ downloader --help
  usage: downloader [-h] [-b] [-s] [-c]
  downloader helper
  optional arguments:
    -h, --help            show this help message and exit
    -b, --download_binary
                          download binary debs
    -s, --download_source
                          download stx source
    -c, --clean_mirror    clean the whole mirror and download again, be careful to use

Currently, the apt sources used to download packages are in the '/etc/apt/sources.list' file in the builder container.

Build packages

To build an individual package:

 build-pkgs -p <name of package>

To build all of the packages available:

 build-pkgs -a

NOTE: your build may fail due to circular dependencies. As a workaround, try building 2 or 3 times.
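The workaround above can be scripted; a sketch only (the retry helper is ours, not part of the tools; ``build-pkgs -a`` is the command from this page):

```shell
# Retry the build a few times to ride out circular build dependencies.
# (Helper name and retry count are illustrative.)
retry_build() {
    cmd=$1
    attempts=$2
    i=1
    while [ "$i" -le "$attempts" ]; do
        echo "build attempt $i"
        if $cmd; then
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

# Inside the builder container:
# retry_build "build-pkgs -a" 3
```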

Build ISO

Once you have built all of the packages you can build the ISO by running the following commands:

  build-image
  ls -al /localdisk/deploy/*.iso

Log files

While inside the build container, log files may be found here:

  • /localdisk/builder.log and /localdisk/pkgbuilder.log - top-level build controller log files
  • ${MY_WORKSPACE}/<std or rt>/<package name>/*.build - individual package build logs
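A quick way to collect the per-package logs, per the paths above; a sketch only (the helper name is ours; MY_WORKSPACE is set inside the builder container):

```shell
# List individual package build logs under a workspace root.
# (Function name is illustrative, not part of the StarlingX tools.)
list_build_logs() {
    ws=$1
    find "$ws/std" "$ws/rt" -name '*.build' 2>/dev/null
}

# e.g. list_build_logs "$MY_WORKSPACE"
```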