StarlingX/DebianBuildEnvironment
Revision as of 15:59, 15 November 2022

StarlingX Build Tools

The Debian build is completed using a set of containers designed to run in a Kubernetes environment. To facilitate this we are currently making use of Minikube and Helm, later on we will provide versions of the Helm Charts to allow for running builds directly on Kubernetes or StarlingX.

There are five containers required to complete a build:

  • stx-builder: main developer build container.
  • stx-pkgbuilder: Debian package builder (uses sbuild).
  • stx-repomgr: Debian local repository archive (uses aptly)
  • stx-lat-tool: Debian image builder
  • stx-docker: Docker in Docker (build docker images)

At a high level the StarlingX ISO image creation flow involves the following general steps (assuming you have already configured Docker on your system).

  1. Install Minikube and Helm.
  2. Build or download the StarlingX k8s development environment.
  3. Enter the stx-builder pod/container to trigger the build task.
  4. Build packages/ISO creation.


NOTE: the build system requires a Linux system with Docker and Python 3.x installed. Building on Windows is not supported -- please use a virtual machine if necessary. The steps on this page have been tested on CentOS 7 and Ubuntu Focal.

Configure build environment

We need to create and start the build containers, which requires some additional configuration described below.

Install Minikube and Helm

Install Minikube to provide the local k8s framework for building, and install Helm to manage the Helm Charts required to start/stop/upgrade the pods and deployments of the StarlingX build system. Before installing these components, please make sure that Docker is available in your environment.

Install Minikube (https://minikube.sigs.k8s.io/docs/start/):

   curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
   sudo install minikube-linux-amd64 /usr/local/bin/minikube

Note: as of this writing, Minikube v1.22.0 is current.

Note: Minikube requires at least 2 CPU cores.
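Since Minikube needs at least 2 cores, a quick preflight check can save a failed start. This is a sketch, not part of stx-tools:

```shell
# Hypothetical preflight check before "minikube start":
# warn if the host has fewer than the 2 CPU cores Minikube requires.
cores=$(getconf _NPROCESSORS_ONLN)
if [ "$cores" -lt 2 ]; then
    echo "WARNING: only ${cores} CPU core(s) available; Minikube requires at least 2" >&2
else
    echo "CPU check OK: ${cores} cores"
fi
```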

Alternatively, you can use a third-party Minikube binary:

  curl -LO http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.20.0/minikube-linux-amd64
  sudo install minikube-linux-amd64 /usr/local/bin/minikube

Install Helm -- you can select the version listed here or the latest released version:

   curl -LO https://get.helm.sh/helm-v3.6.2-linux-amd64.tar.gz
   tar xvf helm-v3.6.2-linux-amd64.tar.gz
   sudo mv linux-amd64/helm /usr/local/bin/

Add your user account to the docker group:

 sudo usermod -aG docker $(id -un) && newgrp docker
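Note that ``newgrp`` only affects the current shell; other sessions need a logout/login. A small sketch (assuming a Linux host where the Docker install created the ``docker`` group) to verify the change took effect:

```shell
# Check whether the current user is in the docker group.
if id -nG "$(id -un)" | grep -qw docker; then
    echo "docker group membership OK"
else
    echo "not in the docker group yet; run 'newgrp docker' or log out and back in" >&2
fi
```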

Install repo

Install the 'repo' tool; see https://gerrit.googlesource.com/git-repo/ for instructions.
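For reference, one common way to install the ``repo`` launcher, per the git-repo project's documentation (verify the download URL against the page above; installing into ``~/.local/bin`` is an assumption):

```shell
# Download the repo launcher into a user-writable bin directory.
mkdir -p "$HOME/.local/bin"
export PATH="$HOME/.local/bin:$PATH"
curl -fsSL https://storage.googleapis.com/git-repo-downloads/repo \
    -o "$HOME/.local/bin/repo" || echo "download failed; install repo manually" >&2
chmod a+x "$HOME/.local/bin/repo" 2>/dev/null || true
```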

Export environment variables

export PROJECT="stx-env"
export USER_NAME="<Firstname Lastname>"
export USER_EMAIL="<your email>"

# STX_BUILD_HOME should be set to a directory owned by your user ID
# the build home needs at least 200 GB of free space to build all packages and the ISO
export STX_BUILD_HOME="/home/${USER}/${PROJECT}"
export REPO_ROOT="${STX_BUILD_HOME}"/repo
export REPO_ROOT_SUBDIR="localdisk/designer/${USER}/${PROJECT}"
 
# MINIKUBE
export STX_PLATFORM="minikube"
export MINIKUBENAME="minikube-${USER}"

#############################################
# Manifest/Repo Options:
#############################################
# STX MASTER
export MANIFEST_URL="https://opendev.org/starlingx/manifest.git"
export MANIFEST_BRANCH="master"
export MANIFEST="default.xml"
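These exports only apply to the current shell. One approach is to keep them in a small env file that you source in every build shell; the file name and location here are assumptions, not part of stx-tools:

```shell
# Sketch: persist the build environment in a file and source it per session.
ENVFILE="$HOME/stx-env.sh"
cat > "$ENVFILE" <<'EOF'
export PROJECT="stx-env"
export STX_BUILD_HOME="/home/${USER}/${PROJECT}"
export REPO_ROOT="${STX_BUILD_HOME}/repo"
export REPO_ROOT_SUBDIR="localdisk/designer/${USER}/${PROJECT}"
export STX_PLATFORM="minikube"
export MINIKUBENAME="minikube-${USER}"
export MANIFEST_URL="https://opendev.org/starlingx/manifest.git"
export MANIFEST_BRANCH="master"
export MANIFEST="default.xml"
EOF
. "$ENVFILE"
```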

For more details about the STX environment variables, see https://opendev.org/starlingx/tools/src/branch/master/import-stx.README.

Create directories

Create the $STX_BUILD_HOME directory. You may need sudo privileges if using a shared location such as /build, e.g.:

sudo mkdir -p /build/${USER}
sudo chown ${USER}: /build/${USER}
mkdir -p $STX_BUILD_HOME
cd $STX_BUILD_HOME
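Given the ~200 GB requirement noted earlier, it can be worth checking free space before starting. A sketch assuming GNU df (the threshold comes from the note above):

```shell
# Warn if the build home has less than the ~200 GB a full build needs.
need_gb=200
avail_gb=$(df -BG --output=avail "${STX_BUILD_HOME:-.}" | tail -n 1 | tr -dc '0-9')
if [ "${avail_gb:-0}" -lt "$need_gb" ]; then
    echo "WARNING: only ${avail_gb:-0} GB free under ${STX_BUILD_HOME:-.}; ${need_gb} GB recommended" >&2
else
    echo "disk space OK: ${avail_gb} GB free"
fi
```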

Initialize repo

# create REPO_ROOT_SUBDIR and a symlink to it
# the symlink is a helper: minikube mounts STX_BUILD_HOME as its workspace,
# so the link acts as a shortcut for accessing the repos
mkdir -p $REPO_ROOT_SUBDIR
ln -s $REPO_ROOT_SUBDIR $REPO_ROOT
 
cd $REPO_ROOT
# download and sync the repos
repo init -u ${MANIFEST_URL} -b ${MANIFEST_BRANCH} -m ${MANIFEST}
repo sync

Init and setup STX

The build tools come with a script, import-stx, which sets up your PATH and other environment variables as necessary. This script must be sourced before attempting to use any of the tools.

There are a number of environment variables you can set prior to sourcing this file; review the script and import-stx.README for a full list.

WARNING: Minikube does not work if your $HOME directory is on an NFS mount. In that case, point it at another local file system by defining $MINIKUBE_HOME in the environment before sourcing import-stx.

The build expects a configuration file, ``stx.conf``, to exist at the root of the build tools working directory (see the sample at https://opendev.org/starlingx/tools/src/branch/master/stx.conf.sample). It is a key/value file containing various build options. The ``stx config`` command may be used to add/modify entries in the file.

# Init stx tool
cd stx-tools
source import-stx

# Update stx config
# Align the builder container to use your user/UID
stx config --add builder.myuname $(id -un)
stx config --add builder.uid $(id -u)

# Embedded in ~/localrc of the build container
stx config --add project.gituser ${USER}
stx config --add project.gitemail ${USER_EMAIL}

# This will be included in the name of your build container and the basename for $MY_REPO_ROOT_DIR  
stx config --add project.name ${PROJECT}
 
stx config --show
# Show usage information
stx config --help

Start/Create build containers

The ``stx-init-env`` script will download or re-create build (docker) containers, and start them:

cd repo/stx-tools
./stx-init-env
# Monitor the status until they are running:
stx control status
# You should see 5 containers in the Running state

Once docker images are available locally, you can start & stop them using the ``stx`` tool:

stx control start          # start builder PODs if not running
stx control status       # display POD status
stx control stop          # stop PODs

WARNING: any change to ``stx.conf`` (via ``stx config add`` etc.) requires that the pods be re-started. If you want to make changes to the environment in the build container, use ``stx control stop``, then ``stx config`` to adjust the variables, and re-start the containers:

stx control stop
stx config add <...>
stx control start

The script pulls build containers from DockerHub by default, where a new version is built once per day (i.e. the default container images may be slightly out of date when you pull them). You can force a local re-build as follows:

stx control stop
cd repo/stx-tools
./stx-init-env --rebuild

Entering & controlling Pods

Once the containers are running, you can enter them (think ``docker exec <...> /bin/bash``). While there are 5 containers, most build tasks are driven from the "builder" container, which is the default when using the ``stx`` tool:

 # enter the "builder" container
 stx shell

You can enter the other containers as follows:

 stx shell --container [builder|pkgbuilder|lat|repomgr|docker]

Use the ``exit`` command to return from the container to the host environment.

You can use the ``stx control`` command to start/stop & monitor builder POD status:

 # control the Pods
 stx control start
 stx control stop
 stx control status
 # more info
 stx control --help

The ``status`` command includes the Helm status, the deployments and the pods. You can use that information to manually enter or troubleshoot pods using minikube or kubectl.

Every time you start/restart Pods

Execute these mandatory steps inside the builder:

 sudo apt-get update
 git config --global user.name "Firstname Lastname"
 git config --global user.email "Your email"

NOTE: you may see the following errors from apt. You can ignore this and continue.

 E: Failed to fetch http://stx-stx-repomgr:80/deb-local-source/dists/bullseye/main/source/Sources 404 Not Found [IP: 10.102.135.193 80]
 E: Some index files failed to download. They have been ignored, or old ones used instead.
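If you restart the pods often, the mandatory steps above can be wrapped in a small helper run inside the builder; the function name is an assumption, not part of stx-tools:

```shell
# Sketch: one-shot helper for the mandatory per-restart steps inside the builder.
builder_setup() {
    # apt errors from the not-yet-populated local repo are harmless (see NOTE above)
    sudo apt-get update || true
    git config --global user.name  "$1"
    git config --global user.email "$2"
}
# usage: builder_setup "Firstname Lastname" "your@email"
```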

Build packages/ISO creation

The builder is the container where you will perform most of the actions, such as launching the task of building packages and images.

  stx shell

Refresh the source tree

The StarlingX source tree consists of multiple git repositories. The 'repo' tool is used to sync these repositories; if required, you can sync them from inside the builder:

 cd $MY_REPO_ROOT_DIR
 repo sync

After the 'repo sync' is done, check the directories below:

  ls $MY_REPO
  ls $MY_REPO/stx
  ls $MY_REPO_ROOT_DIR/stx-tools


Before running 'build-pkgs', run the command below to download the sources of all buildable packages, found by scanning the repo root $MY_REPO/stx. The download directory is $STX_MIRROR:

  downloader -s -B std,rt

All of the package-list files below (one per build type) under the repo root $MY_REPO/stx will be scanned:

  • debian_pkg_dirs
  • debian_pkg_dirs_rt
  • debian_pkg_dirs_installer

Download 3rd-party tar & deb files

Run the command below to download the Debian binary packages (distribution: bullseye) into the directory $STX_MIRROR/binaries:

  downloader -b -B std,rt

Binary packages from all of the lists below will be downloaded:

 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/common/base-bullseye.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-std.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-rt.lst


You can also run the command below to download both sources and binaries:

  downloader -B std,rt
  # To check all options:
  downloader --help

Currently, the apt sources used to download packages are in the '/etc/apt/sources.list' file in the builder container.

Verify that the local repos are created

 repo_manage.py list
 INFO:repo_manage:No remote repo
 INFO:repo_manage:3 local repos:
 INFO:repo_manage:deb-local-build : bullseye : main
 INFO:repo_manage:deb-local-binary : bullseye : main
 INFO:repo_manage:deb-local-source : bullseye : main

NOTE: all 3 repos will only be present after build-pkgs [-p <package>] has been run in a later step.

Build packages

The 'build-pkgs' tool runs in two phases for each Debian package:

1) Check the digest of the package's source meta data. For example:

if package 'dhcp' is in the cache:
  if the sha256 digest of the folder (/path/to/stx/integ/base/dhcp/debian) has not changed:
    if the dsc file of package 'dhcp' exists:
        reuse the existing dsc file
        return
    create the dsc file for package 'dhcp' and add the checksum to the cache

2) Build the package. Build avoidance is enabled by default for this phase; the build option '-c' turns build avoidance off.

if build avoidance is enabled:
    check whether there is a build stamp for this package:
        if yes, skip the build and return
send the build request for the package to the pkgbuilder container
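The folder-digest idea in phase 1 can be sketched in shell. This is a simplified illustration, not the actual build-pkgs implementation:

```shell
# Sketch: compute one sha256 over all files in a directory; if it changes,
# the package's debian/ meta data changed and the dsc must be regenerated.
dir_digest() {
    find "$1" -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum | awk '{print $1}'
}
```

If the digest matches the cached value, the existing dsc file is reused; otherwise the dsc is recreated and the cache updated.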

To build packages:

# Build all packages
# this should rebuild all packages (std and rt)
build-pkgs
# If you want to clean and build all:
build-pkgs --clean 

Be careful: '--clean' not only cleans the whole build directory "/localdisk/loadbuild/<user>/<project>/{std,rt}" but also cleans the local repository "deb-local-build".
This means all the StarlingX packages will be built from scratch, which will take time.
If you just want to resume the previous build, run without '--clean':
build-pkgs


# Build packages in parallel
build-pkgs --parallel <number of parallel tasks; the default is 10, the maximum is 30>

# To set the interval at which package build status is polled during a parallel build:
--poll_interval <polling interval in seconds; the default is 10>

# To limit the number of make jobs for a package:
--max_make_jobs <the default equals the environment variable 'STX_BUILD_CPUS' or 'MAX_CPUS' inside the container>


# Build single package
build-pkgs -p <package name>

# Build single package cleaning previous build
build-pkgs -c -p <package name>

# Once the packages are ready you can build the iso
build-image

Build ISO

Once you have built all of the packages, you can build the ISO by running the following command:

  build-image
  ls -al /localdisk/deploy/*.iso

Log files

While inside the build container, log files may be found here:

  • /localdisk/builder.log and /localdisk/pkgbuilder.log - top-level build controller log files
  • ${MY_WORKSPACE}/<std or rt>/<package name>/*.build - individual package build logs