StarlingX/DebianBuildEnvironment
Revision as of 22:03, 24 January 2022

StarlingX Build Tools

The Debian build is completed using a set of containers designed to run in a Kubernetes environment. To facilitate this we are currently making use of Minikube and Helm; later on we will provide versions of the Helm charts to allow running builds directly on Kubernetes or StarlingX.

There are four containers (stx-builder|stx-pkgbuilder|stx-repomgr|stx-lat-tool) required to complete a build:

  • stx-builder: main developer build container.
  • stx-pkgbuilder: Debian package builder (uses sbuild).
  • stx-repomgr: Debian local repository archive (uses aptly).
  • stx-lat-tool: Debian image builder.

At a high level the StarlingX ISO image creation flow involves the following general steps (assuming you have already configured Docker on your system).

  1. Install Minikube and Helm.
  2. Build or download the StarlingX k8s development environment.
  3. Enter the stx-builder pod/container to trigger the building task.
  4. Build packages/ISO creation.
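The steps above can be condensed into the following sketch; every command in it is taken from the sections later in this guide, and the paths are the ones used there:

```shell
# Condensed end-to-end flow (each command is detailed in the sections below)
export TOOL_HOME=~/DebianBuild
cd $TOOL_HOME/tools
source import-stx          # set up PATH and build environment
bash stx-init-env          # start Minikube and the build containers
stx control enter          # enter the stx-builder container
# inside the builder container:
#   downloader -s -b       # download sources and binaries
#   build-pkgs -a          # build all packages
#   build-image            # create the ISO
```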

Configure build environment

We need to create and start the build containers, which requires some additional configuration described below.

Install Minikube and Helm

Install Minikube to support the local k8s framework for building. Install Helm tools to manage the Helm Charts required to start/stop/upgrade the pods or the deployments for the StarlingX Building system. Before installing these components please make sure that Docker is available in your environment.

Install minikube (https://minikube.sigs.k8s.io/docs/start/):

   curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
   sudo install minikube-linux-amd64 /usr/local/bin/minikube

Alternatively, we can use a third-party Minikube binary:

  curl -LO http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.20.0/minikube-linux-amd64
  sudo install minikube-linux-amd64 /usr/local/bin/minikube

Install Helm -- you can select the version listed here or the latest released version:

   curl -LO https://get.helm.sh/helm-v3.6.2-linux-amd64.tar.gz
   tar xvf helm-v3.6.2-linux-amd64.tar.gz
   sudo mv linux-amd64/helm /usr/local/bin/

Add your user account to docker group:

 sudo usermod -aG docker $(id -un) && newgrp docker

Clone build tools and set up workspace

Clone build tools:

export TOOL_HOME=~/DebianBuild
mkdir -p $TOOL_HOME
cd $TOOL_HOME
git clone https://opendev.org/starlingx/tools

Create a workspace directory; it will be mapped into the build container.

 export WORKSPACE_HOME=~/DebianBuildWorkspace
 mkdir -p $WORKSPACE_HOME
 sudo mkdir -p /localdisk
 sudo ln -sf $WORKSPACE_HOME /localdisk/$(id -nu)

Source the environment

The build tools come with a script, import-stx, which sets up your PATH and other environment variables as necessary. This script must be sourced before attempting to use any of the tools.

There are a number of environment variables you can set prior to sourcing this file; feel free to review the script for a full list.

WARNING: Minikube does not work if your $HOME directory is on NFS. In that case, point it at another local file system by defining ``MINIKUBE_HOME`` in the environment before sourcing ``import-stx``:

# Necessary if your $HOME is on NFS
export MINIKUBE_HOME=/localdisk/$(id -nu)
# Source the environment
cd $TOOL_HOME/tools
source import-stx

Configure build containers

The build expects a configuration file, ``stx.conf`` (example), to exist at the root of the build tools working directory. It is a key/value file containing various build options. The ``stx config`` command may be used to add or modify entries in the file.
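As an illustration of the key/value format (the ``stx.conf`` example linked above is authoritative; the keys here are taken from the commands below, and the values are placeholders), the sketch below also shows how such a file could be inspected with standard tools:

```shell
# Hypothetical stx.conf fragment -- the real file is managed by 'stx config'
cat > /tmp/stx.conf.example <<'EOF'
builder.myuname = jdoe
builder.uid = 1000
project.name = stx-deb-bld-1
project.proxy = false
EOF

# Look up a single key, similar in spirit to what 'stx config --show' reports
awk -F' = ' '$1 == "project.name" { print $2 }' /tmp/stx.conf.example
```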

# source the environment
cd $TOOL_HOME/tools
source ./import-stx

# Align the builder container to use your user/UID
stx config --add builder.myuname $(id -un)
stx config --add builder.uid $(id -u)

# Embedded in ~/localrc of the build container
stx config --add project.gituser "First Last"              
stx config --add project.gitemail <your email address>

# This will be included in the name of your build container and the basename for $MY_REPO_ROOT_DIR  
stx config --add project.name stx-deb-bld-1                 
stx config --add project.proxy false

# Show all the settings
stx config --show
# Show usage information
stx config --help

Create build containers

The ``stx-init-env`` script will download or re-create build (docker) containers, and start them:

cd $TOOL_HOME/tools
bash stx-init-env

The script pulls build containers from DockerHub by default, where a new version is built once per day (i.e. the default container images may be slightly out of date when you pull them). You can force a local re-build as follows:

cd $TOOL_HOME/tools
bash stx-init-env --rebuild

Once docker images are available locally, you can start & stop them using the ``stx`` tool:

stx control start          # start builder PODs if not running
stx control status       # display POD status
stx control stop          # stop PODs

WARNING: any changes to ``stx.conf`` (via ``stx config --add`` etc.) require that the pods be restarted. If you want to make changes to the environment in the build container, use ``stx control stop``, then ``stx config`` to adjust the variables, and restart the containers:

stx control stop
stx config --add <...>
stx control start

Entering & controlling Pods

Once the containers are running, you can enter them (think ``docker exec <...> /bin/bash``). While there are four containers, most build tasks are driven from the "builder" container, which is the default when using the ``stx`` tool:

 # enter the "builder" container
 stx control enter

You can enter the other containers as follows:

 stx control enter --dockername [builder|pkgbuilder|lat|repomgr]

Use the ``exit`` command to return from the container to the host environment.

You can use the ``stx control`` command to start, stop, and monitor builder pod status:

 # control the Pods
 stx control start
 stx control stop
 stx control status
 # more info
 stx control --help

The ``status`` command includes the Helm status, covering the deployments and the pods. You can use that information to manually enter or troubleshoot pods with minikube or kubectl.
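For example, assuming the default Minikube profile, the standard kubectl commands below can be used for manual inspection (``<pod-name>`` is a placeholder for a pod name reported by ``stx control status``):

```shell
# List the build pods running in the Minikube cluster
minikube kubectl -- get pods

# Inspect or enter a pod manually, using the pod name from 'stx control status'
minikube kubectl -- describe pod <pod-name>
minikube kubectl -- exec -ti <pod-name> -- /bin/bash
```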

Every time you start/restart Pods

Execute these mandatory steps:

 sudo apt-get update
 sudo apt-get install less
 git config --global user.name "First Last"
 git config --global user.email your@email.com

NOTE: you may see the following errors from apt. You can ignore these and continue.

 E: Failed to fetch http://stx-stx-repomgr:80/deb-local-source/dists/bullseye/main/source/Sources 404 Not Found [IP: 10.102.135.193 80]
 E: Some index files failed to download. They have been ignored, or old ones used instead.

Build packages/ISO creation

The stx-builder is the container where you will perform most of the actions, such as launching the tasks to build packages and images. Enter it:

  stx control enter


Initialize the source tree

The StarlingX source tree consists of multiple git repositories. The ``repo`` tool is used to sync these repositories locally; the commands below are the minimum required to make ``repo`` work:

  cd  $MY_REPO_ROOT_DIR
  repo init -u https://opendev.org/starlingx/manifest -m default.xml
  repo sync

After ``repo sync`` is done, check the directories below:

  $ ls $MY_REPO
  $ ls $MY_REPO/stx
  $ ls $MY_REPO_ROOT_DIR/stx-tools


Before running 'build-pkgs':

Run the command below to download the sources of all buildable packages, found by scanning the repo root $MY_REPO/stx. The download directory is $STX_MIRROR/sources:

  $ downloader -s

All of the package list files below, one per build type, will be scanned in the repo root $MY_REPO/stx:

  • debian_pkg_dirs
  • debian_pkg_dirs_rt
  • debian_pkg_dirs_installer


Before running 'build-image':

Run the command below to download the Debian binary packages (distribution: bullseye) into the directory $STX_MIRROR/binaries:

  $ downloader -b

All of the binary package lists below will be downloaded:

  • $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/common/base-bullseye.lst
  • $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-std.lst
  • $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-rt.lst


To download both sources and binaries in one step, run:

  $ downloader -b -s


  $ downloader --help
  usage: downloader [-h] [-b] [-s] [-c]
  downloader helper
  optional arguments:
    -h, --help            show this help message and exit
    -b, --download_binary
                          download binary debs
    -s, --download_source
                          download stx source
    -c, --clean_mirror    clean the whole mirror and download again, be careful to use

Currently, the apt sources used to download packages are listed in '/etc/apt/sources.list' of the builder container.

Build packages

To build an individual package:

 build-pkgs -p <name of package>

To build all of the available packages:

 build-pkgs -a

Build ISO

Once you have built all of the packages, you can build the ISO by running the following command:

  build-image
  ls -al /localdisk/deploy/*.iso

Log files

While inside the build container, log files may be found here:

  • /localdisk/builder.log and /localdisk/pkgbuilder.log - top-level build controller log files
  • ${MY_WORKSPACE}/<std or rt>/<package name>/*.build - individual package build logs