StarlingX/DebianArmBuildEnvironment

Notes

  • This is Work-In-Progress. If you try it and encounter any problems, please contact Jackie Huang <jackie.huang@windriver.com>
  • The tested hardware configurations are:
  • HW 1:
 * Product Name: HPE ProLiant RL300 Gen11
 * CPU: Ampere(R) Altra(R) Processor, 3000 MHz, 80/80 cores; 80 threads
 * Memory: 16 GB 3200 MHz x 16 = 256 GB
 * Network: Mellanox MT2894 Family [ConnectX-6 Lx] Adapter
 * Disk: 1 TB NVMe
  • HW 2:
 * Product Name: SuperMicro R12SPD-A
 * CPU: Q80-30, Ampere(R) Altra(R) Processor, 80 cores; 80 threads
 * Memory: 32 GB 3200 MHz x 16 = 512 GB
 * Network: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
 * Disk: 1 TB NVMe
  • The only tested OS is: Debian 11 (bullseye)

StarlingX Build Tools

The Debian build on ARM is almost the same as on X86-64 (see [StarlingX Build Tools for X86-64]).

There are five containers required to complete a build:

  • stx-builder: main developer build container.
  • stx-pkgbuilder: Debian package builder (uses sbuild).
  • stx-repomgr: Debian local repository archive (uses aptly).
  • stx-lat-tool: Debian image builder.
  • stx-docker: Docker-in-Docker (builds docker images).

At a high level the StarlingX ISO image creation flow involves the following general steps (assuming you have already configured Docker on your system).

  1. Install Minikube and Helm.
  2. Build the StarlingX k8s development environment.
  3. Enter the stx-builder pod/container to trigger the building task.
  4. Build packages/ISO creation.


NOTE: the build system requires a Linux system with Docker and Python 3.x installed. The steps on this page have ONLY been tested on Debian 11 (bullseye) on an HPE ProLiant RL300 Gen11 Ampere-based ARM server.

Register on Docker Hub

The build environment relies on the Docker Hub registry for storing container images used during the build. Docker Hub puts limits on the amount of data that can be downloaded by the same user/IP address. To avoid this limit, we recommend registering an account on Docker Hub and logging in to Docker prior to initializing your StarlingX development environment. Note the user ID and password, as we will supply them to the `stx-init-env` script below.
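
If you want to log in ahead of time, the standard Docker CLI login works; this is optional, since the `stx-init-env --dockerhub-login` step below will also prompt for credentials:

   # Optional: log in to Docker Hub with your registered account
   docker login -u <your-dockerhub-username>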

Configure build environment

We need to create and start the build containers, which requires some additional configuration described below.

Install Minikube and Helm

Install Minikube to support the local k8s framework for building. Install the Helm tools to manage the Helm charts required to start/stop/upgrade the pods or the deployments for the StarlingX build system. Before installing these components, please make sure that Docker is available in your environment.
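
A quick sanity check that Docker is usable (optional; any command that reaches the Docker daemon will do):

   # Confirm the Docker daemon is reachable before installing Minikube/Helm
   docker info >/dev/null && echo "Docker is available"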

Install minikube (https://minikube.sigs.k8s.io/docs/start/):

   curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
   sudo install minikube-linux-arm64 /usr/local/bin/minikube

Note: as of this writing, minikube v1.29.0 is current.

Note: minikube requires at least 2 CPU cores.
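
To verify the install (optional; `nproc` shows the host core count against the 2-core requirement):

   minikube version
   nproc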

Install Helm -- you can select the version listed here or the latest released version:

   curl -LO https://get.helm.sh/helm-v3.6.2-linux-arm64.tar.gz
   tar xvf helm-v3.6.2-linux-arm64.tar.gz
   sudo mv linux-arm64/helm /usr/local/bin/
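
You can confirm Helm is on your PATH afterwards (optional):

   helm version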

Add your user account to the docker group:

   sudo usermod -aG docker $(id -un) && newgrp docker

Install repo

   curl https://storage.googleapis.com/git-repo-downloads/repo | sudo tee /usr/local/bin/repo >/dev/null
   sudo chmod +x /usr/local/bin/repo
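
Verify that the repo launcher is installed (optional):

   repo --version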


Export environment variables

export PROJECT="prj-stx-arm"
export USER_NAME="<Firstname Lastname>"
export USER_EMAIL="<your email>"

# STX_BUILD_HOME should be set to a directory owned by your userid
# The build home needs at least 200 GB of free space to build all packages and the ISO
export STX_BUILD_HOME="/home/${USER}/${PROJECT}"
export REPO_ROOT="${STX_BUILD_HOME}"/repo
export REPO_ROOT_SUBDIR="localdisk/designer/${USER}/${PROJECT}"
 
# MINIKUBE
export STX_PLATFORM="minikube"
export MINIKUBENAME="minikube-${USER}"

#############################################
# Manifest/Repo Options:
#############################################
# STX MASTER
export MANIFEST_URL="https://opendev.org/starlingx/manifest.git"
export MANIFEST_BRANCH="master"
export MANIFEST="default.xml"

For more details about the STX environment variables, see the import-stx.README file in the stx-tools repo.

Create directories

Create the $STX_BUILD_HOME directory:

mkdir -p $STX_BUILD_HOME
cd $STX_BUILD_HOME
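
Optionally confirm the free-space requirement noted above:

df -h $STX_BUILD_HOME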

Initialize repo

# create REPO_ROOT_SUBDIR and symlink
# symlink is a helper as minikube mounts the stx_build_home as its workspace
# so it works as a shortcut to access the repos
mkdir -p $REPO_ROOT_SUBDIR
ln -s $REPO_ROOT_SUBDIR $REPO_ROOT
 
cd $REPO_ROOT
# download and sync the repos
repo init -u ${MANIFEST_URL} -b ${MANIFEST_BRANCH} -m ${MANIFEST}
repo sync

[Temporary step] Apply code changes for ARM implementation

The spec and code changes for the ARM implementation are still in progress, so for now you need to fetch them from Gerrit review. Please get all code changes from: https://review.opendev.org/q/topic:arm64/20230725-stx-master-native

The impacted repos are:

* $REPO_ROOT/stx-tools
* $REPO_ROOT/cgcs-root
* $REPO_ROOT/cgcs-root/stx/ansible-playbooks
* $REPO_ROOT/cgcs-root/stx/app-istio
* $REPO_ROOT/cgcs-root/stx/config
* $REPO_ROOT/cgcs-root/stx/containers
* $REPO_ROOT/cgcs-root/stx/fault
* $REPO_ROOT/cgcs-root/stx/ha
* $REPO_ROOT/cgcs-root/stx/integ
* $REPO_ROOT/cgcs-root/stx/kernel
* $REPO_ROOT/cgcs-root/stx/metal
* $REPO_ROOT/cgcs-root/stx/nginx-ingress-controller-armada-app
* $REPO_ROOT/cgcs-root/stx/stx-puppet
* $REPO_ROOT/cgcs-root/stx/utilities

Besides the above code changes, a workaround is needed to remove the CENGNURL lines since no pre-built packages are available on CENGN:

 sed -i '/@CENGNURL@/ d' ${REPO_ROOT}/stx-tools/stx/toCOPY/pkgbuilder/debbuilder.conf

Init and setup STX

The build tools come with a script, import-stx, which sets up your PATH and other environment variables as necessary. This script must be sourced before attempting to use any tools.

There are a number of environment variables you can set prior to sourcing this file; please review the script and import-stx.README for a full list.

WARNING: minikube can't work if your $HOME directory points to an NFS location. In that case, point it to some other local file system by defining $MINIKUBE_HOME in the environment before sourcing import-stx.
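
For example (the path here is hypothetical; any local, non-NFS directory works):

export MINIKUBE_HOME=/localdisk/${USER}/minikube_home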

The build expects a configuration file, ``stx.conf`` (example), to exist at the root of the build tools working directory. It is a key/value file containing various build options. The ``stx config`` command may be used to add or modify entries in the file.

# Init stx tool
cd stx-tools
source import-stx

# Update stx config
# Align the builder container to use your user/UID
stx config --add builder.myuname $(id -un)
stx config --add builder.uid $(id -u)

# Embedded in ~/localrc of the build container
stx config --add project.gituser ${USER}
stx config --add project.gitemail ${USER_EMAIL}

# This will be included in the name of your build container and the basename for $MY_REPO_ROOT_DIR  
stx config --add project.name ${PROJECT}
# [Temporary step] For now, there is no snapshot for ARM on the CENGN mirror, so change the snapshot with:
stx config --add project.debian_snapshot_base http://snapshot.debian.org/archive/debian
stx config --add project.debian_security_snapshot_base http://snapshot.debian.org/archive/debian-security

# options: cengn_first (default), cengn, upstream_first, upstream
# For now, there are no packages for ARM on CENGN, so the only option
# that can be used is "upstream"
stx config --add repomgr.cengnstrategy upstream


stx config --show
# Show usage information
stx config --help

Start/Create build containers

The ``stx-init-env`` script will download or re-create the build (docker) containers. For the ARM implementation, however, the container images are not available on the official [StarlingX dockerhub] yet; please use the following images from [stx4arm] for now:

# [Temporary step] 
export STX_PREBUILT_BUILDER_IMAGE_PREFIX=stx4arm/
export STX_PREBUILT_BUILDER_IMAGE_TAG=master-20230823
 
cd repo/stx-tools
# Type in DockerHub username & password if prompted
./stx-init-env --dockerhub-login
# Monitor the status until they are running:
stx control status
# You should see 5 containers in Running state

Once docker images are available locally, you can start & stop them using the ``stx`` tool:

stx control start      # start builder PODs if not running
stx control status     # display POD status
stx control stop       # stop PODs

WARNING: any changes to ``stx.conf`` (via ``stx config --add`` etc.) require that the PODs be re-started. If you want to make changes to the environment in the build container, use ``stx control stop``, then ``stx config`` to adjust the variables, and re-start the containers:

stx control stop
stx config --add <...>
stx control start

Note: you can't use --rebuild to rebuild the build containers now, since the LAT-SDK for ARM is not available on [CENGN] yet.

Entering & controlling Pods

Once the containers are running, one can enter them (think ``docker exec <...> /bin/bash``). While there are 5 containers, most build tasks are driven from the "builder" container, which is the default when using the ``stx`` tool:

 # enter the "builder" container
 stx shell

You can enter other containers as follows:

 stx shell --container [builder|pkgbuilder|lat|repomgr|docker]

Use the ``exit`` command to return from the container to the host environment.

You can use the ``stx control`` command to start/stop & monitor builder POD status:

 # control the Pods
 stx control start
 stx control stop
 stx control status
 # more info
 stx control --help

The ``status`` command will include Helm status, including deployments and the pods. You can use that information to manually enter or troubleshoot Pods using minikube or kubectl.
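
For example, pods can be inspected or entered directly with standard kubectl commands (a sketch; substitute the pod name and namespace reported by ``stx control status``):

 # list pods across all namespaces to find the builder pods
 kubectl get pods -A
 # open a shell in a specific pod
 kubectl exec -it -n <namespace> <pod-name> -- /bin/bash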

Every time you start/restart Pods

Execute these mandatory steps inside the builder:

 sudo apt-get update
 git config --global user.name "Firstname Lastname"
 git config --global user.email "Your email"

NOTE: you may see the following errors from apt. You can ignore them and continue.

 E: Failed to fetch http://stx-stx-repomgr:80/deb-local-source/dists/bullseye/main/source/Sources 404 Not Found [IP: 10.102.135.193 80]
 E: Some index files failed to download. They have been ignored, or old ones used instead.

Build packages/ISO creation

The builder is the container where you will perform most of the actions, such as launching the task of building packages and images.

  stx shell

Download 3rd-party tar & deb files

Before running 'build-pkgs':

Run the command below to download the sources of all buildable packages by scanning the repo root $MY_REPO/stx; the download directory is $STX_MIRROR:

  downloader -s

All of the following lists (one per build type) will be scanned in the repo root $MY_REPO/stx:

  • debian_pkg_dirs_arm64
  • debian_pkg_dirs_rt_arm64
  • debian_pkg_dirs_installer_arm64

Run the command below to download the Debian binary packages (distribution: bullseye) into the directory $STX_MIRROR/binaries:

  downloader -b

Binary packages from all of the following lists will be downloaded:

 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/common/base-bullseye_arm64.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-std_arm64.lst
 $MY_REPO_ROOT_DIR/stx-tools/debian-mirror-tools/config/debian/<layer>/os-rt_arm64.lst

You can also run the command below to download both sources and binaries:

  downloader -s -b
  # To check all options:
  downloader --help

Currently, the apt sources used to download packages are in the '/etc/apt/sources.list' file in the builder container.
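
To inspect them from inside the builder container (optional):

  cat /etc/apt/sources.list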

Verify that the local repos are created

 repo_manage.py list
 INFO:repo_manage:No remote repo
 INFO:repo_manage:3 local repos:
 INFO:repo_manage:deb-local-build : bullseye : main
 INFO:repo_manage:deb-local-binary : bullseye : main
 INFO:repo_manage:deb-local-source : bullseye : main

NOTE: all 3 repos will be present only after build-pkgs [-p <package>] has been run at a later step.

Build packages

The 'build-pkgs' tool has two phases:

1) Check the digest of the package's source metadata. For example:

if package 'dhcp' is in the cache:
  if the sha256 digest of the folder (/path/to/stx/integ/base/dhcp/debian) has not changed:
    if the dsc file of package 'dhcp' exists:
        reuse the existing dsc file
        return
create the dsc file for package 'dhcp' and add its checksum to the cache

2) Build avoidance is enabled by default for this phase; the build option '-c' turns build avoidance off.

if build avoidance is enabled:
    check whether there is a build stamp for this package:
        if yes, skip the build and return
send the build request for the package to the pkgbuilder container

To build packages:

# Build all packages
# this should rebuild all packages (std and rt)
build-pkgs
# If you want to clean and build all:
build-pkgs --clean 

But be careful: '--clean' not only cleans the whole build directory "/localdisk/loadbuild/<user>/<project>/{std,rt}" but also cleans the local repository "deb-local-build".
This means all the StarlingX packages will be built from scratch, which will take time.
If you just want to resume the previous build, run without '--clean':
build-pkgs


# Build packages in parallel
build-pkgs --parallel <number of parallel tasks; default 10, maximum 30>

# To define the interval (in seconds) at which to poll the packages' build status during a parallel build:
--poll_interval <interval; default 10>

# To limit the number of make jobs for a package:
--max_make_jobs <number of jobs; defaults to the environment variable 'STX_BUILD_CPUS' or 'MAX_CPUS' inside the container>


# Build a single package
build-pkgs -p <package name>

# Build a single package, cleaning its previous build
build-pkgs -c -p <package name>

# Once the packages are ready you can build the ISO
build-image

Build ISO

Once you have built all of the packages, you can build the ISO by running the following command:

  build-image
  ls -al /localdisk/deploy/*.iso

Log files

While inside the build container, log files may be found here:

  • /localdisk/builder.log, /localdisk/pkgbuilder.log - top-level build controller log files
  • ${MY_WORKSPACE}/<std or rt>/<package name>/*.build - individual package build logs
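
For example, to follow the top-level build log during a build (optional):

  tail -f /localdisk/builder.log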