StarlingX/DebianBuildEnvironment
Contents
- 1 StarlingX Build Tools
- 2 Build the four StarlingX pod images
- 3 Install Minikube and Helm
- 4 Initialize Minikube container and build container images
- 5 Build the containers
- 6 Minikube build image customization
- 7 Entering the Pods
- 8 Monitoring the status
- 9 Stop the pods
- 10 Build packages/ISO creation
- 11 Initialize the source tree
- 12 Build packages
- 13 Build ISO
StarlingX Build Tools
The Debian build is performed with a set of containers designed to run in a Kubernetes environment. We currently use Minikube and Helm to manage them; later we will provide versions of the Helm charts that allow builds to run directly on Kubernetes or on StarlingX itself.
There are four containers (stx-builder, stx-pkgbuilder, stx-repomgr and stx-lat-tool) required to complete a build:
- stx-builder: main developer build container.
- stx-pkgbuilder: Debian package builder (uses sbuild).
- stx-repomgr: Debian local repository archive (uses aptly).
- stx-lat-tool: Debian image builder.
At a high level, the StarlingX ISO image creation flow involves the following general steps (assuming you have already configured Docker on your system):
- Install Minikube and Helm.
- Build the StarlingX k8s development environment.
- Enter the stx-builder pod/container to trigger the build tasks.
- Build packages/ISO creation.
Build the four StarlingX pod images
The four StarlingX build container images handle all steps related to StarlingX ISO creation. This section describes how to customize the build container image building process.
Install Minikube and Helm
Install Minikube to provide the local k8s framework for building, and install Helm to manage the Helm charts required to start/stop/upgrade the pods or deployments of the StarlingX build system. Before installing these components, please make sure that Docker is available in your environment.
You can download the binary packages directly from upstream (https://minikube.sigs.k8s.io/docs/start/) and install them:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
If necessary, you can also use a third-party Minikube binary:
curl -LO http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.20.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Also install the Helm binary package; you can use the version shown here or the latest release:
curl -LO https://get.helm.sh/helm-v3.6.2-linux-amd64.tar.gz
tar xvf helm-v3.6.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
Initialize Minikube container and build container images
With Minikube and Helm in place, you can create the local k8s deployments that support building StarlingX.
Before this step, check that your user is in the docker group. If not, add yourself with the following command (or ask your administrator for help):
sudo usermod -aG docker $yourusername
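You can verify membership with a quick check; this helper snippet is a sketch, not part of the StarlingX tooling (note that group changes from usermod only take effect after you log out and back in):

```shell
# Sketch: report whether the current user is already in the docker group.
if id -nG "$(id -un)" | grep -qw docker; then
    echo "docker group: ok"
else
    echo "docker group: missing (run usermod, then log out and back in)"
fi
```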
Once Minikube and Helm have been installed, you can execute the following command to start Minikube and create the container images before using the stx command:
stx-init-env
To support multiple users on the same host, set ``MINIKUBENAME`` so that each developer gets a distinct Minikube container; attempting to start with a Minikube name that is already in use will cause the system to block.
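For example, one simple convention (an assumption, not mandated by the tooling) is to derive the name from your username before running stx-init-env:

```shell
# Sketch: give each developer a unique Minikube profile name.
# The "minikube-<user>" scheme is just one possible convention.
export MINIKUBENAME="minikube-$(id -un)"
echo "MINIKUBENAME=$MINIKUBENAME"
```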
The stx-init-env script starts the Minikube container with ``--driver=docker`` by default.
The ``--mount-string`` argument of ``minikube start`` maps a host path into the Minikube container. The default host directory is ``/localdisk/$USER``; check that this directory exists, or modify the path in the stx-init-env script.
Once the Minikube container has started, the script builds all container images required by the StarlingX build system, if they are not already built or available.
This process can take several minutes, since it downloads the required Minikube container images (such as CoreDNS) and builds the StarlingX build container images.
NOTE:
Before executing stx-init-env, if you have not set the environment variable ``MINIKUBE_HOME``, it defaults to your $HOME. If your $HOME is an NIS home directory (an NFS mount point), Minikube will misbehave; there is a known upstream issue tracking this: [1] It is therefore better to export ``MINIKUBE_HOME`` pointing to a directory that is not an NFS mount point to bypass this issue:
export MINIKUBE_HOME=${yourminikubehomedirectory}
or change your $HOME directly as follows:
export HOME=${yournewhomedirectory}
We advise you to set ``MINIKUBE_HOME``, as modifying $HOME may have unintended consequences for other software running on your system.
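The check can be scripted; the sketch below assumes GNU stat is available and reuses /localdisk (the default host directory mentioned above) as the assumed local path:

```shell
# Sketch: choose a non-NFS MINIKUBE_HOME automatically.
# Assumes GNU stat; /localdisk/<user> is an assumed local path.
fstype=$(stat -f -c %T "$HOME")
case "$fstype" in
    nfs*)
        # $HOME is NFS-mounted: keep Minikube state on local disk instead
        export MINIKUBE_HOME="/localdisk/$(id -un)/minikube_home"
        ;;
    *)
        export MINIKUBE_HOME="$HOME"
        ;;
esac
echo "MINIKUBE_HOME=$MINIKUBE_HOME"
```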
Build the containers
To build the containers needed to build Debian packages, run the following command:
bash stx-init-env --rebuild
To make the containers accessible after they are built, run the following command:
source import-stx
Minikube build image customization
After sourcing import-stx, the ``stx`` command should be available. You can start by customizing values for the StarlingX container image build process.
The ``stx.conf`` file is a key-value configuration file that sets the default configuration values. Use the ``stx config`` command to read or change items in stx.conf; usage of the ``stx`` command is described in the 'stx' command section.
A sample ``stx.conf`` file can be found here. You can use the ``stx config`` command to change or show the ``stx.conf`` file as follows:
# Align the builder container to use your user/UID
stx config --add builder.myuname $(id -un)
stx config --add builder.uid $(id -u)

# Embedded in ~/localrc of the build container
stx config --add project.gituser "First Last"
stx config --add project.gitemail <your email address>

# This will be included in the name of your build container and
# the basename for $MY_REPO_ROOT_DIR
stx config --add project.name stx-deb-bld-1
stx config --add project.proxy false
stx config --add project.ostree_osname wrlinux

# Show all the settings
stx config --show
Please use the ``stx config -h`` command for more help on the config module. You can also simply keep the default values for the build project.
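For reference, the keys above map onto stx.conf roughly as section.key pairs. The fragment below is illustrative only; the section names and layout are inferred from the ``stx config --add`` calls, not taken from a real file, and the user values are placeholders:

```ini
# Illustrative only -- layout inferred from the `stx config --add` calls
[builder]
myuname = jdoe
uid = 1000

[project]
gituser = First Last
gitemail = jdoe@example.com
name = stx-deb-bld-1
proxy = false
ostree_osname = wrlinux
```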
Entering the Pods
Execute the following command to enter the builder container, where the build tasks are triggered:
stx control enter
With no additional arguments, this enters the default (builder) container. If you want to enter one of the other containers, use the following command:
stx control enter --dockername [builder|pkgbuilder|lat|repomgr]
Use the ``exit`` command to return from the container to the host environment.
Monitoring the status
After the build system starts, you can use the following command to show its status:
stx control status
It outputs the status of the Helm charts, deployments, and pods. Using the pod name from this output, you can also enter any pod's container manually.
Stop the pods
To stop the pod:
stx control stop
Build packages/ISO creation
The stx-builder container is where you will perform most actions, such as launching package and image build tasks.
stx control enter
Initialize the source tree
The StarlingX source tree consists of multiple git repositories. The 'repo' tool is used to sync these repositories locally; the configuration below is the minimum required to make 'repo' work:
repo init -u https://opendev.org/starlingx/manifest -m default.xml
repo sync
After the 'repo sync' is done, check the directories below:
$ ls $MY_REPO
$ ls $MY_REPO/stx
$ ls $MY_REPO_ROOT_DIR/stx-tools
When the repo sync has finished, mirror the download and source directories from the CENGN mirror:
cd $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash download_mirror.sh
Build packages
To build an individual package:
build-pkgs -p <name of package>
To build all of the packages available:
build-pkgs -a
Build ISO
Once you have built all of the packages, you can build the ISO by running the following command:
build-image