StarlingX/Containers/HowToAddNewArmadaAppInSTX

This page details the steps required to add a new system-managed application's Armada Helm charts in StarlingX.

Note that some steps below include instructions needed to import new packages from 3rd party upstream repositories, if they are not included in Helm Stable (https://github.com/helm/charts/tree/master/stable).

NOTE: To create an Armada App from scratch without dependencies from other repositories, you have to start with step 0 and then skip to step 4 to continue.
 * Keep in mind that you will need to modify the build_srpm.data and {your-repo-name}.spec files accordingly to add to your project.
 * You can get an example of an Armada App, created from scratch without dependencies from other repositories, here: https://opendev.org/starlingx/snmp-armada-app

(Repo) Create top level repo in Openstack/project-config
This step has an external dependency (the OpenStack infra team) and may take a few days to resolve. Process details are covered here (skip the PyPI part): https://docs.openstack.org/infra/manual/creators.html

In short, the following steps are needed:

git clone https://git.openstack.org/openstack-infra/project-config
cd project-config

Edit the following files. Note that the project names have to follow lexicographical order in the list:

gerrit/projects.yaml
zuul/main.yaml
gerrit/acls/openstack/.config  <--- new file

See this commit for an example: https://review.opendev.org/#/c/714689/

NOTE: A new repo is being created in this step and a list of core reviewers is required. Contact the build team and request the creation of the cores group and provide the list of members to be added.

Get app tarball in the /import folder
NOTE: Skip this step if helm charts are from Helm Stable (https://github.com/helm/charts/tree/master/stable). This step is only needed if new helm chart sources need to be included in application.

Before an app's Helm charts can be included, the application tarball needs to be available in the /import folder. This step downloads the helm-charts tarball and keeps it in a local repository.

Find the helm chart artifact link
In case of cert-manager, the artifact can be found here for a package with the SHA code specified: https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz

Create a file tarball-dl.lst
The file needs to be on the build server, and its contents should include the helm chart location where the tarball is available.

Note that the filename needs to be tarball-dl.lst for the following step to work. The exact path doesn't matter; this was tested with the file in directory: /localdisk/designer/ /

sansari@yow-cgts1-lx$ cat tarball-dl.lst
helm-charts-certmanager-6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#helm-charts-certmanager#https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#http##

The download script dl_tarball.sh reads this whole line, downloads the tarball, saves it under the given name, and renames the extracted root folder. The meaning of each part is described below.

"https://github.com/jetstack/cert-manager/archive/1d6ecc9cf8d841782acb5f3d3c28467c24c5fd18.tar.gz" The chart that will be downloaded

"Helm-charts-certmanager-1d6ecc9cf8d841782acb5f3d3c28467c24c5fd18.tar.gz" The name of the tarball file will be saved

"Helm-charts-certmanager" Get the root folder and rename it to that name

Create Armada app
Create a top-level folder for the new app under cgcs-root/stx folder.

In the main app folder, create a centos_tarball-dl.lst file with the same content as the one from Step 1 above. Note that the filename is important for this step to work.

sansari@yow-cgts1-lx$ pwd
/localdisk/designer/sansari/starlingx-0/cgcs-root/stx/cert-manager-armada-app
sansari@yow-cgts1-lx$ cat centos_tarball-dl.lst
helm-charts-certmanager-6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#helm-charts-certmanager#https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#http##
sansari@yow-cgts1-lx$

Full details of directory structure and notes can be found on this page: https://wiki.openstack.org/wiki/StarlingX/Containers/ArmadaAppCodeStructure

Populate stx/downloads with the tarball
NOTE: Skip this step if helm charts are from Helm Stable (https://github.com/helm/charts/tree/master/stable).

Run the following commands. This copies the tarball present in /import (from Step 1) to the cgcs-root/stx/downloads folder. It is important to name the file 'centos_tarball-dl.lst' in the Armada app folder (see Step 2 above); the populate_downloads.sh script does not work with other filenames.

${MY_REPO_ROOT_DIR}/stx-tools/toCOPY/generate-cgcs-centos-repo.sh /import/mirrors/starlingx
${MY_REPO_ROOT_DIR}/stx-tools/toCOPY/populate_downloads.sh /import/mirrors/starlingx

Confirm that the tarball is now present in cgcs-root/stx/downloads folder.

Build Packages
Run build-pkgs. An empty .git folder may be needed to run this script if the top-level repo (Section A above) isn't completed.

${MY_REPO_ROOT_DIR}/cgcs-root/build-tools/build-pkgs  cert-manager-helm   stx-cert-manager-helm

Watch for errors. Logs are directed to /localdisk/loadbuild/ /starlingx-0/build-std.log

Generate tarball
There are two ways to get the tarball for upload and apply the tarball. Use either Option A *or* Option B below.

If it is necessary to add the package to the ISO, include the name of the package in the centos_iso_image.inc file and it will be included in the ISO automatically by running the build-iso script (it will not be uploaded/applied automatically). The tarball will be available in /usr/local/share/applications/helm/ after installing the ISO.

When you are developing you don't need to build the ISO to see your modifications, so you can follow one of the two approaches below.

Option A: Application is a system-managed mandatory app that needs to be included as an RPM in the ISO.

Note that this method will not support (read: does not need to support) build-helm-charts.sh (as shown in Option B below).

RPMs are available in /localdisk/loadbuild/ / /std/rpmbuild/RPMS. Extract the RPM using the following command; this extracts the RPM without installing the app:

rpm2cpio stx-nginx-ingress-controller-helm-1.0-0.tis.noarch.rpm | cpio -idmv

The tarball gets extracted into a directory structure as specified in the .spec file (e.g., ./usr/local/share/applications/helm/).
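As a quick sanity check after extraction, you can locate the chart tarball under the extracted tree. The sketch below fabricates the expected layout (the .tgz name is a hypothetical stand-in, not output from a real extraction) just to illustrate the check:

```shell
# Sketch: verify the app tarball landed where the .spec file places it.
# The directory layout and the .tgz name below are illustrative stand-ins
# for what rpm2cpio actually extracts.
mkdir -p ./usr/local/share/applications/helm
touch ./usr/local/share/applications/helm/stx-cert-manager-1.0-1.tgz
app_tarball="$(find ./usr/local/share/applications/helm -name '*.tgz' | head -n 1)"
echo "Found application tarball: ${app_tarball}"
```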

Note that even if the application is designed to be included as an RPM in the ISO image, for faster testing you can still extract the tarball from the RPM and use 'system application-upload' and 'system application-apply' at runtime.

Option B: Application can be uploaded & applied at runtime and does not need to be included as RPM in ISO

Use build-helm-charts.sh to generate the tarball with the following command:

${MY_REPO_ROOT_DIR}/cgcs-root/build-tools/build-helm-charts.sh --app stx-cert-manager

This generates the application tarball, which will be located in /localdisk/loadbuild/ /starlingx-0/std/build-helm/stx/stx-cert-manager-1.0-1.tgz
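Before uploading, the generated tarball can be inspected with tar -tzf. The sketch below fabricates a stand-in archive with the kind of contents an application tarball typically carries (a metadata file, a manifest, and a charts/ directory; all file names here are illustrative, not taken from an actual build):

```shell
# Sketch: fabricate a stand-in app tarball and list its contents to show the
# inspection step. A real tarball from build-helm-charts.sh carries metadata,
# a manifest, and packaged charts; the names below are illustrative only.
mkdir -p app/charts
touch app/metadata.yaml app/cert-manager-manifest.yaml
tar -czf stx-cert-manager-1.0-1.tgz -C app .
tar -tzf stx-cert-manager-1.0-1.tgz
```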

Upload Application to StarlingX
The application tarball can now be copied to a running StarlingX platform. Copy the tarball to the /home/sysadmin folder and use the following commands on the CLI.

NOTE: If you build/install the ISO the tarball will be placed on /usr/local/share/applications/helm/

$ system application-upload 
$ system application-apply 

(Repo) Project setup in repo
Once Step 0 has been completely approved and the repository has been created, the following files need to be added and committed to the project.

.gitreview (copy from other armada app projects + change git name)
.zuul.yaml (copy from other armada app projects + change git name + see note below)
requirements.txt (copy from other armada app projects)
test-requirements.txt (copy from other armada app projects)
tox.ini (copy from other armada app projects)

Note on the .zuul.yaml file: The .zuul.yaml file contains details for mirroring the repo from Opendev to GitHub. To achieve this, the file needs a number of keys to authorize the mirroring.

Note that all work can be completed without the mirroring work performed. However, it is nice to have so that the code is mirrored to GitHub (Wind River's Cloud Native certification requires a repo on GitHub; it keeps our repositories consistent across Opendev and GitHub; and it provides an extra online backup in case of failure).

Following are the details to achieve the GitHub mirroring.

The 'host_key' for GitHub remains the same; the key from other projects can be used as-is.

Generate the 'ssh_key' entry. Details of how to generate it are captured in this doc: https://docs.starlingx.io/developer_resources/mirror_repo.html. In short, the steps are: run the zuul/tools/encrypt_secret.py script (details in the doc linked above), then add these keys to the .zuul.yaml file.

Log in to GitHub with username starlingx.github@gmail.com (get the password from your manager or others) and create a repo, preferably with the same name.

NOTE: You will likely not have access to the GitHub private keys required to perform these steps. I spoke with Bin Qian to work around this and generate the keys from his existing setup.

Commit this to the repo. See example commit here: https://review.opendev.org/#/c/716429/

StarlingX/manifest
Once the previous step has been completed (wait until the repo has merged), update starlingx/manifest to include the new project.

The files default.xml and flock.xml (or possibly other files, depending on the type of layered build) will need to be updated with the new armada app git details.

See example commit here: https://review.opendev.org/#/c/716117/

StarlingX/root
Update StarlingX/root's stx/.gitignore with new repo details

See example commit here: https://review.opendev.org/#/c/720193/

Checklist + Test builds
See the cert-manager Storyboard for all the repositories that were touched during the process and confirm that all work items have been completed. See: https://storyboard.openstack.org/#!/story/2007360

In addition, test builds to confirm that addition of new repo/app doesn't break anything:

Build app only
Build-iso (and install from scratch)
Test layered (flock) build

Container image configuration in charts
Specify the container image configuration in the values.yaml file of the helm charts:

image:
  repository: docker.io/starlingx/stx-audit
  tag: stx.6.0-v1.0.1
  pullPolicy: IfNotPresent
  debug: ''
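For context, a chart's pod template typically assembles the image reference from these values. The fragment below is a hypothetical illustration of that common Helm pattern, not copied from the actual audit chart:

```yaml
# Hypothetical Helm template fragment (not from the actual audit chart):
# the image reference is assembled from the values.yaml fields above.
containers:
  - name: audit
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
```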

For an example of the full context of the helm charts that define the container image see https://opendev.org/starlingx/audit-armada-app.

Add armada app configuration in containers.xml
Updating the armada app in "starlingx/manifest/containers.xml" will ensure a container image build for the armada app.

An example can be found in commit https://review.opendev.org/c/starlingx/manifest/+/795353/.

Obtain a timestamped formal build and test it
The build team can trigger a containers build; the resulting timestamped docker-hub image can then be referenced in the local armada app testing environment and used for testing.

Once all the tests have passed, a static tag can be set in image-tags.yaml.

Add container image tag to image-tags.yaml
Update "build-tools/build-docker-images/tag-management/image-tags.yaml" with the container image static tag (see example https://review.opendev.org/c/starlingx/root/+/795560).

For details on the definition of all the image fields that need to be set see Image Tag Management.

The commit of the image tag update will push the retagged image to docker-hub. The image will get downloaded when the app is applied on the node.

Update the image static tag in armada app helm charts
The charts need to get updated with the new image tag in values.yaml (see example https://review.opendev.org/c/starlingx/audit-armada-app/+/797319)

Build your own container image for testing
Besides using the formal build container image, you can also build it in your build environment using the following commands:

In your development build environment, set the following variables:

OS=centos
BUILD_STREAM=stable
BRANCH=master
CENTOS_BASE=starlingx/stx-centos:${BRANCH}-${BUILD_STREAM}-latest
WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-${BUILD_STREAM}-wheels.tar
DOCKER_USER=${USER}
DOCKER_REGISTRY=tis-lab-registry.cumulus.wrs.com:9001

Run the build image command:

$MY_REPO/build-tools/build-docker-images/build-stx-images.sh --os centos --stream ${BUILD_STREAM} --base ${CENTOS_BASE} --wheels ${WHEELS} --user ${DOCKER_USER} --registry ${DOCKER_REGISTRY} --push --latest --clean --only 

Once the image is built, pull the image to your testing VM, retag it for your registry.local, and then push the registry-local tag.
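The retagging step can be sketched as follows. The image name, tag, and registry addresses below are illustrative placeholders (not from an actual build), and the docker commands are echoed rather than executed so the tag derivation is visible:

```shell
# Sketch: derive a registry.local tag from a built image reference and print
# the docker commands to run on the test VM. The image name and registry
# addresses are illustrative placeholders, not taken from an actual build.
built_image="tis-lab-registry.cumulus.wrs.com:9001/myuser/stx-audit:master-stable-latest"
# Keep only the final "name:tag" component, then prefix with the local registry.
local_tag="registry.local:9001/docker.io/starlingx/${built_image##*/}"
echo "docker pull ${built_image}"
echo "docker tag ${built_image} ${local_tag}"
echo "docker push ${local_tag}"
```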