StarlingX/Containers/HowToAddNewFluxCDAppInSTX

This page details steps required to add a new system-managed application's FluxCD Helm charts in StarlingX.

Most of the information for creating a new repo from How to Add New Armada App in STX is still applicable, except for the FluxCD directory structure. Note that Armada is in the process of being replaced by FluxCD.

Also, StarlingX is in the process of transitioning from CentOS to Debian, with CentOS becoming deprecated by the end of 2022. In order to build STX packages and applications, the developer will need to set up the StarlingX Debian Build Environment. For general information on both the Armada and FluxCD file structures used in STX, please view this page: StarlingX/Containers/ConvertingArmadaAppsToFluxCD.

For more information about StarlingX Apps configuration/features/guidelines please view this page: StarlingX/Containers/StarlingXAppsInternals.

The recommended naming convention is now: app-

Create FluxCD app
Create a top-level folder for the new app under cgcs-root/stx folder.

In the main app folder, create a centos_tarball-dl.lst file with the same content as the one from Step 1 above. Note that the exact filename is important for this step to work.

$ pwd
/localdisk/designer/<user>/starlingx_master/cgcs-root/stx/app-istio
$ cat centos_tarball-dl.lst
helm-charts-istio-1.13.2.tar.gz#helm-charts-istio#https://github.com/istio/istio/archive/refs/tags/1.13.2.tar.gz#http##
helm-charts-kiali-1.45.0.tar.gz#helm-charts-kiali#https://github.com/kiali/helm-charts/archive/refs/tags/v1.45.0.tar.gz#http##

It is recommended to start by copying another app's code structure as a starting point and then make the necessary changes to the various build and configuration files.

Example review for adding a new FluxCD app:
 * https://review.opendev.org/c/starlingx/app-istio/+/836360

As an overview, this is the directory structure for app-istio:

├── centos_build_layer.cfg
├── centos_iso_image.inc
├── centos_pkg_dirs
├── centos_pkg_dirs_containers
├── centos_stable_docker_images.inc
├── centos_tarball-dl.lst
├── CONTRIBUTING.rst
├── debian_build_layer.cfg
├── debian_pkg_dirs
├── HACKING.rst
├── istio-helm
│   ├── centos
│   │   ├── build_srpm.data
│   │   ├── istio-helm.spec
│   │   └── istio_proxy.stable_docker_image
│   ├── debian
│   │   ├── deb_folder
│   │   │   ├── changelog
│   │   │   ├── control
│   │   │   ├── copyright
│   │   │   ├── istio-helm.install
│   │   │   ├── rules
│   │   │   └── source
│   │   │       └── format
│   │   └── meta_data.yaml
│   └── files
│       ├── index.yaml
│       ├── Makefile
│       ├── metadata.yaml
│       └── repositories.yaml
├── kiali-helm
│   ├── centos
│   │   ├── build_srpm.data
│   │   ├── kiali-helm.spec
│   │   └── kiali.stable_docker_image
│   ├── debian
│   │   ├── deb_folder
│   │   │   ├── changelog
│   │   │   ├── control
│   │   │   ├── copyright
│   │   │   ├── kiali-helm.install
│   │   │   ├── rules
│   │   │   └── source
│   │   │       └── format
│   │   └── meta_data.yaml
│   └── files
│       ├── index.yaml
│       ├── Makefile
│       ├── metadata.yaml
│       └── repositories.yaml
├── python-k8sapp-istio
│   ├── centos
│   │   ├── build_srpm.data
│   │   └── python-k8sapp-istio.spec
│   ├── debian
│   │   ├── deb_folder
│   │   │   ├── changelog
│   │   │   ├── control
│   │   │   ├── copyright
│   │   │   ├── python3-k8sapp-istio.install
│   │   │   ├── python3-k8sapp-istio-wheels.install
│   │   │   ├── rules
│   │   │   └── source
│   │   │       └── format
│   │   └── meta_data.yaml
│   └── k8sapp_istio
│       ├── AUTHORS
│       ├── ChangeLog
│       ├── k8sapp_istio
│       │   ├── common
│       │   │   ├── constants.py
│       │   │   └── __init__.py
│       │   ├── helm
│       │   │   ├── __init__.py
│       │   │   ├── istio_operator.py
│       │   │   └── kiali_server.py
│       │   ├── __init__.py
│       │   └── tests
│       │       ├── __init__.py
│       │       ├── test_istio.py
│       │       └── test_plugins.py
│       ├── LICENSE
│       ├── pylint.rc
│       ├── README.rst
│       ├── requirements.txt
│       ├── setup.cfg
│       ├── setup.py
│       ├── test-requirements.txt
│       ├── tox.ini
│       └── upper-constraints.txt
├── requirements.txt
├── stx-istio-helm
│   ├── centos
│   │   ├── build_srpm.data
│   │   └── stx-istio-helm.spec
│   ├── debian
│   │   ├── deb_folder
│   │   │   ├── changelog
│   │   │   ├── control
│   │   │   ├── copyright
│   │   │   ├── rules
│   │   │   ├── source
│   │   │   │   └── format
│   │   │   └── stx-istio-helm.install
│   │   └── meta_data.yaml
│   └── stx-istio-helm
│       ├── files
│       │   ├── index.yaml
│       │   ├── Makefile
│       │   ├── metadata.yaml
│       │   └── repositories.yaml
│       ├── fluxcd-manifests
│       │   ├── base
│       │   │   ├── helmrepository.yaml
│       │   │   ├── kustomization.yaml
│       │   │   └── namespace.yaml
│       │   ├── istio-operator
│       │   │   ├── helmrelease.yaml
│       │   │   ├── istio-operator-static-overrides.yaml
│       │   │   ├── istio-operator-system-overrides.yaml
│       │   │   ├── istio-operator.yaml
│       │   │   └── kustomization.yaml
│       │   ├── kiali-server
│       │   │   ├── helmrelease.yaml
│       │   │   ├── kiali-server-static-overrides.yaml
│       │   │   ├── kiali-server-system-overrides.yaml
│       │   │   ├── kustomization.yaml
│       │   │   └── namespace.yaml
│       │   └── kustomization.yaml
│       └── helm-charts
│           └── Makefile
├── stx-kiali-helm
├── test-requirements.txt
└── tox.ini

The file structure does receive minor updates over time, so for the current version as well as the contents of each file in this application example, please check the following link

The debian folder contents are set up according to the following guide StarlingX/DebianBuildStructure. For more in-depth information on the configuration options available for each file, the developer may wish to consult the Debian source package documentation. In general, it is recommended to use an already available app as a template since most of the configuration options available aren't required.

Updating charts from upstream apps in debian
In case there is a need to update the helm charts provided by an upstream app, the procedure is similar to what is done to patch debian packages, as explained in StarlingX/DebianBuildStructure.

In short, the stx-app-NAME-helm/debian/meta_data.yaml file is responsible for defining the upstream tarball to be downloaded by the build system. Then, patches to the files in the tarball can be added to the folder stx-app-NAME-helm/debian/patches/. The order in which they are applied is defined in the file stx-app-NAME-helm/debian/patches/series.
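As a rough sketch of how these two pieces fit together (all names, versions, URLs and the md5sum below are illustrative, not from a real app; copy the exact field set from an existing app's meta_data.yaml):

```yaml
# stx-app-NAME-helm/debian/meta_data.yaml (illustrative values only)
---
debname: stx-app-NAME-helm
debver: 1.0-1
dl_path:
  name: helm-charts-NAME-1.2.3.tar.gz
  url: https://github.com/EXAMPLE-ORG/EXAMPLE-REPO/archive/refs/tags/v1.2.3.tar.gz
  md5sum: 0123456789abcdef0123456789abcdef
revision:
  dist: $STX_DIST
  PKG_GITREVCOUNT: true
```

The patches are then plain files under debian/patches/, applied in the order listed in debian/patches/series, e.g.:

```
# stx-app-NAME-helm/debian/patches/series (illustrative patch names)
0001-fix-chart-default-values.patch
0002-adjust-chart-templates.patch
```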

Please view this commit as an example: https://review.opendev.org/c/starlingx/platform-armada-app/+/858737

---

Introduction
For steps on how to convert an Armada application to the FluxCD framework, refer to: Converting armada applications to FluxCD. That page also explains the file structure and the contents of each file.

There are typically two use cases when building a new system application:

An external opensource application that is being packaged as a system application for easy integration/use in StarlingX, where:
 * the container images come from a public docker registry (e.g. docker.io, gcr.io, quay.io, ...), i.e. as released from an opensource project,
 * the helm chart(s) come from a specific commit or release of an opensource project,
 * you are writing the FluxCD manifest,
 * you provide a "recommended deployment" by automatically setting tested default helm overrides for application-specific parameters,
 * you are writing the system application plugins to:
   * dynamically set helm chart(s) overrides based on the current StarlingX infrastructure configuration (e.g. replicas=2 for duplex systems),
   * provide custom behaviour on 'system application- ...' StarlingX management of the FluxCD application packaging, etc.

OR

An internally developed (i.e. StarlingX-developed) containerized application, where:
 * you are developing and building the container image(s),
 * you are writing the helm chart(s) ... and all of their yaml manifest files, their default values, etc.,
 * you are writing the FluxCD manifest,
 * you are writing the system application plugins to:
   * dynamically set helm chart(s) overrides based on the current StarlingX infrastructure configuration (e.g. replicas=2 for duplex systems),
   * provide custom behaviour on 'system application- ...' StarlingX management of the FluxCD application packaging, etc.

NOTE: Some steps below are specific to one use case versus the other.

NOTE: Some steps can be skipped if you are just working / experimenting locally in a local developer environment.

Step 0: Create top level repo in openstack/project-config for new Application
This step has an external dependency (openstack infra team) and may take a few days to resolve (i.e. for openstack infra team to review and create new repo).

This step can be skipped if you are just working/experimenting locally in a local developer environment.

Process details are covered here (Skip PyPi part): https://docs.openstack.org/infra/manual/creators.html

In short, the following steps will be needed:

git clone https://git.openstack.org/openstack-infra/project-config
cd project-config

Edit the following files to add entries for your new application (e.g. 'starlingx/app-NAME'). Note that the project names have to follow lexicographical order in the list.

gerrit/projects.yaml        ← existing file

zuul/main.yaml               ← existing file

gerrit/acls/starlingx/.config  ← create new file for this

See this commit for an example of the specific changes required to the files above: https://review.opendev.org/c/openstack/project-config/+/834896 (Ignore the 'armada' in the app name)

AFTER THE ABOVE COMMIT IS MERGED by the external openstack infra team, you must contact the build team (Build (DevOps) Team) to:

create a new core reviewers group for your new application repo in opendev gerrit, and populate it with at least two StarlingX members.

Step 1: Basic setup of Application Repo
NOTE: Step 0 must be FULLY complete in order to do this step.

This step can be skipped if you are just working/experimenting locally in a local developer environment.

Once Step 0 has been completely approved and the repository has been created, the following standard repo files need to be added and committed to your new application repo.

.gitreview
.zuul.yaml
requirements.txt
test-requirements.txt
tox.ini

Git clone your new application repo, set up a branch in your application repo, create the files, and commit for review and merge into git (see example commit here: https://review.opendev.org/#/c/716429/ ):

git clone https://opendev.org/starlingx/app-NAME

cd app-NAME
git review -s               ← // For Debian, this command might need to be used from outside the build containers
git checkout -b setupAPPNAMEApp

// Create the following files in app-NAME/

 * .gitreview: copy from an existing system app repo and change the git name
 * requirements.txt: copy from an existing system app repo
 * test-requirements.txt: copy from an existing system app repo
 * tox.ini: copy from an existing system app repo
 * .zuul.yaml: copy from an existing system app repo; change the git name, and update the keys in this file for mirroring this new repo from OpenDev to GitHub:
   * Wind River's CNCF/Kubernetes certification requires all repos to be mirrored on GitHub. To achieve this, the file needs a number of keys to authorize the mirroring.
   * The 'host_key' to GitHub remains the same; the key from other projects can be used as-is.
   * Generate the 'ssh_key' entry. Details of how to generate it are captured in this doc: https://docs.starlingx.io/developer_resources/mirror_repo.html. In short: run the zuul/tools/encrypt_secret.py script (details in the doc linked above), then add these keys to the .zuul.yaml file.
   * Login to GitHub with username starlingx.github@gmail.com (get the password from your manager or Bin Qian) and create a repo with preferably the same name.
   * NOTE: You will likely not have access to the GitHub private keys required to perform these steps. One workaround is to work with Bin Qian to generate the keys from his existing setup.
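The GitHub-mirroring portion of .zuul.yaml roughly follows the shape below. This is only a sketch: the job and secret names are illustrative, and the real host_key and encrypted ssh_key values must come from an existing app repo's .zuul.yaml and from the encrypt_secret.py script, respectively.

```yaml
# Sketch of the mirroring pieces of .zuul.yaml; names are illustrative.
- job:
    name: stx-app-NAME-upload-git-mirror
    parent: upload-git-mirror
    description: Mirrors opendev.org/starlingx/app-NAME to github.com/starlingx/app-NAME
    vars:
      git_mirror_repository: starlingx/app-NAME
    secrets:
      - name: git_mirror_credentials
        secret: stx-app-NAME-github-secret
        pass-to-parent: true

- secret:
    name: stx-app-NAME-github-secret
    data:
      user: git
      host: github.com
      host_key: github.com ssh-rsa AAAA...      # same host_key as other starlingx repos
      ssh_key: !encrypted/pkcs1-oaep
        - <output of zuul/tools/encrypt_secret.py>
```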

// Commit this to starlingx master

git add .gitreview
git add requirements.txt
git add test-requirements.txt
git add tox.ini
git add .zuul.yaml

git commit -s

// Upload for review

git review

Step 2: Update Starlingx/manifest & StarlingX/root to include new Application Repo
This step can be skipped if you are just working/experimenting locally in a local developer environment.

2.1 starlingx/manifest repo update
Once the previous step has been completed (wait until the commit has merged), update the https://opendev.org/starlingx/manifest repo to include the new project.

The files default.xml and flock.xml (or possibly other files, depending on the type of layered build) will need to be updated with the new application git details.

See example commit here: https://review.opendev.org/#/c/716117/
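The manifest entry for the new repo is a single <project> element. The attribute values below are illustrative; match the remote and path conventions against the existing entries in default.xml:

```xml
<!-- Illustrative entry; copy remote/path conventions from neighboring entries -->
<project remote="starlingx" name="app-NAME" path="cgcs-root/stx/app-NAME"/>
```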

2.2 starlingx/root repo update
Update the https://opendev.org/starlingx/root repo with the new application repo details; i.e. specifically updating ./stx/.gitignore.

See example commit here: https://review.opendev.org/#/c/720193/

After these commits merge, when running 'repo sync' (as part of the setup of your general DEV environment, e.g. see StarlingX Development Environment), your new application repo will be cloned into that environment.

Step 3: Setup a "StarlingX Master" DEV environment for developing your application
For CentOS, set up a "StarlingX Master" dev environment by following StarlingX Development Environment#BuildingStarlingX.

For Debian, set up a Debian build environment by following https://wiki.openstack.org/wiki/StarlingX/DebianBuildEnvironment

Create branch and review under your new application repo in order to start creating files for the development of your new application repo e.g.

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME

git review -s

repo start mybranch

// If you are working/experimenting locally in a local developer environment, // INSTEAD of doing the above commands, // you will have to create your top-level application directory at this point

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/

mkdir my-app/

cd my-app

git init            // some of the spec files expect to be inside a git repo

... the following steps will assume you are working in this dev environment.

Step 4: Download the opensource application helm charts to yow-cgts1-lx.wrs.com:/import/mirrors/starlingx/downloads folder
NOTE: Skip this step if helm chart(s) are being written locally as part of creating the application.

If you are just working/experimenting locally in a local developer environment, you can just locate your desired application tarball, build the tarball-dl.lst content for your application tarball and use that in STEP 6.

For Wind River builds of StarlingX, before an external opensource application's Helm charts can be referenced in the build files for your new application repo, the external opensource application tarball, containing its helm chart(s), needs to be available in the yow-cgts1-lx.wrs.com:/import/mirrors/starlingx/download folder.

Resource Accounting Caveat
Most platform applications should be affined to the platform cores, but all other containerized workloads would use the application or application-isolated cores. For legacy applications this is controlled by namespace, and is managed by a customization to kubelet. Going forward (as of June 2023) we are modifying the system to use the "app.starlingx.io/component=platform" label on the application pod or namespace to signify that it should be run on the platform cores.

We also need to ensure the resources consumed by platform pods (cpu/memory requests) are not counted against the application node resources since they should be accounted against the platform resources. This includes both the Pod requests and the platform accounting [1]. In particular, containers running on platform cores must not request cpu resources from Kubernetes. Memory resources are less of a concern as we usually are not memory-constrained the way that we are CPU-constrained.

If your application should run on platform cores, it is important to minimize the amount of CPU time it uses. We have a limited amount of platform CPU and it needs to be carefully managed.

[1] Resource accounting: monitoring/collectd-extensions/src/plugin_common.py (K8S_NAMESPACE_SYSTEM)
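As an illustrative sketch of what this means for a pod spec (the pod and container names are hypothetical), a platform-affined pod carries the label and its containers omit any cpu request:

```yaml
# Illustrative only: a platform-affined pod carries the
# app.starlingx.io/component=platform label, and its containers
# must not request cpu, so platform usage is not counted against
# application cores. Memory requests are acceptable.
apiVersion: v1
kind: Pod
metadata:
  name: example-platform-pod          # hypothetical name
  labels:
    app.starlingx.io/component: platform
spec:
  containers:
    - name: example-container
      image: docker.io/library/busybox:1.36
      resources:
        requests:
          memory: "64Mi"
          # no cpu request here, by design
```

The same label can alternatively be applied at the namespace level to cover all pods in that namespace.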

Step 4a: Find the link to the external opensource application tarball containing the application's helm chart(s)
For example, in the case of cert-manager, the external opensource application tarballs (containing helm charts) are here: https://github.com/cert-manager/cert-manager/releases.

You could use the 'source code (tar.gz)' tarball under the appropriate release, e.g. https://github.com/cert-manager/cert-manager/archive/refs/tags/v1.9.1.tar.gz , however our mirror download tools typically want you to use the tarballs named with the SHA code for a particular commit ... since sometimes we need to use commits in-between formal releases.

To find the SHA code for a particular commit, for a particular release, go to (say) https://github.com/cert-manager/cert-manager/commits/v1.9.1, and the SHA code for each commit is shown on right hand side ... COPY the SHA for (say) the last commit of the version you want.

Then construct the github URL to pull the source tarball based on the commit SHA as follows: https://github.com/<org>/<repo>/archive/<SHA>.tar.gz e.g. for cert-manager, for the last commit of v1.9.1: https://github.com/cert-manager/cert-manager/archive/4486c01f726f17d2790a8a563ae6bc6e98465505.tar.gz
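The URL construction can be sketched in shell, using the cert-manager values from this page:

```shell
# Construct the GitHub source-tarball URL from an org, repo and commit SHA.
org="cert-manager"
repo="cert-manager"
sha="4486c01f726f17d2790a8a563ae6bc6e98465505"
url="https://github.com/${org}/${repo}/archive/${sha}.tar.gz"
echo "${url}"
```

Substituting your own org, repo and SHA yields the download URL for any commit, including commits between formal releases.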

Step 4b: Create a tarball-dl.lst file with details for downloading the external opensource application tarball containing the application's helm chart(s)
The file needs to be on yow-cgts1-lx.wrs.com, can be located in any directory, and must be named exactly 'tarball-dl.lst'.

The contents of the file is a single line with the following format:

<tarball-name>#<extracted-dir-name>#<url>#http##

For example

sansari@yow-cgts1-lx$ cat tarball-dl.lst
helm-charts-certmanager-6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#helm-charts-certmanager#https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#http##
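To make the '#'-separated fields concrete, this shell sketch splits the example entry above into its parts (the variable names are my own, not from any build script):

```shell
# Split one tarball-dl.lst entry into its '#'-separated fields:
# tarball name, extracted dir name, download URL, protocol.
line='helm-charts-certmanager-6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#helm-charts-certmanager#https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#http##'
old_ifs=$IFS
IFS='#'
set -- $line                 # word-split the line on '#'
IFS=$old_ifs
tarball_name=$1
extract_dir=$2
url=$3
proto=$4
echo "tarball: ${tarball_name}"
echo "dir:     ${extract_dir}"
echo "url:     ${url}"
```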

Step 4c: Run Jenkins job
On the following Jenkins page,

http://yow-cgts1-lx:8080/job/StarlingX_download_mirror/build

login with your WR corporate Linux userid, add the full path to the 'tarball-dl.lst' file created above in the 'extra_rpm_lst_file' box, and kickstart a build. Once the build is complete, the tarball (helm-charts-certmanager-xxxx in this case) should be present in cgts1's /import/mirrors/starlingx/downloads/ folder.

If not available, check the Jenkins console output log for errors.

// If you are just working/experimenting locally in a local developer environment, // for example:

https://github.com/goharbor/harbor-helm/releases
https://github.com/goharbor/harbor-helm
https://github.com/goharbor/harbor-helm/commits/master
https://github.com/goharbor/harbor-helm/commits/v1.10.0

wget https://github.com/goharbor/harbor-helm/archive/47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz

$ cat tarball-dl.lst
helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz#harbor#https://github.com/goharbor/harbor-helm/archive/47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz#http##

Step 5: Create tarball download control file for downloading the opensource application helm chart(s) tarball to upstream CENGN build server mirrors tarballs folder
NOTE: Skip this step if helm chart(s) are being written locally as part of creating the application.

NOTE: Although the links and files in this step mention Centos, this step is exactly the same for Debian at the moment, including the 'centos' filenames and links.

For opensource CENGN builds of StarlingX, before an external opensource application's Helm chart(s) can be referenced in the build files for your new application repo, the external opensource application tarball, containing its helm chart(s), needs to be available in the http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/latest_build/outputs/tarballs/ folder.

From within your git branch in your application repo (i.e. setup in Step 3):

Create the centos_tarball-dl.lst file for your application, with the same content/format as the one described in "Step 4b" above (i.e. <tarball-name>#<extracted-dir-name>#<url>#http##). Note again that the exact filename is important for this step to work.

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME

vi centos_tarball-dl.lst

helm-charts-certmanager-6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#helm-charts-certmanager#https://github.com/jetstack/cert-manager/archive/6da95758a4751b20cf85b29a3252e993449660eb.tar.gz#http##

git add centos_tarball-dl.lst

StarlingX master build has a job to process 'centos_tarball-dl.lst' files of repos and download the specified application tarballs in http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/latest_build/outputs/tarballs/.

Step 6: Populate stx/downloads in your DEV environment with the opensource application helm chart tarball
NOTE: Skip this step if helm chart(s) are being written locally as part of creating the application.

Note: For Debian, this step can be skipped. The tarball is downloaded by the "downloader -s" command (or the "build-pkgs" command) in a later step. After building your package, you should see the tarball in the build folder (/localdisk/loadbuild/<user>/<project>/std/).

Run the following commands to copy the tarball present in yow-cgts1-lx.wrs.com:/import/mirrors/starlingx/download folder (from Step 4) to the ${MY_REPO_ROOT_DIR}/cgcs-root/stx/downloads folder in your DEV environment (see Step 3).

NOTE: It is important to name the file 'centos_tarball-dl.lst' in your app folder (in Step 5 above). The populate_downloads.sh script does not work with other filenames.

NOTE: If working/experimenting locally in a local developer environment, since you haven't updated the starlingx/manifest and starlingx/root repos (Step 2), this won't completely work. So run the commands below to set up the downloads directory in your environment, and then run the additional commands shown below to manually copy your application tarball to the downloads directory.

${MY_REPO_ROOT_DIR}/stx-tools/toCOPY/generate-cgcs-centos-repo.sh /import/mirrors/starlingx

${MY_REPO_ROOT_DIR}/stx-tools/toCOPY/populate_downloads.sh /import/mirrors/starlingx

// If you are working/experimenting locally in a local developer environment, // you will have to manually copy your application tarball to the downloads directory // e.g.

wget https://github.com/goharbor/harbor-helm/archive/47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz

mv 47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz

tar xvf helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz

mv harbor-helm-47a3871d9e369670cf70fa4601eaf03ac601de2c/ harbor

rm helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz

tar cvf helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz harbor

mv helm-chart-harbor-47a3871d9e369670cf70fa4601eaf03ac601de2c.tar.gz ../downloads

rm -rf harbor/

Confirm that your application helm chart tarball is now present in ${MY_REPO_ROOT_DIR}/cgcs-root/stx/downloads folder.

7.1 Basic application directory structure
For a FluxCD-based system application, the following page, Converting armada applications to FluxCD, describes:

the FluxCD application archive format, the content and format of the files in the archive, and the suggested application repo directory structure for building the application tarball, e.g.:

├── ...
├── centos_iso_image.inc
├── centos_build_layer.cfg
├── centos_pkg_dirs
├── centos_pkg_dirs_containers
├── centos_tarball-dl.lst
├── debian_iso_image.inc
├── debian_build_layer.cfg
├── debian_pkg_dirs
├── ...
├── python-k8sapp-APPNAME     ← // Application Framework Plugins (see Step 9)
│   ├── ...
├── requirements.txt
├── stx-APPNAME-helm
│   ├── centos
│   │   ├── build_srpm.data
│   │   └── stx-APPNAME-helm.spec
│   ├── debian                ← // For specific information on the StarlingX debian folder structure: https://wiki.openstack.org/wiki/StarlingX/DebianBuildStructure
│   │   ├── ...
│   ├── docker                ← // If you are writing your own container images (see Step 8)
│   │   ├── ...
│   └── stx-APPNAME-helm
│       ├── files
│       │   ├── index.yaml
│       │   ├── Makefile
│       │   ├── metadata.yaml
│       │   └── repositories.yaml
│       ├── fluxcd-manifests
│       │   ├── base
│       │   │   ├── helmrepository.yaml
│       │   │   ├── kustomization.yaml
│       │   │   └── namespace.yaml
│       │   ├── APPNAME
│       │   │   ├── helmrelease.yaml
│       │   │   ├── APPNAME-static-overrides.yaml
│       │   │   ├── APPNAME-system-overrides.yaml
│       │   │   └── kustomization.yaml
│       │   └── kustomization.yaml
│       ├── helm-charts          ← // If you are writing your own helm charts
│       │   ├── ...
│       └── README
├── test-requirements.txt
└── tox.ini

This is an example of the repo structure of a functional app: https://opendev.org/starlingx/cert-manager-armada-app
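As a hedged sketch of the fluxcd-manifests/APPNAME/helmrelease.yaml file (the chart name, version, repository name and override secret names below are illustrative; copy the exact fields from an existing app's helmrelease.yaml):

```yaml
# Illustrative HelmRelease; values are placeholders, not a real app's.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: APPNAME
spec:
  releaseName: APPNAME
  interval: 5m
  chart:
    spec:
      chart: APPNAME
      version: 1.0.0
      sourceRef:
        kind: HelmRepository
        name: stx-platform
  valuesFrom:
    - kind: Secret
      name: APPNAME-static-overrides
      valuesKey: APPNAME-static-overrides.yaml
    - kind: Secret
      name: APPNAME-system-overrides
      valuesKey: APPNAME-system-overrides.yaml
```

The static-overrides file carries the tested default values shipped with the app, while the system-overrides file carries the values generated dynamically by the application plugins.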

Create the above application repo directory structure under ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME in your DEV environment, using Converting armada applications to FluxCD and other existing application repos as a guide.

For example, for an application that simply uses a helm chart from an opensource application source tarball in downloads/ folder, copy the nginx-ingress-controller-armada-app/ directory structure and make updates accordingly (see 7.2 for some detailed notes).

7.2 Additional Notes
For an application using a helm chart from an opensource application source tarball in downloads folder

 * copy the app directory structure from nginx-ingress-controller-armada-app/
 * maybe copy your centos_tarball-dl.lst file from Step 4 to /tmp or somewhere safe to not clobber it (for now)
 * remove the python-k8sapp-../ subfolder ... we'll add it back in Step 9
 * remove stx-nginx-ingress-controller-helm/stx-nginx-ingress-controller-helm/files/*.patch (these are nginx patches)
 * rename directories, replacing 'nginx-ingress-controller' with 'APPNAME'
 * for centos, update the contents of centos_iso_image.inc, centos_pkg_dirs, centos_pkg_dirs_containers, replacing 'nginx-ingress-controller' with 'APPNAME', and removing the python-k8sapp-... packages ... we'll add them back in Step 9
 * for debian, update the contents of debian_iso_image.inc, debian_pkg_dirs
 * for centos, update the contents of ./stx-APPNAME-helm/centos/build_srpm.data
   * changing SRC_DIR to replace 'nginx-ingress-controller' with 'APPNAME'
   * changing CHART_TAR_NAME and FLUXCD_APPNAME_VERSION to be consistent with the application source tarball in the downloads/ folder
   * removing *.patch from COPY_LIST
 * update the files under ./stx-APPNAME-helm/stx-app-NAME-helm/fluxcd-manifests/
   * update the contents of ./kustomization.yaml, ./harbor/helmrelease.yaml, ./harbor/kustomization.yaml and ./base/namespace.yaml ... generally replacing '*-nginx-*' with 'APPNAME', and changing the namespace from kube-system to something specific if you want (e.g. harbor) ... actually couldn't get this to work because of enabling PVCs in the harbor namespace ... so leave as kube-system
   * update the contents of ./app-NAME/APPNAME-static-overrides.yaml with helm overrides specific to your application ... if applicable to your application; could just leave this blank to accept defaults
 * update ./stx-harbor-helm/stx-harbor-helm/files/metadata.yaml if required ... e.g. remove forbidden commands (Note: if you forget, you can use --force with 'system application-...' commands to override)
 * for centos, update the contents of ./stx-APPNAME-helm/centos/stx-APPNAME-helm.spec
   * updating app_name, Summary and Name to your application name
   * updating fluxcd_xxx_version and Source1 to be consistent with the application source tarball in the downloads/ folder
   * remove the Patch01: and %patch01 lines ... only needed if you're patching the helm chart
   * remove the BuildRequires: python-k8sapp-... lines to ignore plugins for now ... we'll add them back in Step 9
   * update %description with your application name
   * updating the %prep and %build sections to be consistent with the location of the helm chart in the application source tarball in the downloads/ folder; in a couple of places you need to replace 'nginx-ingress-controller' with 'APPNAME'
   * can remove the lines related to plugins for now ... we'll add them back in Step 9; only these lines:
     mkdir -p %{app_staging}/plugins
     cp /plugins/%{app_name}/*.whl %{app_staging}/plugins
 * for debian, update the contents of the debian folder according to the structure described in https://wiki.openstack.org/wiki/StarlingX/DebianBuildStructure
   * The 'vault' app serves as a good reference here: https://opendev.org/starlingx/vault-armada-app/src/branch/master/stx-vault-helm/debian
   * about the meta_data.yaml file (more details in the debian structure link):
     * The debver should be set to 1.0-1 (unless you have a reason to set it otherwise)
     * Make sure the src_path matches the source folder (the one on the same level as the debian folder). If you are using a dl_hook, then the source folder is defined there.
     * debname will be 'stx-APPNAME-helm'
   * The entries in deb_folder/changelog should match the debname and debver defined in meta_data.yaml
   * In deb_folder/control, update names and descriptions. Comment out the python-APPNAME entries for now.
   * In deb_folder/rules, replace instances of 'vault' with 'APPNAME'. Also, for now, comment out the 2 lines mentioning 'plugins'. They will be added back in a later step.
   * for deb_folder/copyright, copy the file from the 'vault' app and change the app name, unless your new app is using some particular licenses or IP.
 * For centos, if working/experimenting locally in a local developer environment, add a commit to git ... since the spec file looks at the git commit count:
   cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME
   git add -A
   git commit -m "Baseline commit"

7.3 Build 'stx-app-NAME-helm' package
Try building the 'stx-app-NAME-helm' package.

Specifically, this will extract the helm chart from your opensource application source tarball in the downloads/ folder, build the FluxCD manifest, build the system-application tarball containing the FluxCD structure (containing the helm chart), and build a package containing the system-application tarball and instructions for installing it in /usr/local/share/applications/helm/ on the WRCP target.

For centos:

build-pkgs --no-descendants --no-require --no-build-info --clean stx-APPNAME-helm
build-pkgs --no-descendants --no-require --no-build-info stx-APPNAME-helm

If there is an error, check the build.log here: /localdisk/loadbuild/gwaines/starlingx_master/std/results/gwaines-starlingx_master-tis-r6-pike-std/stx-APPNAME-helm-1.1-1.tis/build.log

If successful, the built RPM should be here: /localdisk/loadbuild/gwaines/starlingx_master/std/rpmbuild/RPMS/stx-APPNAME-helm-1.1-2.tis.noarch.rpm

For debian:

Run "build-pkgs -p stx-APPNAME-helm" inside the debian build container (for more details: https://wiki.openstack.org/wiki/StarlingX/DebianBuildEnvironment#Build_packages).

The command might refuse to rebuild the app in some conditions. If you want to force a rebuild, use "build-pkgs -c -p stx-APPNAME-helm".

The output package and build logs are saved in /localdisk/loadbuild/<user>/<project>/std/

One possible failure reason is that the app failed some tests at build time. If that's the case, try adding the following target to deb_folder/rules: "override_dh_auto_test:"

7.4 Try out initial application packaging on WRCP
For Debian, you can check the contents of the deb file with:

dpkg -x <deb-file> <output-folder>

For CentOS, you can extract the contents of the RPM ... to get the system-application tarball ... by:

rpm2cpio stx-spo-helm-1.1-2.tis.noarch.rpm | cpio -idv
cd usr/local/share/applications/helm/
ls
spo-1.1-2.tgz

... and you could take that to a WRCP deployment and upload & apply (Step 12).

NOTE: If you didn't put anything in ./stx-APPNAME-helm/stx-app-NAME-helm/fluxcd-manifests/app-NAME/APPNAME-static-overrides.yaml, then applying your application on WRCP will only work if the application's helm chart works with no overrides, i.e. default values.

8.1 If you are using Container Images released by opensource project:
In Step 5, you would have referenced the released opensource container image, likely in ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME/stx-APPNAME-helm/stx-APPNAME-helm/fluxcd-manifests/APPNAME/APPNAME-static-overrides.yaml, with something like:

image:
  repository: quay.io/jetstack/cert-manager-controller
  tag: v1.7.1

8.1.1 For the purposes of releasing these container images in Wind River Registries: Add/Update the images in the prebuilt-images.lst file
The prebuilt-images.lst file is used to track all the images required for a WRCP release. Any images used by your application need to be added to this list.

http://bitbucket.wrs.com/projects/CGCS/repos/titanium-tools/browse/docker-images/prebuilt-images.lst

For more details, see Image Tag Management

Section: Pre-Built and Managed Image Lists

Set up the "WRCP Dev" development environment by following StarlingX Development Environment#BuildingStarlingX.

Create a branch and review under your new application repo in order to start adding the files for your new application, e.g.:

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME
git review -s
repo start mybranch
vi prebuilt-images.lst    // add your container image(s)

// Commit this to WRCP Dev
git add prebuilt-images.lst
git commit -s

// Upload for review
git review

Assign Davlet Panech and Al Bailey to your review; they will review and approve ... and will push the new image(s) to the Local DEV Environment Registries (e.g. harbor on cumulus).

8.2 If you are building your own Container Images:
In Step 5, you would have referenced your soon-to-be-built container image, likely in ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME/stx-APPNAME-helm/stx-APPNAME-helm/fluxcd-manifests/APPNAME/APPNAME-static-overrides.yaml, with something like the following. (All starlingx-built container images eventually get pushed to docker hub under the starlingx organization.)

image:
  repository: docker.io/starlingx/stx-APPNAME-container1
  tag: testv1

8.2.1 Create a docker/ folder in your application repo for developing and building your Container Images
You need to add a ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME/stx-app-NAME-helm/docker/ folder in your application repo for building your container ... with a Dockerfile and any other make and source files needed to build your container image(s).

For example, if you had to build a couple of container images for your application, your directory structure might look like this:

( Use existing system application repos for examples of specific build specs, Dockerfile details, etc. . )

.
├── ...
├── centos_tarball-dl.lst
├── ...
├── python-k8sapp-APPNAME        ← Application Framework Plugins (see Step 9)
│   ├── ...
├── requirements.txt
├── stx-app-NAME-helm
│   ├── ...
│   ├── docker
│   │   ├── APPNAME-container-1
│   │   │   ├── Dockerfile
│   │   │   ├── Makefile
│   │   │   └── src
│   │   │       ├── APPNAME-container1-blah.c
│   │   │       └── APPNAME-container1-foo.c
│   │   └── APPNAME-container-2
│   │       ├── Dockerfile
│   │       ├── Makefile
│   │       └── src
│   │           ├── APPNAME-container2-blah.c
│   │           └── APPNAME-container2-foo.c
│   └── stx-APPNAME-helm
│       ├── files
│       │   ├── index.yaml
│       │   ├── Makefile
│       │   ├── metadata.yaml
│       │   └── repositories.yaml
│       ├── fluxcd-manifests
│       │   ├── base
│       │   │   ├── helmrepository.yaml
│       │   │   ├── kustomization.yaml
│       │   │   └── namespace.yaml
│       │   ├── app-NAME
│       │   │   ├── helmrelease.yaml
│       │   │   ├── APPNAME-static-overrides.yaml
│       │   │   ├── APPNAME-system-overrides.yaml
│       │   │   └── kustomization.yaml
│       │   └── kustomization.yaml
│       └── README
├── test-requirements.txt
└── tox.ini
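As a hedged sketch of what one of those Dockerfiles might contain (the base image, packages, and source file names below are all hypothetical; use existing system application repos for real build specs), a Dockerfile for APPNAME-container-1 could look like:

```dockerfile
# Hypothetical example; choose the base image and build steps your container needs
FROM debian:bullseye-slim

# Copy in the placeholder application sources from the src/ folder
COPY src/ /opt/APPNAME/src/

# Install build tools, compile the placeholder source, then remove the tools
# to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc libc6-dev && \
    gcc -o /usr/local/bin/APPNAME-container1 /opt/APPNAME/src/APPNAME-container1-blah.c && \
    apt-get purge -y gcc libc6-dev && apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/usr/local/bin/APPNAME-container1"]
```

The accompanying Makefile typically just wraps the docker build invocation for the build-stx-images.sh tooling.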

8.2.1.1 Build your own container image for testing
You can build your container image in your build environment using the following commands:

In your development build environment set the following variables:

OS=centos
BUILD_STREAM=stable
BRANCH=master
CENTOS_BASE=starlingx/stx-centos:${BRANCH}-${BUILD_STREAM}-latest
WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-${BUILD_STREAM}-wheels.tar
DOCKER_USER=${USER}
DOCKER_REGISTRY=tis-lab-registry.cumulus.wrs.com:9001    // with the --push option below, built images are pushed to this registry

Run the build image command:

$MY_REPO/build-tools/build-docker-images/build-stx-images.sh \
    --os centos \
    --stream ${BUILD_STREAM} \
    --base ${CENTOS_BASE} \
    --wheels ${WHEELS} \
    --user ${DOCKER_USER} \
    --registry ${DOCKER_REGISTRY} \
    --push --latest --clean \
    --only 

Once the image is built, pull the image to your testing VM, retag to your registry-local (as registry.local:9001/docker.io/starlingx/stx-APPNAME-container1:testv1) and then push the registry-local tag.
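The pull/retag/push sequence can be sketched as follows. This is a non-runnable illustration: the image name and tags are hypothetical, and registry.local credentials must already be configured on the test VM:

```shell
# Sketch only; substitute your app's real image name and tags
docker pull ${DOCKER_REGISTRY}/${DOCKER_USER}/stx-APPNAME-container1:latest
docker tag  ${DOCKER_REGISTRY}/${DOCKER_USER}/stx-APPNAME-container1:latest \
            registry.local:9001/docker.io/starlingx/stx-APPNAME-container1:testv1
docker push registry.local:9001/docker.io/starlingx/stx-APPNAME-container1:testv1
```

The registry.local tag must match the image reference in your static overrides so the platform pulls it from the local registry when the app is applied.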

Step 9: Develop your System Application Plugins
This step covers orientation on how to write application plugins to:

 * dynamically set helm chart(s) overrides based on the current StarlingX infrastructure configuration (e.g. replicas=2 for duplex systems),
 * provide "simple deployment" by automatically setting tested default helm overrides for application-specific parameters,
 * provide custom behavior on 'system application- ...' StarlingX management of the FluxCD application packaging,
 * etc.

At the moment a detailed guide explaining each setting in the configuration files is not available, but we can use the plugins from another app as a starting point and make adjustments from there.

App plugins are defined in the "python-k8sapp-app-NAME" directory in the app repo.

9.1 Additional Notes to setup basic plugins
For this step, we can use the vault app plugins as a template.

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME/

 * Copy the app directory structure from https://opendev.org/starlingx/vault-armada-app/src/branch/master/python-k8sapp-vault
 * Rename directories/files, replacing 'vault' with 'APPNAME'
 * Remove the armada folder in ./k8sapp_/k8sapp_
 * For centos, update ./python-k8sapp-APPNAME/centos/build_srpm.data ... changing SRC_DIR to replace 'vault' with 'APPNAME'
 * For centos, update ./python-k8sapp-APPNAME/centos/python-k8sapp-APPNAME.spec:
   * update app_name, pypi_name and sname ... replacing 'vault' with 'APPNAME'
   * update Summary and Description for the APPNAME app
 * For debian, update the mentions of APPNAME in the files in the debian folder. Also update the app description in deb_folder/control
 * In ./python-k8sapp-APPNAME/k8sapp_APPNAME/, update README.rst, setup.cfg, .stestr.conf and tox.ini ... generally replacing 'vault' with 'APPNAME'
   * note: .stestr.conf is a hidden file
   * if zuul fails to set up pbr, disable the sdist step in [tox]: "skipsdist = True", and use "usedevelop=True" in each testenv that needs to install the local package
 * In ./python-k8sapp-APPNAME/k8sapp_APPNAME/k8sapp_APPNAME/, update */*.py, generally replacing 'vault' with 'APPNAME', 'Vault' with 'APPNAME' and 'VAULT' with 'APPNAME'
 * In ./python-k8sapp-APPNAME/k8sapp_APPNAME/k8sapp_APPNAME/helm/harbor.py:
   * change "if self.get_master_worker_host_count >= 3:" to use 2 instead of 3
   * change the replica updating to align with the harbor helm chart syntax, e.g.
     common.HELM_NS_HARBOR: {
         'registry': {
             'replicas': 2,
         },
     }
 * Add the 'python-k8sapp-APPNAME' package into centos_pkg_dirs, debian_pkg_dirs and centos_pkg_dirs_containers
 * For centos, add the following lines into ./stx-APPNAME-helm/centos/stx-APPNAME-helm.spec, right after the 'BuildRequires: chartmuseum' line:
   BuildRequires: python-k8sapp-APPNAME
   BuildRequires: python-k8sapp-APPNAME-wheels
 * For debian, add back the plugins in the helm part of the app 'stx-APPNAME-helm':
   * add 'python-k8sapp-APPNAME' and 'python-k8sapp-APPNAME-wheels' in the "Build-Depends" section of stx-APPNAME-helm/debian/deb_folder/control
   * in the rules file, add back the 2 lines mentioning 'plugins'
 * For centos, add the following lines into ./stx-APPNAME-helm/centos/stx-APPNAME-helm.spec, right after the sed lines (this copies the plugins installed in the buildroot):
   # 1) Copy the plugins: installed in the buildroot
   mkdir -p %{app_staging}/plugins
   cp /plugins/%{app_name}/*.whl %{app_staging}/plugins
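The core override logic such a plugin implements can be sketched in isolation as follows. This is a simplified, self-contained illustration: the real plugin derives from the sysinv helm plugin base classes and queries the live system inventory, and all names here (AppnameHelm, HELM_NS_APPNAME, controller_count) are hypothetical:

```python
# Simplified sketch of a k8sapp helm plugin's override logic.
# A real plugin subclasses sysinv's helm base class and reads host counts
# from the system inventory; here we pass the count in directly.

HELM_NS_APPNAME = "appname-ns"  # hypothetical namespace constant


class AppnameHelm:
    """Computes per-namespace helm overrides from infrastructure state."""

    SUPPORTED_NAMESPACES = [HELM_NS_APPNAME]

    def __init__(self, controller_count):
        self._controller_count = controller_count

    def get_overrides(self, namespace=None):
        # Scale replicas with the infrastructure, e.g. 2 on duplex systems
        replicas = 2 if self._controller_count >= 2 else 1
        overrides = {
            HELM_NS_APPNAME: {
                "replicaCount": replicas,
            }
        }
        if namespace in self.SUPPORTED_NAMESPACES:
            return overrides[namespace]
        elif namespace:
            raise ValueError("Invalid namespace: %s" % namespace)
        return overrides
```

The pattern mirrors what you will see in the vault/harbor plugin files: a per-namespace overrides dict whose values are computed from the current system configuration.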

9.2 Build 'stx-app-NAME-helm' package
Try building the 'stx-app-NAME-helm' package.

Specifically, this will NOW BUILD THE PLUGIN PACKAGE, extract the helm chart from your opensource application source tarball in the downloads/ folder, build the FluxCD manifest, build the system-application tarball containing the FluxCD structure (with the helm chart) AND THE PLUGIN PACKAGE, and build a package containing the system-application tarball along with instructions for installing the application tarball in /usr/local/share/applications/helm/ on the WRCP target.

For debian:

"build-pkgs -p stx-app-NAME-helm" inside the debian build container (For more details: https://wiki.openstack.org/wiki/StarlingX/DebianBuildEnvironment#Build_packages) The command might refuse to rebuild the app in some conditions. If you want to force a rebuild use "build-pkgs -c -p stx-app-NAME-helm" The output package and build logs are saved in /localdisk/loadbuild/ / /std/stx-app-NAME-helm One possible failure reason is that the app failed some tests at build time. If that's the case, try adding the following command to deb_folder/rules: "override_dh_auto_test:" For centos:

build-pkgs --no-descendants --no-require --no-build-info --clean stx-APPNAME-helm
build-pkgs --no-descendants --no-require --no-build-info stx-APPNAME-helm

If there is an error, check build.log here: /localdisk/loadbuild/gwaines/starlingx_master/std/results/gwaines-starlingx_master-tis-r6-pike-std/stx-APPNAME-helm-1.1-1.tis/build.log

If successful, the built RPM should be here: /localdisk/loadbuild/gwaines/starlingx_master/std/rpmbuild/RPMS/stx-APPNAME-helm-1.1-2.tis.noarch.rpm

9.3 Try out initial application packaging
For Debian, you can check the contents of the deb file with:

dpkg -x <deb_file> <output_folder>

The tarball can be found at <output_folder>/usr/local/share/applications/helm/

For Centos, you can extract contents of the RPM to get the system-application tarball with:

rpm2cpio stx-spo-helm-1.1-2.tis.noarch.rpm | cpio -idv

The tarball can be found under usr/local/share/applications/helm/ (relative to the directory where you extracted the RPM) and it can be installed in a running system (Step 12).

Step 10: Build Application Packages
Run the "build-pkgs" command.

Specifically, this will build your application plugins, extract the helm chart from your opensource application source tarball in the downloads/ folder, build the FluxCD manifest, build the system-application tarball containing both the plugins and the FluxCD structure (with the helm chart), and build a package containing the system-application tarball along with instructions for installing it in /usr/local/share/applications/helm/

For centos, use:

${MY_REPO_ROOT_DIR}/cgcs-root/build-tools/build-pkgs app-NAME-helm stx-app-NAME-helm

Watch for errors. Logs are directed to ${MY_REPO_ROOT_DIR}/build-std.log

For debian:

build-pkgs -p app-NAME-helm -p stx-app-NAME-helm

The output package and build logs are saved in /localdisk/loadbuild/ / /std/

Step 11: Extract System Application Tarball
On centos, if the previous step's package build was successful, the built RPM should be in /localdisk/loadbuild/ / /std/rpmbuild/RPMS/stx- -helm- .tis.noarch.rpm. You can extract the contents of the RPM to get the system-application tarball:

rpm2cpio stx- -helm- .tis.noarch.rpm | cpio -idv

The tarball will be under usr/local/share/applications/helm/ (relative to the directory where you extracted the RPM).

On debian, the output package and build logs are saved in /localdisk/loadbuild/ / /std/ and you can check the contents of the deb file with:

dpkg -x <deb_file> <output_folder>

The tarball can be found at <output_folder>/usr/local/share/applications/helm/

To test the app, you can simply copy the tarball into a running system rather than building a new ISO with the package (rpm/deb) added and installing it.

Step 12: Upload, Apply and Test System Application
The application tarball can now be copied to a running platform. Copy the tarball to the /home/sysadmin folder and, on the CLI, use the following commands:

source /etc/platform/openrc
system application-upload <app-tarball>
system application-apply <app-name>

Then run application specific testing.

Step 13: Commit, Review and Merge your Application Repo change started in Step 3
From within your git branch in your application repo (i.e. setup in Step 3):

cd ${MY_REPO_ROOT_DIR}/cgcs-root/stx/app-NAME
git status
git add <any files you've created/changed>
git commit -s
git review

Step 14: Add the build of your Container Image(s) into main Container-Build
NOTE: Skip this step if only using opensource container images.
NOTE: Wait for the previous step's commit to merge before doing this step.

Add your system app in "containers.xml" in the https://opendev.org/starlingx/manifest repo to ensure a container image build for your application as part of the build team's containers-build. An example can be found in commit https://review.opendev.org/c/starlingx/manifest/+/795353/.

Step 15: Finalize Container Image Tag for Starlingx-built Container Images
NOTE: Skip this step if only using opensource container images

15.1 Obtain a timestamped formal build and test it
The build team can trigger a containers build; the resulting timestamped docker-hub image can then be referenced in your local app testing environment and used for testing. Once all the tests pass, a static tag can be set in image-tags.yaml (see next step).

15.2 Add container image tag to image-tags.yaml
Update "build-tools/build-docker-images/tag-management/image-tags.yaml" with the container image static tag (see example https://review.opendev.org/c/starlingx/root/+/795560). For details on the definition of all the image fields that need to be set see Image Tag Management. The commit of the image tag update will push the retagged image to docker-hub. The image will get downloaded when the app is applied on the node.

15.3 Update the image static tag in system app helm charts
The charts then need to be updated with the new image static tag in values.yaml (see example https://review.opendev.org/c/starlingx/audit-armada-app/+/797319).

Step 16: Add the application to the ISO
For System Applications, the application tarball is put into a package (deb file for debian, rpm file for centos). When the package is installed, the tarball is copied to /usr/local/share/applications/helm/ and made available for the user to install.

To bundle the package with the ISO, make sure the package names are listed in centos_iso_image.inc and debian_iso_image.inc in your app repo.
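For example (the package name is a placeholder for whatever your repo actually builds), debian_iso_image.inc is simply a list of package names, one per line:

```
stx-APPNAME-helm
```

List every package produced by your app repo that must be installed from the ISO.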

(For debian, you might also need to add the package names in the tools repo file ./debian-mirror-tools/config/debian/distro/stx-std.lst)

Step 17: Upload, Apply and Test System Application
After installing the ISO, the tarball can be found in /usr/local/share/applications/helm/. You can also build the package in your build environment, unpack it, and copy the tarball into a running STX system.

Step 17.1: Authentication
Acquire the credentials before running the application commands:

source /etc/platform/openrc

Step 17.2: Uploading and Applying Your Application
Uploading the new app:

system application-upload <app-tarball>

Installing the new app:

system application-apply <app-name>

Apps can also be reapplied by running the same command above.

Checking the app status:

system application-show <app-name>

App status should be "applied". Then run application specific testing.

Step 17.3: Managing Your Application
Removing the new app:

system application-remove <app-name>

That will remove your application. It will be available to be applied again if needed.

Certain essential applications, such as cert-manager, require the removal to be forced:

system application-remove --force <app-name>

Deleting the new app:

system application-delete <app-name>

That will fully delete your app. It will not be available to be immediately applied again. If you want to apply it afterwards you need to upload it again.

Updating to the new app:

system application-update <updated-app-tarball>

The update process will automatically upload and apply the new version to the system.

App updates can also be triggered during platform version upgrades. This process happens when the following command is issued during the upgrade process:

system upgrade-activate

Visit the StarlingX documentation for the full platform upgrade path.

Aborting an operation:

system application-abort <app-name>

That will cancel the current operation if it was not completed or failed.

In case you want to see the status of all apps:

system application-list

Step 17.4: Managing Helm Overrides
Listing chart overrides:

system helm-override-list <app-name>

Showing overrides for a particular chart:

system helm-override-show <app_name> <chart_name>

Modifying service configuration parameters using user-specified overrides:

system helm-override-update [--reuse-values | --reset-values] [--values <file_name>] [--set <commandline_overrides>] <app_name> <chart_name>
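For instance (the app and chart names below are placeholders), to set a single chart value and have it take effect on the running system:

```shell
# APPNAME / APPNAME-chart are placeholders for your app and chart names
system helm-override-update --set replicaCount=2 APPNAME APPNAME-chart
system helm-override-show APPNAME APPNAME-chart   # verify the combined overrides
system application-apply APPNAME                  # reapply so the new override takes effect
```

Note that an override update only changes the stored overrides; the application must be reapplied for the change to reach the deployed charts.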

Enabling/Disabling the installation of a particular Helm chart within an application manifest:

system helm-chart-attribute-modify [--enabled <true/false>] <app_name> <chart_name>

Deleting user overrides:

system helm-override-delete <app_name> <chart_name>

Step 18: Test STX build
As a reference, see the cert-manager app Storyboard for all the repositories that are touched during the process and confirm that all work items have been completed. See: https://storyboard.openstack.org/#!/story/2007360

In addition, test builds to confirm that addition of new repo/app doesn't break anything:
 * Build app only
 * Build-iso (and install from scratch)
 * Test layered (flock) build

Step 19: Updating charts from upstream apps in debian
In case there is a need to update the helm charts provided by an upstream app, the procedure is similar to what is done to patch debian packages, as explained in https://wiki.openstack.org/wiki/StarlingX/DebianBuildStructure

In short, the stx-app-NAME-helm/debian/meta_data.yaml file is responsible for defining the upstream tarball to be downloaded by the build system. Then, patches to the files in the tarball can be added to the folder stx-app-NAME-helm/debian/patches/. The order in which they are applied is defined in the file stx-app-NAME-helm/debian/patches/series.
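As a rough sketch of the shape of such a meta_data.yaml (the field names and values here are illustrative assumptions; check an existing app repo for the exact schema used by the build system):

```yaml
---
debname: stx-app-NAME-helm
debver: 1.0-1
dl_path:
  name: helm-charts-APPNAME-1.0.0.tar.gz
  url: https://github.com/EXAMPLE/APPNAME/archive/refs/tags/v1.0.0.tar.gz
  md5sum: 0123456789abcdef0123456789abcdef
revision:
  dist: $STX_DIST
  PKG_GITREVCOUNT: true
```

The patches under debian/patches/ are then applied on top of the extracted dl_path tarball in the order given by the series file.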

Please view this commit as an example: https://review.opendev.org/c/starlingx/platform-armada-app/+/858737