- 1 History
- 2 FAQ: General Overview
- 3 FAQ: OpenStack Application Life-cycle
- 3.1 How do I start the Containerized services?
- 3.2 How do I change a configuration parameter in a deployed service?
- 3.3 What is the order of precedence for helm chart overrides in StarlingX?
- 3.4 What are the current set of platform workarounds needed to deploy the services?
- 3.5 As compared to the previously running native services, what changes in behavior can be expected?
- 4 FAQ: Service Debugging
- February 11, 2019: Initial FAQ Setup: WIP until cut-over
FAQ: General Overview
What services are being containerized?
See Containerizing the StarlingX Infrastructure Initiative for information on the implementation. This describes the containerization infrastructure and what you can expect to see running in pods under Kubernetes (K8S). These pods are deployed via service specific Helm charts as described by an Armada "stx-openstack application" manifest.
Where can I get additional information on how to interact with the underlying technologies used to deploy the containerized services?
FAQ: OpenStack Application Life-cycle
How do I start the Containerized services?
Generate the stx-openstack application tarball
There are currently two application tarballs: one with tests enabled and one without.
The stx-openstack application tarballs are generated with each build on the CENGN mirror.
Alternatively, in a development environment, run the build-helm-charts.sh command to construct the application tarballs.
The resulting tarballs can be found under $MY_WORKSPACE/std/build-helm/stx.
If the build-helm-charts.sh command is unable to find the charts, run "build-pkgs" to build the chart rpms and re-run the build-helm-charts.sh command.
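The build steps above can be sketched as the following sequence, assuming the standard StarlingX build environment with $MY_WORKSPACE set:

```shell
# If build-helm-charts.sh cannot find the charts, build the chart RPMs first
build-pkgs

# Generate the stx-openstack application tarballs
build-helm-charts.sh

# The resulting tarballs are placed here
ls $MY_WORKSPACE/std/build-helm/stx
```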
Stage application for deployment
Transfer the helm-charts-manifest-no-tests.tgz application tarball onto your active controller.
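For example, the tarball can be copied over with scp; the username and controller OAM IP below are placeholders for your installation:

```shell
# <username> and <controller-oam-ip> are placeholders; substitute your own
scp $MY_WORKSPACE/std/build-helm/stx/helm-charts-manifest-no-tests.tgz \
    <username>@<controller-oam-ip>:~
```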
Use sysinv to upload the application tarball.
source /etc/platform/openrc
system application-upload stx-openstack helm-charts-manifest-no-tests.tgz
system application-list
Bring Up Services
Use sysinv to apply the application.
system application-apply stx-openstack
You can monitor the progress by watching system application-list
watch -n 5 system application-list
or by tailing the Armada execution log:
sudo docker exec armada_service tailf stx-openstack-apply.log
How do I change a configuration parameter in a deployed service?
To update a parameter associated with a given deployed service, use the system helm-override-xxx commands. For example, to update the number of glance workers, you would execute:
system helm-override-update glance openstack --set conf.glance.DEFAULT.workers=2
and then re-apply the application:
system application-apply stx-openstack
Execute the following kubectl command to observe the glance pods restarting:
kubectl get pods --all-namespaces -o wide -w | grep glance
What is the order of precedence for helm chart overrides in StarlingX?
Values for a service's helm chart can be specified in four locations. When a value is set in more than one location, the following order of precedence applies:
- User supplied (Highest)
- Established via the system helm-override-xxx commands
- Allows the user to override existing values and add new values not previously specified. Known values for a deployment can be seen with system helm-override-show
- Dynamic overrides
- Generated by sysinv and based on the contents of the system inventory
- Resulting files are located in /opt/platform/helm/19.01/
- Static overrides
- These are defined in the application's armada manifest located in /opt/platform/armada/19.01/
- These are the optimal operational values based on the testing done across all the supported StarlingX provisioned platforms
- Chart values.yaml (Lowest)
- These are the values provided by the helm chart.
- These charts are packaged with the application and installed on the controller helm repo.
- They can be examined by executing:
helm repo update
helm inspect starlingx/glance
helm inspect starlingx/glance | less
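The user-supplied and system-generated overrides for a chart can be examined in a similar way; the chart and namespace names below match the earlier helm-override-update example:

```shell
# Show the known override values for the glance chart in the openstack namespace
system helm-override-show glance openstack
```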
What are the current set of platform workarounds needed to deploy the services?
Any platform workarounds are contained in the deployment instructions for each specific platform configuration.
As compared to the previously running native services, what changes in behavior can be expected?
The following items currently do not work or are not supported:
- Neutron agent rescheduling
FAQ: Service Debugging
After executing system application-apply stx-openstack you should check the health of your deployed pods in the K8S cluster.
How do I check the health of service pods?
A healthy deployment will have all pods in either a Running or a Completed state. This can be checked with:
kubectl get pods --all-namespaces -o wide
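As a quick filter, you can list only the pods that are not in a healthy state; this is an assumed convenience one-liner, not a StarlingX command:

```shell
# Show only pods that are neither Running nor Completed
kubectl get pods --all-namespaces | grep -Ev 'Running|Completed'
```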
What should I do if I see a pod is not in a Running state?
First check the pod events to see if a dependency may not have been met. For example, to check the events of an ailing nova compute pod, run the following command and examine the contents of the Events: section. Note: The pod name will be unique to your deployed system.
kubectl describe pods -n openstack nova-compute-compute-0-75ea0372-nmtz2
Then check the logs for that pod with:
kubectl logs -n openstack nova-compute-compute-0-75ea0372-nmtz2
Based on data observed from these commands, you can typically start your debugging investigation which may require you to update overrides and redeploy the application.
How do I access the logs for the service pods?
The logs for a given pod can be checked with:
kubectl logs -n openstack <pod name>
The above command allows you to access logs for pods running on any host in the cluster. As an alternative, you can ssh to a given host and examine the logs in /var/log/pods and /var/log/containers; these contain log information only for the pods and containers running on that host.
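Two standard kubectl variants are often useful here: retrieving the logs of a restarted container's previous instance, and selecting a specific container within a multi-container pod:

```shell
# Logs from the previous instance of a restarted container
kubectl logs -n openstack <pod name> --previous

# Logs from a specific container in a multi-container pod
kubectl logs -n openstack <pod name> -c <container name>
```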
How do I gain shell access to a pod so I can examine the contents of the deployed container?
Execute the following command:
kubectl exec -it -n openstack <pod name> -- bash
Note: This typically works for most images, but depending on how the docker image is built, it may not be supported. All StarlingX images support this, as do most non-StarlingX images pulled by the helm charts.