Revision as of 21:12, 29 January 2019

Building StarlingX Docker Images

History

  • January 29, 2019: Page creation

Introduction

Building StarlingX Docker Images consists of three components:

  • StarlingX Base Image, which provides the operating system for the docker image
  • Python wheels, providing an installation source for pip when installing python modules via loci
  • Docker image build, using image directives files

The images are currently built either from a Dockerfile or via loci (https://github.com/openstack/loci), an OpenStack image builder.

Base Image

The StarlingX Base Image is the operating system image that provides the base for the StarlingX Docker Images. This is built using the build-stx-base.sh tool in stx-root/build-tools/build-docker-images.

Currently, we build two CentOS base images: one for use with the Pike images, and one for the master (bleeding edge) images:

  • The Pike base image is configured with repo commands to point to the StarlingX build output as the source for packages to be installed in the images. After setting up the repo configuration, a yum upgrade is performed to update installed packages with versions from the StarlingX build, to try to align with the host OS as much as possible.
  • The master image does not point to the StarlingX build, as packages will come primarily from upstream sources. It currently installs centos-release-openstack-rocky in order to add repo configuration to point to the Rocky release, which is currently the latest available release.

The base image is passed into the StarlingX Docker Image build command as an argument.

Example Pike build command:

OS=centos
OPENSTACK_RELEASE=pike
IMAGE_VERSION=dev-${OPENSTACK_RELEASE}-${USER}
LATEST=dev-${OPENSTACK_RELEASE}-latest
DOCKER_USER=${USER}
DOCKER_REGISTRY=192.168.0.1:9001 # Some private registry you've set up for your testing, for example

time $MY_REPO/build-tools/build-docker-images/build-stx-base.sh \
    --os ${OS} \
    --release ${OPENSTACK_RELEASE} \
    --version ${IMAGE_VERSION} \
    --user ${DOCKER_USER} --registry ${DOCKER_REGISTRY} \
    --push \
    --latest-tag ${LATEST} \
    --repo local-stx-build,http://${HOSTNAME}:8088/${MY_WORKSPACE}/std/rpmbuild/RPMS \
    --repo stx-distro,http://${HOSTNAME}:8088/${MY_REPO}/cgcs-${OS}-repo/Binary \
    --clean

Example master build command:

OS=centos
OPENSTACK_RELEASE=master
IMAGE_VERSION=dev-${OPENSTACK_RELEASE}-${USER}
LATEST=dev-${OPENSTACK_RELEASE}-latest
DOCKER_USER=${USER}
DOCKER_REGISTRY=192.168.0.1:9001 # Some private registry you've set up for your testing, for example

time $MY_REPO/build-tools/build-docker-images/build-stx-base.sh \
    --os ${OS} \
    --release ${OPENSTACK_RELEASE} \
    --version ${IMAGE_VERSION} \
    --user ${DOCKER_USER} --registry ${DOCKER_REGISTRY} \
    --push \
    --latest-tag ${LATEST} \
    --clean

If you are not making changes to any source packages (i.e. RPMs) that need to be installed in your designer-built images, you can use the CENGN-built stx-base image. For example: https://hub.docker.com/r/starlingx/stx-centos/tags

  • Pike base image: starlingx/stx-centos:dev-latest
  • Master base image: starlingx/stx-centos:f-stein-latest
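If a docker daemon is available locally, the CENGN base images listed above can be pulled directly from Docker Hub. A minimal sketch (the pulls are guarded so the snippet is a no-op without a daemon or network access):

```shell
# CENGN-built base image tags, as listed above
PIKE_BASE=starlingx/stx-centos:dev-latest
MASTER_BASE=starlingx/stx-centos:f-stein-latest

# Pull them if a docker daemon is available (requires network access)
if docker info >/dev/null 2>&1; then
    docker pull "$PIKE_BASE" || true
    docker pull "$MASTER_BASE" || true
fi
```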

Building Wheels

A wheel is a package format that provides a pre-built python module. We collect or build a set of python modules in wheel format and store them in a tarball, which can be passed to loci when building the StarlingX docker images. We have two groups of wheels in the StarlingX build:

  • Base wheels - wheels that come from upstream source
  • StarlingX wheels - wheels produced by the StarlingX build

The build-wheel-tarball.sh tool in stx-root/build-tools/build-wheels is used to build and collect wheels and generate the wheels tarball. It uses two sub-tools (located in the same directory) to build and/or collect the two groups of wheels.

If you are not modifying any python modules, you can use the CENGN-built wheels tarball.

Base Wheels

The base wheels are built and/or collected by the build-base-wheels.sh script, which is called from build-wheel-tarball.sh. It uses a Dockerfile in stx-root/build-tools/build-wheels/docker to set up a wheel-builder container, which runs the docker-build-wheel.sh script. This script takes a wheels.cfg file as input (e.g. master-wheels.cfg), which provides a list of wheels and build/download directives. The wheels.cfg file can specify wheel/module sources as:

  • pre-built wheel file to be downloaded
  • source git repo
  • source tarball
  • source zip

In addition, when building the "master" wheels tarball, the build-base-wheels.sh script will pull the loci/requirements:master-${OS} image, extracting the wheels from that image to provide the initial set. This allows us to keep the master wheels tarball at the latest upstream versions, with the exception of wheels that we explicitly build.
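The extraction step can be sketched roughly as follows. This is illustrative only: the actual mechanics live in build-base-wheels.sh, and the in-image wheels path used here is an assumption.

```shell
OS=centos
REQ_IMAGE=loci/requirements:master-${OS}

# Create a stopped container from the requirements image and copy the
# wheels out of it; guarded so the sketch is a no-op without a daemon.
if docker info >/dev/null 2>&1; then
    CID=$(docker create "$REQ_IMAGE" 2>/dev/null) || CID=
    if [ -n "$CID" ]; then
        # /tmp/wheels is an assumed location for the wheel files
        docker cp "$CID:/tmp/wheels" ./initial-wheels 2>/dev/null || true
        docker rm "$CID" >/dev/null
    fi
fi
```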

Example build command:

OS=centos
OPENSTACK_RELEASE=master

${MY_REPO}/build-tools/build-wheels/build-wheel-tarball.sh \
    --os ${OS} \
    --release ${OPENSTACK_RELEASE}

This will produce a wheels tarball in your workspace:

${MY_WORKSPACE}/std/build-wheels-${OS}-${OPENSTACK_RELEASE}/stx-${OS}-${OPENSTACK_RELEASE}-wheels.tar
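As a quick sanity check, you can list the wheel files collected in that tarball. A small sketch (the MY_WORKSPACE default here is only illustrative):

```shell
OS=centos
OPENSTACK_RELEASE=master
MY_WORKSPACE=${MY_WORKSPACE:-$HOME/workspace}   # illustrative default
TARBALL=${MY_WORKSPACE}/std/build-wheels-${OS}-${OPENSTACK_RELEASE}/stx-${OS}-${OPENSTACK_RELEASE}-wheels.tar

# List the first few wheel files in the tarball, if it exists
if [ -f "$TARBALL" ]; then
    tar -tf "$TARBALL" | grep '\.whl$' | head
fi
```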

StarlingX Wheels

The StarlingX build provides support for producing python wheels during the build. For CentOS, this means updating the package RPM specfile to build the wheel and package it in a -wheels package. The names of the wheels packages to be included in the tarball are listed in the wheels.inc files in the corresponding repo (e.g. centos_pike_wheels.inc).

The following diff provides an example of changes made to a specfile to add building a wheel to a package:

diff --git a/openstack/distributedcloud-client/centos/distributedcloud-client.spec b/openstack/distributedcloud-client/centos/distributedcloud-client.spec
index c6e17f6..7dc83f5 100644
--- a/openstack/distributedcloud-client/centos/distributedcloud-client.spec
+++ b/openstack/distributedcloud-client/centos/distributedcloud-client.spec
@@ -20,6 +20,8 @@ BuildArch:     noarch

 BuildRequires: python2-devel
 BuildRequires: python-setuptools
+BuildRequires: python2-pip
+BuildRequires: python2-wheel
 BuildRequires: python-jsonschema >= 2.0.0
 BuildRequires: python-keystonemiddleware
 BuildRequires: python-oslo-concurrency
@@ -75,10 +77,13 @@ rm -rf {test-,}requirements.txt tools/{pip,test}-requires
 %build
 export PBR_VERSION=%{version}
 %{__python2} setup.py build
+%py2_build_wheel

 %install
 export PBR_VERSION=%{version}
 %{__python2} setup.py install --skip-build --root %{buildroot}
+mkdir -p $RPM_BUILD_ROOT/wheels
+install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/

 # prep SDK package
 mkdir -p %{buildroot}/usr/share/remote-clients
@@ -94,3 +99,11 @@ tar zcf %{buildroot}/usr/share/remote-clients/%{pypi_name}-%{version}.tgz --excl
 %files sdk
 /usr/share/remote-clients/%{pypi_name}-%{version}.tgz

+%package wheels
+Summary: %{name} wheels
+
+%description wheels
+Contains python wheels for %{name}
+
+%files wheels
+/wheels/*

The get-stx-wheels.sh script, called by build-wheel-tarball.sh, will gather the set of -wheels packages, defined by the corresponding wheels.inc files, and extract the wheel files, making them available to the build-wheel-tarball.sh tool.

Wheels Tarball

The build-wheel-tarball.sh tool, after successfully calling build-base-wheels.sh and get-stx-wheels.sh, will collect the wheels built or downloaded and prep the tarball. It will also download the OpenStack requirements.txt and upper-constraints.txt files, which are used by loci when installing the python modules. The upper-constraints.txt file is modified based on the collected/built wheels, allowing us to override or append module specifications. The upper-constraints.txt file in the StarlingX wheels tarball then reflects the content of the tarball, to ensure the desired module versions are installed.
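The upper-constraints.txt override can be illustrated with a minimal sketch. The module name and versions below are made up, and the real logic lives in build-wheel-tarball.sh; this only shows the idea.

```shell
# Upstream upper-constraints.txt pins exact module versions
cat > upper-constraints.txt <<'EOF'
nova===18.0.0
oslo.config===6.4.0
EOF

# Suppose the wheels tarball contains a locally built nova 18.0.1 wheel;
# the pin is rewritten so loci installs the version actually in the tarball.
MODULE=nova
VERSION=18.0.1
sed -i "s/^${MODULE}===.*/${MODULE}===${VERSION}/" upper-constraints.txt
```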

Building Images

The StarlingX Docker Images are built using the build-stx-images.sh tool, in stx-root/build-tools/build-docker-images, using the image directives files for build instructions, with the base image and wheels tarball as input. The tool searches the StarlingX repos for a corresponding docker_images.inc file (e.g. centos_master_docker_images.inc), which lists the subdirectories containing the image directives files to be processed and built.

Image Directives Files

The image directives files provide the build arguments necessary for building a specific image. The first required option is BUILDER, which can be either "docker" or "loci".

docker

Images with BUILDER set to "docker" are built from a Dockerfile. The only other required option in the image directives file for "docker" builds is LABEL, the image name (e.g. stx-libvirt). The Dockerfile can use the StarlingX base image as its "FROM" by including the following at the top:

ARG BASE
FROM ${BASE}

The BASE is passed by build-stx-images.sh as a build argument.
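For a BUILDER=docker image, the effect is roughly equivalent to the following manual build. This is a sketch: the image name and build context directory are illustrative, not taken from the tool.

```shell
BASE=starlingx/stx-centos:dev-latest
IMAGE=stx-libvirt            # illustrative image name
CONTEXT=./libvirt/centos     # illustrative docker build context

# build-stx-images.sh supplies the base image via --build-arg BASE=...
if docker info >/dev/null 2>&1 && [ -d "$CONTEXT" ]; then
    docker build --build-arg BASE="$BASE" -t "$IMAGE" "$CONTEXT"
fi
```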

For an example of a BUILDER=docker image, see https://github.com/openstack/stx-integ/tree/master/virt/libvirt/centos

loci

The loci project (https://github.com/openstack/loci) provides a mechanism for building images using a python module as the main project source. The image directives file for BUILDER=loci images allows you to specify supporting python modules or packages to be installed, in addition to the main project source repo and/or branch. build-stx-images.sh also supports specifying a customization command that is applied to the loci-built image. Options supported by BUILDER=loci image directives files that are passed on to loci include:

  • LABEL: the image name
  • PROJECT: main project name
  • PROJECT_REPO: main project source git repo
  • PROJECT_REF: git branch or tag for main project source repo
  • PIP_PACKAGES: list of python modules to be installed, beyond those specified by project dependencies or requirements
  • DIST_PACKAGES: additional distro packages to be installed (e.g. RPMs from a repo configured by the base image)
  • PROFILES: bindep profiles supported by the project to be installed (e.g. apache)

In addition, you can specify a bash command in the CUSTOMIZATION option to modify the loci-built image.

Example: stx-upstream/openstack/python-nova/centos/stx-nova.master_docker_image

BUILDER=loci
LABEL=stx-nova
PROJECT=nova
PROJECT_REPO=https://github.com/openstack/nova.git
PIP_PACKAGES="pycrypto httplib2 pylint"
DIST_PACKAGES="openssh-clients openssh-server libvirt e2fsprogs"
PROFILES="fluent nova ceph linuxbridge openvswitch configdrive qemu apache"
CUSTOMIZATION="yum install -y openssh-clients"

When an image is built without a main project source git repo, with the main project source instead coming from a wheel, you can set PROJECT to infra and loci skips the git clone steps. For example, stx-nova-api-proxy: stx-nfv/nova-api-proxy/centos/stx-nova-api-proxy.master_docker_image

BUILDER=loci
LABEL=stx-nova-api-proxy
# Set PROJECT=infra and PROJECT_REPO=nil because we are not cloning a repo
PROJECT=infra
PROJECT_REPO=nil
PIP_PACKAGES="api_proxy eventlet oslo.config oslo.log \
              paste PasteDeploy routes webob keystonemiddleware pylint"

Image Build Command

Example image build command, using the CENGN base image and wheels:

OS=centos
OPENSTACK_RELEASE=master
CENTOS_BASE=starlingx/stx-centos:f-stein-latest
WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein/centos/latest_docker_image_build/outputs/wheels/stx-centos-master-wheels.tar
DOCKER_USER=${USER}
DOCKER_REGISTRY=192.168.0.1:9001 # Some private registry you've set up for your testing, for example

time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \
    --os centos \
    --release ${OPENSTACK_RELEASE} \
    --base ${CENTOS_BASE} \
    --wheels ${WHEELS} \
    --user ${DOCKER_USER} --registry ${DOCKER_REGISTRY} \
    --push --latest \
    --clean

To build using the wheels tarball from your own build instead, set:

WHEELS=http://${HOSTNAME}:8088/${MY_WORKSPACE}/std/build-wheels-${OS}-${OPENSTACK_RELEASE}/stx-${OS}-${OPENSTACK_RELEASE}-wheels.tar

Note: To use a local wheels tarball, loci must be able to fetch it via wget from within a docker container. This may require changes to your http server configuration and iptables rules to allow access from the docker containers.

## Note: Verify that lighttpd is not bound to "localhost"
vi /etc/lighttpd/lighttpd.conf
# server.bind = "localhost"
systemctl restart lighttpd

## Note: You may need to add an iptables rule to allow the docker
## containers to access the http server on your host. For example:
iptables -I INPUT 6 -i docker0 -p tcp --dport ${HOST_PORT} -m state --state NEW,ESTABLISHED -j ACCEPT
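One way to verify that access from a container works is a quick header fetch. A rough sketch (the centos:7 container image and curl invocation are just one option, and the URL shape matches the tarball path above):

```shell
# URL of your locally served wheels tarball, as constructed above
WHEELS_URL=http://${HOSTNAME:-$(hostname)}:8088/${MY_WORKSPACE}/std/build-wheels-centos-master/stx-centos-master-wheels.tar

# Fetch the headers from inside a container to confirm reachability
if docker info >/dev/null 2>&1; then
    docker run --rm centos:7 curl -sSfI "$WHEELS_URL" || true
fi
```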

If you only want to build specific images, build-stx-images.sh provides --only and --skip options (e.g. --only stx-nova).