Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts

Revision as of 02:56, 4 December 2013

Blueprint: https://blueprints.launchpad.net/solum/+spec/lang-pack
Drafted by: Clayton Coleman

Objectives

  • Support turning user code into immutable server images that can be run, scaled, and deployed in an OpenStack environment easily.
  • Allow consumers to be extremely precise about dependencies, down to the kernel where desired.
  • Create a simple model that can fit a wide range of build and deployment workflows (all workflows?)
  • Support the following existing models - be simple enough to encompass all, but avoid hiding complexity and features
    • vm images (linux or windows)
    • docker containers
    • heroku build packs
    • openshift cartridges
  • Offer simple flows for most use cases, and allow complex composition of source code and runtime environments where necessary


All flows take a base image (docker container, other container, or vm), inject some artifacts (source code, binaries, static config), execute a command inside a running instance based on that image, and then initialize an execution environment. The resulting snapshot (and metadata) becomes a deployment artifact that can be started in an OpenStack environment and have network traffic routed to it. In general, Solum expects to closely link a source code repository or input binary file format to a process that creates executable isolated processes, but developers should be free to implement arbitrary complexity.
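The flow above can be sketched end to end. This is purely a local simulation: the "base image" is a directory tree, the "snapshot" is a tarball, and every path and filename is illustrative rather than part of any Solum API.

```shell
#!/bin/sh
# Local simulation of the generic flow: base image + injected inputs
# + a known prepare command -> snapshot (the deployment artifact).
set -eu
work=$(mktemp -d)

# 1. "Base image": a directory tree standing in for a VM/container image.
mkdir -p "$work/image/app"

# 2. Inject inputs: a source tarball placed at a contracted location.
mkdir -p "$work/src"
echo 'print("hello")' > "$work/src/app.py"
tar -C "$work/src" -cf "$work/image/input.tar" .

# 3. Run the known prepare command "inside" the image: unpack, build,
#    and clean up temporary files so they are not captured in the snapshot.
tar -C "$work/image/app" -xf "$work/image/input.tar"
rm "$work/image/input.tar"

# 4. Snapshot the mutated filesystem as the deployment artifact.
tar -C "$work/image" -cf "$work/artifact.tar" .
echo "artifact: $work/artifact.tar"
```

In a real deployment, step 1 would be a nova/container boot, step 4 a nova snapshot or `docker commit`, and the artifact would be registered with Glance.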


Personas

  • System Operator
 Provides sets and groups of base images to developers.  A common operator might be a large company with a set of approved development platforms, or a service provider that has supported development frameworks.
  • Developer
 A developer may wish to use a language, framework, build system, or custom execution environment that corresponds to specific needs of their application.  The developer may also take an image a system operator provides and customize it, although this behavior would require the system operator to expose this capability.
  • Image author
 Might be a 3rd party provider of images or supported execution environments.  These images might be chosen by a developer (if a system operator exposes the ability to use custom images) or by a system operator offering redistribution.

Design

Each base image defines a **contract** for the source repository input it expects and how that source will be used. A Java image might expect to receive a standard maven project, would build that project using "mvn", and then produce a WAR file that can be started with Tomcat. Advanced composition is out of scope of this reference document.

There are two **types** of images defined today - VM images and Docker images. The Solum system would know how to handle images of these types, but other types may be added later. The blueprint does not depend on the use of Docker containers, but does define a model that works for both VMs (and thus for existing deployments of OpenStack) and Docker. Docker is a primary example of a container technology in Linux that allows efficient multi-tenancy of containers and advanced composition and replication scenarios.

The process of transforming a base image into a deployment artifact is called **preparation** - it can include a number of traditional steps such as build, test, and deployment onto a filesystem. Since the input and process of preparation may be organization or technology specific, Solum expects a deployable artifact to be a runnable image (vm or otherwise).


Inputs

  • A base image that can be started via a hypervisor or in a Linux container
    • image is assumed to have a writable filesystem
  • Arbitrary input parameters, source code, or binary artifacts
    • must have some sort of contract as to how it is placed inside the building instance
  • A known command (or set of commands) that will be invoked to transform the base image and the inputs into the final state of this image (the Output)
    • It is required that a single command be available to perform the overall transformation, which may or may not include integrated testing.
    • It is recommended that additional commands be considered to allow environment-native test cases or steps to be executed. These additional commands require use cases to justify them.

Outputs

  • A mutated filesystem that can support an execution environment for the injected inputs (i.e. run a built WAR)
    • stored in Glance and associated with the consuming application
    • The image author is responsible for securing the execution environment, i.e. by removing debugging or insecure build tools
  • Sufficient information to allow a consumer to interact with the container
    • network ports, the type of protocol those ports support, and any specific info to allow firewalls, load balancing, or port mapping
    • for docker containers, a run command
    • for vms, one or more services that run on startup


Example of Python running in a disk-image-builder vm

  1. author creates a disk-image-builder template with python on Fedora 19 via yum install
    1. author builds the image from that template and uploads it to the local glance server
  2. a Solum application is constructed pointing to this image as the basis for a build
  3. when the user pushes code to their git repo, a post receive hook notifies Solum to build commit ABC
  4. Solum starts a container via nova/container service based on the image, and bind mounts a tar of the repo at commit ABC to a certain location
  5. Solum instructs the container to execute a known command (or uses the default run command) that should build the source in the tarball
  6. the command in the container unpacks the tar, runs a pip install, deletes any temporary files, and symlinks a WSGI directory into /var/www/html where apache will run it on port 80
  7. when the command completes successfully, Solum creates a new snapshot via nova
  8. Solum registers the image with Glance and triggers a new deployment using that ID
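Step 6 above, the prepare command that runs inside the build instance, might look roughly like the following. Temp directories stand in for the bind-mounted input location and /var/www/html so the sketch runs outside a real container, and all paths are illustrative.

```shell
#!/bin/sh
# Sketch of an in-container prepare command for the Python example.
set -eu
mount=$(mktemp -d)      # stands in for the bind-mounted input location
webroot=$(mktemp -d)    # stands in for /var/www/html

# Fixture: the tarball Solum would bind mount at the contracted path.
srcdir=$(mktemp -d)
mkdir -p "$srcdir/wsgi"
echo 'def application(environ, start_response): pass' > "$srcdir/wsgi/app.py"
tar -C "$srcdir" -cf "$mount/src.tar" .

# 1. Unpack the source at a build location.
build=$(mktemp -d)
tar -C "$build" -xf "$mount/src.tar"

# 2. Install dependencies (elided here; a real image would run
#    something like "pip install -r requirements.txt").

# 3. Remove temporary files so they are not captured in the snapshot.
rm "$mount/src.tar"

# 4. Expose the WSGI directory where apache will serve it on port 80.
ln -s "$build/wsgi" "$webroot/app"
```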


Example of Java running in a Docker container

  1. author creates a docker file with java on Ubuntu 12 via apt-get
    1. author generates an image from that docker file and uploads it to the local glance server
  2. a Solum application is constructed pointing to this image as the basis for a build
  3. when the user pushes code to their git repo, a post receive hook notifies Solum to build commit ABC
  4. Solum starts a container via nova/container service based on the image, and bind mounts a tar of the repo at commit ABC to a certain location
  5. Solum instructs the container to execute a known command (or uses the default run command) that should build the source in the tarball
  6. the command in the container unpacks the tar, runs a maven build, deletes any temporary files, and moves the resulting WAR into a directory where a system service will run it on port 8080
  7. when the command completes successfully, Solum creates a new docker image via "docker commit" and ensures the proper metadata is set so that the WAR will be running when that image is started
  8. Solum registers the image with Glance and triggers a new deployment using that ID


Example of Python running on a Windows vm

  1. author creates a Windows 7 vm image that contains an installed Python distribution
    1. author uploads that image to glance
  2. a Solum application is constructed pointing to this image as the basis for a build
  3. when the user pushes code to their git repo, a post receive hook notifies Solum to build commit ABC
  4. Solum starts a vm via nova based on the image, with a user data script that downloads the tarball for the repo at commit ABC and executes a known command that should build the source in the tarball
  5. the command in the vm unpacks the tar, runs a pip install, and then defines a Windows service that will start on port 8080 at system start and run the provided source as a WSGI app.
  6. when the command completes successfully, Solum creates a new image via snapshot
  7. Solum registers the image with Glance and triggers a new deployment using that ID


Example of a Heroku buildpack running on a Docker container

  1. author creates a docker image with Ubuntu 12 and the set of packages that Heroku makes available (python 2.7, etc)
    1. In that image, the author creates a command that should work for any buildpack
    2. author uploads that image to glance
  2. a Solum application is constructed pointing to this image as the basis for a build, and includes an input parameter to a specific buildpack
  3. -- solum is notified of a change and starts preparing a deployment artifact as described above --
  4. the command in the container unpacks the source tar, downloads or clones the provided buildpack based on an input parameter, unpacks it to a temporary directory, and executes the buildpack bin/compile step against the source
  5. Since Heroku depends on a Procfile, the command creates a Procfile if necessary, installs the foreman gem, and then sets the run command to execute foreman against the home directory
  6. -- solum registers the completed image as a deployment artifact --
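The compile step in this flow can be exercised with a stub: the sketch below creates a fake buildpack locally and invokes its bin/compile hook with the conventional build-dir/cache-dir arguments. The stub buildpack, the app fixture, and the recorded run command are all stand-ins; a real flow would clone the buildpack named by the input parameter.

```shell
#!/bin/sh
# Sketch of the Heroku-style buildpack step against a stub buildpack.
set -eu
work=$(mktemp -d)

# Stub buildpack with the conventional bin/compile entry point.
mkdir -p "$work/buildpack/bin"
cat > "$work/buildpack/bin/compile" <<'EOF'
#!/bin/sh
# compile <build-dir> <cache-dir>: "builds" by dropping a marker file.
set -eu
echo compiled > "$1/.compiled"
EOF
chmod +x "$work/buildpack/bin/compile"

# App source plus a Procfile, which Heroku-style images require.
mkdir -p "$work/app"
echo 'web: python app.py' > "$work/app/Procfile"

# Execute the compile step, then record the run command for the image.
"$work/buildpack/bin/compile" "$work/app" "$work/cache"
echo 'foreman start -d "$HOME"' > "$work/run-command"
```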


Example of Redis running on a Docker container

  1. author creates a docker image that has a base Fedora 18 environment and yum installs redis
    1. The image prepare command looks only at the parameters passed to it
    2. author uploads that image to glance
  2. a Solum application is constructed pointing to this image as the basis for a build, and includes an input parameter for the database name and a database user
  3. -- solum is notified of a change and starts preparing a deployment artifact as described above --
  4. the command in the container:
    1. creates a second script that attempts to create a database in a mount directory if it does not exist, and then starts Redis
    2. makes that second script the image run command
  5. -- solum registers the completed image as a deployment artifact --

When this image is deployed, a persistent directory might be bind mounted to the container. The second script would look at its environment on startup, set up the Redis config file with that directory, and then start Redis. In this way an arbitrary service can be provisioned at deployment time. The version of Redis might be custom or stock.
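The "second script" baked in as the image run command might look like the sketch below: initialize the database directory on the (possibly bind-mounted) volume if it is missing, then start Redis. The environment variable name, paths, and config contents are illustrative, and the final Redis hand-off is shown as an echo so the sketch terminates without Redis installed.

```shell
#!/bin/sh
# Sketch of a first-boot wrapper for a Redis deployment artifact.
set -eu
data=${REDIS_DATA_DIR:-$(mktemp -d)/data}   # deployment may bind mount this

# First start: create the database directory and a config pointing at it.
if [ ! -d "$data" ]; then
    mkdir -p "$data"
    printf 'dir %s\nport 6379\n' "$data" > "$data/redis.conf"
fi

# A real image would now hand off with: exec redis-server "$data/redis.conf"
echo "would exec: redis-server $data/redis.conf"
```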


Example of an OpenShift cartridge running on a Docker container

  1. author creates a docker image with RHEL 6.x and the set of packages that OpenShift requires (base cartridge code, plus some shared runtime functionality)
    1. In that image, the author creates a prepare command that should work for any web framework or service cartridge
    2. An OpenShift cartridge is a manifest (metadata), binaries, and a set of known scripts (bin/setup, bin/control) that perform installation and process control
    3. author uploads that image to glance
  2. a Solum application is constructed pointing to this image as the basis for a build, and includes an input parameter to a specific cartridge
  3. -- solum is notified of a change and starts preparing a deployment artifact as described above --
  4. the command in the container:
    1. unpacks the source tar into the ~/app-root/runtime directory
    2. selects the appropriate cartridge based on the input (may need to download the cartridge locally)
    3. executes the cartridge bin/install and bin/setup steps
    4. executes the cartridge build step
    5. moves the build output to the ~/app-root/runtime directory
    6. registers the "gear start" command as the image run command
  5. -- solum registers the completed image as a deployment artifact --


Contract

Each base image must define a *contract* for the source input it accepts and the output it generates. The contract for Python may differ greatly from the contract for Java, especially since different languages may have one or more commonly used build or runtime environments that may be composed.

Example contract for Java, Tomcat 7, and Maven builds:

  1. Source repository follows the standard maven build structure
  2. A "mvn" command run in the root of the source repository will build the correct output
  3. The output WAR is sent to the default maven build directory by the maven build
  4. The runtime environment will be Tomcat 7.0 on top of Java 1.7.
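A prepare command could verify points 1-2 of this contract before invoking the build. The sketch below checks a repository fixture against the standard Maven layout; the fixture and the failure handling are illustrative, and the actual `mvn package` invocation is only described in a comment since no Maven toolchain is assumed.

```shell
#!/bin/sh
# Sketch of validating a source repository against the Java/Maven contract.
set -eu
repo=$(mktemp -d)

# Fixture: a minimal repository that satisfies the contract.
mkdir -p "$repo/src/main/java" "$repo/src/main/webapp"
touch "$repo/pom.xml"

# Contract checks: a pom at the root and the standard source layout.
[ -f "$repo/pom.xml" ] || { echo 'contract: missing pom.xml'; exit 1; }
[ -d "$repo/src/main/java" ] || { echo 'contract: missing src/main/java'; exit 1; }

# A real prepare step would now run "mvn package" in $repo and pick up
# the WAR from the default build directory (target/).
echo "contract satisfied for $repo"
```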


Metadata

  • Name, description, ???
  • The command that must be executed to begin the preparation process to transform the image
  • The parameters that are supported as input to preparation
  • The standard network configuration of this image (may be altered during preparation)
  • The standard execution command of this image (may be altered during preparation)
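Such metadata might be serialized alongside the image. The JSON shape below is purely illustrative; none of the field names are defined by Solum or this blueprint:

```json
{
  "name": "java-maven-tomcat7",
  "description": "Builds Maven projects, runs the WAR on Tomcat 7",
  "prepare_command": "/opt/solum/prepare.sh",
  "parameters": ["MAVEN_OPTS"],
  "network": [{"port": 8080, "protocol": "http"}],
  "run_command": "/usr/sbin/tomcat7 run"
}
```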


Updating deployment units and base images

There are three types of updates that are generally applied that might cause a deployment unit to need to be updated. They are

  1. user application update
  2. image author update that changes how a build is run, fixes a bug, or updates a key dependency
  3. operator security update of a package in an image


The first is part of the Solum deployment flow and is described in other documents.

An image author may wish to make new versions of their base images available - that process may include a registry of base images included with Solum along with the necessary API to add, update, and delete images. This topic may be covered in future blueprints.

The need to periodically apply security or bug fixes naturally leads to a model where both an operator and author may generate an image on their own terms. In some scenarios, the operator may allow images to be uploaded directly, but by doing so may lose the ability to offer developers a controlled security update stream. As a consequence of operators needing to propagate security updates to a large number of images (the base images, the deployments), this blueprint describes how that might occur:

  1. Operator identifies a security update to a set of packages or products
  2. Operator identifies the set of affected base images that it knows how to recreate
    1. For each base image, run a recreation step that applies the security update, or refreshes the image from an upstream source
      1. For each application that utilizes those base images, either notify the application owner, or trigger a new preparation and deployment of the application
  3. Operator identifies the set of base images and external deployment artifacts that it cannot recreate
    1. For each base image, notify the application owner
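The fan-out in steps 1-2 can be sketched as a loop over the operator's records. The manifest format, image names, application names, and the emitted "recreate"/"rebuild" actions below are all stand-ins for whatever registry and API an operator actually has.

```shell
#!/bin/sh
# Sketch of security-update fan-out: affected images -> dependent apps.
set -eu
work=$(mktemp -d)

# Operator's records: which images are affected, and who consumes them.
printf 'fedora19-python\nubuntu12-java\n' > "$work/affected"
cat > "$work/consumers" <<'EOF'
fedora19-python app-billing
fedora19-python app-reports
ubuntu12-java   app-store
EOF

# For each affected base image, recreate it, then trigger a new
# preparation for every application built on it.
while read -r image; do
    echo "recreate $image" >> "$work/actions"
    awk -v img="$image" '$1 == img { print "rebuild", $2 }' \
        "$work/consumers" >> "$work/actions"
done < "$work/affected"
```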


The need to refresh or recreate an image to take security updates implies that creating and maintaining images should emphasize the recipe and processes by which an image is created rather than the actual binary images themselves. For docker containers, a dockerfile is a natural mechanism by which such recipes can be distributed, and the operator may wish to offer base images they create themselves. For virtual machines, base images and well-defined user scripts can offer similar flexibility. It is hoped that communities of base image authors will form around creating, sharing, and maintaining these recipes.

User selection

This blueprint does not define the method by which developers browse, select, and use base images to create their deployment units. It is reasonable to assume a set of registries both public and operator maintained that add user facing metadata (names, descriptions, categorization) and enhanced capabilities to the images themselves.


Naming

A change to the current name *language pack* has been proposed because "language" is too specific - the base image may not do any actual compilation, nor depend on any particular language. "Build pack" overlaps with what Heroku offers, and since this spec describes a broader concept, there was general agreement that that term was also too specific.

Proposed new terms:

  • StackPack
  • Layer
  • Strata
  • Badger (working term during F2F discussion)
  •  ???


Portability

The goal of a base image is to provide a contract by which identical source repositories can be built and executed on different platforms and runtime environments. We expect different development communities to iterate on those contracts, and wish to encourage an open approach to supporting and standardizing contracts. Operators must often make choices about the operating systems, packages, and languages they wish to support - we therefore wish to make the sharing of recipes for base image creation the primary mechanism of reuse, and the per-language/runtime contract the interoperability mechanism for applications. Some operators may allow direct execution of arbitrary base images, but those images are considered user maintained rather than operator maintained (the operator cannot easily provide security updates).