Nova compute will need the ability to discover available images as well as a mechanism for fetching those images from a variety of object stores. Image discovery will be provided through an image registry (Parallax). The remote fetching of images will be provided by a caching-image-proxy (Teller, formerly known as the Iback). Collectively these components make up the Glance project.
One of the goals of OpenStack is to enable portability between cloud providers. In order to meet this objective we need a means of globally identifying, describing, locating, and retrieving images.
Image identifiers need to be universal across all installations of OpenStack, as instances of those images may be moved from one cloud provider to another. Parallax provides a system for registering, discovering and globally identifying images.
Nova control domains may be isolated from the Internet for added security. In addition, images may be provided across diverse object stores, both public and private. Nova nodes should be able to access these images through a single proxy. Teller serves as this proxy and provides a common interface to these various object stores. Finally, as an option, Teller may cache frequently accessed images to improve performance.
A user wishes to build a server from a public image. The user queries Parallax for the list of available images and builds a VM from one of them by passing its Parallax URI as the `image_uri` parameter of the Nova API’s create instance call. The `image_uri` is propagated to the node agent, which makes an HTTP request to Teller, passing along the `image_uri`. Teller, in turn, fetches the image from the object store and returns it to the node agent as the body of the HTTP response. The node agent unpacks the image, performs any fix-ups needed, and then boots the instance.
- There is a need to support multiple image registries
- Some image registries will be public; others will require credentials for access
- Images will have metadata that must be easily searchable via Parallax
- There is a need to support a variety of object stores (Swift, S3, the local file system, etc.)
- Some object stores will be publicly accessible; others will require credentials for access.
- Object stores may require large images to be split into chunks.
This design proposes two new components: the Parallax image registry and the Teller caching image proxy.
The Parallax image registry is a ReSTful web service which is queried to provide a list of available images matching a set of criteria. The query response will contain, among other things, the name of the image, the location of the image, and a URI which is the globally unique identifier for the image. Having a URI as the image identifier will enable Nova to support multiple image registries as well as facilitate sharing of images between OpenStack installations. Depending on the circumstances, Parallax servers may optionally require authentication to perform some or all operations and may use SSL for transport.
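A Parallax query response might look something like the following. The field names, host names, and checksum format here are purely illustrative, not a finalized schema:

```python
import json

# Hypothetical Parallax query response; all names and values are
# illustrative assumptions, not a finalized schema.
response = json.loads("""
{
  "images": [
    {
      "name": "ubuntu-10.04-server",
      "uri": "http://parallax.example.com/images/1",
      "location": "swift://swift.example.com/images/ubuntu-10.04-server",
      "checksums": ["d41d8cd98f00b204e9800998ecf8427e"]
    }
  ]
}
""")

image = response["images"][0]
```

Note that the `uri` field doubles as the globally unique identifier, which is what lets a second OpenStack installation refer back to the originating registry.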
The Nova API “create instance” call will support a new parameter, `image_uri`, which will contain the Parallax URI of the image to be used.
The `image_uri` will be passed down to the node agent which in turn will make a GET request to Teller using the `image_uri` to fetch the image.
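The node agent's side of this exchange is little more than a streaming HTTP download. A minimal sketch, in which the query-parameter form of the Teller URL is an assumption (the real request format is undecided):

```python
import shutil
import urllib.request
from urllib.parse import quote

def fetch_image(teller_url, image_uri, dest_path):
    """Stream the bundled image from Teller to a local file.

    Passing the Parallax URI as a query parameter is an illustrative
    assumption; only the GET-with-image_uri shape is from the design.
    """
    url = "%s/image?uri=%s" % (teller_url, quote(image_uri, safe=""))
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        # copyfileobj streams in chunks, so large images never need to
        # fit in the node agent's memory.
        shutil.copyfileobj(resp, out)
```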
Teller, upon receiving an image GET request, will use the `image_uri` to verify that the target image is still available. If it is, Teller will consult its cache; if the image is present there, Teller will return a 200 OK and begin streaming the data as part of the response body.
If the image is not present in the cache, Teller will attempt to verify that the objects residing in the object store are still present and haven’t been modified before returning a response. For the Swift backend, Teller will verify that the objects exist by performing a HEAD request on each of the required objects (for other backends, the method of testing presence will differ). The HEAD response will also include an Etag, which Teller will compare with a `checksums` field stored in Parallax. If an object is not present or the checksums do not match, Teller will return a 500-class error. If both checks succeed, Teller will return a 200 OK and begin streaming the data from the object store, proxying it back to the node agent. If the image is composed of multiple chunks in the object store (e.g. a 10 GB image stored as two 5 GB objects in Swift), the chunks will be seamlessly concatenated by Teller so that the node agent receives a single file: the bundled image.
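The checksum comparison and chunk concatenation can be sketched as follows. The function signature and names are illustrative; `fetch_chunk` stands in for whatever object-store client the backend adapter provides, and Swift Etags happen to be MD5 digests, which is what makes the comparison below line up:

```python
import hashlib

def verify_and_concatenate(chunk_names, expected_checksums, fetch_chunk):
    """Sketch of Teller's chunk handling for a Swift-like backend.

    `chunk_names` is the ordered list of object names making up one
    image, `expected_checksums` the matching list from Parallax, and
    `fetch_chunk` a stand-in for the object-store client (all three
    names are illustrative assumptions). Yields the image data chunk
    by chunk, so concatenation is just iteration order.
    """
    for name, expected in zip(chunk_names, expected_checksums):
        data = fetch_chunk(name)
        etag = hashlib.md5(data).hexdigest()  # Swift Etags are MD5 digests
        if etag != expected:
            raise ValueError("checksum mismatch for %s" % name)
        yield data
```

A real implementation would stream each object rather than buffer it, and would issue the HEAD requests before sending the 200 OK, since the status line cannot be changed once streaming has begun.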
The node agent will be responsible for unbundling the image based on the specified encoding and format (e.g. unzipping and untarring the image).
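For a gzipped tarball, which is one plausible bundle format (the actual set of supported encodings and formats is still to be decided), the unbundling step reduces to:

```python
import tarfile

def unbundle_image(bundle_path, dest_dir):
    """Sketch of the node agent's unbundling step, assuming the bundle
    is a gzipped tarball; other encodings would need their own handlers.
    """
    with tarfile.open(bundle_path, "r:gz") as tar:
        tar.extractall(dest_dir)
```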
Since Parallax is conceptually just a key/value store exposed through a ReSTful web service, we’re left with a lot of flexibility in the implementation. For an initial version, a small Eventlet webserver backed by a SQLite or MySQL database would be sufficient. As demands on the system increase, we can add caching and load-balancing, and potentially move to a NoSQL store like Redis or Cassandra.
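The key/value core is small enough to sketch directly. A minimal SQLite-backed storage layer, with an illustrative schema (in the proposed service this would sit behind the Eventlet webserver and be exposed over ReST):

```python
import json
import sqlite3

# Illustrative schema: Parallax URI -> JSON metadata blob.
def make_registry(db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS images (uri TEXT PRIMARY KEY, metadata TEXT)")
    return conn

def register_image(conn, uri, metadata):
    conn.execute("INSERT OR REPLACE INTO images VALUES (?, ?)",
                 (uri, json.dumps(metadata)))
    conn.commit()

def lookup_image(conn, uri):
    row = conn.execute(
        "SELECT metadata FROM images WHERE uri = ?", (uri,)).fetchone()
    return json.loads(row[0]) if row else None
```

Searchable metadata (one of the requirements above) would push this toward real columns or indexes rather than an opaque JSON blob, which is part of why MySQL is listed alongside SQLite.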
Teller will be composed of a few discrete components which can be developed and tested independently. Among them will be a backend adapter module (Swift, HTTP, S3, etc), a module for querying Parallax for location information, and a web server that will expose a ReSTful web service for images. Since Teller uses plain HTTP to communicate with the node agent, we can use a stock HTTP caching proxy (e.g. Varnish) for the image caching. In addition, we can add multiple Teller servers to a cluster and use a load-balancer to distribute the requests.
Test / Demo Plan
Using fakes in place of Parallax and the object store, we will be able to functionally test an image fetch request.
Credentials will need to be managed for private Parallax image registries as well as private object stores. We need to design a mechanism for making authenticated requests to these systems on behalf of the user. Is OAuth an option here?
BitTorrent could be used in two places: externally, to fetch images into the system, and internally, to distribute I/O load among all of the host machines when fetching cached images (hundreds of hosts versus one to tens of Teller servers). We will need to discuss whether one or both of these options makes sense down the road.