Magnum is an OpenStack API service developed by the OpenStack Containers Team that makes container orchestration engines such as Docker Swarm, Kubernetes, and Apache Mesos available as first-class resources in OpenStack. Magnum uses Heat to orchestrate an OS image containing Docker and Kubernetes, and runs that image on either virtual machines or bare metal in a cluster configuration.
Getting Started / Download
To get started with Magnum, see our Quickstart Guide.
The project is under active development by our OpenStack Containers Team. We meet weekly by IRC.
- We want you to contribute to Magnum!
- Launchpad Project Pages
- Mailing List
- Code Reviews
- Code Repository
- git clone git://git.openstack.org/openstack/magnum
Our developers use IRC in #openstack-containers on Freenode for development discussion.
- The weekly Containers IRC meeting is held on Tuesdays at 1600 UTC [schedule].
- 2016 Containers Meeting Archive
Frequently Asked Questions
1) How is Magnum different from Nova?
Magnum provides a purpose-built API to manage container orchestration engines, which have a distinctly different life cycle and operations from Nova (machine) instances. In fact, Magnum uses Nova instances to compose its Bays.
2) How is Magnum different from Docker or Kubernetes?
Magnum offers an asynchronous API that is compatible with Keystone, along with a complete multi-tenancy implementation. It does not perform orchestration internally; instead, it relies on OpenStack Orchestration (Heat). Magnum does leverage both Kubernetes and Docker as components.
3) Is this the same thing as Nova-Docker?
No. Nova-Docker is a virt driver for Nova that allows containers to be created as Nova instances. This is suitable for use cases where you want to treat a container like a lightweight machine. Magnum provides container orchestration engine management that is beyond the scope of Nova's API, and implements its own API to surface these features in a way that is consistent with other OpenStack services. Containers started on Magnum Bays run on top of Nova instances that are created using Heat.
4) Who is Magnum for?
Magnum is for OpenStack cloud operators (public or private) who want to offer a self-service hosted containers service to their cloud users. Magnum simplifies the required integration with OpenStack, and allows cloud users who can already launch cloud resources such as Nova instances, Cinder volumes, and Trove databases to also create container clusters (Bays) that run applications in an environment with advanced features beyond the scope of existing cloud resources.
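As a sketch of the self-service workflow this enables, a cloud user with the Magnum CLI might define a Bay model and then launch a Bay from it roughly as follows. The names, image, keypair, network, and flavor values here are illustrative assumptions, and these commands require a live OpenStack cloud with Magnum deployed; see the Quickstart Guide for tested commands.

```shell
# Define a Bay model describing the desired cluster
# (all parameter values below are illustrative)
magnum baymodel-create --name k8sbaymodel \
    --image-id fedora-atomic-latest \
    --keypair-id testkey \
    --external-network-id public \
    --flavor-id m1.small \
    --coe kubernetes

# Launch a Bay of Nova instances from that model (orchestrated by Heat)
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2
```

Once the Bay is up, the user can point their native Kubernetes tooling at it and deploy containers as usual, with tenant isolation handled by Magnum and Nova underneath.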
5) Will I get the same thing if I use the Docker resource in Heat?
No, the Docker Heat resource does not provide a resource scheduler, or a choice of container technology used. It is specific to Docker, and uses Glance to store container images. It does not currently allow for layered image features, which can cause containers to take longer to start than if layered images are used with a locally cached base image. Magnum leverages all of the speed benefits that Docker offers, and implements Kubernetes and Mesos as alternate choices to Docker Swarm for container orchestration.
6) What does multi-tenancy mean in Magnum (Is Magnum Secure)?
Resources such as Bays started by Magnum can only be viewed and accessed by users of the tenant that created them. Bays are not shared, meaning that containers will not run on the same kernel as those of neighboring tenants. This is a key security feature: containers belonging to the same tenant can be tightly packed within the same Pods and Bays, while containers of different tenants run on separate kernels (in separate Nova instances). This differs from using a system like Kubernetes without Magnum, which was originally designed for use by a single tenant and leaves the security isolation design up to the implementer. Magnum provides the same level of security isolation as Nova does when running virtual machines belonging to different tenants on the same compute nodes.