About training labs
OpenStack Training Labs is a collection of scripts that installs a working OpenStack cluster on your computer almost at the push of a button. It is an automated, fast, reliable, and reproducible way of following the OpenStack install guides to create a cluster using VirtualBox/KVM virtual machines, and it should run on most common hardware (desktops and laptops) out of the box.
The Training Labs website is up and running. If you just want to deploy a given OpenStack release, download the tarball or zip file for your platform from the Training Labs page at http://docs.openstack.org/training_labs/ and get started with OpenStack.
The scripts support Linux, Mac OS X, and Windows as host operating systems. Other platforms are likely to work as well, as long as they have VirtualBox. They currently install the Kilo, Juno, and Icehouse releases of OpenStack on Ubuntu 14.04 LTS.
Any reasonably modern laptop or desktop PC should be able to run the training labs. The most likely bottleneck is main memory: on a 4 GB PC, close major memory consumers such as web browsers before starting a cluster. The virtual machines that make up the cluster and the ISO image required to install them consume less than 10 GB of disk space.
You need VirtualBox on any supported platform, or KVM on Linux.
You also need git in order to download the training labs repo, for example:
git clone git://git.openstack.org/openstack/training-labs.git
Additional requirements for Windows:
- access to a POSIX environment (Linux, OS X, UNIX, Cygwin, ...) to run a bash script that generates DOS batch files.
- an ssh client such as PuTTY, or the OpenSSH client in Cygwin.
Building the cluster
On all platforms, log files are written to the training-labs/labs/log directory while the cluster is building.
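To watch a build's progress, the newest file in the log directory can be inspected. The following sketch picks the most recently modified log; the directory path comes from the text above, while the file-selection approach and the use of tail are just one way to do it:

```shell
# Show the tail of the newest build log, if any exists yet
# (directory path from the text; file names are whatever osbash writes).
LOGDIR=training-labs/labs/log
latest=$(ls -t "$LOGDIR" 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  tail -n 20 "$LOGDIR/$latest"   # use tail -f to keep following
else
  echo "no logs yet in $LOGDIR"
fi
```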
Linux and Mac OS X
The cluster is built in three phases:
- Download the OS image. This phase is skipped if the image exists already in the training-labs/labs/img directory.
- Build a base disk, about 15 to 30 minutes. This phase is skipped if the base disk exists already.
- Build the node VMs based on the base disk, about 15 to 30 minutes.
cd training-labs/labs/osbash
./osbash.sh -b cluster
By default, the cluster is built on VirtualBox VMs. To use KVM instead, set the environment variable PROVIDER to kvm.
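For example, selecting the provider for a build could look like this. The `:=` default-value handling below is just a shell sketch of the behavior described above, not osbash's actual configuration code:

```shell
# Select the provider for this shell; osbash reads the PROVIDER
# environment variable (the default is VirtualBox per the text above).
: "${PROVIDER:=virtualbox}"   # keep any value already set, else default
echo "provider: $PROVIDER"

# To build with KVM instead, set the variable for the command:
# PROVIDER=kvm ./osbash.sh -b cluster
```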
The command builds a base disk which contains the operating system and the software needed for the OpenStack cluster. After the base disk, the command builds three node VMs (controller, compute, network).
If you execute the same command again, the existing node VMs are deleted and recreated based on the existing base disk. If you want to rebuild the base disk, too, either delete the disk file in the labs/img directory, or use this command:
./osbash.sh -b basedisk
Generate DOS batch files
The batch files that create the cluster need to be generated only once. You need to do this in a POSIX environment that contains bash; a Linux or UNIX installation is fine, as is Cygwin.
In a POSIX environment:
cd training-labs/labs/osbash
./osbash.sh -w cluster
The DOS batch files are created in a new directory named wbatch. Transfer them to Windows.
Creating the cluster under Windows
Run the following three scripts in this order. The first two scripts are only needed once:
- Creates the host-only networks used by the node VMs to communicate. The script asks for the elevated privileges needed for that task. You only need to run this script once; the network configuration is saved by VirtualBox. You can verify the configured networks in the VirtualBox GUI: File->Preferences->Network->Host-only Networks.
- Creates the base disk. You only need to run this script once (and every time you want to update the base disk). This script downloads the OS image needed to build the base disk to training-labs\labs\img, if it doesn't exist, and asks the user to hit a key to proceed after downloading.
- Creates the node VMs based on the base disk.
Note: The Windows batch scripts still have some limitations. For instance, if they find an existing node VM of the same name, they print an error and exit. Do not start a batch script if another one is still running.
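Since the batch scripts abort when they find an existing node VM of the same name, one way to check beforehand is VirtualBox's command-line tool. The node name "controller" is taken from the text; the check itself is a sketch, not part of the batch scripts:

```shell
# Check whether a node VM named "controller" already exists before
# re-running a build script (VBoxManage is VirtualBox's CLI tool).
if VBoxManage list vms 2>/dev/null | grep -q '"controller"'; then
  echo "VM 'controller' already exists; remove it before rebuilding:"
  echo "  VBoxManage unregistervm controller --delete"
fi
```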
Using the cluster
By default, the cluster is built in headless mode. In that case, the cluster nodes are accessed via secure shell (ssh). TCP ports 2230 through 2232 on localhost are forwarded to the nodes.
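To verify that the forwarded ports are reachable before logging in, a quick probe can help. This is a sketch using bash's /dev/tcp pseudo-device (so no extra tools are needed); the port numbers are the defaults named above:

```shell
# Probe the default forwarded ports on localhost (2230-2232).
# check_port returns 0 if something is listening on the given port.
check_port() {
  (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null
}

for port in 2230 2231 2232; do
  if check_port "$port"; then
    echo "port $port: open"
  else
    echo "port $port: closed (is the VM running?)"
  fi
done
```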
To get shell prompts on the cluster nodes:
ssh -p 2230 osbash@localhost # controller (includes network on the Liberty version)
ssh -p 2231 osbash@localhost # network node; on the Liberty version, compute node
ssh -p 2232 osbash@localhost # pre-Liberty versions: compute node
The password is osbash. To become root, use sudo.
The PuTTY client allows you to set the port number on the Settings->Session page before making the connection.
Console windows for the VirtualBox VMs can be displayed by stopping the VMs and starting them again from the VirtualBox GUI. Alternatively, in order to have the console always on, even during the build, add the "-g gui" option to the osbash commands. For instance:
./osbash.sh -g gui -b cluster
./osbash.sh -g gui -b basedisk
./osbash.sh -g gui -w cluster # generate DOS batch files which always display console windows
Horizon is also accessed via a forwarded port. Use this URL to access the GUI:
Two accounts are configured: admin/admin_pass and demo/demo_pass.
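For command-line access with these accounts, environment variables along these lines are typical. The credentials come from the text above, but the project name and auth URL here are assumptions; adjust them to the endpoint of your setup (older releases may use /v2.0):

```shell
# Credentials from the text: demo / demo_pass (admin / admin_pass for admin).
export OS_USERNAME=demo
export OS_PASSWORD=demo_pass
export OS_PROJECT_NAME=demo                    # assumed project name
export OS_AUTH_URL=http://controller:5000/v3   # assumed Keystone endpoint
echo "configured CLI user: $OS_USERNAME"
```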
- Training-Labs Webpage: http://docs.openstack.org/training_labs/
- Slides from our presentation at the OpenStack Summit 2015 in Tokyo: https://docs.google.com/presentation/d/1PYe1SQnAL8DxOXcnGI8O-1YAW-Z45P2IXj4NMb3VTNo/
- training-labs repo: http://git.openstack.org/cgit/openstack/training-labs/
- Launchpad bug tracker: https://launchpad.net/labs
- Review queue: https://review.openstack.org/#/q/status:open+project:openstack/training-labs,n,z
- Old labs section in Training guides: http://git.openstack.org/cgit/openstack/training-guides/tree/labs
- Spec: http://specs.openstack.org/openstack/docs-specs/specs/liberty/training-labs.html
- For more information, see the Training Labs Team Meeting page.
- Pranav Salunke, IRC: dguitarbite
- Roger Luethi, IRC: rluethi
- Bernd Bausch, IRC: berndbausch
- Name, IRC: Nick, role/interests.