- 1 About training labs
- 2 Supported Platforms
- 3 Dependencies
- 4 Proxy
- 5 Building the cluster
- 6 Using the cluster
- 7 Managing and Maintaining our Project
- 8 Quick Links
- 9 Meeting Information
- 10 Team Members
About training labs
Openstack Training Labs is a collection of scripts that install a working OpenStack cluster on your computer almost at the push of a button. It's an automated, fast, reliable and reproducible way of following OpenStack install guides to create a cluster using VirtualBox/KVM virtual machines and should run on most common hardware (Desktops/Laptops) out of the box.
The Training Labs website is up and running. If you just want to deploy a given OpenStack release, download the tarball or zip file for your platform from the Training Labs page and get started with OpenStack.
The scripts support Linux, Mac OS X, and Windows as host operating systems. Other platforms are likely to work as well, as long as they can run VirtualBox. The scripts currently install the Kilo, Juno, and Icehouse releases of OpenStack on Ubuntu 14.04 LTS.
Any reasonably modern laptop or desktop PC should be able to run the training labs. The most likely bottleneck is main memory - on a 4GB PC, close major memory consumers like browsers before starting a cluster. Less than 10GB of disk space will be consumed by the virtual machines that make up the cluster and the ISO image required to install them.
You need VirtualBox on any supported platform, or KVM on Linux. On Windows, `VBoxManage.exe` should be on the PATH.
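As a quick sanity check that the VirtualBox command-line tool is reachable (a sketch; the exact version output depends on your installation):

```shell
# Verify that the VirtualBox CLI is on the PATH.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage --version   # prints the installed VirtualBox version
else
    echo "VBoxManage not found on PATH"
fi
```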
You also need git in order to download the training labs repo, for example:
git clone git://git.openstack.org/openstack/training-labs.git
Additional requirements for Windows:
- access to a POSIX environment (Linux, OS X, UNIX, Cygwin, ...) to run a bash script that generates DOS batch files.
- an ssh client such as Putty, or the openssh client in Cygwin.
If your network requires a proxy to access the Internet, remember to set VM_PROXY in "training-labs/labs/osbash/config/localrc".
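The setting might look like the following sketch (the proxy host and port here are placeholders, not values shipped with the project):

```shell
# training-labs/labs/osbash/config/localrc
# Replace the placeholder URL with your site's proxy:
VM_PROXY="http://proxy.example.com:3128"
```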
Building the cluster
On all platforms, log files are written to the training-labs/labs/log directory while the cluster is building.
The cluster is built in three phases:
- Download the OS image. This phase is skipped if the image exists already in the training-labs/labs/img directory.
- Build a base disk, about 15 to 30 minutes. This phase is skipped if the base disk exists already.
- Build the node VMs based on the base disk, about 15 to 30 minutes.
Linux and Mac OS X
cd training-labs/labs/osbash
./osbash.sh -b cluster
By default, the cluster is built on VirtualBox VMs. To use KVM, set the environment variable PROVIDER to kvm.
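For example, a KVM build could be started like this (a sketch assuming a checkout of the repo; the build command itself is shown commented out so that only the provider selection runs):

```shell
# osbash.sh reads the PROVIDER environment variable; it defaults to VirtualBox.
export PROVIDER=kvm
echo "building with PROVIDER=$PROVIDER"
# ./osbash.sh -b cluster   # run this from training-labs/labs/osbash
```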
The command builds a base disk which contains the operating system and the software needed for the OpenStack cluster. After the base disk, the command builds three node VMs (controller, compute, network).
If you execute the same command again, the existing node VMs are deleted and recreated based on the existing base disk. If you want to rebuild the base disk, too, either delete the disk file in the labs/img directory, or use this command:
./osbash.sh -b basedisk
Get Windows batch files
The easiest and recommended way to get everything you need besides VirtualBox is to download a zip file for Windows from the Training Labs page. The zip files include pre-generated Windows batch files. If you have the content of the zip file on your Windows machine, you can skip straight to the next step, "Creating the cluster under Windows".
If you prefer to generate the batch files that create the cluster yourself, you need a POSIX environment that contains bash - a Linux or UNIX installation is fine, but Cygwin may also work.
In a POSIX environment:
cd training-labs/labs/osbash
./osbash.sh -w cluster
The Windows batch files are created in a new directory named wbatch. Transfer them to Windows.
Creating the cluster under Windows
Run the following three scripts in this order. The first two scripts are only needed once:
- Creates the host-only networks used by the node VMs to communicate. The script asks for elevated privileges which are needed for that task. You only need to run this script once, the network configuration is saved by VirtualBox. You can verify the configured networks in the VirtualBox GUI: File->Preferences->Network->Host-only Networks.
- Creates the base disk. You only need to run this script once (and every time you want to update the base disk). This script downloads the OS image needed to build the base disk to training-labs\labs\img, if it doesn't exist, and asks the user to hit a key to proceed after downloading.
- Creates the node VMs based on the base disk.
Note: The Windows batch scripts still have some limitations. For instance, if they find an existing node VM of the same name, they print an error and exit. Do not start a batch script if another one is still running.
Using the cluster
By default, the cluster is built in headless mode. In that case, the way to access the cluster nodes is via secure shell (ssh). The localhost's TCP ports 2230 through 2232 are forwarded to the nodes.
To get shell prompts on the cluster nodes:
ssh -p 2230 osbash@localhost # controller (includes network on the Liberty version)
ssh -p 2231 osbash@localhost # network node; on the Liberty version, compute node
ssh -p 2232 osbash@localhost # pre-Liberty versions: compute node
The password is osbash. To become root, use sudo.
The Putty client allows you to set the port number on the settings->session page before making the connection.
Console windows for the VirtualBox VMs can be displayed by stopping the VMs and starting them again from the VirtualBox GUI. Alternatively, in order to have the console always on, even during the build, add the "-g gui" option to the osbash commands. For instance:
./osbash.sh -g gui -b cluster
./osbash.sh -g gui -b basedisk
./osbash.sh -g gui -w cluster # generate DOS batch files which will always display console windows
Horizon is also accessed via a forwarded port. Use this URL to access the GUI:
Two accounts are configured: admin/admin_user_secret and demo/demo_user_pass. The default domain required for login is "default". These and other passwords are configured in config/credentials.
Managing and Maintaining our Project
This section is a starting point for the core developers, making it easier for us to share knowledge about important but not feature-related boilerplate work on the project. If the original author for a given task is not available, this should enable the team to keep functioning and not miss important aspects of the project such as releases and backports.
Please follow the well-written and maintained [Project Driver's Guide](http://docs.openstack.org/infra/manual/drivers.html) for more information. Some parts of that document may require you to be in the **core team**, but the majority of the tasks should not.
- Training-Labs Webpage: http://docs.openstack.org/training_labs/
- Slides from our presentation at the OpenStack Summit 2015 in Tokyo: https://docs.google.com/presentation/d/1PYe1SQnAL8DxOXcnGI8O-1YAW-Z45P2IXj4NMb3VTNo/
- training-labs repo: http://git.openstack.org/cgit/openstack/training-labs/
- Launchpad bug tracker: https://launchpad.net/labs
- Review queue: https://review.openstack.org/#/q/status:open+project:openstack/training-labs,n,z
- Old labs section in Training guides: http://git.openstack.org/cgit/openstack/training-guides/tree/labs
- Spec: http://specs.openstack.org/openstack/docs-specs/specs/liberty/training-labs.html
- Developers Guide: http://docs.openstack.org/infra/manual/developers.html
- For more information please follow this link: Training Labs Team Meeting
- Pranav Salunke, IRC: dguitarbite
- Roger Luethi, IRC: rluethi
- Julen Larrucea, IRC: julen
- Name, IRC: Nick, role/interests.