About training labs
The OpenStack training-labs scripts install a working OpenStack cluster on your computer. They are a faster, reproducible, and automated way of following the OpenStack [install guides] to install a cluster into VirtualBox/KVM, and they should run on most common hardware (desktops and laptops) out of the box.
The scripts support Linux, Mac OS X, and Windows as host operating systems. They currently install the Kilo, Juno, and Icehouse releases of OpenStack on Ubuntu 14.04 LTS.
On all supported platforms, you need to have VirtualBox installed.
In addition, you need the content of the training-labs labs directory.
We plan to provide download links for just the labs directory soonish. For the time being, you can use git to get the training-labs repo, which includes the labs directory:
git clone git://git.openstack.org/openstack/training-labs.git

If your host operating system is Windows, you also need the wbatch scripts, which are not yet part of the repo but can be generated on Linux and OS X with this command from the labs directory:
./osbash -w cluster

To build the base disk (see below), the Windows batch scripts need the distribution ISO image in the labs/img directory. If the file is not there, the script will print the download URL and exit.
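If you want to stage the ISO before running the batch scripts, you can create the directory and place the image there yourself. This is only a sketch: the URL is a placeholder for whatever the script prints, and the download line is left commented out.

```shell
# Create the image directory the batch scripts expect. The ISO file
# itself must come from the URL the script prints (placeholder below).
mkdir -p labs/img
ISO_URL="<URL printed by the script>"   # placeholder, not a real URL
# wget -P labs/img "$ISO_URL"           # uncomment once ISO_URL is set
ls -d labs/img
```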
Building the cluster
Expect the base disk build to take between 15 and 30 minutes; building the node VMs takes another 15 to 30 minutes.
On all platforms, log files are written to the labs/log directory while the cluster is building.
Linux and Mac OS X
Change to the labs directory and enter this command:
./osbash -b cluster

The command first builds a base disk which contains the operating system and the software needed for the OpenStack cluster. After the base disk, it builds three node VMs (controller, compute, network).
If you execute the same command again, the existing node VMs are deleted and recreated based on the existing base disk. If you want to rebuild the base disk, too, either delete the disk file in the labs/img directory, or use this command:
./osbash -b basedisk
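The decision above can be sketched as a small shell check. Note that the *.vdi file pattern in labs/img is an assumption; verify the actual disk file name on your system before deleting anything.

```shell
# If a base disk image exists in labs/img, "./osbash -b cluster" reuses
# it and only recreates the node VMs; deleting the disk file (or building
# the basedisk target) forces a base disk rebuild.
# The *.vdi pattern is an assumption; check labs/img for the real name.
if ls labs/img/*.vdi >/dev/null 2>&1; then
  echo "base disk present: it will be reused"
else
  echo "no base disk: it will be rebuilt"
fi
```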
Windows
Open the labs/wbatch directory. You should find these batch scripts:
- One script creates the host-only networks used by the node VMs to communicate. It asks for elevated privileges, which it needs for that task. You only need to run this script once; VirtualBox saves the network configuration. You can verify the configured networks in the VirtualBox GUI: File->Preferences->Network->Host-only Networks.
- Another script creates the base disk. You only need to run it once (and every time you want to update the base disk).
- create_controller_node.bat, create_compute_node.bat, and create_network_node.bat create the node VMs. Start them in the order given above.
Note: The Windows batch scripts still have some limitations. For instance, if they find an existing node VM of the same name, they print an error and exit. Do not start a batch script if another one is still running.
Using the cluster
By default, the cluster is built in headless mode, so the way to access your node VMs is a secure shell (ssh). The localhost TCP ports 2230 through 2232 are forwarded to the node VMs' ssh daemons.
To get a shell on the controller VM, for instance, use (the password is osbash):
ssh -p 2230 osbash@localhost
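Since one local port maps to each node, a small helper keeps the mapping in one place. Only the controller port (2230) is stated above; the compute and network assignments below follow the build order and are an assumption, so verify them against your setup.

```shell
# Map a node name to its forwarded ssh port on localhost.
# controller=2230 is from the text; compute=2231 and network=2232 are
# assumed to follow the build order (controller, compute, network).
node_port() {
  case "$1" in
    controller) echo 2230 ;;
    compute)    echo 2231 ;;
    network)    echo 2232 ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

node_port controller   # prints 2230
```

With the helper defined, `ssh -p "$(node_port compute)" osbash@localhost` opens a shell on the compute node (password: osbash).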
If you would like console windows for your VirtualBox VMs, stop the VMs and start them again from the VirtualBox GUI.
Alternatively, in order to have the console always on, even during the build, add the "-g gui" option to your osbash commands. For instance:
./osbash -g gui -b cluster
./osbash -g gui -b basedisk
./osbash -g gui -w cluster
Links
- Slides from our presentation at the OpenStack Summit 2015 in Tokyo: https://docs.google.com/presentation/d/1PYe1SQnAL8DxOXcnGI8O-1YAW-Z45P2IXj4NMb3VTNo/
- training-labs repo: http://git.openstack.org/cgit/openstack/training-labs/
- Review queue: https://review.openstack.org/#/q/status:open+project:openstack/training-labs,n,z
- Old labs section in Training guides: http://git.openstack.org/cgit/openstack/training-guides/tree/labs
- Spec: http://specs.openstack.org/openstack/docs-specs/specs/liberty/training-labs.html
Contributors
- Pranav Salunke, IRC: dguitarbite
- Roger Luethi, IRC: rluethi
- Name, IRC: Nick, role/interests.