

Revision as of 23:29, 17 February 2013 by Ryan Lane

/!\ If you are looking for the old page, it has been moved here.


Baremetal is a driver for OpenStack Nova Compute that controls physical hardware instead of virtual machines. This hardware, often called the "baremetal nodes", is exposed via OpenStack's API and in many ways acts just like any other compute instance. Provisioning and management of physical hardware can thus be accomplished using common cloud tools, which opens the door to orchestrating physical deployments with Heat, salt-cloud, and so on.

The current implementation provides:

  • Deploy machine images onto bare metal using PXE and iSCSI
  • Control machine power using IPMI
  • Support common architectures (x86_64, i686)

Future plans include:

  • Improve performance/scalability of PXE deployment process
  • Better support for complex network environments (VLANs, etc)
  • Support snapshot and migrate of baremetal instances
  • Support non-PXE image deployment
  • Support other architectures (arm, tilepro)
  • Support fault-tolerance of baremetal nova-compute node

Key Differences

There are several key differences between the baremetal driver and other hypervisor drivers (kvm, xen, etc).

  • There is no hypervisor running underneath the cloud instances, so the tenant has full and direct access to the hardware, and that hardware is dedicated to a single instance.
  • Nova has no access to manipulate a baremetal instance beyond what is provided at the hardware level and exposed over the network, such as IPMI control. Therefore, some functionality implemented by other hypervisor drivers is not available via the baremetal driver, such as instance snapshots, attaching and detaching network volumes on a running instance, and so on.
  • It is also important to note that tenants having direct access to the network creates additional security concerns (e.g., MAC spoofing, packet sniffing, etc).
    • Other hypervisors mitigate this with virtualized networking.
    • Quantum + OpenFlow can be used to much the same effect, if your network hardware supports it.
  • Public cloud images may not work on some hardware, particularly if your hardware requires additional drivers to be loaded.
  • The PXE sub-driver requires a specialized ramdisk for deployment, which is distinct from the cloud image's ramdisk.
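As a sketch of how such a deploy ramdisk was typically built in this era, the diskimage-builder project provided a `ramdisk-image-create` tool whose output could be registered with Glance. The output filenames, image names, and Glance flags below are illustrative; check the documentation for your release:

```shell
# Build a deploy kernel/ramdisk pair with diskimage-builder's "deploy" element
# (paths and element names are examples -- verify against your checkout):
ramdisk-image-create -o deploy-ramdisk deploy

# Register both pieces with Glance so the PXE sub-driver can use them:
glance image-create --name deploy-kernel --disk-format aki \
    --container-format aki < deploy-ramdisk.kernel
glance image-create --name deploy-ramdisk --disk-format ari \
    --container-format ari < deploy-ramdisk.initramfs
```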

Extra services

At a minimum, Keystone, Nova, Glance, and Quantum must be up and running. The following additional services are currently required for baremetal deployment, though work is underway to simplify things by removing these:

  • dnsmasq
    • This must run on the nova-compute host, and quantum-dhcp must not be answering on the same network.
  • nova-baremetal-deploy-helper
    • This service must run on the nova-compute host to assist with image deployment.
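For the dnsmasq requirement above, a minimal invocation might look like the following sketch. The interface name and DHCP range are placeholders; the flags shown disable dnsmasq's DNS function and serve PXE boot files over TFTP:

```shell
# Serve DHCP + TFTP for PXE on the provisioning network only.
# --port=0 disables DNS; adjust the range and interface for your environment.
sudo dnsmasq --conf-file= --port=0 \
    --enable-tftp --tftp-root=/tftpboot \
    --dhcp-boot=pxelinux.0 \
    --dhcp-range=192.168.2.100,192.168.2.200 \
    --interface=eth1
```

Remember that quantum-dhcp must not be answering on this network, or the two DHCP servers will conflict.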

Nova must be configured for the baremetal driver by adding options to the [baremetal] section of nova.conf. A separate database is also used to store hardware information; it can live on the same database host as Nova's database or on a separate host.
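As a sketch, a Grizzly-era nova.conf might look like the following. The option values (and the nova_bm database name) are illustrative assumptions; consult the configuration reference for your release:

```ini
[DEFAULT]
# Use the baremetal virt driver and its matching scheduler host manager.
compute_driver = nova.virt.baremetal.driver.BareMetalDriver
scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[baremetal]
# PXE sub-driver for deployment, IPMI for power control.
driver = nova.virt.baremetal.pxe.PXE
power_manager = nova.virt.baremetal.ipmi.IPMI
tftp_root = /tftpboot
# Separate database for hardware information; may share Nova's DB host.
sql_connection = mysql://nova_bm:password@localhost/nova_bm
```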

Also, you must inform the baremetal driver of your hardware's physical characteristics:

  • The number of CPUs, and the amount of RAM and disk
  • The MAC address of each network interface
  • Optionally, power management login information

This can be done via a Nova API extension or written directly to the baremetal database.
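Using the Nova API extension, enrollment might look like the following sketch via python-novaclient. All hardware details and credentials here are made-up examples:

```shell
# Host running nova-compute with the baremetal driver.
COMPUTE_HOST=$(hostname -f)

# Enroll a node: CPUs, RAM (MB), disk (GB), and the MAC of its
# provisioning NIC, plus optional IPMI power-management credentials.
nova baremetal-node-create --pm_address=10.1.0.12 --pm_user=admin \
    --pm_password=secret $COMPUTE_HOST 8 16384 500 00:1e:67:aa:bb:cc

# Register the MACs of any additional network interfaces:
# nova baremetal-interface-add <node-id> 00:1e:67:dd:ee:ff
```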


Use Cases

Here are a few ideas we have about potential use-cases for the baremetal driver. This isn't an exhaustive list -- there are doubtless many more interesting things which it can do!

  • High-performance computing clusters.
  • Computing tasks that require access to hardware devices which can't be virtualized.
  • Database hosting (some databases run poorly in a hypervisor).
  • Or, rapidly deploying a cloud infrastructure ....

We (the TripleO team) have a vision that OpenStack can be used to deploy OpenStack at a massive scale. We think the story of getting "from here to there" goes like this:

  • First, do simple hardware provisioning with a base image that contains configuration-management software (chef/puppet/salt/etc). The CMS checks in with a central server to determine what packages to install, then installs and configures your applications. All this happens automatically after first-boot of any baremetal node.
  • Then, accelerate provisioning by pre-installing your application software into the cloud image, but let a CMS still do all configuration.
  • Pre-install KVM and nova-compute into an image, and scale out your compute cluster by using the baremetal driver to deploy nova-compute images. Do the same thing for Swift, proxy nodes, software load balancers, and so on.
  • Use Heat to orchestrate the deployment of an entire cloud.
  • Finally, run a mixture of baremetal nova-compute and KVM nova-compute in the same cloud (shared keystone and glance, but different tenants). Continuously deploy the cloud from the cloud using a common API.