__TOC__

<big>
/!\ If you are looking for the old page, [[GeneralBareMetalProvisioningFramework/Historical|it has been moved here]].

The Nova "baremetal" driver was deprecated in the Juno release and has been removed from Nova. Please see [[Ironic]] for all current work on the Bare Metal Provisioning program within OpenStack.
</big>

== Overview ==

Baremetal is a driver for OpenStack Nova Compute which controls physical hardware instead of virtual machines. This hardware, often called "baremetal nodes", is exposed via OpenStack's API and in many ways behaves just like any other compute instance. Provisioning and management of physical hardware can therefore be accomplished with common cloud tools, opening the door to orchestrating physical deployments with Heat, salt-cloud, and so on.
 
 
The current implementation provides:
 
* Deploy machine images onto bare metal using PXE and iSCSI
 
* Control machine power using IPMI (an illustrative command appears after this list)
 
* Support common architectures (x86_64, i686)
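
By way of illustration, IPMI power control amounts to the same operations you could perform by hand against each node's baseboard management controller with the standard ''ipmitool'' utility; the driver issues them automatically when an instance is started, stopped, or rebooted. The address and credentials below are placeholders:

<pre>
# Query and change the chassis power state of a baremetal node's BMC
ipmitool -I lanplus -H 10.0.0.5 -U admin -P secret chassis power status
ipmitool -I lanplus -H 10.0.0.5 -U admin -P secret chassis power on
ipmitool -I lanplus -H 10.0.0.5 -U admin -P secret chassis power off
</pre>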
 
 
 
Future plans include:
 
* Improve performance/scalability of PXE deployment process
 
* Better support for complex network environments (VLANs, etc.)

* Support snapshotting and migration of baremetal instances

* Support non-PXE image deployment

* Support other architectures (ARM, TILEPro)

* Support fault tolerance of the baremetal nova-compute node
 
 
 
== Key Differences ==
 
 
 
There are several key differences between the baremetal driver and other hypervisor drivers (KVM, Xen, etc.).
 
* There is no hypervisor running underneath the cloud instances, so the tenant has full and direct access to the hardware, and that hardware is dedicated to a single instance.
 
* Nova has no way to manipulate a baremetal instance beyond what the hardware itself provides and exposes over the network, such as IPMI control. Therefore, some functionality implemented by other hypervisor drivers is not available via the baremetal driver, such as instance snapshots, attaching and detaching network volumes on a running instance, and so on.

* Tenants' direct access to the network also creates additional security concerns (e.g., MAC spoofing, packet sniffing).

** Other hypervisors mitigate this with virtualized networking.

** Quantum + [[OpenFlow]] can be used to much the same effect, if your network hardware supports it.

* Public cloud images may not work on some hardware, particularly if the hardware requires additional drivers to be loaded.
 
* The PXE sub-driver requires a specialized ramdisk for deployment, which is distinct from the cloud image's ramdisk.
 
 
 
== Extra services ==
 
 
 
At a minimum, Keystone, Nova, Glance, and Quantum must be up and running. The following additional services are currently required for baremetal deployment, though work is underway to simplify things by removing these:
 
 
 
* ''dnsmasq''
 
** This must run on the nova-compute host, and quantum-dhcp must not be answering on the same network (a sketch of a suitable dnsmasq invocation follows this list).
 
* ''nova-baremetal-deploy-helper''
 
** This service must run on the nova-compute host to assist with image deployment.
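
A rough sketch of a dnsmasq invocation suited to PXE deployment is shown below; the interface name, address range, and TFTP root are placeholders for your provisioning network, and the exact invocation depends on your environment:

<pre>
# Serve DHCP and TFTP for PXE boot on the provisioning interface only
dnsmasq --conf-file= --port=0 \
        --enable-tftp --tftp-root=/tftpboot \
        --dhcp-boot=pxelinux.0 \
        --bind-interfaces --interface=eth1 \
        --dhcp-range=192.0.2.65,192.0.2.99
</pre>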
 
 
 
Nova must be configured for the baremetal driver by adding options to the <code><nowiki>[baremetal]</nowiki></code> section of nova.conf. A separate database is also used to store hardware information; it can live on the same host as Nova's database or on a separate host.
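
A minimal sketch of the relevant nova.conf settings follows. Option names varied between releases (this reflects the Grizzly/Havana-era layout), so check the documentation for your release; the database URL is a placeholder:

<pre>
[DEFAULT]
compute_driver = nova.virt.baremetal.driver.BareMetalDriver
scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager

[baremetal]
driver = nova.virt.baremetal.pxe.PXE
power_manager = nova.virt.baremetal.ipmi.IPMIPowerManager
sql_connection = mysql://nova:NOVA_BM_PASSWORD@localhost/nova_bm
tftp_root = /tftpboot
</pre>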
 
 
 
Also, you must inform the baremetal driver of your hardware's physical characteristics:
 
* The number of CPUs, and the amount of RAM and disk

* The MAC addresses of all network interfaces

* Optionally, power-management (e.g. IPMI) login credentials
 
This can be done via a Nova API extension (for example, through the baremetal commands in python-novaclient, sketched below) or by writing directly to the baremetal database.
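
For example, using the baremetal extension commands shipped with python-novaclient at the time (flag spellings and argument order varied slightly between client releases, and all values below are placeholders; <code>nova help baremetal-node-create</code> shows the exact usage for your client):

<pre>
# Enroll a node: compute service host, CPUs, RAM (MB), disk (GB), provisioning MAC,
# plus IPMI address and credentials
nova baremetal-node-create --pm_address=10.1.0.10 --pm_user=admin \
     --pm_password=secret bm-compute-host 1 512 10 aa:bb:cc:dd:ee:ff

# Register any additional NICs (assuming the node above was created with id 1)
nova baremetal-interface-add 1 aa:bb:cc:dd:ee:f0

# Confirm what the driver now knows about
nova baremetal-node-list
</pre>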
 
 
 
== Use-cases ==
 
 
 
Here are a few ideas we have about potential use-cases for the baremetal driver. This isn't an exhaustive list -- there are doubtless many more interesting things which it can do!
 
 
 
* High-performance computing clusters.
 
* Computing tasks that require access to hardware devices which can't be virtualized.
 
* Database hosting (some databases perform poorly under a hypervisor).

* Rapidly deploying a cloud infrastructure itself, as described below.
 
 
 
We (the TripleO team) have a vision that OpenStack can be used to deploy OpenStack at massive scale. We think the story of getting "from here to there" goes like this:
 
 
 
* First, do simple hardware provisioning with a base image that contains configuration-management software (Chef/Puppet/Salt/etc.). The CMS checks in with a central server to determine what packages to install, then installs and configures your applications. All of this happens automatically after the first boot of any baremetal node.

* Then, accelerate provisioning by pre-installing your application software into the cloud image, but still let a CMS do all configuration.

* Pre-install KVM and nova-compute into an image, and scale out your compute cluster by using the baremetal driver to deploy nova-compute images. Do the same for Swift, proxy nodes, software load balancers, and so on.
 
* Use Heat to orchestrate the deployment of an entire cloud.
 
* Finally, run a mixture of baremetal nova-compute and KVM nova-compute in the same cloud (shared keystone and glance, but different tenants). Continuously deploy the cloud from the cloud using a common API.
 
 
 
== Community ==
 
 
 
* '''Main Contributors'''
 
** [https://launchpad.net/~USC-ISI USC/ISI]
 
*** Mikyung Kang <mkkang@isi.edu>, David Kang <dkang@isi.edu>
 
*** https://github.com/usc-isi/hpc-trunk-essex (stable/essex)
 
*** https://github.com/usc-isi/nova (folsom)
 
*** [[HeterogeneousTileraSupport]]
 
** NTT DOCOMO

*** Ken Igarashi <igarashik@nttdocomo.co.jp>

*** https://github.com/NTTdocomo-openstack/nova

** [[VirtualTech|VirtualTech Japan Inc.]]

*** Arata Notsu <notsu@virtualtech.jp>

** [https://launchpad.net/~tripleo HP TripleO team]

*** Devananda van der Veen <devananda@hp.com>, Robert Collins <robertc@hp.com>

*** https://github.com/tripleo/nova/tree/baremetal-dev

*** https://github.com/tripleo/devstack/tree/baremetal-dev

*** irc.freenode.net #tripleo
 
* '''Blueprints on Launchpad'''
 
** https://blueprints.launchpad.net/nova/+spec/general-bare-metal-provisioning-framework
 
** https://blueprints.launchpad.net/nova/+spec/improve-baremetal-pxe-deploy
 
** https://blueprints.launchpad.net/quantum/+spec/pxeboot-ports
 
* '''Mailing List''':
 
** Discussion is on the openstack-dev email list.
 
** Following convention, we use both [nova] and [baremetal] tags in the Subject line
 
** http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
* '''Bug listing'''
 
** Use the "baremetal" tag when filing bugs in Launchpad
 
** https://bugs.launchpad.net/nova/+bugs?field.tag=baremetal
 
* '''Etherpads''':
 
** Here is a list of etherpads from past summit discussions:
 
*** http://etherpad.openstack.org/GrizzlyBareMetalCloud
 
*** http://etherpad.openstack.org/FolsomBareMetalCloud
 
* '''Team Branches'''
 
** The USC/ISI team has a branch here (general bare-metal provisioning framework and non-PXE support):
 
*** https://github.com/usc-isi/hpc-trunk-essex (stable/essex)
 
*** https://github.com/usc-isi/nova (folsom)
 
*** [[HeterogeneousTileraSupport]]
 
** NTT DOCOMO has a branch here (PXE support and additional bare-metal features):

*** https://github.com/NTTdocomo-openstack/nova (master branch)

** The HP "TripleO" team has several branches here:

*** https://github.com/tripleo/nova/tree/baremetal-dev

*** https://github.com/tripleo/devstack/tree/baremetal-dev

*** Walkthrough for a development environment: https://github.com/tripleo/incubator/blob/master/notes.md
 
