Difference between revisions of "Baremetal"

m (Text replace - "__NOTOC__" to "")
(work in progress. Adding several sections and sample configuration. Much more still to do.)

Revision as of 21:33, 27 February 2013

If you are looking for the old page, it has been moved to GeneralBareMetalProvisioningFramework/Historical.

Overview

Baremetal is a driver for OpenStack Nova Compute which controls physical hardware instead of virtual machines. This hardware is exposed via OpenStack's API and in some ways acts like any other compute instance. Provisioning and management of physical hardware can thus be accomplished using common cloud tools. This opens the door for the orchestration of physical deployments using Heat, salt-cloud, and so on. In other ways, baremetal is very different from other hypervisor drivers for OpenStack, and deploying it requires some additional setup and configuration.

Terminology

Baremetal also introduces some terminology of its own.

  • Baremetal host and compute host are often used interchangeably to refer to the machine which runs the nova-compute and nova-baremetal-deploy-helper services (and possibly other services as well). This functions like a hypervisor, providing power management and imaging services.
  • Node and baremetal node refer to the physical machines which are controlled by the compute host. When a user requests that Nova start a baremetal instance, it is created on a baremetal node.
  • A baremetal instance is a Nova instance created directly on a physical machine without any virtualization layer running underneath it. Nova retains power control (via IPMI) and, in some situations, may also retain network control (via Quantum and OpenFlow).
  • Deploy image is a pair of specialized kernel and ramdisk images which are used by the compute host to write the user-specified image onto the baremetal node.
  • Hardware is enrolled in the baremetal driver by adding its MAC addresses, physical characteristics (# CPUs, RAM, and disk space), and the IPMI credentials into the baremetal database. Without this information, the compute host has no knowledge of the baremetal node.
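For example, enrollment can be performed with the baremetal extension to the nova command-line client. The compute host name, resource sizes, MAC addresses, and IPMI credentials below are placeholders for your own hardware, and the exact client syntax may vary between releases:

```shell
# Enroll a node on compute host "ubuntu" with 1 CPU, 512 MB RAM, and a 10 GB
# disk, supplying the IPMI address and credentials used for power control
nova baremetal-node-create --pm_address=10.1.0.2 --pm_user=admin \
    --pm_password=secret ubuntu 1 512 10 aa:bb:cc:dd:ee:ff

# Register an additional network interface on node 1 by its MAC address
nova baremetal-interface-add 1 aa:bb:cc:dd:ee:01
```

Until a node is enrolled this way (or its record is written directly to the baremetal database), the compute host has no knowledge of it.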

Features

The current implementation of the Baremetal driver provides the following functionality.

  • A Nova API to enroll & manage hardware in the baremetal database
  • Power control of enrolled hardware via IPMI
  • PXE boot of the baremetal nodes
  • Support for common CPU architectures (i386, x86_64)
  • FlatNetwork environments are supported and well tested
    • OpenFlow-enabled environments should be supported, but are less well tested at this time
  • Cloud-init is used for passing user data into the baremetal instances after provisioning. Limited support for file-injection also exists, but is being deprecated.
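Once hardware is enrolled, user data is passed to a baremetal instance in the same way as to any other Nova instance. A sketch of such a boot request follows; the flavor, image, keypair, and user-data file names are illustrative:

```shell
# Boot a baremetal instance, passing a cloud-init user-data file at create time
nova boot --flavor bm.small --image ubuntu-baremetal \
    --user-data my-init.txt --key-name mykey my-baremetal-instance
```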


Current limitations include:

  • A separate dnsmasq process must run on the baremetal compute host to control the PXE boot process. This conflicts with quantum-dhcp, which must therefore be disabled.
  • Cloud-init requires an instance's IP to be assigned by quantum, and without quantum-dhcp, this requires file injection to set the IP statically.


Future plans include:

  • Improve performance/scalability of PXE deployment process
  • Better support for complex non-SDN environments (e.g., static VLANs)
  • Better integration with quantum-dhcp
  • Support snapshot and migrate of baremetal instances
  • Support non-PXE image deployment
  • Support other architectures (arm, tilepro)
  • Support fault-tolerance of baremetal nova-compute node

Key Differences

There are several key differences between the baremetal driver and other hypervisor drivers (kvm, xen, etc).

  • There is no hypervisor running underneath the baremetal instances, so the tenant has full and direct access to the hardware, and that hardware is dedicated to a single instance.
  • Nova does not have any access to manipulate a baremetal instance except for what is provided at the hardware level and exposed over the network, such as IPMI control. Therefore, some functionality implemented by other hypervisor drivers is not available via the baremetal driver, such as instance snapshots or attaching and detaching network volumes to a running instance.
  • It is also important to note that there are additional security concerns created by tenants having direct access to the network (e.g., MAC spoofing, packet sniffing, etc.).
    • Other hypervisors mitigate this with virtualized networking.
    • Quantum + OpenFlow can be used to much the same effect, if your network hardware supports it.
  • Public cloud images may not work on some hardware, particularly if your hardware requires additional drivers to be loaded.
  • The PXE sub-driver requires a specialized ramdisk for deployment, which is distinct from the cloud image's ramdisk.


Use-cases

Here are a few ideas we have about potential use-cases for the baremetal driver. This isn't an exhaustive list -- there are doubtless many more interesting things which it can do!

  • High-performance computing clusters.
  • Computing tasks that require access to hardware devices which can't be virtualized.
  • Database hosting (some databases run poorly in a hypervisor).
  • Or, rapidly deploying a cloud infrastructure...

We (the TripleO team) have a vision that OpenStack can be used to deploy OpenStack at a massive scale. We think the story of getting "from here to there" goes like this:

  • First, do simple hardware provisioning with a base image that contains configuration-management software (chef/puppet/salt/etc). The CMS checks in with a central server to determine what packages to install, then installs and configures your applications. All this happens automatically after first-boot of any baremetal node.
  • Then, accelerate provisioning by pre-installing your application software into the cloud image, but let a CMS still do all configuration.
  • Pre-install KVM and nova-compute into an image, and scale out your compute cluster by using the baremetal driver to deploy nova-compute images. Do the same thing for Swift, proxy nodes, software load balancers, and so on.
  • Use Heat to orchestrate the deployment of an entire cloud.
  • Finally, run a mixture of baremetal nova-compute and KVM nova-compute in the same cloud (shared keystone and glance, but different tenants). Continuously deploy the cloud from the cloud using a common API.


The Baremetal Deployment Process

This section is a stub and needs to be expanded.


Differences in Starting a Baremetal Cloud

This section is a stub and needs to be expanded.

This section aims to cover the technical aspects of creating a baremetal deployment without duplicating the information required in general to create an OpenStack cloud. It starts by assuming you already have all the other services -- MySQL, Rabbit, Keystone, Glance, etc. -- up and running, and then covers:

  • Nova configuration changes
  • Extra services that need to be started
  • Images, instance types, and metadata that need to be created and defined

Configuration Changes

The following nova configuration options should be set, in addition to any others that your environment requires:

[DEFAULT]
scheduler_default_filters = ComputeFilter,RetryFilter,AvailabilityZoneFilter,ImagePropertiesFilter
scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = nova.virt.baremetal.driver.BareMetalDriver

[baremetal]
net_config_template = /opt/stack/nova/nova/virt/baremetal/net-static.ubuntu.template
tftp_root = /tftpboot
power_manager = nova.virt.baremetal.ipmi.IPMI
driver = nova.virt.baremetal.pxe.PXE
instance_type_extra_specs = cpu_arch:{i386|x86_64}
sql_connection = mysql://{user}:{pass}@{host}/nova_bm
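The instance_type_extra_specs option above ties scheduling to instance types (flavors) that carry a matching cpu_arch key. A flavor whose resources correspond to the enrolled hardware might be created as follows; the flavor name, ID, and sizes are illustrative:

```shell
# Create a flavor: name, ID, RAM (MB), disk (GB), vCPUs -- these should match
# the physical characteristics of your enrolled nodes
nova flavor-create bm.small 11 512 10 1

# Tag it so the scheduler only matches it against x86_64 baremetal nodes
nova flavor-key bm.small set cpu_arch=x86_64
```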


Extra Services

At a minimum, Keystone, Nova, Glance, and Quantum must be up and running. The following additional services are currently required for baremetal deployment, and should be started on the nova-compute host.

  • nova-baremetal-deploy-helper
    • This service assists with image deployment.
  • dnsmasq
    • This must run on the nova-compute host, and quantum-dhcp must not be answering on the same network.
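A minimal dnsmasq invocation for PXE booting might look like the following; the interface name and address range are placeholders for your provisioning network:

```shell
# Serve DHCP and TFTP for PXE boot on the provisioning interface only;
# --port=0 disables DNS, and --conf-file= ignores the system configuration
sudo dnsmasq --conf-file= --port=0 --enable-tftp --tftp-root=/tftpboot \
    --dhcp-boot=pxelinux.0 --bind-interfaces --interface=eth1 \
    --dhcp-range=192.168.2.65,192.168.2.127
```

The tftp-root here matches the tftp_root option set in the [baremetal] section of nova.conf.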

A separate database schema must be created for the baremetal driver to store information about the enrolled hardware. This can be done with the following command:

 nova-baremetal-manage db sync


Community