Evacuate

  • Launchpad Entry: NovaSpec:rebuild-for-ha
  • Created: 1 Aug 2012
  • Contributors: Alex Glikson

Summary

High availability for VMs minimizes the effect of a nova-compute node failure. Upon failure detection, VMs whose storage is accessible from other nodes (e.g. shared storage) can be rebuilt and restarted on a target node.

Release Note

Administrators who detect a compute node failure can evacuate the node's VMs to target nodes.

Rationale

On commodity hardware, failures are common and must be accounted for in order to provide a high service level. With VM HA support, administrators can evacuate VMs from a failed node while preserving VM characteristics such as identity, volumes, networks, and state, ensuring VM availability over time.

User stories

  • Administrator wants to evacuate and rebuild VMs from failed nodes

Assumptions

  • The VM to evacuate is down due to node failure; its recorded state is either started or powered off
  • The VM's storage is accessible from other nodes (e.g. shared storage)
  • The administrator has selected a valid, running target node to rebuild the VM on (a minimal precondition check is sketched after this list)
  • After evacuation and rebuild on the target node, the administrator is responsible for any VM inconsistency that may have occurred during the sudden node failure (e.g. partial disk writes)
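
A minimal precondition check along these lines is sketched below. It is illustrative only (not Nova code); host_is_up is a placeholder for whatever service-status check a deployment uses.

  # Illustrative precondition check (not Nova code). 'host_is_up' is a
  # placeholder callable mapping a hostname to True/False (e.g. backed by
  # the nova-compute service status recorded in the database).
  def can_evacuate(instance_host, target_host, host_is_up):
      if host_is_up(instance_host):
          return False, "source host is still up; evacuate is meant for failed hosts"
      if not host_is_up(target_host):
          return False, "target host is not running"
      if instance_host == target_host:
          return False, "target host must differ from the failed host"
      return True, "ok"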

Recovery from compute node failure

The administrator detects a compute node failure; as a result, all the VMs that ran on it are now down. The administrator then invokes the REST API to evacuate a selected VM to a specified running target compute node.

The target compute node receives an evac_instance message, and the instance is rebuilt on the new host. Volumes and networks are re-connected, and the instance's identity and state are preserved. Persistent information tying the instance to the failed host is cleaned up.
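
As a rough, self-contained illustration of these semantics (not the actual Nova code; the names and fields below are invented for the example), the instance keeps its identity and attachments while only its host assignment and state change:

  # Illustration only, not Nova code: evacuation re-homes the instance but
  # preserves its identity, volumes and networks.
  from dataclasses import dataclass, field

  @dataclass
  class Instance:
      uuid: str                                      # identity, preserved across evacuation
      host: str                                      # ownership field that changes
      volumes: list = field(default_factory=list)    # stay attached, re-connected on the new host
      networks: list = field(default_factory=list)   # likewise re-connected
      state: str = "stopped"

  def rebuild_on(instance, target_host):
      instance.host = target_host    # also removes the last link to the failed host
      instance.state = "active"
      return instance

  vm = Instance(uuid="example-uuid", host="failed-node", volumes=["vol-1"], networks=["net-1"])
  rebuild_on(vm, "target-node")
  assert vm.uuid == "example-uuid" and vm.host == "target-node"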

When/if the failed node comes back online, further self-cleanup is needed (removing stale instances from the virt layer) to ensure that the recovered node is aware of the evacuated VMs and does not re-launch them. Evacuated VMs are locked to ensure a single handler, since recovery of the failed node might happen while the evacuation is still in progress (e.g. the VM has not yet been rebuilt on the target node).
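
The start-up self-cleanup on the recovered node could look roughly like the sketch below. This is an assumption-level illustration, not Nova code: local_guests are the guests the hypervisor still knows about, db_host_of maps an instance UUID to the host currently recorded in the database, and destroy_guest removes a stale local guest definition.

  # Illustrative sketch of the recovered node's self-cleanup; not Nova code.
  import threading

  _instance_locks = {}   # hypothetical per-instance locks ("single handler" per VM)

  def _lock_for(uuid):
      return _instance_locks.setdefault(uuid, threading.Lock())

  def cleanup_after_recovery(local_guests, db_host_of, my_hostname, destroy_guest):
      for uuid in local_guests:
          # Hold the per-instance lock so this cleanup cannot race an
          # evacuation that is still in progress for the same VM.
          with _lock_for(uuid):
              owner = db_host_of(uuid)
              if owner is not None and owner != my_hostname:
                  # The VM was evacuated while this node was down: remove the
                  # stale local definition instead of re-launching it.
                  destroy_guest(uuid)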


Implementation

The following subsections describe the plan of action (the "how") for implementing the changes discussed above.

REST API

Admin API: POST v2/{tenant_id}/servers/{server_id}/action, where {server_id} is the server to evacuate. Parameters: action=evacuate, host=the target compute node to rebuild the server on.
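
For illustration, such a call could be issued as shown below. The endpoint, token handling, and exact body layout (an "evacuate" action carrying a "host" field) are assumptions based on the description above and the usual server-action request format, not a final API definition.

  # Hypothetical example of invoking the evacuate action described above.
  # Endpoint, token and body layout are illustrative assumptions.
  import requests

  nova_endpoint = "http://nova-api.example.com:8774/v2/<tenant_id>"   # placeholder
  server_id = "<server-to-evacuate>"                                   # placeholder
  token = "<admin-auth-token>"                                         # placeholder

  resp = requests.post(
      f"{nova_endpoint}/servers/{server_id}/action",
      json={"evacuate": {"host": "target-compute-node"}},
      headers={"X-Auth-Token": token},
  )
  resp.raise_for_status()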

Code Changes

Code changes should include an overview of what needs to change, and in some cases even the specific details.

Related entries

http://wiki.openstack.org/Rebuildforvms

Test/Demo Plan

This need not be added or completed until the specification is nearing beta.

Unresolved issues

This should highlight any issues that should be addressed in further specifications, and not problems with the specification itself, since any specification with problems cannot be approved.

BoF agenda and discussion