Compute Node Failures and Maintenance
Sometimes a compute node either crashes unexpectedly or requires a reboot for maintenance reasons.
If you need to reboot a compute node due to planned maintenance, such as a software or hardware upgrade, perform the following steps:
Disable scheduling of new VMs to the node, optionally providing a reason comment:
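For example, assuming the node is named c01.example.com, as in the examples later in this section:

```
# openstack compute service set --disable \
  --disable-reason "maintenance" c01.example.com nova-compute
```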
Verify that all hosted instances have been moved off the node:
If your cloud is using shared storage:
Get a list of instances that need to be moved:
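```
# openstack server list --host c01.example.com --all-projects
```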
Migrate all instances one by one:
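For example, live-migrating each instance to another node, here c02.example.com (older clients spell this --live c02.example.com instead of --live-migration --host):

```
# openstack server migrate --live-migration --host c02.example.com <uuid>
```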
If your cloud is not using shared storage, run:
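A block live migration copies the disk over the network, so it works without shared storage (flag spellings again vary by client version):

```
# openstack server migrate --live-migration --block-migration \
  --host c02.example.com <uuid>
```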
If you use a configuration-management system, such as Puppet, that ensures the nova-compute service is always running, you can temporarily move the init files:
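A sketch for an upstart-based node; the paths and service name depend on your distribution and init system (on systemd hosts you would stop and mask the unit instead):

```
# mkdir /root/tmp
# mv /etc/init/nova-compute.conf /root/tmp
# mv /etc/init.d/nova-compute /root/tmp
```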
Shut down your compute node, perform the maintenance, and turn the node back on.
You can re-enable the nova-compute service by undoing the previous commands:
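Again assuming the upstart layout above:

```
# mv /root/tmp/nova-compute.conf /etc/init
# mv /root/tmp/nova-compute /etc/init.d
```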
Enable scheduling of VMs to the node:
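```
# openstack compute service set --enable c01.example.com nova-compute
```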
Optionally, migrate the instances back to their original compute node.
After a Compute Node Reboots
When you reboot a compute node, first verify that it booted successfully. This includes ensuring that the
nova-compute service is running:
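For example (the unit is named openstack-nova-compute on some distributions):

```
# ps aux | grep nova-compute
# systemctl status nova-compute
```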
Also ensure that it has successfully connected to the AMQP server:
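One way is to search the compute log for the connection message; the log path below is the common default:

```
# grep AMQP /var/log/nova/nova-compute.log
```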
After the compute node is successfully running, you must deal with the instances that are hosted on that compute node because none of them are running. Depending on your SLA with your users or customers, you might have to start each instance and ensure that they start correctly.
You can create a list of instances that are hosted on the compute node by performing the following command:
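```
# openstack server list --host c01.example.com --all-projects
```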
After you have the list, you can use the openstack command to start each instance:
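```
# openstack server reboot <server>
```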
Any time an instance shuts down unexpectedly, it might have problems on boot. For example, the instance might require an
fsck on the root partition. If this happens, the user can use the dashboard VNC console to fix this.
If an instance does not boot, meaning
virsh list never shows the instance as even attempting to boot, do the following on the compute node:
Try executing the openstack server reboot command again. You should see an error message about why the instance was not able to boot.
In most cases, the error is the result of something in libvirt’s XML file (
/etc/libvirt/qemu/instance-xxxxxxxx.xml) that no longer exists. You can force re-creation of the XML file and reboot the instance by running the following command:
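```
# openstack server reboot --hard <server>
```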
Inspecting and Recovering Data from Failed Instances
In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk.
To access the instance’s disk (
/var/lib/nova/instances/instance-xxxxxx/disk), use the following steps:
- Suspend the instance using the virsh command.
- Connect the qemu-nbd device to the disk.
- Mount the qemu-nbd device.
- Unmount the device after inspecting.
- Disconnect the qemu-nbd device.
- Resume the instance.
If you do not follow the last three steps, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down.
Once you mount the disk file, you should be able to access it and treat it as a normal directory tree. However, we do not recommend that you edit or touch any files, because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.
Suspend the instance using the virsh command, taking note of the internal ID:
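For example (the domain name and internal ID are illustrative):

```
# virsh list
 Id    Name                 State
----------------------------------
 1     instance-00000981    running

# virsh suspend 1
Domain 1 suspended
```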
Find the ID for each instance by listing the server IDs using the following command:
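```
# openstack server list
```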
Connect the qemu-nbd device to the disk:
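A sketch, assuming the instance directory from the listing above; the nbd kernel module must be loaded before qemu-nbd can attach the disk:

```
# modprobe nbd max_part=16
# cd /var/lib/nova/instances/instance-00000981
# qemu-nbd -c /dev/nbd0 `pwd`/disk
```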
Mount the qemu-nbd device.
The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:
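```
# mount /dev/nbd0p1 /mnt/
```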
You can now access the contents of
/mnt, which correspond to the first partition of the instance’s disk.
To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:
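A sketch, assuming the ephemeral disk uses the default file name disk.local and carries a filesystem directly rather than a partition table:

```
# mkdir -p /mnt2
# qemu-nbd -c /dev/nbd1 `pwd`/disk.local
# mount /dev/nbd1 /mnt2/
```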
Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:
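```
# umount /mnt
# qemu-nbd -d /dev/nbd0
```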
Resume the instance using virsh:
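Using the internal ID noted when the instance was suspended:

```
# virsh resume 1
Domain 1 resumed
```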
Managing floating IP addresses between instances
In an elastic cloud environment using the Public_AGILE network, each instance has a publicly accessible IPv4 and IPv6 address. The network does not support the concept of OpenStack floating IP addresses that can easily be attached, removed, and transferred between instances. However, there is a workaround using neutron ports, which contain the IPv4 and IPv6 addresses.
Create a port that can be reused
Create a port on the Public_AGILE network. If you know the fully qualified domain name (FQDN) that will be assigned to the IP address, assign the port the same name:
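For example, using a hypothetical FQDN as the port name:

```
$ openstack port create --network Public_AGILE \
  "example-fqdn-01.sys.example.com"
```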
Use the port when creating an instance:
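For example; the flavor, image, and key names here are placeholders, and the port UUID comes from the port created above:

```
$ openstack server create --flavor m1.medium --image ubuntu.qcow2 \
  --key-name team_key --nic port-id=<port-uuid> \
  "example-fqdn-01.sys.example.com"
```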
Verify the instance has the correct IP address:
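```
$ openstack server show "example-fqdn-01.sys.example.com" -c addresses
```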
Check the port connection using the netcat utility:
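For example, probing the SSH port (the address is illustrative):

```
$ nc -v -w 2 203.0.113.10 22
Connection to 203.0.113.10 22 port [tcp/ssh] succeeded!
```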
Detach a port from an instance
Find the port corresponding to the instance. For example:
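```
$ openstack port list --server "example-fqdn-01.sys.example.com"
```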
Run the openstack port set command to remove the port from the instance:
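A sketch that clears the port's device binding so the port can be reused; option spellings are per recent python-openstackclient releases:

```
$ openstack port set <port-uuid> --device "" --device-owner "" \
  --no-binding-profile
```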
Delete the instance and create a new instance using the --nic port-id option.
Retrieve an IP address when an instance is deleted before detaching a port
The following procedure is a possible workaround to retrieve an IP address when an instance has been deleted with the port still attached:
Launch several neutron ports:
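For example, creating ten scratch ports in the hope that one of them picks up the lost address:

```
$ for i in {0..9}; do openstack port create --network Public_AGILE port-$i; done
```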
Check the ports for the lost IP address and update the name:
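A sketch, assuming the lost address and the FQDN naming used earlier:

```
$ openstack port list --network Public_AGILE | grep 203.0.113.10
$ openstack port set <port-uuid> --name "example-fqdn-01.sys.example.com"
```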
Delete the ports that are not needed:
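For example, deleting the scratch ports by name (the delete for the renamed port harmlessly fails, since it no longer matches its scratch name):

```
$ for i in {0..9}; do openstack port delete port-$i; done
```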
If you still cannot find the lost IP address, repeat these steps again.
If the affected instances also had attached volumes, first generate a list of instance and volume UUIDs:
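A sketch of the join; the schema shown matches older releases, and newer releases keep attachment details in cinder.volume_attachment instead:

```
mysql> select nova.instances.uuid as instance_uuid,
    -> cinder.volumes.id as volume_uuid, cinder.volumes.status,
    -> cinder.volumes.attach_status, cinder.volumes.mountpoint,
    -> cinder.volumes.display_name from cinder.volumes
    -> inner join nova.instances
    -> on cinder.volumes.instance_uuid=nova.instances.uuid
    -> where nova.instances.host = 'c01.example.com';
```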
You should see a result similar to the following:
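(The identifiers and names below are illustrative.)

```
+----------------+--------------+--------+---------------+------------+--------------+
| instance_uuid  | volume_uuid  | status | attach_status | mountpoint | display_name |
+----------------+--------------+--------+---------------+------------+--------------+
| 9b969a05-...   | 1f0fbf36-... | in-use | attached      | /dev/vdc   | test         |
+----------------+--------------+--------+---------------+------------+--------------+
```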
Next, manually detach and reattach the volumes, where X is the proper mount point:
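```
# openstack server remove volume <instance_uuid> <volume_uuid>
# openstack server add volume <instance_uuid> <volume_uuid> --device /dev/vdX
```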
Be sure that the instance has successfully booted and is at a login screen before doing the above.
Total Compute Node Failure
Compute nodes can fail the same way a cloud controller can fail. A motherboard failure or some other type of hardware failure can cause an entire compute node to go offline. When this happens, none of the instances running on that compute node will be available. Just as with a cloud controller failure, if your infrastructure monitoring does not detect a failed compute node, your users will notify you because of their lost instances.
If a compute node fails and won’t be fixed for a few hours (or at all), you can relaunch all instances that are hosted on the failed node if you use shared storage for /var/lib/nova/instances.
To do this, generate a list of instance UUIDs that are hosted on the failed node by running the following query on the nova database:
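A sketch; with cells v2 the instances table lives in the cell database, so the database may be named nova_cell1 rather than nova in your deployment:

```
mysql> select uuid from instances
    -> where host = 'c01.example.com' and deleted = 0;
```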
Next, update the nova database to indicate that all instances that used to be hosted on c01.example.com are now hosted on c02.example.com:
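```
mysql> update instances set host = 'c02.example.com'
    -> where host = 'c01.example.com' and deleted = 0;
```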
If you’re using the Networking service ML2 plug-in, update the Networking service database to indicate that all ports that used to be hosted on c01.example.com are now hosted on c02.example.com:
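A sketch against the neutron database; ML2 records the binding host in two tables (names as of recent releases):

```
mysql> update ml2_port_bindings set host = 'c02.example.com'
    -> where host = 'c01.example.com';
mysql> update ml2_port_binding_levels set host = 'c02.example.com'
    -> where host = 'c01.example.com';
```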
After that, use the openstack command to reboot all instances that were on c01.example.com while regenerating their XML files at the same time:
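A hard reboot destroys and re-creates the libvirt domain, which regenerates the XML definition:

```
# openstack server reboot --hard <server>
```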
Finally, reattach volumes using the same method described in the section Volumes.
/var/lib/nova/instances
It is worth mentioning the /var/lib/nova/instances directory in the context of failed compute nodes. This directory contains the libvirt KVM file-based disk images for the instances that are hosted on that compute node. If you are not running your cloud in a shared storage environment, this directory is unique across all compute nodes.
/var/lib/nova/instances contains two types of directories.
The first is the
_base directory. This contains all the cached base images from glance for each unique image that has been launched on that compute node. Files ending in
_20 (or a different number) are the ephemeral base images.
The other directories are titled
instance-xxxxxxxx. These directories correspond to instances running on that compute node. The files inside are related to one of the files in the
_base directory. They’re essentially differential-based files containing only the changes made from the original _base file.
All files and directories in
/var/lib/nova/instances are uniquely named. The files in _base are uniquely titled for the glance image that they are based on, and the directory names
instance-xxxxxxxx are uniquely titled for that particular instance. For example, if you copy all data from
/var/lib/nova/instances on one compute node to another, you do not overwrite any files or cause any damage to images that have the same unique name, because they are essentially the same file.
Although this method is not documented or supported, you can use it when your compute node is permanently offline but you have instances locally stored on it.