RabbitmqHA


Revision as of 16:40, 19 October 2010

  • Launchpad Entry: NovaSpec:tbd
  • Created: 19 October 2010
  • Last updated: 19 October 2010
  • Contributors: Armando Migliaccio

Summary

This specification covers how Nova supports RabbitMQ configurations like clustering and active/passive replication.

Release Note

The Austin release of Nova's RPC mappings deals only with intermittent network connectivity. In order to support RabbitMQ clusters and active/passive brokers, more advanced Nova RPC mappings need to be provided, such as strategies to deal with failures of nodes holding queues within a cluster and/or master/slave failover for active/passive replication.

Rationale

Currently, the message queue configuration variables are tied to RabbitMQ via nova/flags.py. In particular, only one RabbitMQ host is provided, and it is assumed, for simplicity of deployment, that a single instance is up and running. In the face of failures of the RabbitMQ host (e.g. disk or power related), Nova components cannot send or receive messages from the queueing system until it recovers. To provide higher resiliency, RabbitMQ can be made to work in an active/passive setup, such that persistent messages that have been written to disk on the active node can be recovered by the passive node should the active node fail. If high availability is required, active/passive HA can be achieved by using shared disk storage, heartbeat/pacemaker, and possibly a TCP load balancer in front of the service replicas. Although this solution ensures a higher level of transparency for the client side, i.e. Nova API, Scheduler, and Compute (no or minimal fail-over strategies are required in the Nova RPC mappings), it still represents a bottleneck in the overall architecture, may require expensive hardware to run, and is hence far from ideal.
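From the client's point of view, the appeal of the active/passive setup is that a single broker address keeps working across a failover. The following is a minimal sketch (not Nova code) of the only client-side logic such a setup implies: reconnecting to the same address with backoff until the passive node has taken over. The `connect` callable and the function name are hypothetical, standing in for whatever the RPC layer uses to open a broker connection.

```python
import time


def connect_with_retry(connect, max_retries=5, base_delay=0.01):
    """Retry a broker connection with exponential backoff.

    Behind a shared address (e.g. a load-balancer VIP in front of an
    active/passive pair), a plain retry loop is sufficient: after
    failover the same address eventually answers again.

    `connect` is a caller-supplied, hypothetical callable that returns
    a connection object or raises IOError on failure.
    """
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return connect()
        except IOError:
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
    raise IOError("broker unreachable after %d attempts" % max_retries)
```

The trade-off described above still holds: the retry loop is trivial, but all traffic funnels through one address, which is exactly the bottleneck this spec is trying to avoid.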

Another option is RabbitMQ clustering. A RabbitMQ cluster (or broker) is a logical grouping of one or more Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, etc. The adoption of a RabbitMQ cluster becomes appealing in the context of virtual appliances, where each appliance is dedicated to a single specific Nova task (e.g. compute, volume, network, scheduler, api, ...) and also runs an instance of the RabbitMQ server. By clustering all the instances together, a single massive cluster spanning the deployment would be available, providing the following benefits:

  • no single point of failure
  • no requirement of expensive hardware
  • no requirement of separate appliances/hosts to run RabbitMQ
  • RabbitMQ becomes 'hidden' in the deployment

However, there is a problem that may hinder the realization of this scenario: all data/state required for the operation of a RabbitMQ broker is replicated across all nodes, with one exception: message queues, which currently reside only on the node that created them. Queues are still visible and reachable from all nodes, but if the node holding them fails, the queues and any messages in them are lost. This is the main reason why clusters are discouraged for high availability and are used primarily to improve scalability. Nonetheless, clustering is still appealing if client-side fail-over strategies are implemented accordingly.
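Concretely, the fail-over strategy the RPC layer would need is not just reconnecting: after the home node of a queue dies, a consumer must re-declare the queue on a surviving node and re-establish its binding before consuming again, accepting that any messages held on the dead node are gone. A minimal sketch of that recovery step, under the assumption of a generic AMQP-style channel object with `queue_declare` and `queue_bind` operations (the function and parameter names are illustrative, not Nova's):

```python
def recover_consumer(channel, queue, exchange, routing_key):
    """Recreate a queue that disappeared with its home cluster node.

    In a RabbitMQ cluster the queue definition and contents live only
    on the node that declared the queue, so after that node fails the
    consumer must declare the queue again on a surviving node and
    rebind it to its exchange. Messages that sat on the dead node are
    lost; only the topology is recovered.

    `channel` is an assumed AMQP-like object exposing queue_declare
    and queue_bind.
    """
    channel.queue_declare(queue)
    channel.queue_bind(queue, exchange, routing_key)
    return queue
```

This is the kind of logic the current Nova RPC mappings lack, which is why a node breakdown today leads to the failures described below.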

To understand why RabbitMQ clusters cannot be used with the current Nova RPC mappings, and how a node breakdown may lead to catastrophic failures, please keep reading.

attachment:cluster.JPG

User stories

Assumptions

Design

Implementation

Code Changes