
Cinder/blueprints/multi-attach-volume

Revision as of 19:45, 28 November 2013 by Sgordon (talk | contribs) (Summary)

[ DRAFT ]

Summary

Currently the Block Storage (Cinder) service only allows a volume to be attached to one instance at a time. The Compute service (Nova) also makes assumptions to this effect in a number of places, as do the APIs, CLIs, and UIs exposed to users. This specification aims to outline the changes required to allow users to share volumes between multiple guests using either read-write or read-only attachments.

There have been several discussions about adding this type of functionality over the Grizzly and Havana cycles. This page is intended to link together those discussions and provide a place for recording consensus on any and all outstanding issues with the design backing these blueprint(s).

Resources

In addition to the blueprints noted above, the following resources were consulted in framing this page:

User Story

Traditional cluster solutions rely on the use of clustered filesystems and quorum disks, writeable by one or more systems and often read by a larger number of systems, to maintain high availability. Users would like to be able to run such clustered applications on their OpenStack clouds. This requires the ability to have a volume attached to multiple compute instances, with some instances having read-only access and some having read-write access.

Design

Assumptions

  • Administrators and users are ultimately responsible for ensuring data integrity is maintained once a shared volume is attached to multiple instances in read-write mode; however, such attachments must only occur as the result of an explicit request.
  • Horizon support is not crucial to "Phase I" implementation of this feature, but must be considered and properly tracked as a potential future addition.

Requirements

  • Read-only volume support in Cinder and Nova (read-only-volumes).
  • Users must be able to explicitly define a volume as "shareable" at creation time.
  • Users must be able to attach a "shareable" volume to multiple compute instances, specifying a separate mode (read-write or read-only) for each attachment. That is, some attachments to a volume may be read-only, while other attachments to the same volume may be read-write.
  • While Cinder will track the mode of each attachment, restriction of write access must be handled by the hypervisor drivers in Nova.
  • Normal reservations should be required (and enforced) for volumes that are not marked as shareable.
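The requirements above can be sketched as attach-time checks: a non-shareable volume refuses a second attachment, and each attachment carries its own mode. This is a minimal illustration only; the `Volume` class and `attach_volume` function below are hypothetical names, not Cinder's actual API.

```python
# Hypothetical sketch of the attach-time checks described above.
# Volume and attach_volume are illustrative names, not Cinder code.

class Volume:
    def __init__(self, volume_id, shareable=False):
        self.id = volume_id
        self.shareable = shareable
        self.attachments = {}  # instance_uuid -> mode ("rw" or "ro")

def attach_volume(volume, instance_uuid, mode="rw"):
    """Attach an instance, enforcing single-attach for non-shareable volumes."""
    if mode not in ("rw", "ro"):
        raise ValueError("mode must be 'rw' or 'ro'")
    if volume.attachments and not volume.shareable:
        # Normal reservation semantics: a non-shareable volume that is
        # already attached must refuse any further attachment.
        raise RuntimeError("volume %s is already attached" % volume.id)
    volume.attachments[instance_uuid] = mode

shared = Volume("vol-1", shareable=True)
attach_volume(shared, "inst-a", mode="rw")
attach_volume(shared, "inst-b", mode="ro")  # allowed: volume is shareable

plain = Volume("vol-2")
attach_volume(plain, "inst-a")  # a second attach here would raise
```

Note that per-attachment modes mean one writer and several readers can coexist on the same shareable volume, which matches the quorum-disk user story above.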

Cinder

These details have been reverse engineered from the initial patchset submitted by Charlie Zhou and likely need further discussion and iteration to ensure they meet the requirements outlined above. In particular, additional changes would be required to support explicitly marking volumes as shareable.

API

  • New calls:
    • volume_attach
    • volume_attachment_get
    • volume_attachment_get_used_by_volume
    • volume_attachment_get_by_host
    • volume_attachment_get_by_instance_uuid
    • volume_attachment_update
  • Updated calls:
    • volume_attached
    • volume_detach
    • volume_detached
    • reserve_volume
    • unreserve_volume
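The intended semantics of the new attachment-lookup calls can be illustrated with an in-memory stand-in. The record fields mirror the proposed table below, but the storage, signatures, and return shapes here are assumptions for illustration only; the real calls operate on database rows.

```python
# Hedged, in-memory sketch of the new attachment-lookup calls.
# Each record dict stands in for a volume_attachment row.

_attachments = []

def volume_attach(volume_id, instance_uuid=None, host=None,
                  mountpoint=None, mode="rw"):
    record = {"volume_id": volume_id, "instance_uuid": instance_uuid,
              "attached_host": host, "mountpoint": mountpoint,
              "attach_mode": mode}
    _attachments.append(record)
    return record

def volume_attachment_get_used_by_volume(volume_id):
    # All attachments for a volume: more than one once multi-attach lands.
    return [a for a in _attachments if a["volume_id"] == volume_id]

def volume_attachment_get_by_host(volume_id, host):
    return [a for a in _attachments
            if a["volume_id"] == volume_id and a["attached_host"] == host]

def volume_attachment_get_by_instance_uuid(volume_id, instance_uuid):
    return [a for a in _attachments
            if a["volume_id"] == volume_id
            and a["instance_uuid"] == instance_uuid]

# One instance attachment and one host attachment on the same volume.
volume_attach("vol-1", instance_uuid="inst-a", mountpoint="/dev/vdb")
volume_attach("vol-1", host="node-1", mountpoint="/dev/vdc", mode="ro")
```

The key design point is that lookups are keyed by (volume, instance) or (volume, host) rather than by volume alone, since a single volume may now have several live attachments.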

Database

New volume_attachment table:

id = Column(String(36), primary_key=True)

user_id = Column(String(255))
project_id = Column(String(255))

volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
instance_uuid = Column(String(36))
attached_host = Column(String(255))
mountpoint = Column(String(255))
attach_time = Column(String(255))  # TODO(vish): datetime
detach_time = Column(String(255))  # TODO(vish): datetime
attach_status = Column(String(255))  # TODO(vish): enum
attach_mode = Column(String(255))

scheduled_at = Column(DateTime)
launched_at = Column(DateTime)
terminated_at = Column(DateTime)

bootable = Column(Boolean, default=False)
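To make the proposed schema concrete, the attachment-specific columns above can be exercised with a stand-in SQLite table. SQLite is used here purely for illustration (Cinder defines its schema via SQLAlchemy migrations), and the foreign key to volumes is omitted for brevity.

```python
import sqlite3

# Stand-in DDL for the proposed volume_attachment table; column names
# follow the SQLAlchemy listing above, the DDL itself is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE volume_attachment (
        id TEXT PRIMARY KEY,
        user_id TEXT,
        project_id TEXT,
        volume_id TEXT NOT NULL,
        instance_uuid TEXT,
        attached_host TEXT,
        mountpoint TEXT,
        attach_time TEXT,
        detach_time TEXT,
        attach_status TEXT,
        attach_mode TEXT
    )
""")

# Two rows for the same volume_id: the core change multi-attach needs,
# one read-write attachment and one read-only attachment.
conn.execute("INSERT INTO volume_attachment (id, volume_id, instance_uuid, attach_mode) "
             "VALUES ('a1', 'vol-1', 'inst-a', 'rw')")
conn.execute("INSERT INTO volume_attachment (id, volume_id, instance_uuid, attach_mode) "
             "VALUES ('a2', 'vol-1', 'inst-b', 'ro')")

rows = conn.execute("SELECT instance_uuid, attach_mode FROM volume_attachment "
                    "WHERE volume_id = 'vol-1' ORDER BY id").fetchall()
```

Moving these columns out of the volumes table into a separate volume_attachment table is what allows a one-to-many relationship between a volume and its attachments.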

Exceptions

  • New exceptions:
    • VolumeAttachmentNotFound

Nova

  • New calls:
  • Updated calls:

Horizon

  • Volumes screen:
    • Reflect multiple attachments in the Attached To column/field.
    • Reflect the mode of each attachment (ro or rw).
    • Reflect whether a volume is "shareable" or not.
  • Volume Detail screen:
    • As per requirements for Volumes screen.
  • Create Volume dialog:
    • Allow the marking of the volume as "shareable".
  • Edit Attachments dialog:
    • Allow the addition of further attachments to shareable volumes that have already been attached to an instance.
    • Allow setting of the attachment mode.

Migration

All existing volumes must automatically be marked, or assumed to be marked, as non-shareable. User impact is therefore expected to be minimal except for users explicitly using this new feature.
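The backfill described above can be sketched as a schema migration that adds the flag with a non-shareable default, so existing volumes keep today's behaviour. The column name "shareable" follows this page's proposal but is not a settled schema, and SQLite stands in for the real migration tooling.

```python
import sqlite3

# Hedged sketch of the migration: existing volumes default to
# shareable = 0 (false), so their behaviour is unchanged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO volumes VALUES (?)", [("vol-1",), ("vol-2",)])

# Migration step: add the flag with a non-shareable default.
conn.execute("ALTER TABLE volumes "
             "ADD COLUMN shareable BOOLEAN NOT NULL DEFAULT 0")

flags = conn.execute("SELECT id, shareable FROM volumes ORDER BY id").fetchall()
```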

Test/Demo Plan

This need not be added or completed until the specification is nearing beta.

Unresolved Issues/Questions

These issues or questions are outstanding and without resolution will block implementation of this proposal:

  • A determination needs to be made regarding how to resolve the conflict between the overall volume status and the status of individual attachment(s).
    • Proposal (from 2013-09-25):
      • Volume status = attached when one or more attachments exist.
      • Volume status = detached when no attachments exist.
      • Volume status = attaching on first attach.
      • Volume status = detaching on last detach.
    • What values are required for the attachment status?
  • If multi-attach is determined to be extension functionality, how should it be implemented as an extension of the core attachment functionality?
  • In the discussion on the shared-volume blueprint itself it was suggested that volumes should have to be explicitly marked as shareable to allow multi-attachment, in addition to later discussion about failing the attach if no mode is specified. Is there consensus that a "shareable" marker is required? Currently this proposal assumes the answer is yes.
  • Are there additional issues to watch out for when snapshotting a shared volume?
  • Multi-attach won't work for QCOW2 disks on Libvirt/KVM, only raw images. Do other hypervisors have similar restrictions on sharing volumes read-write?
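The 2013-09-25 status proposal above can be sketched as a small derivation function. The transition names follow the bullets; the function signature and the `in_progress` parameter are illustrative assumptions, not agreed design.

```python
# Sketch of the proposed volume-status transitions; only the four
# status names come from the proposal, the rest is illustrative.

def volume_status(attachment_count, in_progress=None):
    """Derive the overall volume status from its attachments.

    in_progress may be "attaching" (first attach under way) or
    "detaching" (last detach under way), per the proposal's
    first-attach / last-detach transitions.
    """
    if in_progress == "attaching" and attachment_count == 0:
        return "attaching"
    if in_progress == "detaching" and attachment_count == 1:
        return "detaching"
    return "attached" if attachment_count > 0 else "detached"
```

Under this reading, intermediate attaches and detaches on an already-shared volume never change the overall status, which only flips on the first attachment and the last detachment.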