
Cinder/blueprints/multi-attach-volume

Revision as of 21:37, 27 November 2013

[ DRAFT ]

Summary

Currently the Block Storage service (Cinder) only allows a volume to be attached to one instance at a time. The Compute service (Nova) also makes assumptions to this effect in a number of places, as do the APIs, CLIs, and UIs exposed to users. This specification is drafted in support of multi-attach-volume and aims to outline the changes required to allow users to share volumes between multiple guests using either read-write or read-only attachments.

Comments from another proposal to this effect (shared-volume) were also taken into consideration. Further background is also available in the etherpad from the relevant session at the Portland (Havana) Design Summit and these Cinder meeting logs:

This mailing list thread is also relevant:

Blockers

These issues or questions are outstanding and without resolution will block implementation of this proposal:

  • A determination needs to be made with regard to how to resolve the conflict between the overall volume status and the status of individual attachment(s).
    • Proposal (from 2013-09-25):
      • Volume status = attached when one or more attachments exist.
      • Volume status = detached when no attachments exist.
      • Volume status = attaching on first attach.
      • Volume status = detaching on last detach.
  • If multi-attach is determined to be extension functionality, how should it be implemented as an extension of the core attachment functionality?
  • In the discussion of the shared-volume blueprint it was suggested that volumes should have to be explicitly marked as shareable to allow multi-attachment, in addition to later discussion about failing the attach if no mode is specified. Is there consensus that a "shareable" marker is required?
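The status proposal above can be sketched as a small pure function. This is an illustrative sketch only, not Cinder code; the function name and the `transitioning` parameter are assumptions made for the example.

```python
# Sketch of the 2013-09-25 proposal: derive the aggregate volume status
# from its attachments. Names here are hypothetical, not Cinder code.
def volume_status(attachments, transitioning=None):
    """Return the aggregate volume status.

    attachments: list of current attachment records for the volume.
    transitioning: 'attaching' or 'detaching' while an operation is in
    flight, else None.
    """
    if transitioning == 'attaching' and not attachments:
        return 'attaching'   # first attach in progress
    if transitioning == 'detaching' and len(attachments) == 1:
        return 'detaching'   # last detach in progress
    return 'attached' if attachments else 'detached'
```

The aggregate status only transitions on the first attach and the last detach; intermediate attaches and detaches leave the volume in the attached state.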

User Story

Traditional cluster solutions rely on the use of clustered filesystems and quorum disks, writeable by one or more systems and often read by a larger number of systems, to maintain high availability. Users would like to be able to run such clustered applications on their OpenStack clouds. This requires the ability to have a volume attached to multiple compute instances, with some instances having read-only access and some having read-write access.

Design

Requirements

  • Ability to define a volume as "shareable".
  • Ability to attach a "shareable" volume to multiple compute instances, specifying a mode (read-write, read-only) for each attachment. That is, some attachments to a volume may be read-only, while other attachments to the same volume may be read-write.
  • While Cinder will track the mode of each attachment, restriction of write access must be handled by Nova.
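The requirements above imply a validation step when an attach is requested. The following is a minimal sketch under the assumption that volumes carry an explicit "shareable" marker; the function, field names, and mode strings are illustrative, not a settled Cinder API.

```python
# Hypothetical attach validation sketch; not Cinder code.
VALID_MODES = ('rw', 'ro')

def check_attach(volume, mode):
    """Raise ValueError if a new attachment with `mode` is not allowed.

    volume: dict with 'shareable' (bool) and 'attachments' (list).
    mode: 'rw' (read-write) or 'ro' (read-only).
    """
    if mode not in VALID_MODES:
        raise ValueError('mode must be one of %s' % (VALID_MODES,))
    if volume['attachments'] and not volume['shareable']:
        raise ValueError('volume is already attached and not shareable')
    # Note: Cinder only records the mode of each attachment; actual
    # restriction of write access must be enforced by Nova.
```

Under this sketch a second attachment is rejected unless the volume was explicitly marked shareable, matching the opt-in behaviour discussed in the Blockers section.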

Assumptions

  • Administrators and users are ultimately responsible for ensuring data integrity is maintained when attaching a shared volume to multiple instances in read-write mode; however, this must only occur as the result of an explicit request.

Cinder

Database

Proposed volume_attachment table (one row per attachment):

id = Column(String(36), primary_key=True)

user_id = Column(String(255))
project_id = Column(String(255))

volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
instance_uuid = Column(String(36))
attached_host = Column(String(255))
mountpoint = Column(String(255))
attach_time = Column(String(255))  # TODO(vish): datetime
detach_time = Column(String(255))  # TODO(vish): datetime
attach_status = Column(String(255))  # TODO(vish): enum
attach_mode = Column(String(255))

scheduled_at = Column(DateTime)
launched_at = Column(DateTime)
terminated_at = Column(DateTime)

bootable = Column(Boolean, default=False)
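With one row per attachment, code that previously read a single instance_uuid/mountpoint off the volume record must now aggregate over rows. A pure-Python sketch of that grouping (the function name and output shape are illustrative, not a settled API; field names mirror the proposed columns):

```python
from collections import defaultdict

# Sketch only: group volume_attachment rows (as dicts) by volume_id so
# each volume can report a list of attachments.
def attachments_by_volume(rows):
    grouped = defaultdict(list)
    for row in rows:
        grouped[row['volume_id']].append({
            'attachment_id': row['id'],
            'instance_uuid': row['instance_uuid'],
            'host_name': row['attached_host'],
            'mountpoint': row['mountpoint'],
            'attach_mode': row['attach_mode'],
        })
    return dict(grouped)
```

The same aggregation underlies the API changes below, where volume list/show responses return a list of attachments rather than a single attachment.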

API

POST v1/{tenant_id}/volumes/

Changes to volume creation, *if* it is decided that volumes should be explicitly marked as shareable via create/update.

Request
Response
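If the explicit marker is adopted, the create request might carry a flag along the following lines. This is a sketch only: the `shareable` field name is an assumption of this draft, not a settled API, and the other values are placeholders.

```python
import json

# Hypothetical request body for POST v1/{tenant_id}/volumes/ with an
# explicit shareable marker; the 'shareable' key is illustrative only.
create_request = {
    'volume': {
        'size': 10,
        'display_name': 'shared-quorum-disk',
        'shareable': True,  # assumption: explicit opt-in to multi-attach
    }
}

print(json.dumps(create_request))
```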

GET v1/{tenant_id}/volumes/detail

Changes to detailed volume list to return multiple attachments.

Request
Response

GET v1/{tenant_id}/volumes/{volume_id}

Changes to volume show to return multiple attachments.

Request
Response
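The shape of the extended show response might look as follows. All identifiers and field names here are illustrative placeholders, not a settled API; the point is only that `attachments` becomes a list that can hold more than one entry, each with its own mode.

```python
# Hypothetical GET v1/{tenant_id}/volumes/{volume_id} response body,
# extended so 'attachments' can hold multiple entries. Placeholder values.
show_response = {
    'volume': {
        'id': 'vol-0001',
        'status': 'attached',
        'shareable': True,
        'attachments': [
            {'instance_uuid': 'inst-0001', 'mountpoint': '/dev/vdb',
             'attach_mode': 'rw'},
            {'instance_uuid': 'inst-0002', 'mountpoint': '/dev/vdb',
             'attach_mode': 'ro'},
        ],
    }
}
```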

PUT v1/{tenant_id}/volumes/{volume_id}

Changes to volume update to handle multiple attachments.

Request
Response

CLI

Nova

API

CLI

Horizon