
Cinder/blueprints/multi-attach-volume


[ DRAFT ]

Summary

Currently the Block Storage service (Cinder) only allows a volume to be attached to one instance at a time. The Compute service (Nova) makes assumptions to this effect in a number of places, as do the APIs, CLIs, and UIs exposed to users. This specification is drafted in support of the multi-attach-volume blueprint and outlines the changes required to allow users to share volumes between multiple guests using either read-write or read-only attachments.

Comments from another proposal to this effect (shared-volume) were also taken into consideration. Further background is available in the etherpad from the relevant session at the Portland (Havana) Design Summit and in these Cinder meeting logs:

These mailing list threads are also relevant:

Blockers

The following issues and questions are outstanding; until they are resolved they will block implementation of this proposal:

  • A determination needs to be made with regard to how to resolve the conflict between the overall volume status and the per-attachment status (see the sketch after this list).
    • Proposal (from 2013-09-25):
      • Volume status = attached when one or more attachments exist.
      • Volume status = detached when no attachments exist.
      • Volume status = attaching on the first attach - and/or perhaps move this value to the attachment level?
      • Volume status = detaching on the last detach - and/or perhaps move this value to the attachment level?
  • A determination needs to be made with regard to separating core and extension functionality within Cinder, which would allow multi-attachment to be delivered as an extension of the core attachment functionality.
  • In the discussion of the shared-volume blueprint it was suggested that volumes should have to be explicitly marked as sharable to allow multi-attachment. This is not, however, the case in the currently proposed implementation. Should this be added to help ensure users are aware of the inherent risks of attaching a volume to multiple instances, or is this the responsibility of the client (e.g. Horizon)?
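
A minimal sketch of the status mapping proposed above, assuming the overall volume status is simply derived from the set of attachment records; the function and field names are illustrative, not part of any agreed interface:

def derive_volume_status(attachments):
    # Per the 2013-09-25 proposal: 'attached' while one or more attachment
    # records exist, 'detached' once none remain. The transient 'attaching'
    # and 'detaching' values would either stay on the volume during the
    # first attach / last detach, or move down to the individual attachment
    # rows - that question is still open above.
    return 'attached' if attachments else 'detached'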

User Story

Traditional cluster solutions rely on the use of clustered filesystems and quorum disks, writeable by one or more systems and often read by a larger number of systems, to maintain high availability. Users would like to be able to run such clustered applications on their OpenStack clouds. This requires the ability to have a volume attached to multiple compute instances, with some instances having read-only access and some having read-write access.

Design

Assumptions

  • Administrators and users are ultimately responsible for ensuring data integrity is maintained when attaching a shared volume to multiple instances in read-write mode.

Cinder

Database

Add a new volume_attachment table with the following columns:

id = Column(String(36), primary_key=True)

user_id = Column(String(255))
project_id = Column(String(255))

volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
instance_uuid = Column(String(36))
attached_host = Column(String(255))
mountpoint = Column(String(255))
attach_time = Column(String(255))  # TODO(vish): datetime
detach_time = Column(String(255))  # TODO(vish): datetime
attach_status = Column(String(255))  # TODO(vish): enum
attach_mode = Column(String(255))

scheduled_at = Column(DateTime)
launched_at = Column(DateTime)
terminated_at = Column(DateTime)

bootable = Column(Boolean, default=False)
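
For illustration, a minimal sketch of how the attachment-specific columns above might be expressed as a SQLAlchemy model in Cinder; the class name, base classes, and import path are assumptions of the sketch, and the column list above remains authoritative:

from sqlalchemy import Column, ForeignKey, String
from cinder.db.sqlalchemy.models import BASE, CinderBase  # assumed import path


class VolumeAttachment(BASE, CinderBase):
    """One attachment of a volume to an instance or to a host."""
    __tablename__ = 'volume_attachment'

    id = Column(String(36), primary_key=True)
    volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
    instance_uuid = Column(String(36))    # set when attached to a Nova instance
    attached_host = Column(String(255))   # set when attached directly to a host
    mountpoint = Column(String(255))
    attach_time = Column(String(255))     # TODO(vish): datetime
    detach_time = Column(String(255))     # TODO(vish): datetime
    attach_status = Column(String(255))   # e.g. 'attaching', 'attached', 'detached'
    attach_mode = Column(String(255))     # 'rw' or 'ro'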

API

POST v1/{tenant_id}/volumes/

Changes to volume creation, *if* it is decided that volumes should be explicitly marked as sharable via create/update.

Request
Response
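
If the explicit-sharable decision from the Blockers section is adopted, the create request body might gain a flag along the lines of the sketch below; the field name "shareable" is an assumption of this draft, not an agreed part of the API:

{
    "volume": {
        "size": 10,
        "display_name": "quorum-disk",
        "shareable": true
    }
}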

GET v1/{tenant_id}/volumes/detail

Changes to detailed volume list to return multiple attachments.

Request
Response
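
A sketch of how a volume in the detailed list might report more than one attachment under this proposal; the per-attachment keys (in particular attach_mode) and the volume-level status value are assumptions rather than a settled format:

{
    "volumes": [
        {
            "id": "521752a6-acf6-4b2d-bc7a-119f9148cd8c",
            "display_name": "quorum-disk",
            "status": "attached",
            "attachments": [
                {
                    "server_id": "a8b6a8df-04f4-4a33-a5a8-3f2c1d3b6a01",
                    "device": "/dev/vdb",
                    "attach_mode": "rw"
                },
                {
                    "server_id": "b9c7b9e0-15f5-4b44-b6b9-4f3d2e4c7b02",
                    "device": "/dev/vdb",
                    "attach_mode": "ro"
                }
            ]
        }
    ]
}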

GET v1/{tenant_id}/volumes/{volume_id}

Changes to volume show to return multiple attachments.

Request
Response
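
The show response would carry the same attachments list as the detailed list above. A minimal client-side sketch of iterating over that list with python-cinderclient; the credentials, the volume id, and the attach_mode key are all assumptions of the example:

from cinderclient.v1 import client

# Placeholder credentials and endpoint - assumptions of this sketch.
cinder = client.Client('demo', 'secret', 'demo',
                       'http://keystone.example.com:5000/v2.0')

volume = cinder.volumes.get('521752a6-acf6-4b2d-bc7a-119f9148cd8c')
for attachment in volume.attachments:
    # Each entry is a dict; 'attach_mode' is an assumed key per this proposal.
    print('%s %s' % (attachment['server_id'],
                     attachment.get('attach_mode', 'rw')))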

PUT v1/{tenant_id}/volumes/{volume_id}

Changes to volume update to handle multiple attachments.

Request
Response
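
If the explicit-sharable flag is adopted, the update body might allow toggling it as sketched below; both the field name and any rule about rejecting a change while multiple attachments exist are assumptions, not decided behaviour:

{
    "volume": {
        "shareable": false
    }
}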

CLI

Nova

API

CLI

Horizon