Cinder/blueprints/multi-attach-volume

[ DRAFT ]

Summary

Currently the Block Storage (Cinder) service only allows a volume to be attached to one instance at a time. The Compute service (Nova) also makes assumptions to this effect in a number of places, as do the APIs, CLIs, and UIs exposed to users. This specification is drafted in support of the shared-volume blueprint and aims to outline the changes required to allow users to share volumes between multiple guests using either read-write or read-only attachments.

Comments on a similar proposal, multi-attach-volume (https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume), were also taken into consideration. Further background is available in the etherpad from the relevant session at the Portland (Havana) Design Summit and these Cinder meeting logs:

This mailing list thread is also relevant:

User Story

Traditional cluster solutions rely on the use of clustered filesystems and quorum disks, writeable by one or more systems and often read by a larger number of systems, to maintain high availability. Users would like to be able to run such clustered applications on their OpenStack clouds. This requires the ability to have a volume attached to multiple compute instances, with some instances having read-only access and some having read-write access.

Design

Assumptions

Administrators and users are ultimately responsible for ensuring data integrity is maintained once a shared volume is attached to multiple instances in read-write mode; however, such attachments must only occur as the result of an explicit request.

Requirements

  • Users must be able to explicitly define a volume as "shareable" at creation time.
  • Users must be able to attach a "shareable" volume to multiple compute instances, specifying a mode (read-write, read-only) for each attachment. That is, some attachments to a volume may be read-only, while other attachments to the same volume may be read-write (see the sketch after this list).
  • While Cinder will track the mode of each attachment, restriction of write access must be handled by the hypervisor drivers in Nova.
  • Normal reservations should be required (and enforced) for volumes that are not shareable.
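
A minimal sketch of what these requirements might look like as API request bodies; the "shareable" flag at create time and the per-attachment "mode" value are assumptions of this draft rather than existing fields:

# Illustrative request bodies only; the "shareable" flag and the
# per-attachment "mode" value are assumptions of this proposal.

# POST v1/{tenant_id}/volumes - create a volume explicitly marked shareable.
create_body = {
    "volume": {
        "display_name": "volume002",
        "size": 1,
        "shareable": True,
    },
}

# Hypothetical attach requests: each attachment names a target instance and
# a mode; attachments to the same volume may mix "rw" and "ro".
attach_rw = {"instance_uuid": "<uuid-of-instance001>",
             "mountpoint": "/dev/vdc", "mode": "rw"}
attach_ro = {"instance_uuid": "<uuid-of-instance002>",
             "mountpoint": "/dev/vdc", "mode": "ro"}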

Cinder

Database

Proposed volume_attachment table (SQLAlchemy column definitions):

id = Column(String(36), primary_key=True)

user_id = Column(String(255))
project_id = Column(String(255))

volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
instance_uuid = Column(String(36))
attached_host = Column(String(255))
mountpoint = Column(String(255))
attach_time = Column(String(255))  # TODO(vish): datetime
detach_time = Column(String(255))  # TODO(vish): datetime
attach_status = Column(String(255))  # TODO(vish): enum
attach_mode = Column(String(255))

scheduled_at = Column(DateTime)
launched_at = Column(DateTime)
terminated_at = Column(DateTime)

bootable = Column(Boolean, default=False)
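
A hedged sketch, in sqlalchemy-migrate style, of a migration that could create the attachment-specific part of this table; the exact column set and table options are illustrative rather than the final migration:

# Sketch only: creates the attachment-specific columns proposed above using
# sqlalchemy-migrate conventions; not the actual Cinder migration.
from sqlalchemy import Column, ForeignKey, MetaData, String, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    # Reflect the existing volumes table so the foreign key can resolve.
    Table('volumes', meta, autoload=True)

    volume_attachment = Table(
        'volume_attachment', meta,
        Column('id', String(36), primary_key=True),
        Column('user_id', String(255)),
        Column('project_id', String(255)),
        Column('volume_id', String(36), ForeignKey('volumes.id'),
               nullable=False),
        Column('instance_uuid', String(36)),
        Column('attached_host', String(255)),
        Column('mountpoint', String(255)),
        Column('attach_time', String(255)),
        Column('detach_time', String(255)),
        Column('attach_status', String(255)),
        Column('attach_mode', String(255)),
        mysql_engine='InnoDB',
    )
    volume_attachment.create()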

API

  • POST v1/{tenant_id}/volumes/
    • Changes to volume creation, *if* it is decided that volumes should be explicitly marked as shareable via create/update.
  • GET v1/{tenant_id}/volumes/detail
    • Changes to detailed volume list to return multiple attachments (see the illustrative entry after this list).
  • GET v1/{tenant_id}/volumes/{volume_id}
    • Changes to volume show to return multiple attachments.
  • PUT v1/{tenant_id}/volumes/{volume_id}
    • Changes to volume update to handle multiple attachments.
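
An illustrative (not authoritative) shape for one volume entry in the detailed listing once multiple attachments are supported; the "shareable" flag and the per-attachment attach_mode key are assumptions of this draft:

# Sketch of one volume entry in GET v1/{tenant_id}/volumes/detail once a
# volume can carry several attachments; field names beyond the existing
# volume representation are assumptions.
volume_entry = {
    "id": "<uuid-of-volume002>",
    "display_name": "volume002",
    "size": 1,
    "status": "in-use",
    "shareable": True,
    "attachments": [
        {"server_id": "<uuid-of-instance001>", "device": "/dev/vdc",
         "attach_mode": "rw"},
        {"server_id": "<uuid-of-instance002>", "device": "/dev/vdc",
         "attach_mode": "ro"},
    ],
}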

CLI

Nova

API

CLI

Horizon

Assumptions

Horizon support is not crucial to "Phase I" implementation of this feature, but must be tracked as a potential future addition.

Requirements

  • Correctly reflect attachment of a volume to multiple compute instances in the Volumes screen and the attachment modes, e.g.:
admin	cinder.example.com	volume002	1GB	In-Use	-	Attached to instance001 on /dev/vdc (rw)
                                                                        Attached to instance002 on /dev/vdc (ro)
  • Reflect whether a volume is "shareable" or not.

Migration

All existing volumes must automatically be marked, or assumed to be marked, as non-shareable. User impact is therefore expected to be minimal except for users explicitly using this new feature.
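
A minimal sketch, in sqlalchemy-migrate style, of how existing volumes could be defaulted to non-shareable when the schema is upgraded; the "shareable" column name is an assumption of this draft:

# Sketch only: adds an assumed "shareable" flag to the volumes table and
# marks every existing volume as non-shareable; not the actual migration.
from sqlalchemy import Boolean, Column, MetaData, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    volumes = Table('volumes', meta, autoload=True)

    # create_column() is a sqlalchemy-migrate helper available when this
    # script runs under the migrate framework.
    volumes.create_column(Column('shareable', Boolean, default=False))

    # Existing rows predate the feature, so they are explicitly non-shareable.
    volumes.update().values(shareable=False).execute()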

Test/Demo Plan

This need not be added or completed until the specification is nearing beta.

Unresolved Issues

These issues or questions are outstanding; without resolution they will block implementation of this proposal:

  • A determination needs to be made regarding how to resolve the conflict between the overall volume status and the status of individual attachment(s).
    • Proposal (from 2013-09-25), sketched in code after this list:
      • Volume status = attached when one or more attachments exist.
      • Volume status = detached when no attachments exist.
      • Volume status = attaching on first attach.
      • Volume status = detaching on last detach.
    • What values are required for the attachment status?
  • If multi-attach is determined to be extension functionality, how should it be implemented as an extension of the core attachment functionality?
  • In the discussion on the shared-volume blueprint itself, it was suggested that volumes should have to be explicitly marked as shareable to allow multi-attachment, in addition to later discussion about failing the attach if no mode is specified. Is there consensus that a "shareable" marker is required? Currently this proposal assumes the answer is yes.
  • Did read-only ever get implemented in Nova?
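
A minimal sketch of the 2013-09-25 proposal above, deriving the volume-level status from its attachment records; the function and status names are illustrative only:

# Sketch only: derive the overall volume status from its attachments,
# following the 2013-09-25 proposal. Names are illustrative.
def derive_volume_status(attachments):
    if not attachments:
        return 'detached'                 # no attachments exist
    if len(attachments) == 1:
        if attachments[0]['attach_status'] == 'attaching':
            return 'attaching'            # first attach in progress
        if attachments[0]['attach_status'] == 'detaching':
            return 'detaching'            # last detach in progress
    return 'attached'                     # one or more attachments exist


# Example: one rw attachment plus one ro attachment leaves the volume attached.
print(derive_volume_status([
    {'attach_status': 'attached', 'attach_mode': 'rw'},
    {'attach_status': 'attached', 'attach_mode': 'ro'},
]))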