Obsolete:Cinder/FibreChannelSupport


 * Launchpad Entry: https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage
 * Created: 15 Nov 2012
 * Contributors: Gary Thunquest, Kurt Martin, Walter Boring

Summary
Currently, block storage volumes can be attached to hosts via iSCSI. This blueprint adds support for attaching block storage volumes to hosts via Fibre Channel SANs as well. Support for Fibre Channel (FC), including Fibre Channel over Ethernet (FCoE), is planned.

iSCSI and FC can be used simultaneously within the infrastructure, and adding FC must not break existing iSCSI cinder volume drivers.

The scope of changes intended by this blueprint includes:
 * Handling FC endpoint addressing (WWNs vs. IQNs + IP addresses), and allowing FC endpoints to be passed through the system
 * Reading of the host initiator WWNs
 * Defining a new FibreChannelDriver superclass to facilitate the creation of Fibre Channel based cinder volume drivers
 * Adding support for the nova host commands necessary to initialize a connection between a VM and an FC volume
 * Implementing for KVM only; support for other hypervisors will be handled in subsequent blueprints
 * Taking advantage of multipath where available. Typical Fibre Channel arrays export volumes via multiple ports, so multipath support is highly desirable for redundancy and fault tolerance. If multipath is installed and available, the new FibreChannel libvirt volume driver will use it.

The nova-side items above are covered by a separate nova blueprint: https://blueprints.launchpad.net/nova/+spec/libvirt-fibre-channel

FC support immediately raises the question of how FC SAN zoning will be performed. The scope of this blueprint does not include SAN zone management. A separate blueprint defining automated zone management is being submitted which will work together with this blueprint. This blueprint, however, does stand alone, in that FC SAN based deployments which require no zoning (open-zoned or pre-zoned SANs) can be fully supported with the changes defined in this blueprint.

Release Note
This feature supports Fibre Channel attached cinder volumes. It requires a Fibre Channel cinder driver talking to a backend Fibre Channel array and SAN.

Rationale
FC support is required by enterprise data centers with Fibre Channel storage investments wanting to deploy private clouds using OpenStack.

User stories
To use this feature:
 * The infrastructure must provide FC SAN connectivity between hosts and storage device(s)
 * The backend cinder volume provider must support FC storage attachment

User level volume operations (create/delete/snapshot/attach, etc.) will function the same for FC volumes as with iSCSI volumes.

It is intended that cinder volume drivers supporting FC storage will publish a driver FC “capability” that will work with the Grizzly “volume types” facility, allowing types to be defined that select FC vs. iSCSI if operators choose.
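As a rough illustration of that intent, a capability match between a volume type's extra specs and a driver's published capabilities could look like the following sketch. The 'storage_protocol' key and the backend dictionaries are assumptions for illustration, not the final capability names:

```python
# Hedged sketch: matching a volume type's extra specs against the
# capabilities a driver publishes. Key names are illustrative only.

def capabilities_match(extra_specs, capabilities):
    """Return True if every extra spec is satisfied by the capabilities."""
    return all(capabilities.get(key) == value
               for key, value in extra_specs.items())

# Hypothetical FC volume type and two hypothetical backends.
fc_type_specs = {'storage_protocol': 'FC'}
fc_backend = {'storage_protocol': 'FC', 'vendor_name': 'ExampleVendor'}
iscsi_backend = {'storage_protocol': 'iSCSI'}
```

A type defined this way would land only on FC-capable backends while leaving existing iSCSI types untouched.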

Assumptions
Security – iSCSI volumes use CHAP security to control access to volumes from hosts. FC doesn’t have an equivalent mechanism. With FC, access control is provided through SAN zoning and LUN masking on the arrays. However, both of these mechanisms rely on trusted initiator WWNs. This design assumes initiator WWNs logging into the SAN are trusted.

Design
Attached is a diagram detailing the areas of change anticipated by this blueprint: attachment:FibreChannelChanges.png

Code Changes
Cinder changes
 * A new Fibre Channel driver that extends the cinder.volume.driver.FibreChannelDriver
 * Driver note: if multipath is enabled, the new Fibre Channel volume driver detects each of the attached devices for the volume and properly removes every one of them on detach.
 * To use this, the cinder volume driver's initialize_connection simply returns a dictionary with a new driver_volume_type called 'fibre_channel'.
 * The target_wwn can be a single entry or a list of wwns that correspond to the list of remote wwn(s) that will export the volume.

return {'driver_volume_type': 'fibre_channel', 'data': {'target_lun': 1, 'target_wwn': '1234567890123'}}

- or -

return {'driver_volume_type': 'fibre_channel', 'data': {'target_lun': 1, 'target_wwn': ['1234567890123', '0987654321321']}}
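The two return forms above could be produced by a driver method along these lines. This is a minimal sketch: the WWN and LUN values are placeholders, and a real driver would look them up on the backend array after exporting the volume:

```python
# Sketch of an initialize_connection implementation for an FC driver.
# The target WWNs and LUN are placeholder values for illustration.

def initialize_connection(volume, connector):
    """Return Fibre Channel connection info for the given volume."""
    target_wwns = ['1234567890123', '0987654321321']  # placeholder WWNs
    return {
        'driver_volume_type': 'fibre_channel',
        'data': {
            'target_lun': 1,
            # A single string when the array exports one port,
            # a list when it exports the volume via several ports.
            'target_wwn': (target_wwns if len(target_wwns) > 1
                           else target_wwns[0]),
        },
    }
```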

Nova changes
 * Added a new class LibvirtFibreChannelVolumeDriver to nova/virt/libvirt/volume.py. This class implements connect_volume and disconnect_volume; it also has a private method that runs a SCSI rescan to make the kernel aware of the new storage.
 * Modified the get_volume_connector method in nova/virt/libvirt/driver.py to return any WWNs (both WWNN - node name, and WWPN - port name) from Fibre Channel HBAs that may be on the system.
 * Added a new method get_fc_wwns to the nova/virt/libvirt/utils.py that is called by get_volume_connector above. This method returns a list of WWNs for any Fibre Channel HBAs that are on the system by calling the new method get_fc_hbas.
 * The sg3-utils package (required for SCSI device discovery) and systool (required for returning HBA info - WWNs) commands were added to the list of commands in the nova/rootwrap.d/compute.filters file.
 * The multipath package is needed for multipath support and was added to the list of commands in nova/rootwrap.d/compute.filters file.
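As an illustration of what get_fc_wwns ultimately reports, port WWNs can also be read directly from the Linux fc_host sysfs tree. The helper name and base_path parameter below are illustrative assumptions, not the actual nova code (which gathers this information via systool):

```python
# Hedged sketch: reading FC HBA port WWNs (WWPNs) from sysfs.
# The /sys/class/fc_host/host*/port_name layout is standard Linux;
# the function name and base_path parameter are illustrative only.
import glob
import os


def get_fc_wwpns(base_path='/sys/class/fc_host'):
    """Return the port WWNs of any Fibre Channel HBAs on the host."""
    wwpns = []
    for name_file in sorted(
            glob.glob(os.path.join(base_path, 'host*', 'port_name'))):
        with open(name_file) as f:
            # sysfs reports a value like '0x10000000c9123456\n'
            value = f.read().strip()
        if value.startswith('0x'):
            value = value[2:]
        wwpns.append(value)
    return wwpns
```

On a host with no FC HBAs the directory is empty and the helper returns an empty list, which get_volume_connector can pass through unchanged.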

Migration
NA