Mellanox-Cinder

= Grizzly Release =
 
== Installation ==
 
=== Getting the code ===
 
wget https://github.com/mellanox-openstack/cinder-iser/archive/stable/grizzly.zip
unzip grizzly.zip
cd cinder-iser-stable-grizzly
 
 
=== Applying the patches ===
 
You have two options:

# Replace the files under "cinder/cinder/" and "nova/nova/" in your installed Cinder and Nova trees, respectively (a copy-based sketch is shown below).
# Apply the patches under "cinder/" and "nova/"; if you choose this option, do not forget to also copy "cinder/cinder/volume/iser.py".
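
As a minimal sketch of option 1, assuming OpenStack is installed from packages under /usr/lib/python2.6/site-packages (this path is an assumption, not taken from the archive; adjust it to your environment), the copy could look like this when run from inside the extracted cinder-iser-stable-grizzly directory:

# cp -r cinder/cinder/* /usr/lib/python2.6/site-packages/cinder/
# cp -r nova/nova/* /usr/lib/python2.6/site-packages/nova/

The cinder files belong on the Cinder volume node and the nova files on the nova-compute nodes; restart the services afterwards as described below.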
 
 
== Configuration ==
 
To enable iSER, adjust the following values in '''/etc/cinder/cinder.conf''':
 
 
iser_ip_address = <ipoib/roce_address>
 
volume_driver = cinder.volume.drivers.lvm.LVMISERDriver
 
 
'''iser_ip_address''' is required so that discovery can be performed over the IB/RoCE interface from the initiator side.
 
 
'''volume_driver''' points Cinder to the LVMISERDriver instead of the LVMISCSIDriver.
 
 
 
On the nova-compute side, change the following in '''/etc/nova/nova.conf''':
 
 
libvirt_volume_drivers = iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver
 
 
Restart the Cinder and Nova compute services:
 
 
# /etc/init.d/openstack-cinder-volume restart
# /etc/init.d/openstack-nova-compute restart
 
  
 

= Overview =

iSER (iSCSI Extensions for RDMA) support brings a Mellanox RDMA transport to OpenStack Cinder. It can provide roughly 5x higher bandwidth than iSCSI over TCP.

For example, with a RAM-device LUN, around 1.3 GB/s (iSCSI over TCP) versus around 5.5 GB/s (iSER) was measured, with much lower CPU overhead.

= Havana Release =

== Installation ==

The Mellanox Cinder support is embedded in the standard OpenStack Cinder package; no patches or plugins are needed. Just follow the official OpenStack Cinder installation.
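
As an optional sanity check (this assumes Cinder is installed for the node's default Python 2 interpreter), you can confirm that the installed Cinder tree contains the iSER driver referenced in the configuration below:

# python -c "from cinder.volume.drivers.lvm import LVMISERDriver; print 'LVMISERDriver found'"

If the import fails with an ImportError, the installed Cinder version does not include the iSER driver.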

== Configuration ==

To enable iSER, adjust the following values in /etc/cinder/cinder.conf:

iser_ip_address = <ipoib/roce_address>
volume_driver = cinder.volume.drivers.lvm.LVMISERDriver

iser_ip_address is required so that discovery can be performed over the IB/RoCE interface from the initiator side.

volume_driver points Cinder to the LVMISERDriver instead of the LVMISCSIDriver.


On the nova-compute side, change the following in /etc/nova/nova.conf:

libvirt_volume_drivers = iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver
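
For illustration, the relevant excerpts of the two files might look as follows after these changes; the IP address below is only a placeholder for the IPoIB/RoCE address of your storage node:

/etc/cinder/cinder.conf:

[DEFAULT]
# IPoIB or RoCE address of this Cinder volume node (placeholder value)
iser_ip_address = 192.168.10.17
volume_driver = cinder.volume.drivers.lvm.LVMISERDriver

/etc/nova/nova.conf:

[DEFAULT]
libvirt_volume_drivers = iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver

Note that libvirt_volume_drivers is a list option; if your deployment also attaches volumes over other transports (for example plain iSCSI), you may need to keep the other default entries of that list as well.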

Restart the Cinder and Nova compute services:

# /etc/init.d/openstack-cinder-volume restart
# /etc/init.d/openstack-nova-compute restart
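
A simple way to verify that volume attachments really use the iSER transport (rather than falling back to plain TCP iSCSI): after attaching a volume to an instance, list the open sessions on the compute node. Sessions established over iSER are reported with an "iser:" transport prefix instead of "tcp:".

# iscsiadm -m session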

For troubleshooting, refer to the [https://wiki.openstack.org/wiki/Mellanox-Cinder#Known_Issues Known Issues] section.

= Known Issues =

(1) “scsi-target-utils” package

Note: on RHEL 6.4 or below, the “scsi-target-utils” package must be version 1.0.38 or later; if you use an RDO installation, check the installed version. The RPM can be found at http://www.mellanox.com/downloads/solutions/rpms/

# wget http://www.mellanox.com/downloads/solutions/rpms/scsi-target-utils-1.0.39-v1.0.39.c1135a.x86_64.rpm
# rpm -Uvh scsi-target-utils-1.0.39-v1.0.39.c1135a.x86_64.rpm

In addition, RHEL 6.5 ships “scsi-target-utils” version 1.0.24-10, which is sufficient; no further action is needed on RHEL 6.5.
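
To check which version is currently installed (and that the target daemon is running) before deciding whether to install the RPM above:

# rpm -q scsi-target-utils
# service tgtd status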

(2) Boot Instance from volume fails when iSER is enabled

Steps to reproduce the bug:

- Create a new volume with an image as its source.

- Launch a new instance that boots from the above volume, using the iSER transport.

The bug is defined here: [1]

A patch can be found here: [2]

To apply the patch, run the following commands on all nova-compute nodes:

# wget http://www.mellanox.com/downloads/solutions/openstack/havana/patches/volume.patch 
# patch -p1 < volume.patch
# service nova-compute restart 

(3) Flow control

If the network is Ethernet, make sure that flow control is enabled on the relevant switch ports across the network.
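
Switch-side configuration is vendor specific, but on the host side you can check, and if needed enable, pause frames on the storage-facing interface; "eth2" below is only an example interface name:

# ethtool -a eth2
# ethtool -A eth2 rx on tx on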

= References =

1. OpenStack solution page at Mellanox site

2. Source repository

3. Mellanox OFED web page

4. Cinder-controller

5. Cinder-node

Return to Mellanox-OpenStack wiki page.

For more details, please direct any inquiries to openstack@mellanox.com.