Cinder-zvm-plugin
Overview
This plugin provides Cinder (block storage) support for z/VM.
Generally, the OpenStack z/VM virt driver calls the xCAT REST API to operate the z/VM hypervisors. xCAT has a control point (a virtual machine) in each z/VM system, which enables the xCAT management node to control that z/VM system. Every zHCP in xCAT has a SCSI pool file containing pre-defined SCSI disks, each described by an FCP device, WWPN, and LUN; this information is managed by the Cinder z/VM driver.
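To make the xCAT interaction concrete, here is a minimal sketch of calling the xCAT REST API from Python. The host, credentials, and the "nodes" resource path are assumptions for illustration only; consult the xCAT REST API documentation for the real resource paths used by the driver.

# A minimal sketch (not the driver's actual code) of calling the xCAT REST API.
import requests

XCAT_MN = "https://9.60.27.176"        # xCAT management node (example IP from this page)
XCAT_USER = "root"                     # assumed credentials
XCAT_PASSWORD = "root"

def xcat_get(resource):
    """Issue a GET request against an xCAT REST resource and return parsed JSON."""
    url = "%s/xcatws/%s" % (XCAT_MN, resource)
    resp = requests.get(
        url,
        params={"userName": XCAT_USER, "password": XCAT_PASSWORD, "format": "json"},
        verify=False,          # the xCAT MN often uses a self-signed certificate
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # For example, list the nodes known to the xCAT management node.
    print(xcat_get("nodes"))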
Code Links
(You can find the latest code review links there.)
- Cinder z/VM plugin
https://blueprints.launchpad.net/cinder/+spec/zvm-cinder
- Nova z/VM plugin
https://blueprints.launchpad.net/nova/+spec/zvm-plugin
- Quantum z/VM plugin
https://blueprints.launchpad.net/quantum/+spec/quantum-zvm-plugin
z/VM and z/VM Storage
z/VM is a hypervisor based on the 64-bit z/Architecture, now with multi-system virtualization and virtual server mobility. z/VM itself only supports ECKD disk drivers; for SCSI disks, it dedicates the FCP communication to the Linux guests running on top of it. xCAT, which works as the core management component, stores the FCP, WWPN, and LUN information in a local repository and manages it from there.
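The exact on-disk format of the zHCP SCSI pool file is not reproduced here; the short sketch below only illustrates the kind of record (status, FCP device, WWPN, LUN, size) the Cinder driver has to track. The field names and values are assumptions for illustration, not the real pool file layout.

# Illustration only: an assumed record layout for one entry of a zHCP SCSI pool.
from collections import namedtuple

ScsiPoolEntry = namedtuple(
    "ScsiPoolEntry",
    ["status",      # e.g. "free" or "used"
     "fcp",         # FCP device number the disk is reachable through
     "wwpn",        # world-wide port name of the storage controller port
     "lun",         # logical unit number of the SCSI disk
     "size"])       # disk size, e.g. "10g"

# Example entry the Cinder z/VM driver might manage when carving out a volume.
entry = ScsiPoolEntry(status="free", fcp="b102", wwpn="500507630513c1ae",
                      lun="4020400f00000000", size="10g")
print(entry)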
- zfcp device driver introduction
The zfcp device driver supports SCSI-over-Fibre Channel host bus adapters (HBAs) for Linux on mainframes. It is the backend for a driver and software stack that includes other parts of the Linux SCSI stack as well as block request and multipathing functions, file systems, and SCSI applications. The following figure shows how the zfcp device driver fits into Linux and the SCSI stack.
- zfcp configuration
This book (http://public.dhe.ibm.com/software/dw/linux390/docu/lk38ts07a.pdf) gives a detailed introduction to how SCSI is used by z/VM; chapter 3 covers the following steps (see the sketch after this list):
* Configure the IODF
* Define storage zones
* LUN masking
* Attach an FCP device under z/VM
* Configure the zfcp device driver
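To make the last two steps concrete, here is a minimal sketch of configuring an FCP device and LUN from inside the Linux guest through sysfs. It assumes a kernel where LUNs are added via the unit_add attribute; the device number, WWPN, and LUN are example values only, so adapt them to your setup (the book above is the authoritative reference).

# A minimal sketch, run as root on the Linux guest: bring an FCP subchannel
# online and add a LUN through sysfs. All identifiers below are example values.
FCP_DEVICE = "0.0.b102"              # example FCP device (subchannel) number
WWPN = "0x500507630513c1ae"          # example target port WWPN
LUN = "0x4020400f00000000"           # example LUN

def sysfs_write(path, value):
    """Write a value to a sysfs attribute, mirroring 'echo value > path'."""
    with open(path, "w") as attr:
        attr.write(value)

# 1. Set the FCP device online (equivalent to 'chccwdev -e 0.0.b102').
sysfs_write("/sys/bus/ccw/devices/%s/online" % FCP_DEVICE, "1")

# 2. Add the LUN behind the given WWPN so the SCSI disk appears as /dev/sdX
#    (assumes the kernel does not scan LUNs automatically).
sysfs_write("/sys/bus/ccw/drivers/zfcp/%s/%s/unit_add" % (FCP_DEVICE, WWPN), LUN)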
Architecture
The Cinder z/VM plugin/agent communicates with the xCAT REST API to control and configure z/VM. This picture (http://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_zVM#Design_Architecture) shows the architecture of xCAT and z/VM. xCAT can be used to manage virtual servers spanning multiple z/VM partitions. The xCAT management node (MN) runs on any Linux virtual server. It manages each z/VM partition using a System z hardware control point (zHCP) running on a privileged Linux virtual server. The zHCP interfaces with the z/VM systems management API (SMAPI), the directory manager (DirMaint), and the control program layer (CP) to manage the z/VM partition. It uses a C socket interface to communicate with the SMAPI layer and the vmcp Linux module to communicate with the CP layer.
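As a small illustration of the CP-layer path mentioned above, the sketch below issues a CP command from a Linux guest through the vmcp utility (part of s390-tools). This is only an assumed example of that interface, not zHCP's own implementation, which also talks to SMAPI over sockets.

# Issue a CP command from user space on a Linux on z/VM guest (vmcp module loaded).
import subprocess

def cp_command(cmd):
    """Send a CP command through the vmcp utility and return its output."""
    return subprocess.check_output(["vmcp", cmd])

if __name__ == "__main__":
    # For example, ask CP which z/VM user ID this guest is running as.
    print(cp_command("QUERY USERID"))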
Prerequisites
- The source pool should be created by users inside xCAT and added to the xCAT zHCP
- Every zHCP works as a back-end, and volumes cannot be shared between different zHCPs
- Volume types should be created according to the multi-backend feature (see the sketch after this list)
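As an illustration of the last prerequisite, the sketch below creates a volume type and ties it to the back-end name used in the configuration sample later on this page, using python-cinderclient; the auth endpoint, credentials, and client version are assumptions for illustration.

# A hedged sketch: create a volume type whose volume_backend_name extra spec
# matches the SCSI_zvmssio1 back-end configured in cinder.conf.
from cinderclient.v1 import client

cinder = client.Client("admin", "password", "admin",
                       "http://controller:5000/v2.0")   # assumed auth endpoint

vtype = cinder.volume_types.create("zvm_scsi")
vtype.set_keys({"volume_backend_name": "SCSI_zvmssio1"})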
Code Structure
1. Cinder Plugin package structure:
drivers/zvm/constants.py: constant definitions
drivers/zvm/exception.py: exception definitions
drivers/zvm/imageop.py: handles image operations such as copy volume to image and copy image to volume
drivers/zvm/utils.py: utility code such as path handling and xCAT operations
drivers/zvm/volumedriver.py: volume driver interface
drivers/zvm/volumeop.py: core volume operations
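The sketch below shows the kind of interface drivers/zvm/volumedriver.py exposes: a Cinder driver that delegates the real work to the volumeop/utils modules. The method names follow the standard Cinder driver interface; the _volumeop helper and its methods are assumptions used only to illustrate the structure, not the plugin's actual code.

# A hedged skeleton of the z/VM Cinder volume driver interface.
from cinder.volume import driver


class ZVMVolumeDriver(driver.VolumeDriver):
    """Cinder volume driver for z/VM SCSI pools managed through xCAT."""

    def __init__(self, *args, **kwargs):
        super(ZVMVolumeDriver, self).__init__(*args, **kwargs)
        self._volumeop = None   # would be drivers/zvm/volumeop.py in the plugin

    def create_volume(self, volume):
        """Reserve a SCSI disk (FCP/WWPN/LUN) from the zHCP pool for this volume."""
        return self._volumeop.create_volume(volume)

    def delete_volume(self, volume):
        """Return the SCSI disk backing this volume to the zHCP pool."""
        self._volumeop.delete_volume(volume)

    def initialize_connection(self, volume, connector):
        """Hand the FCP/WWPN/LUN triple to Nova so it can attach the disk."""
        return self._volumeop.get_connection_info(volume, connector)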
2. Nova package structure is:
nova/nova/virt/zvm/volumeop.py: volume operations such as attach, detach
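For context, the sketch below shows the role of nova/virt/zvm/volumeop.py: attach or detach a SCSI volume to a z/VM instance by driving xCAT. The connection_info keys and the _xcat_request helper are assumptions used only to illustrate the flow, not the plugin's real code.

# A hedged skeleton of volume attach/detach on the Nova side.
class VolumeOperator(object):
    """Attach/detach persistent volumes for z/VM instances via xCAT."""

    def attach_volume_to_instance(self, instance, connection_info):
        # connection_info carries the FCP device, WWPN and LUN chosen by Cinder
        # (key names here are assumed for illustration).
        fcp = connection_info["data"]["zvm_fcp"]
        wwpn = connection_info["data"]["target_wwpn"]
        lun = connection_info["data"]["target_lun"]
        # Ask xCAT to dedicate the FCP device to the guest and bring the disk online.
        self._xcat_request("attach", instance["name"], fcp, wwpn, lun)

    def detach_volume_from_instance(self, instance, connection_info):
        fcp = connection_info["data"]["zvm_fcp"]
        self._xcat_request("detach", instance["name"], fcp)

    def _xcat_request(self, action, node, *params):
        raise NotImplementedError("illustrative placeholder only")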
z/VM-specific Configuration Samples
enabled_backends=zvmssio1    # For each z/VM node, add one back-end for it

[zvmssio1]
volume_driver=cinder.volume.drivers.zvm.volumedriver.ZVMVolumeDriver
volume_backend_name=SCSI_zvmssio1
zvm_xcat_server = 9.60.27.176    # The xCAT MN node IP
zvm_xcat_username = root
zvm_xcat_password = root
zvm_scsi_pool = scsipol1    # The SCSI pool’s name