The blueprint was implemented and merged in icehouse-3, including the second phase ("auto-discovery of PBM WSDL files"). As a result, the vmware_pbm_wsdl config option is deprecated and no longer used.


  1. The Cinder VMDK driver, when configured with a vCenter server (VC), should allow specifying storage policies for a volume. When configured with ESX, the VMDK driver ignores storage policies.
  2. Placement of a new vmdk volume, as well as relocation of an existing vmdk, should honour the associated storage profile.
    Note: There is no way for an OpenStack user to verify that a vmdk complies with its associated storage policy. The vmdk could drift out of compliance sometime after placement due to various factors in the environment, but compliance reporting is not exposed to OpenStack users or admins.
  3. If no datastore matches the given storage profile, placement fails, and hence the operation (for example, a volume attach) can fail.
    Note that during a volume attach the VMDK driver can only move the vmdk file to a datastore that is visible to the VM instance. This fails if none of the datastores visible to the instance satisfy the given storage profile. Other datastores in the VC could potentially match the profile but not be visible to the instance; no attempt is made to migrate the VM instance in this case so that the volume attachment could succeed.
  4. Cloning from a volume with an associated storage profile should carry the storage profile forward to the new target volume.
  5. Cloning from a volume snapshot that has an associated storage profile should likewise carry the storage profile forward to the new target volume.
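The placement rules above (items 2 and 3) amount to a filtering step. A minimal Python sketch of that logic follows; the datastore records and the get_matching_datastores helper are hypothetical stand-ins for the real PBM API calls, not the driver's actual code:

```python
# Sketch of the datastore-filtering rule described above. The dict-based
# datastore records here are illustrative only; the real driver asks the
# PBM service which datastores satisfy a storage profile.

def get_matching_datastores(datastores, profile):
    """Return the datastores that satisfy the given storage profile."""
    return [ds for ds in datastores if profile in ds["profiles"]]

def pick_datastore_for_attach(datastores, visible_to_vm, profile):
    """Pick a datastore for the volume's vmdk during attach.

    Only datastores visible to the VM instance are candidates. If none
    of them matches the profile, placement (and hence the attach) fails,
    even if other datastores in the VC would match.
    """
    candidates = [ds for ds in get_matching_datastores(datastores, profile)
                  if ds["name"] in visible_to_vm]
    if not candidates:
        raise RuntimeError("no visible datastore matches profile %r" % profile)
    return candidates[0]

datastores = [
    {"name": "ds1", "profiles": {"gold"}},
    {"name": "ds2", "profiles": {"silver"}},
    {"name": "ds3", "profiles": {"gold", "silver"}},
]

# ds1 and ds3 match "gold", but only ds3 is visible to the VM.
chosen = pick_datastore_for_attach(datastores, {"ds2", "ds3"}, "gold")
```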

Admin workflow to set up the volume type

The storage admin first creates storage profiles in VC based on vendor-provided storage capabilities and/or tag-based capabilities of the underlying storage infrastructure. Refer to http://pubs.vmware.com/vsphere-55/topic/com.vmware.vsphere.storage.doc/GUID-A8BA9141-31F1-4555-A554-4B5B04D75E54.html to learn more about storage profiles in VC.

Let us assume that the storage admin has partitioned the available storage into three categories of storage profiles named gold, silver, and bronze. The admin now creates three corresponding volume types in OpenStack, each referencing one of these storage profiles. This is done as follows.

$ cinder type-create gold
$ cinder type-list
|                  ID                  | Name |
| 02e1ed45-4189-4649-b52c-2ec45e2804c1 | gold |
$ cinder type-key 02e1ed45-4189-4649-b52c-2ec45e2804c1 set vmware:storage_profile=gold
$ cinder extra-specs-list
|                  ID                  | Name |             extra_specs             |
| 02e1ed45-4189-4649-b52c-2ec45e2804c1 | gold | {u'vmware:storage_profile': u'gold'} |

Similarly, the admin sets up volume types for "silver" and "bronze". A user now has three volume types to choose from when creating a new Cinder volume.

User workflow to create and attach a volume associated with a storage profile

A user can list all the available volume types and choose one of them when creating a new volume.

# list volume types and create a volume based on a volume type
$ cinder type-list
|                  ID                  |  Name  |
| 02e1ed45-4189-4649-b52c-2ec45e2804c1 |  gold  |
| 3d5e187d-7f3b-4a88-8e4a-c55f6a9af54d | silver |
| 7fcd4153-faf5-4f7d-a688-49b2c9119de4 | bronze |
$ cinder create --name vol1 --volume-type gold 1
|      Property     |                Value                 |
|    attachments    |                  []                  |
| availability_zone |                 nova                 |
|      bootable     |                false                 |
|     created_at    |      2013-12-02T04:48:54.000000      |
|    description    |                 None                 |
|         id        | 50855c59-ef81-47dc-9983-1edf95ab9804 |
|      metadata     |                  {}                  |
|        name       |                 vol1                 |
|        size       |                  1                   |
|    snapshot_id    |                 None                 |
|    source_volid   |                 None                 |
|       status      |               creating               |
|      user_id      |   2c5480d03a6849a29db9ec57e8927ac7   |
|    volume_type    |                 gold                 |
$ cinder list
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
| 50855c59-ef81-47dc-9983-1edf95ab9804 | available | vol1 |  1   |     gold    |  false   |             |
# attach the volume to a running nova VM
$ nova list
| ID                                   | Name     | Status | Task State | Power State | Networks         |
| 6dd8e9b0-4939-4295-8ef3-2a92ca1b4f55 | debianVM | ACTIVE | None       | Running     | private= |
$ nova volume-attach debianVM 50855c59-ef81-47dc-9983-1edf95ab9804 /dev/sdb
| Property | Value                                |
| device   | /dev/sdb                             |
| id       | 50855c59-ef81-47dc-9983-1edf95ab9804 |
| serverId | 6dd8e9b0-4939-4295-8ef3-2a92ca1b4f55 |
| volumeId | 50855c59-ef81-47dc-9983-1edf95ab9804 |

The VMDK driver will create the volume's vmdk file on a datastore that matches the given profile.
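The profile name itself comes from the volume type's extra specs under the "vmware:storage_profile" key. A minimal sketch of that lookup, using plain dicts that mirror the cinder extra-specs-list output above (not the driver's actual data structures):

```python
# Extract the storage profile name from a volume type's extra specs.
# The dict shapes here are illustrative; they mirror the
# "cinder extra-specs-list" output shown earlier in this page.

PROFILE_KEY = "vmware:storage_profile"

def get_storage_profile(volume_type):
    """Return the storage profile name from a volume type, if any.

    Returns None when no volume type is given or the type carries no
    vmware:storage_profile extra spec.
    """
    if not volume_type:
        return None
    return volume_type.get("extra_specs", {}).get(PROFILE_KEY)

gold_type = {"name": "gold",
             "extra_specs": {"vmware:storage_profile": "gold"}}
plain_type = {"name": "plain", "extra_specs": {}}
```

A volume created with the gold type would thus be placed against the "gold" profile, while a volume with no type (or a type without the extra spec) gets no profile.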


  • Cinder's existing volume type definition is used to configure and hold the names of storage profiles under the namespace "vmware:storage_profile".
  • A new PBM client (similar to the existing VIM client) is needed to make API calls to the PBM service. The existing VMwareAPISession will be expanded to route API calls to the PBM service through the PBM client object.
  • In the first phase, the PBM WSDL file must be manually downloaded and configured in cinder.conf. The second phase of the implementation packages the necessary PBM WSDL files. See the #Configuration section below for details.
  • Datastore filtering, both when placing a new backing VM and when relocating an existing backing VM, will use PBM APIs to find matching datastores.
  • Additional unit tests will cover the new code; these will be written using Mock. Changes to existing unit tests will continue to use mox.
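As an illustration of the Mock-based testing style mentioned above, here is a self-contained sketch; the create_volume_backing function and the session.pbm.filter_hubs call are hypothetical names, not the driver's real API:

```python
# Sketch of testing placement logic in isolation with unittest.mock.
# create_volume_backing and session.pbm.filter_hubs are hypothetical
# names used only for this illustration.
from unittest import mock

def create_volume_backing(session, profile):
    """Place a volume backing on a datastore matching the profile."""
    datastores = session.pbm.filter_hubs(profile)  # hypothetical PBM call
    if not datastores:
        raise RuntimeError("no datastore matches profile %r" % profile)
    return datastores[0]

# Mock out the session so no real PBM service is needed.
session = mock.Mock()
session.pbm.filter_hubs.return_value = ["ds-gold"]

assert create_volume_backing(session, "gold") == "ds-gold"
session.pbm.filter_hubs.assert_called_once_with("gold")
```

This mirrors how new unit tests can stub the PBM service boundary and assert on both the return value and the calls made.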


  • The admin user should configure the name of the storage profile on a volume type under the key vmware:storage_profile. See the Admin workflow section above for the commands to do this.
  • PBM WSDL file configuration
    • If using code from the first phase of delivery, the path to the PBM WSDL file must be configured on the cinder-volume node in /etc/cinder/cinder.conf under the vmware_pbm_wsdl config key. For example, if the vSphere Management SDK has been downloaded and extracted to /opt on a Linux system, the entry looks like this:
vmware_pbm_wsdl = "file:///opt/SDK/spbm/wsdl/pbmService.wsdl"

On a Windows system, the path could look like this if the vSphere Management SDK has been downloaded and extracted into the Downloads folder:

vmware_pbm_wsdl = "file:///c:/Users/Administrator/Downloads/SDK/spbm/wsdl/pbmService.wsdl"
  • Auto discovery of PBM WSDL
    • The second phase of delivery eases this manual step by packaging the PBM WSDL files for vSphere version 5.5. If OpenStack Cinder is configured against vSphere 5.5 or later, the admin has no manual steps: the packaged PBM WSDL files are auto-discovered and used to invoke the PBM APIs. If Cinder is configured against an older vSphere version, PBM-based placement is disabled.
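The version gate behind auto-discovery can be sketched as a simple comparison; this is a simplified illustration, not the driver's actual version-detection code:

```python
# Sketch of the "vSphere 5.5 or later" gate for PBM support. The real
# driver inspects the version reported by the VC; this simplified check
# compares only major.minor.

def pbm_supported(vc_version):
    """Return True when the given VC version is 5.5 or later."""
    parts = [int(p) for p in vc_version.split(".")[:2]]
    return parts >= [5, 5]
```

With this gate, a deployment against vSphere 5.1 would silently fall back to profile-less placement, while 5.5 and 6.0 enable PBM.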

Default PBM Policy

  • When a volume is created either
    • without specifying a volume_type or
    • specifying a volume_type that does not have a "vmware:storage_profile" extra spec in it

then the volume is created without any storage policy attached to it. The OpenStack admin can optionally configure a default storage policy to be used in these cases by setting the pbm_default_policy config key to a storage policy that is defined in vCenter. For example, an entry in /etc/cinder/cinder.conf could look like this:
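Assuming the admin wants the "gold" policy as the default (the policy name here is illustrative; it must be a policy that actually exists in vCenter):

```
pbm_default_policy = gold
```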