


The VMwareVCDriver manages a single cluster. The new driver derived from it will allow users to manage multiple clusters using a single service.

This is being done because the current VMware VC driver implementation for OpenStack uses one proxy server running the nova-compute service to manage a single cluster. To avoid creating a separate VM running nova-compute for each cluster, the multi-node support in nova is used to manage multiple clusters with a single compute service. This gives the administrator a choice as to how many services are needed to manage the clusters in vCenter.


The following changes are made to accomplish this.

a. The nova.conf option vmwareapi_cluster_name is now a MultiStrOpt, so it can accept multiple clusters and resource pools. Each can be specified as a full path starting from the datacenter, with / as the delimiter. The intent of the MultiStrOpt is that the user can specify the cluster or resource pool names one per line:

vmwareapi_cluster_name=clusterA
vmwareapi_cluster_name=clusterB
vmwareapi_cluster_name=ResPoolB
vmwareapi_cluster_name=ResPoolC

- OR -

vmwareapi_cluster_name=clusterA, clusterB
vmwareapi_cluster_name=ResPoolB, ResPoolC

- OR -

vmwareapi_cluster_name=clusterA, clusterB, ResPoolB, ResPoolC

The other change is that a cluster or resource pool name can be the full path starting from the root, for example folder1/datacenter2/ClusterA. If there are many long paths, it is convenient to place them on multiple lines; if they are short, they can be given as comma-separated values on one line.
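Both forms above yield the same flat list of names. A minimal sketch of how the repeated, possibly comma-separated MultiStrOpt values could be flattened (the function name is hypothetical, not the actual driver code):

```python
def flatten_cluster_names(raw_values):
    """Flatten the list a MultiStrOpt yields (one entry per occurrence
    of vmwareapi_cluster_name in nova.conf) into one list of
    cluster/resource-pool paths."""
    names = []
    for value in raw_values:
        # each entry may itself hold comma-separated names
        names.extend(part.strip() for part in value.split(',') if part.strip())
    return names
```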

b. For each cluster/RP, vmops and volumeops objects are initialized with the cluster MOR and stored in a dict.
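A sketch of this per-cluster initialization, with a stub class standing in for the real VMwareVMOps/VMwareVolumeOps (the helper and stub names are hypothetical):

```python
class VMOpsStub:
    """Stand-in for VMwareVMOps / VMwareVolumeOps, which in the real
    driver wrap a vCenter session scoped to one cluster MOR."""
    def __init__(self, session, cluster_mor):
        self.session = session
        self.cluster_mor = cluster_mor


def build_per_cluster_ops(session, cluster_mors):
    # one (vmops, volumeops) pair per managed cluster/RP,
    # keyed by the cluster's managed object reference value
    return {mor: (VMOpsStub(session, mor), VMOpsStub(session, mor))
            for mor in cluster_mors}
```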

c. The get_available_nodes method returns all the clusters/RPs being managed. Each name is of the form moid(name), similar to what is shown in the managed object browser. This makes each cluster/RP available as a compute node.

Implementing this method enables representing multiple clusters/resource pools as individual compute nodes, by exposing each of them as an available node via get_available_nodes. The names are stored in the hypervisor_hostname column of the compute_nodes table. When the command

nova hypervisor-list

is executed, these values are displayed.
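A minimal sketch of producing node names in the moid(name) form described above (function names are hypothetical):

```python
def format_node_name(moid, name):
    # e.g. a cluster with managed object id "domain-c7" named "clusterA"
    # becomes "domain-c7(clusterA)", matching the managed object browser view
    return "%s(%s)" % (moid, name)


def get_available_nodes(clusters):
    """clusters: iterable of (moid, name) pairs, one per managed
    cluster/resource pool."""
    return [format_node_name(moid, name) for moid, name in clusters]
```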

Whenever the scheduler selects a node for provisioning, the hypervisor_hostname attribute is also passed to the driver's spawn method. We use this information to retrieve the managed object id and spawn the instance on the right entity (cluster/RP). We extract the managed object id because it is a more reliable field than the name; the name is included so that the user/developer can easily associate it with the vCenter view. This design is based on the multi-node support of nova drivers.

d. The stats are reported for each node from get_host_stats.
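A sketch of per-node stats reporting under this design; the signature and the stats_by_node mapping are assumptions for illustration, not the real driver interface:

```python
def get_host_stats(stats_by_node, refresh=False, nodename=None):
    """Return the stats dict for one node when nodename is given,
    or the stats for all managed nodes otherwise.

    stats_by_node maps node name -> stats dict, e.g.
    {"domain-c7(clusterA)": {"vcpus": 16, ...}}.
    """
    if nodename is not None:
        return stats_by_node[nodename]
    return list(stats_by_node.values())
```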

e. Upon an instance creation request, instance['node'] contains the moid(name) value. From this we extract the moid, retrieve the vmops object from the dict, and then call its spawn method.
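This dispatch step can be sketched as follows; the helper names and the recorder class are hypothetical stand-ins for the real vmops objects:

```python
import re


def extract_moid(node_name):
    # "domain-c7(clusterA)" -> "domain-c7"; the moid is used because
    # it is more stable than the display name
    match = re.match(r'^(.+?)\((.+)\)$', node_name)
    if match is None:
        raise ValueError("unexpected node name: %s" % node_name)
    return match.group(1)


class SpawnRecorder:
    """Stand-in for a per-cluster vmops object; records spawns."""
    def __init__(self):
        self.spawned = []

    def spawn(self, instance):
        self.spawned.append(instance['uuid'])


def dispatch_spawn(instance, vmops_by_moid):
    # look up the per-cluster vmops by moid and delegate to it
    moid = extract_moid(instance['node'])
    vmops_by_moid[moid].spawn(instance)
```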