Manila/SAP enterprise team
Revision as of 12:25, 29 August 2016

Mission

The SAP Manila enterprise team addresses topics that make Manila enterprise-ready. The listed topics can be bugs, features, or even long-running tasks.

Open Topics:

{| class="wikitable"
|-
! No !! Issue !! Description !! Priority !! Assignee !! Reference !! Release !! Status
|-
|1|| Snapshot: make it possible for users to specify "full copy clones" or "copy-on-write clones" || In order to speed up the rollout, the concept requires cloning a template, where a template is a snapshot of a master volume and the clone resolves into a share created from that snapshot (manila create --snapshot). || A || NetApp || NetApp cDOT driver configurable clone split: a NetApp extra spec defines a share type that selects whether an initial clone split is processed as part of the create-from-snapshot workflow. https://github.com/openstack/manila/commit/1143406ed4e9bae6ba646d721aa3d73eb5bcc68a https://blueprints.launchpad.net/manila/+spec/netapp-cdot-clone-split-control || Newton || OK
|-
|2|| Snapshot: make it possible for users to decouple a "copy-on-write clone" from its source snapshot || Shares based on snapshots are bound to the source pool. Over the lifetime of a project the need may arise to migrate shares to other backends. The first step towards this is to break the dependency on the source, i.e. split the "copy-on-write clone". || C || ? || Implementation of #1, 4, 5, and 6 may obviate this. || ||
|-
|3|| Manila-share active-active HA || The Manila service manila-share should be active-active HA, i.e. a backend should be managed by at least two services. || A || ? || https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/ha-aa-tooz-locks.html || ||
|-
|4|| Migrate: a share within the same share server / share network || As projects grow, capacity as well as performance may require moving a share within the same share server / share network to a different pool. Migration should support this, preferably as a function within the backend-specific driver to keep overhead to a minimum. || A || NetApp || || Newton ||
|-
|5|| Migrate: a share to a different share server || If the share server's capacity is at its limits, or the share network needs to be changed (e.g. moving from a QA to a production network), migration to a different share server/network is required. Optionally the scheduler may choose the target. If possible, drivers can handle this migration. || B || ? || || ||
|-
|6|| Migrate: a share to a different availability zone || Move shares between availability zones. Optionally the scheduler may choose the target. || C || ? || || ||
|-
|7|| Schedule: use of virtual resources to schedule share creation || Large enterprise applications do not only require capacity for shares but may also rely on other attributes, such as I/O or network bandwidth, that define the limits of a certain resource pool. Virtual resources could guide the scheduler to consume those resources based on share types and thus limit the creation of shares accordingly. || C || ? || || ||
|-
|8|| Create --snapshot --schedule: create a clone via the scheduler || Creating a clone from a snapshot allocates the share on the same pool/resource. Due to resource or other limitations this may require a later migration. Optimizing the creation process by using the scheduler logic to select the target resource should combine all these steps into one, leaving the hard work, if possible, to the driver. || C || ? || || ||
|-
|9|| Specify NFS protocol || Set the active NFS protocol (per backend / share server): NFSv3, NFSv4.0, NFSv4.1. || A || NetApp || The network protocol can be chosen in manila.conf, e.g. netapp_enabled_share_protocols = nfs3, or = nfs4.0, or = nfs3, nfs4.0, nfs4.1. Manila has to be restarted; the chosen protocol is then used by manila create. https://blueprints.launchpad.net/manila/+spec/netapp-cdot-configure-nfs-versions || Newton || OK
|-
|10|| MTU size must be definable || Currently there is no way to define the MTU size for the broadcast domain setup (NetApp driver). || A || SAP & NetApp || https://bugs.launchpad.net/neutron/+bug/1617284: fix provided to dynamically evaluate the MTU size set in Neutron when creating a share server. To use this feature, the NetApp port used for VLANs needs to be in a broadcast domain that allows this MTU size. Smaller MTU sizes on VLAN LIFs are possible. Fix merged: https://review.openstack.org/#/c/361139/ Change-Id: https://review.openstack.org/#q,Ib68cf7a64332c6a4b3df7b5d0a41922421b58dba,n,z || Newton || OK
|-
|11|| Manila hierarchical port binding support || Port binding with multi-segment support || A || SAP || https://blueprints.launchpad.net/manila/+spec/manila-hpb-support || Newton ||
|-
|12|| Allow more than one aggregate with the parameter "netapp_root_volume_aggregate" || Current issue: if a node fails, all SVMs are affected and takeover/giveback will take long. || B || NetApp on-site || Standard operation guides propose distributing the SVM root volumes across the cluster, which is currently not possible with OpenStack Manila. Proposal: if the option netapp_root_volume_aggregate is not set, place the root volume in the aggregate selected for the first volume ("follow the first aggregate"), assuming enough space is left on that aggregate for the small SVM root volume. || || proposal
|-
|13|| Implement LS mirrors for NFS versions lower than 4.1 || With NFSv4.1 the LS-mirror functionality is not needed, but with lower versions it is required, and the driver scripts have to take the changed configuration behaviour into account. || C || NetApp on-site || || ||
|-
|14|| NetApp: extending the cluster || When a NetApp cluster is extended with additional resources, already existing share servers should be able to consume these new resources. Metadata should be extended. || B || || || ||
|-
|15|| API extension to list the extra specs || Currently documentation is the only place to find the available extra specs. An API call listing all specs based on the available/configured drivers would be a huge help for configuration. || B || || || ||
|-
|16|| Replication between availability zones when using managed share servers || Classical DR requirement. || C || || || ||
|-
|17|| Scheduler not using all pools || The Manila scheduler does not seem to use all available pools to distribute shares: if a share server holds two pools (i.e. aggregates), the first one is filled before the second is used for placing new shares. || A || NetApp local || || Newton || in evaluation
|}
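Topics 9 and 12 above both revolve around backend options in manila.conf. As a rough illustration only, a NetApp cDOT backend section combining them might look like the sketch below; the section name, backend name, and aggregate name are placeholder assumptions, while netapp_enabled_share_protocols is the option referenced in topic 9 and netapp_root_volume_aggregate the one in topic 12:

<pre>
[netapp-cdot-01]
# Placeholder backend section; driver path as shipped with the Manila NetApp driver.
share_backend_name = netapp-cdot-01
share_driver = manila.share.drivers.netapp.common.NetAppDriver
driver_handles_share_servers = True

# Topic 9: restrict the NFS versions offered by newly created share servers.
# manila-share must be restarted for a change to take effect; manila create
# then uses the enabled protocol(s).
netapp_enabled_share_protocols = nfs3, nfs4.0, nfs4.1

# Topic 12: today this option names a single aggregate for all SVM root
# volumes; the proposal above is to fall back to the first data aggregate
# when it is left unset. Aggregate name is a placeholder.
netapp_root_volume_aggregate = aggr_node1_01
</pre>

Note that the clone-split control from topic 1 is exposed per share type as a driver extra spec (see the linked blueprint), not as a backend option, so full-copy and copy-on-write behaviour can coexist on one backend.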