Manila/SAP enterprise team
The SAP Manila enterprise team works on topics that make Manila enterprise-ready. The listed topics can be bugs, features, or long-running tasks.
|1||Snapshot: make it possible for users to specify "full copy clones" or "copy-on-write clones"||In order to speed up rollouts, the concept requires cloning a template, where a template is a snapshot of a master volume and the clone is created from that snapshot (manila create --snapshot)||A||NetApp||NetApp cDOT driver configurable clone split.
Define a NetApp extra spec that selects, via a share type, whether an initial clone split is processed as part of the create-from-snapshot workflow. https://github.com/openstack/manila/commit/1143406ed4e9bae6ba646d721aa3d73eb5bcc68a
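A minimal CLI sketch of this workflow, assuming the extra spec introduced by the linked commit is named netapp:split_clone_on_create (verify the exact name against your driver version; share type and share names are examples):

```shell
# Share type whose shares are split from the parent snapshot
# right after creation, i.e. "full copy clones".
manila type-create full-clone-type true
manila type-key full-clone-type set netapp:split_clone_on_create=true

# Clone the template: create a share from a snapshot of the
# master volume; the clone split then runs in the background.
manila create NFS 100 --snapshot-id <snapshot-id> \
    --share-type full-clone-type
```

Without the extra spec (or with it set to false), the share stays a copy-on-write clone bound to its source snapshot.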
|2||Snapshot: make it possible for users to decouple "copy-on-write clones" from their source snapshot||Shares created from snapshots are bound to the source pool. Over the lifetime of a project the need may arise to migrate the shares to other backends. The first step towards this is to break the dependency on the source, i.e. split the "copy-on-write clones"||C||?||Implementation of #'s 1, 4, 5, & 6 may obviate this.|
|3||Manila-share active-active HA||The manila-share service should support active-active HA, i.e. a backend should be managed by at least two service instances.||A||?||https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/ha-aa-tooz-locks.html|
|4||Migrate: a share within the same share server / share network||As projects grow, capacity as well as performance requirements may make it necessary to move a share within the same share server / share network to a different pool. Migration should support this, preferably as a function within the backend-specific driver, to keep overhead to a minimum||A||NetApp||Newton||Started|
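A sketch of the admin workflow for such a migration, using Manila's (at this time experimental) migration API; flag names vary by release, check `manila help migration-start`, and the share and destination names are examples:

```shell
# Move a share to another pool (aggregate) on the same backend;
# destination uses the host@backend#pool notation.
manila migration-start demo-share \
    manila-host@cdot-backend#aggr2 \
    --writable False --nondisruptive True

# Check progress, then finalize once data copy is done.
manila migration-get-progress demo-share
manila migration-complete demo-share
```

If the driver supports driver-assisted migration within the same share server, the data movement stays inside the backend, which is exactly the low-overhead path this item asks for.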
|5||Migrate: a share to a different share server||In case the share server's capacity is at its limit, or the share network needs to be changed (e.g. moving from a QA to a production network), migration to a different share server/network is required. Optionally the scheduler may choose the target. If possible, drivers can handle this migration||B||?|
|6||Migrate: a share to a different availability zone||Move shares between availability zones. Optionally the scheduler may choose the target.||C||?|
|7||Schedule: Use of virtual resources to schedule share creation||Large enterprise applications not only require capacity for shares but may also rely on other attributes, such as I/O or network bandwidth, that define the limits for a certain resource 'pool'. Virtual resources could guide the scheduler: share types would consume those virtual resources, and the creation of shares would be limited by the defined limits.||C||?|
|8||Create --snapshot --schedule, create a clone via scheduler||Creating a clone from a snapshot allocates the share from the same pool/resource. Due to resource or other limitations, this may require a later migration. Optimizing the creation process by using the scheduler logic to select the target resource would combine all these steps into one, leaving the hard work, if possible, to the driver.||C||?|
|9||Specify NFS protocol||Set the active NFS protocols (per backend / share server): NFSv3, NFSv4.0, NFSv4.1||A||Mitaka||NetApp||The network protocol can be chosen in manila.conf, e.g.
netapp_enabled_share_protocols = nfs3
netapp_enabled_share_protocols = nfs4.0
netapp_enabled_share_protocols = nfs3, nfs4.0, nfs4.1
manila has to be restarted.
The chosen protocols are then used by manila create.
Recommendation: If you want to create a share with NFSv3 only, create a second backend in manila.conf with nfs3, and create an additional share type with the key share_backend_name pointing to this backend. Shares created with this share type then use the newly defined NFS protocols. Note that additional share servers are created, even on the same share network.
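The recommendation above can be sketched as a manila.conf fragment; section and backend names are examples, and the driver-specific credentials/options of the existing backend must be repeated in the new section:

```
# manila.conf -- add a second NetApp backend that only enables NFSv3
[DEFAULT]
enabled_share_backends = cdot_default, cdot_nfs3

[cdot_nfs3]
share_backend_name = cdot_nfs3
share_driver = manila.share.drivers.netapp.common.NetAppDriver
netapp_enabled_share_protocols = nfs3
# ... remaining NetApp driver options as in the existing backend section
```

After restarting manila-share, a share type with the extra spec `share_backend_name=cdot_nfs3` (set via `manila type-key`) routes new shares to this NFSv3-only backend.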
|10||MTU size must be definable||Currently there is no way to define the MTU size for the broadcast domain setup (NetApp driver)||A||Mitaka||SAP & NetApp||https://bugs.launchpad.net/neutron/+bug/1617284 : Fix provided to dynamically evaluate the MTU size set in Neutron when creating a share server. In order to use this feature, the NetApp port used for VLANs needs to be in a broadcast domain that allows this MTU size. Smaller MTU sizes on VLAN LIFs are possible.
Fix merged: https://review.openstack.org/#/c/361139/
Change ID : https://review.openstack.org/#q,Ib68cf7a64332c6a4b3df7b5d0a41922421b58dba,n,z
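With that fix, the MTU is taken from the Neutron network backing the share network, so it is controlled on the Neutron side; a sketch, assuming a client version that supports setting the MTU (older clients may need `neutron net-update <net> --mtu 9000` instead, and the network name is an example):

```shell
# Set the MTU on the Neutron network used by the share network;
# new share servers created on it pick this value up.
openstack network set --mtu 9000 share-net
openstack network show -c mtu share-net
```

The broadcast domain of the NetApp VLAN port must allow at least this MTU, otherwise LIF creation fails.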
|11||Manila hierarchical port binding support||Port binding with multi-segment support||A||SAP||https://blueprints.launchpad.net/manila/+spec/manila-hpb-support||Newton|
|12||Allow more than one aggregate with parameter "netapp_root_volume_aggregate"||Current issue: in case of a node failure all SVMs are affected and takeover/giveback will take long.||B||NetApp on-site||Standard operations guides propose distributing the root volumes of SVMs across the cluster, which is currently not possible with OpenStack Manila. Proposal: if the option netapp_root_volume_aggregate is not set, place the SVM root volume on the aggregate selected for the first volume ("follow the first aggregate"). This assumes there is enough space left on the selected aggregate to also hold the SVM root volume.||proposal|
|13||Implement LS-mirrors for NFS versions lower than v4.1||If NFSv4.1 is used, LS-mirror functionality is not needed, but for lower versions it is required, and the driver scripts have to take care of the changing configuration behaviour||C||NetApp on-site||The business scenario needs to be revalidated; it seems no longer valid||revisit|
|14||Automatically adapt resources when adding controllers to a cluster||When extending an existing cluster that uses managed share servers, the already existing servers should automatically be extended, i.e. adding LIFs to the new controllers, adding the required metadata for the shares, etc., so that the new resources can be utilized.||B|
|15||API extension to list the extra specs||Currently the documentation is the only place to find the available extra specs. An API call listing all specs based on the available / configured drivers would be a huge help for configuration||B||See type concept: http://docs.openstack.org/developer/manila/devref/capabilities_and_extra_specs.html
See share type http://netapp.github.io/openstack-deploy-ops-guide/mitaka/content/section_manila-key-concepts.html#d6e4657
See extra specs: http://netapp.github.io/openstack-deploy-ops-guide/mitaka/content/section_manila-deployment-choices.html#manila.netapp.extra_specs
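Until such an API exists, an admin can at least inspect the capabilities each backend reports to the scheduler, which is what most extra specs are matched against:

```shell
# List scheduler pools with the capabilities each backend reports
# (admin-only API); driver-specific extra specs still have to be
# looked up in the documentation linked above.
manila pool-list --detail
```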
|16||Replication: between availability zones when using managed share servers||Classical DR requirement||C|
|17||Scheduler not using all pools||The Manila scheduler does not seem to use all available pools to distribute shares. Instead, if a share server holds two pools (i.e. aggregates), the first one is filled before the second is used for placement of a new share||A||NetApp local||In order to ensure that the scheduler recognizes the updated capacity, a minimum of 2 minutes is needed between "manila create" calls. This may have a significant impact on creating shares for SAP systems, where typically more than one share is required.||Newton||OK|