CinderVictoriaMidCycleSummary
Revision as of 11:55, 13 August 2020
- 1 Introduction
- 2 Session One: R-16: 24 June 2020
- 3 Session Two: R-9: 12 August 2020
- 3.1 Cinder Project Updates
- 3.2 gate issues
- 3.3 Ceph iSCSI driver
- 3.4 review the Brocade FCZM situation
- 3.5 support volume re-image
- 3.6 Revisit specifying availability_zone or volume_type for backup restore
- 3.7 in-flight image encryption effort update
- 3.8 Continue with Sizing encrypted volumes
- 3.9 NFS online extend
- 3.10 os-brick filters
- 3.11 backup drivers
At the Victoria (Virtual) PTG we decided to hold two mid-cycle meetings, each two hours long. The first meeting would be before the Cinder Spec Freeze, and the second would be around the New Feature Status Checkpoint. So the mid-cycle meetings will be the week of R-16 and the week of R-9 (so as not to conflict with Kubecon, which is happening at R-8).
Session One: R-16: 24 June 2020
We met in BlueJeans from 1400 to 1600 UTC.
Cinder Project Updates
New os-brick releases for stein (2.8.6) and train (2.10.4) have happened to address Bug #1883654 (fix for OSSN-0086 not working on Python 2.7). There have been some gate problems holding up the cinder releases containing the new os-brick libraries; train (15.3.0) should happen today, and stein (14.2.0) soon.
You can keep track of what's been released by looking at Launchpad:
I'm looking for a volunteer for the position of "release czar" for Cinder. You don't have to be a core contributor, you just need to be a responsible and active member of the Cinder community. Ping me in email or on IRC if you are interested in finding out more.
We agreed to try the experiment of a monthly video meeting; it will be at the regular weekly meeting time on the last Wednesday of each month (and we'll make adjustments as necessary as conflicts arise). So the first one will be 29 July. We'll take a poll about what videoconferencing software to use.
- rosmaita - send out survey about monthly video meeting
Ivan asked for some feedback for his "Backup Backends Configuration" spec, https://review.opendev.org/#/c/712301/
His question was about the Data Model: whether he should re-purpose the existing volume_types tables, since what he needs for backup_types is very similar, or whether he should add new tables. After some discussion, the team agreed that new tables would be safer and more flexible in case volume_types and backup_types diverge as they undergo development.
- e0ne - will update https://review.opendev.org/#/c/712301/
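To make the trade-off concrete, here is a minimal sketch of the "new tables" option, assuming a shape that mirrors volume_types (all table and column names here are hypothetical, not taken from the spec):

```python
import sqlite3

# Hypothetical DDL (table and column names are assumed for illustration,
# not taken from the spec) for the "new tables" option the team chose:
# backup_types mirrors the shape of the volume_types tables but lives
# apart from them, so the two models can diverge later without tangled
# migrations.
ddl = """
CREATE TABLE backup_types (
    id   TEXT PRIMARY KEY,
    name TEXT
);
CREATE TABLE backup_type_extra_specs (
    id             INTEGER PRIMARY KEY,
    backup_type_id TEXT REFERENCES backup_types(id),
    key            TEXT,
    value          TEXT
);
"""

# Validate the schema against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
tables = sorted(row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)  # ['backup_type_extra_specs', 'backup_types']
```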
Remove Brocade FCZM Driver?
The Brocade Fibre Channel Zone Manager driver was declared 'unsupported' in Ussuri and subject to removal in Victoria by https://review.opendev.org/#/c/696857/
The vendor announced no support after Train, and no intention to support Python 3: https://docs.broadcom.com/doc/12397527 (warning: opens a PDF). So we're in a weird situation in Ussuri and Victoria, because the vendor explicitly disclaimed Python 3 support, and we *only* support Python 3. But we don't want to be down to only one FCZM driver, so we had agreed earlier to keep the driver in-tree but marked 'unsupported' as long as we think it will run under Python 3.
Now we have evidence that initialize-connection will fail under python 3.6 (code expects a list, gets an iterator). We don't know at this point how pervasive a problem that is in the code, and we also don't have third-party CI to validate changes. But it doesn't look great for Cinder to only have one FCZM driver. Plus, we don't know how many people will be impacted by removing it.
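The list-vs-iterator failure mode is a classic Python 2 to 3 pitfall; a minimal illustration (this is not the actual Brocade driver code, just a sketch of the pattern) of what initialize-connection likely runs into:

```python
# Hypothetical illustration of the failure mode described above: on
# Python 2, map() returns a list, so indexing the result works; on
# Python 3, map() returns a lazy iterator, and the same indexing
# raises TypeError.
wwpns = ["10:00:00:90:fa:53:4c:d1", "10:00:00:90:fa:53:4c:d2"]

formatted = map(lambda w: w.replace(":", "").lower(), wwpns)

try:
    first = formatted[0]           # works on Python 2, TypeError on Python 3
except TypeError:
    first = list(formatted)[0]     # Python 3 fix: materialize the iterator

print(first)  # 10000090fa534cd1
```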
After some discussion, we decided to do the following:
1. rosmaita will put up a patch removing the Brocade FCZM driver, but we'll mark it as WIP.
2. Gorka will try to find some time to look into it and see if he can fix it. If he can't we'll go ahead and remove it.
3. In the meantime, rosmaita will send a note to the ML explaining the situation and that there's a removal patch and a date; hopefully, impacted people will speak up and let us know.
4. We will review the situation at part 2 of the mid-cycle (which is in roughly 7 weeks).
- rosmaita - put up WIP removal patch - https://review.opendev.org/#/c/738148/
- rosmaita - email to ML - http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015692.html
- geguileo - assess the fixability of the driver
- anyone interested - contact geguileo to find out what you can do to help
Volume List Query Optimization
This effort is being promoted by haixin. He's got a spec and a patch up:
Roughly, the problem is that if a user tries to filter the volume-detail-list for status=error, some volumes in error (namely, volumes that are in error status because the error occurred while they were being "managed") don't show up in the list. The proposal is to make all the volumes in error show up.
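A simplified sketch of the filtering gap (volume records here are illustrative; 'error_managing' is the kind of status a failed manage operation leaves behind):

```python
# Illustrative volume records; statuses are examples, not an exhaustive
# list of Cinder's error-family statuses.
volumes = [
    {"id": "v1", "status": "error"},
    {"id": "v2", "status": "error_managing"},
    {"id": "v3", "status": "available"},
]

# Exact-match filter, roughly how the API behaves today: v2 is missing.
exact = [v["id"] for v in volumes if v["status"] == "error"]

# The proposal's intent: every error-family volume shows up.
broad = [v["id"] for v in volumes if v["status"].startswith("error")]

print(exact)  # ['v1']
print(broad)  # ['v1', 'v2']
```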
It looked like this might require an API change, so we had some discussion of a new microversion and maybe adding some kind of flag to show all errors. And there are several interesting points you can read on the etherpad, https://etherpad.opendev.org/p/cinder-victoria-mid-cycles
I had the action item of summarizing the discussion as a comment on the spec review, so you can go there to see a summary: https://review.opendev.org/#/c/726070/
- rosmaita - summarize the discussion on the spec (done!)
- haixin - needs to explain either on the spec, on an etherpad, on the ML, or in IRC how he plans to implement the change (it looks like there are 3 or so different bugs here)
- anyone interested - it would be good to make sure that when volumes go into these "managing" statuses, user messages are being created to give users more info about what has happened
Support Revert to Any Snapshot
This topic was proposed by xuanyd, who has a spec proposed: https://review.opendev.org/#/c/736111/
Currently, Cinder only supports revert-volume-to-most-recent-snapshot. Some (many?) storage vendors support reverting to any snapshot. Cinder should do this too.
Most of the discussion was about the Cinder project's policy that if a feature can be implemented in a generic way, then it should, and backends that support an optimized version can override the generic implementation to use native support. Since there's a generic (though inefficient) way to support revert-to-any-snapshot, exposing this feature must consist of supplying a generic implementation, and then backends that support it natively can advertise that. The key point is that the community is against adding this to the API with the generic implementation raising a 'not implemented' exception.
Then the discussion turned to how the generic implementation should go. A key issue is what happens to the snapshots that are more recent than the one you just reverted your volume to. That needs to be worked out in the spec, maybe by using RBD as the reference architecture.
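The generic-versus-optimized pattern the policy calls for can be sketched like this (class and method names are illustrative, not the real Cinder driver interface):

```python
# Hedged sketch of the "generic first" policy: the base class supplies a
# slow-but-universal implementation instead of raising NotImplementedError,
# and a backend with native support simply overrides it.
class BaseVolumeDriver:
    def revert_to_snapshot(self, volume, snapshot):
        # Generic path: copy the snapshot's data back onto the volume.
        # Works on any backend, but is inefficient.
        volume["data"] = snapshot["data"]
        return "generic"


class NativeSnapshotDriver(BaseVolumeDriver):
    def revert_to_snapshot(self, volume, snapshot):
        # Optimized path: the storage array rolls the volume back in
        # place, and the driver advertises this capability.
        volume["data"] = snapshot["data"]
        return "native"


vol = {"data": "new"}
snap = {"data": "old"}
print(BaseVolumeDriver().revert_to_snapshot(vol, snap))      # generic
print(NativeSnapshotDriver().revert_to_snapshot(vol, snap))  # native
```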
The question came up of how many drivers already support this natively. The Inspur MCS driver, the RBD driver, and IBM Storwize driver have been tested; looks like the Dell EMC SC Series driver should also be able to do this.
The topic has already sparked a lively discussion on the spec review; see https://review.opendev.org/#/c/736111/ for more details.
- all interested reviewers - leave comments on the spec
Victoria Milestone-1 Review
This is a short cycle and M-1 happened last week. The current situation is that we haven't hit *any* of our targets for M-1. So the top priorities for the next 2 weeks are the patches associated with the Victoria Milestone-1 Blueprints, in particular:
- https://review.opendev.org/#/c/663549/ os-brick
- https://review.opendev.org/#/c/700799/ cinder
- https://review.opendev.org/#/c/715762/ tempest test case
- for background:
- White paper (English): https://01.org/blogs/liangfang/2020/intel%C2%AE-optane%E2%84%A2-technology-equipped-storage-solution-accelerate-china-unicom
- White paper (Chinese): https://www.intel.cn/content/www/cn/zh/architecture-and-technology/wocloud-optimized-performance-with-intel-optane-ssd.html?wapkw=%E8%81%94%E9%80%9A%E4%BA%91
- NFS encrypted volume support
- brick gpg encryption support
- new backend driver - Hitachi
- everyone - review the above!
Session Two: R-9: 12 August 2020
We met in BlueJeans from 1400 to 1600 UTC.
recording: <not yet available>
Cinder Project Updates
The next PTG is scheduled for 26-30 October 2020, which is the week after the summit. There is no charge to attend, but the foundation would like you to register: http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016424.html
Where we are in the Victoria cycle: https://releases.openstack.org/victoria/schedule.html
- this is week R-9 (ussuri cycle-trailing release deadline; ussuri cinderlib has been released; see https://launchpad.net/cinderlib/+series)
- 1 week to Cinder New Feature Status Checkpoint and Driver Features Declaration
- 3 weeks to final non-client library releases at R-6 (os-brick)
- 4 weeks to final client library release for Ussuri at R-5
- 4 weeks to Milestone-3 and feature freeze (R-5)
- 4 weeks to 3rd Party CI Compliance Checkpoint (R-5)
- 4 weeks to Victoria community goal completion (R-5)
- 6 weeks to RC-1 target week (R-3)
- rosmaita - email about os-brick deadline
- rosmaita - email about Cinder New Feature Status Checkpoint
- rosmaita - email about Driver Features Declaration