

Revision as of 13:00, 13 August 2020 by Brian-rosmaita (talk | contribs) (Ceph iSCSI driver)


At the Victoria (Virtual) PTG we decided to hold two mid-cycle meetings, each two hours long. The first meeting would be before the Cinder Spec Freeze, and the second would be around the New Feature Status Checkpoint. So the mid-cycle meetings will be the week of R-16 and the week of R-9 (so as not to conflict with Kubecon, which is happening at R-8).

Session One: R-16: 24 June 2020

We met in BlueJeans from 1400 to 1600 UTC.
etherpad: https://etherpad.openstack.org/p/cinder-victoria-mid-cycles
recording: https://www.youtube.com/watch?v=WS8ylhbjT2s

Cinder Project Updates

New os-brick releases for stein (2.8.6) and train (2.10.4) have happened to address Bug #1883654 (fix for OSSN-0086 not working on Python 2.7). There have been some gate problems holding up the cinder releases containing the new os-brick libraries; train (15.3.0) should happen today, and stein (14.2.0) soon.

You can keep track of what's been released by looking at Launchpad:

I'm looking for a volunteer for the position of "release czar" for Cinder. You don't have to be a core contributor, you just need to be a responsible and active member of the Cinder community. Ping me in email or on IRC if you are interested in finding out more.

We agreed to try the experiment of a monthly video meeting; it will be at the regular weekly meeting time on the last Wednesday of each month (and we'll make adjustments as necessary as conflicts arise). So the first one will be 29 July. We'll take a poll about what videoconferencing software to use.


  • rosmaita - send out survey about monthly video meeting


Ivan asked for some feedback for his "Backup Backends Configuration" spec, https://review.opendev.org/#/c/712301/

His question was about the Data Model: whether he should re-purpose the existing volume_types tables, since what he needs for backup_types is very similar, or whether he should add new tables. After some discussion the team agreed that new tables would be safer and more flexible in case volume_types and backup_types diverge as they undergo development.
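For concreteness, here is a minimal sketch of what separate backup_types tables could look like; the table and column names are illustrative assumptions for this example, not the schema from the spec:

```python
import sqlite3

# Hypothetical schema sketch: a backup_types table kept distinct from
# volume_types, so the two can evolve independently.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE backup_types (
    id TEXT PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,
    description TEXT,
    is_public BOOLEAN DEFAULT 1
);
CREATE TABLE backup_type_extra_specs (
    id INTEGER PRIMARY KEY,
    backup_type_id TEXT REFERENCES backup_types(id),
    key TEXT NOT NULL,
    value TEXT
);
""")
conn.execute("INSERT INTO backup_types (id, name) VALUES ('bt-1', 'daily')")
row = conn.execute("SELECT name FROM backup_types").fetchone()
print(row[0])  # daily
```

Mirroring the volume_types/extra_specs split keeps the two type systems structurally similar today while leaving room for them to diverge later, which was the team's concern.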


Remove Brocade FCZM Driver?

The Brocade Fibre Channel Zone Manager driver was declared 'unsupported' in Ussuri and subject to removal in Victoria by https://review.opendev.org/#/c/696857/

The vendor announced no support after Train, and no intention to support python 3: https://docs.broadcom.com/doc/12397527 (warning: opens a PDF). So we're in a weird situation in Ussuri and Victoria, because the vendor explicitly disclaimed python 3 support, and we *only* support python 3. But we don't want to be down to only 1 FCZM, so we had agreed earlier to keep the driver in-tree but marked 'unsupported' as long as we think it will run under Python 3.

Now we have evidence that initialize-connection will fail under python 3.6 (code expects a list, gets an iterator). We don't know at this point how pervasive a problem that is in the code, and we also don't have third-party CI to validate changes. But it doesn't look great for Cinder to only have one FCZM driver. Plus, we don't know how many people will be impacted by removing it.
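The list-versus-iterator problem is a classic Python 2 to 3 pitfall. A minimal illustration of the failure mode and the usual fix (this is illustrative code, not the actual Brocade driver):

```python
# On Python 2, map() returns a list; on Python 3 it returns a lazy
# iterator, so code that indexes the result breaks.
wwpns = ["10:00:00:05:1e:7c:64:96", "10:00:00:05:1e:7c:64:97"]

result = map(str.upper, wwpns)

try:
    first = result[0]  # works on py2, TypeError on py3
except TypeError:
    # The usual fix: materialize the iterator into a list first.
    result = list(map(str.upper, wwpns))
    first = result[0]

print(first)  # 10:00:00:05:1E:7C:64:96
```

The tricky part, as noted above, is that such breakage is silent until the code path actually runs, which is why we don't know how pervasive the problem is without CI coverage.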

After some discussion, we decided to do the following:

1. rosmaita will put up a patch removing the Brocade FCZM driver, but we'll mark it as WIP.

2. Gorka will try to find some time to look into it and see if he can fix it. If he can't we'll go ahead and remove it.

3. In the meantime, rosmaita will send a note to the ML explaining the situation and that there's a removal patch and a date; hopefully, impacted people will speak up and let us know.

4. We will review the situation at part 2 of the mid-cycle (which is in roughly 7 weeks).


Volume List Query Optimization

This effort is being promoted by haixin. He's got a spec and a patch up:

Roughly, the problem is that if a user tries to filter the volume-detail-list for status=error, some volumes in error (namely, volumes that are in error status because the error occurred while they were being "managed") don't show up in the list. The proposal is to make all the volumes in error show up.
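One possible shape of the fix is to expand the user-facing 'error' filter into the internal error variants before querying; this is a hedged sketch with illustrative function and status names, not Cinder's actual implementation:

```python
# Internal statuses that a user asking for "error" probably wants to see.
ERROR_STATUSES = {"error", "error_managing", "error_deleting"}

def expand_status_filter(status):
    """Expand a user-facing status filter into the set of internal
    statuses it should match."""
    if status == "error":
        return ERROR_STATUSES
    return {status}

volumes = [
    {"id": "v1", "status": "error"},
    {"id": "v2", "status": "error_managing"},  # failed while being managed
    {"id": "v3", "status": "available"},
]

wanted = expand_status_filter("error")
matched = [v["id"] for v in volumes if v["status"] in wanted]
print(matched)  # ['v1', 'v2']
```

Whether this expansion should be the default behavior or opt-in via a flag is exactly the microversion question discussed below.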

It looked like this might require an API change, so we had some discussion of a new microversion and maybe adding some kind of flag to show all errors. And there are several interesting points you can read on the etherpad, https://etherpad.opendev.org/p/cinder-victoria-mid-cycles

I had the action item of summarizing the discussion as a comment on the spec review, so you can go there to see a summary: https://review.opendev.org/#/c/726070/


  • rosmaita - summarize the discussion on the spec (done!)
  • haixin - needs to explain either on the spec, on an etherpad, on the ML, or in IRC how he plans to implement the change (it looks like there are 3 or so different bugs here)
  • anyone interested - it would be good to make sure that when volumes go into these "managing" statuses, user messages are being created to give users more info about what has happened

Support Revert to Any Snapshot

This topic was proposed by xuanyd, who has a spec proposed: https://review.opendev.org/#/c/736111/

Currently, Cinder only supports revert-volume-to-most-recent-snapshot. Some (many?) storage vendors support reverting to any snapshot. Cinder should do this too.

Most of the discussion was about the Cinder project's policy that if a feature can be implemented in a generic way, then it should, and backends that support an optimized version can override the generic implementation to use native support. Since there's a generic (though inefficient) way to support revert-to-any-snapshot, exposing this feature must consist of supplying a generic implementation, and then backends that support it natively can advertise that. The key point is that the community is against adding this to the API with the generic implementation raising a 'not implemented' exception.
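The override pattern that policy describes can be sketched like this; the class and method names are illustrative, not Cinder's real driver interface:

```python
class BaseDriver:
    """Every backend inherits a working (if slow) implementation."""

    def revert_to_snapshot(self, volume, snapshot):
        # Generic fallback: e.g. a full data copy from the snapshot
        # back onto the volume. Inefficient but always available.
        return f"generic revert of {volume} to {snapshot}"


class VendorDriver(BaseDriver):
    """A backend with native support overrides the generic path."""

    def revert_to_snapshot(self, volume, snapshot):
        # Optimized path using the storage array's native revert.
        return f"native revert of {volume} to {snapshot}"


print(BaseDriver().revert_to_snapshot("vol-1", "snap-3"))
print(VendorDriver().revert_to_snapshot("vol-1", "snap-3"))
```

The point of the policy is that the base class never raises 'not implemented': users of any backend get the feature, and backends merely choose whether to do it the fast way.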

Then the discussion turned to how the generic implementation should go. A key issue is what happens to the snapshots that are more recent than the one you just reverted your volume to. That needs to be worked out in the spec, maybe by using RBD as the reference architecture.

The question came up of how many drivers already support this natively. The Inspur MCS driver, the RBD driver, and the IBM Storwize driver have been tested; it looks like the Dell EMC SC Series driver should also be able to do this.

The topic has already sparked a lively discussion on the spec review, so see https://review.opendev.org/#/c/736111/ for more details.


  • all interested reviewers - leave comments on the spec

Victoria Milestone-1 Review

This is a short cycle and M-1 happened last week. The current situation is that we haven't hit *any* of our targets for M-1. So the top priorities for the next 2 weeks are the patches associated with the Victoria Milestone-1 Blueprints, in particular:


  • everyone - review the above!

Session Two: R-9: 12 August 2020

We met in BlueJeans from 1400 to 1600 UTC.
etherpad: https://etherpad.openstack.org/p/cinder-victoria-mid-cycles
recording: <not yet available>

Cinder Project Updates

The next PTG is scheduled for 26-30 October 2020, which is the week after the summit. There is no charge to attend, but the foundation would like you to register: http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016424.html

Where we are in the Victoria cycle: https://releases.openstack.org/victoria/schedule.html

  • this is week R-9 (ussuri cycle-trailing release deadline; ussuri cinderlib has been released; see https://launchpad.net/cinderlib/+series)
  • 1 week to Cinder New Feature Status Checkpoint and Driver Features Declaration
  • 3 weeks to final non-client library releases at R-6 (os-brick)
  • 4 weeks to final client library release for Ussuri at R-5
  • 4 weeks to Milestone-3 and feature freeze (R-5)
  • 4 weeks to 3rd Party CI Compliance Checkpoint (R-5)
  • 4 weeks to Victoria community goal completion (R-5)
  • 6 weeks to RC-1 target week (R-3)


  • rosmaita - email about os-brick deadline
  • rosmaita - email about Cinder New Feature Status Checkpoint
  • rosmaita - email about Driver Features Declaration

gate issues

We're currently seeing problems with cinder-tempest-plugin-lvm-lio-barbican and cinder-grenade-mn-sub-volbak jobs.

The cinder-tempest-plugin-lvm-lio-barbican failure is connected to a new bindep that was added to os-brick but isn't a direct dependency for either cinder or cinderlib. We've tried various approaches to fixing this with mixed results. Luigi finally figured out the devstack-approved way to do this: https://review.opendev.org/#/c/745838/ This looks like it has fixed the zuul job, so once the QA team approves it, we should be back in business.

Melanie Witt figured out that the cinder-grenade-mn-sub-volbak failures are caused by an update to msgpack that breaks Keystone token handling: http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016446.html Once her patch is merged, cinder-grenade-mn-sub-volbak will hopefully be clear for us.


Ceph iSCSI driver

The tl;dr is that we'll work on getting this reviewed and merged early in Wallaby.

There are a bunch of moving parts: the driver, the CI jobs, os-brick changes, and the Ceph project. Walt reported that things are mostly coming together.

The driver depends on rbd-iscsi-client. Some current Cinder drivers also have clients whose code lives in the cinder tree along with the driver code. Walt's preference is to keep this one separate: the rbd-target-api may change, and a separate repository lets us update the rbd-iscsi-client internals at will to keep up with rbd-target-api changes while the ceph-iscsi driver continues to work. The consensus was that this makes sense and we should keep rbd-iscsi-client separate. We'll pull it in as a cinder project using the 'independent' release model, which makes sense because its changes are tied to the Ceph project, not OpenStack.


  • cinder team - review the above
  • rosmaita - get the paperwork started to make rbd-iscsi-client a cinder project deliverable

review the Brocade FCZM situation


support volume re-image


Revisit specifying availability_zone or volume_type for backup restore


in-flight image encryption effort update


Continue with Sizing encrypted volumes


NFS online extend


os-brick filters


backup drivers