CinderVictoriaMidCycleSummary

Introduction

At the Victoria (Virtual) PTG we decided to hold two mid-cycle meetings, each two hours long. The first meeting would be before the Cinder Spec Freeze, and the second would be around the New Feature Status Checkpoint. So the mid-cycle meetings will be the week of R-16 and the week of R-9 (so as not to conflict with Kubecon, which is happening at R-8).

Session One: R-16: 24 June 2020

We met in BlueJeans from 1400 to 1600 UTC.
etherpad: https://etherpad.openstack.org/p/cinder-victoria-mid-cycles
recording: https://www.youtube.com/watch?v=WS8ylhbjT2s

Cinder Project Updates

New os-brick releases for stein (2.8.6) and train (2.10.4) have happened to address Bug #1883654 (fix for OSSN-0086 not working on Python 2.7). There have been some gate problems holding up the cinder releases containing the new os-brick libraries; train (15.3.0) should happen today, and stein (14.2.0) soon.

You can keep track of what's been released by looking at Launchpad.

I'm looking for a volunteer for the position of "release czar" for Cinder. You don't have to be a core contributor, you just need to be a responsible and active member of the Cinder community. Ping me in email or on IRC if you are interested in finding out more.

We agreed to try the experiment of a monthly video meeting; it will be at the regular weekly meeting time on the last Wednesday of each month (and we'll make adjustments as necessary as conflicts arise). So the first one will be 29 July. We'll take a poll about what videoconferencing software to use.

actions

  • rosmaita - send out survey about monthly video meeting

Backups

Ivan asked for some feedback for his "Backup Backends Configuration" spec, https://review.opendev.org/#/c/712301/

His question was about the data model: should he re-purpose the existing volume_types tables, since what he needs for backup_types is very similar, or add new tables? After some discussion, the team agreed that new tables would be safer and more flexible in case volume_types and backup_types diverge as they undergo development.
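
For a sense of what the "new tables" option implies, here's a minimal sketch of what parallel backup_types tables might look like, modeled on the existing VolumeType models; the class names and columns below are assumptions, not the spec's final schema:

  # Illustrative sketch only -- modeled on cinder's VolumeType models;
  # names and columns here are assumptions, not the spec's final schema.
  from sqlalchemy import Column, ForeignKey, String
  from sqlalchemy.orm import relationship

  from cinder.db.sqlalchemy.models import BASE, CinderBase


  class BackupType(BASE, CinderBase):
      """Parallel to VolumeType, but free to diverge from it later."""
      __tablename__ = 'backup_types'
      id = Column(String(36), primary_key=True)
      name = Column(String(255), nullable=False)


  class BackupTypeExtraSpecs(BASE, CinderBase):
      """Key/value pairs attached to a backup type."""
      __tablename__ = 'backup_type_extra_specs'
      id = Column(String(36), primary_key=True)
      key = Column(String(255))
      value = Column(String(255))
      backup_type_id = Column(String(36), ForeignKey('backup_types.id'),
                              nullable=False)
      backup_type = relationship(BackupType, backref='extra_specs')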

actions

Remove Brocade FCZM Driver?

The Brocade Fibre Channel Zone Manager driver was declared 'unsupported' in Ussuri and subject to removal in Victoria by https://review.opendev.org/#/c/696857/

The vendor announced no support after Train and no intention to support Python 3: https://docs.broadcom.com/doc/12397527 (warning: opens a PDF). So we're in a weird situation in Ussuri and Victoria: the vendor has explicitly disclaimed Python 3 support, and we *only* support Python 3. But we don't want to be down to only one FCZM driver, so we had agreed earlier to keep the driver in-tree, marked 'unsupported', for as long as we think it will run under Python 3.

Now we have evidence that initialize_connection will fail under Python 3.6 (the code expects a list but gets an iterator). We don't know at this point how pervasive a problem that is in the code, and we also don't have third-party CI to validate changes. But it doesn't look great for Cinder to have only one FCZM driver, and we don't know how many people would be impacted by removing this one.
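
For illustration (this is not the actual Brocade code), the failure class looks like this: several builtins that returned lists on Python 2 return one-shot iterators on Python 3, so code that indexes or re-reads the result breaks:

  # Illustrative only -- not the actual driver code.
  # Python 2: map() returns a list.  Python 3: a one-shot iterator.
  wwpns = map(str.lower, ['50:06:01:60:B0:20:13:5D'])

  # wwpns[0]  ->  works on Python 2;
  #               TypeError on Python 3 ('map' object is not subscriptable)

  # The usual fix is to materialize the result explicitly:
  wwpns = list(map(str.lower, ['50:06:01:60:B0:20:13:5D']))
  first = wwpns[0]  # fine on both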

After some discussion, we decided to do the following:

1. rosmaita will put up a patch removing the Brocade FCZM driver, but we'll mark it as WIP.

2. Gorka will try to find some time to look into it and see if he can fix it. If he can't we'll go ahead and remove it.

3. In the meantime, rosmaita will send a note to the ML explaining the situation and that there's a removal patch and a date; hopefully, impacted people will speak up and let us know.

4. We will review the situation at part 2 of the mid-cycle (which is in roughly 7 weeks).

actions

Volume List Query Optimization

This effort is being promoted by haixin, who has a spec and a patch up.

Roughly, the problem is that if a user filters the volume-detail-list for status=error, some volumes in error (namely, volumes whose error occurred while they were being "managed") don't show up in the list. The proposal is to make all the volumes in error show up.
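
A rough sketch of the idea (this is not haixin's actual patch, and the exact set of error-family statuses is an assumption here):

  # Illustrative sketch only.  A volume that failed while being managed
  # ends up in a status like 'error_managing', so a strict equality
  # filter hides it from a status=error query.
  ERROR_FAMILY = {'error', 'error_managing'}

  def status_matches(volume_status, requested_status):
      if requested_status == 'error':
          # Proposed behavior (roughly): the whole error family matches.
          return volume_status in ERROR_FAMILY
      # Current behavior: strict equality.
      return volume_status == requested_status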

It looked like this might require an API change, so we had some discussion of a new microversion and maybe adding some kind of flag to show all errors. And there are several interesting points you can read on the etherpad, https://etherpad.opendev.org/p/cinder-victoria-mid-cycles

I had the action item of summarizing the discussion as a comment on the spec review, so you can go there to see a summary: https://review.opendev.org/#/c/726070/

actions

  • rosmaita - summarize the discussion on the spec (done!)
  • haixin - needs to explain either on the spec, on an etherpad, on the ML, or in IRC how he plans to implement the change (it looks like there are 3 or so different bugs here)
  • anyone interested - it would be good to make sure that when volumes go into these "managing" statuses, user messages are being created to give users more info about what has happened

Support Revert to Any Snapshot

This topic was proposed by xuanyd, who has a spec proposed: https://review.opendev.org/#/c/736111/

Currently, Cinder only supports revert-volume-to-most-recent-snapshot. Some (many?) storage vendors support reverting to any snapshot. Cinder should do this too.

Most of the discussion was about the Cinder project's policy that if a feature can be implemented in a generic way, then it should, and backends that support an optimized version can override the generic implementation to use native support. Since there's a generic (though inefficient) way to support revert-to-any-snapshot, exposing this feature must consist of supplying a generic implementation, and then backends that support it natively can advertise that. The key point is that the community is against adding this to the API with the generic implementation raising a 'not implemented' exception.

Then the discussion turned to how the generic implementation should go. A key issue is what happens to the snapshots that are more recent than the one you just reverted your volume to. That needs to be worked out in the spec, maybe by using RBD as the reference architecture.
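
For reference, here's a minimal sketch of the "generic with optimized override" pattern the policy calls for, modeled loosely on how the volume manager already handles revert-to-latest-snapshot; the method names are illustrative, not the spec's final driver interface:

  # Illustrative sketch of the generic-with-override pattern; method
  # names are assumptions, not the spec's final driver interface.
  def revert_to_any_snapshot(self, context, volume, snapshot):
      try:
          # Backends with native support override this driver method.
          self.driver.revert_to_snapshot(context, volume, snapshot)
      except (NotImplementedError, AttributeError):
          # Generic (inefficient) fallback: copy the snapshot's data
          # back onto the volume.  What happens to snapshots newer than
          # `snapshot` is exactly what the spec still needs to settle.
          self._revert_to_snapshot_generic(context, volume, snapshot)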

The question came up of how many drivers already support this natively. The Inspur MCS driver, the RBD driver, and IBM Storwize driver have been tested; looks like the Dell EMC SC Series driver should also be able to do this.

The discussion already sparked a lively discussion on the spec review, so see https://review.opendev.org/#/c/736111/ for more details.

actions

  • all interested reviewers - leave comments on the spec

Victoria Milestone-1 Review

This is a short cycle and M-1 happened last week. The current situation is that we haven't hit *any* of our targets for M-1, so the top priority for the next two weeks is reviewing the patches associated with the Victoria Milestone-1 blueprints.

actions

  • everyone - review the above!

Session Two: R-9: 12 August 2020

We met in BlueJeans from 1400 to 1600 UTC.
etherpad: https://etherpad.openstack.org/p/cinder-victoria-mid-cycles
recording: <not yet available>

Cinder Project Updates

The next PTG is scheduled for 26-30 October 2020, which is the week after the summit. There is no charge to attend, but the foundation would like you to register: http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016424.html

Where we are in the Victoria cycle: https://releases.openstack.org/victoria/schedule.html

  • this is week R-9 (ussuri cycle-trailing release deadline; ussuri cinderlib has been released; see https://launchpad.net/cinderlib/+series)
  • 1 week to Cinder New Feature Status Checkpoint and Driver Features Declaration
  • 3 weeks to final non-client library releases at R-6 (os-brick)
  • 4 weeks to final client library release for Victoria at R-5
  • 4 weeks to Milestone-3 and feature freeze (R-5)
  • 4 weeks to 3rd Party CI Compliance Checkpoint (R-5)
  • 4 weeks to Victoria community goal completion (R-5)
  • 6 weeks to RC-1 target week (R-3)

actions

  • rosmaita - email about os-brick deadline
  • rosmaita - email about Cinder New Feature Status Checkpoint
  • rosmaita - email about Driver Features Declaration

gate issues

We're currently seeing problems with cinder-tempest-plugin-lvm-lio-barbican and cinder-grenade-mn-sub-volbak jobs.

The cinder-tempest-plugin-lvm-lio-barbican failure is connected to a new binary dependency (bindep) that was added to os-brick but isn't a direct dependency of either cinder or cinderlib. We've tried various approaches to fixing this, with mixed results. Luigi finally figured out the devstack-approved way to do it: https://review.opendev.org/#/c/745838/ This looks like it has fixed the zuul job, so once the QA team approves it, we should be back in business.
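
For reference, bindep entries are one package per line with optional platform selectors; the package below is just an example, not the dependency that caused this failure:

  # bindep.txt (example entry only)
  cryptsetup [platform:dpkg]
  cryptsetup [platform:rpm]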

Melanie Witt figured out that the cinder-grenade-mn-sub-volbak failures are caused by an update to msgpack that breaks Keystone token handling: http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016446.html Once her patch is merged, cinder-grenade-mn-sub-volbak will hopefully be clear for us.
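
The usual short-term remedy for a breaking library update is a pin in the requirements repo's upper-constraints.txt; the version below is illustrative only, see Melanie's patch for the actual fix:

  # upper-constraints.txt (illustrative pin, not the actual fix)
  msgpack===0.6.2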

actions

Ceph iSCSI driver

The tl;dr is that we'll work on getting this reviewed and merged early in Wallaby.

There are a bunch of moving parts: the driver, the CI jobs, os-brick changes, and the ceph project itself. Walt reported that things are mostly coming together.

The driver depends on rbd-iscsi-client. Some current Cinder drivers also have clients whose code lives in the cinder tree alongside the driver code, but Walt's preference is to keep this one separate: the rbd-target-api may change, and a separate client lets us update the rbd-iscsi-client internals at will to keep up with rbd-target-api while the ceph-iscsi driver continues to work. The consensus was that this makes sense and we should keep rbd-iscsi-client separate. We'll pull it in as a cinder project deliverable using the 'independent' release model, which makes sense because its changes are tied to the Ceph project, not to OpenStack.
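
In practice the 'independent' release model means a deliverable file in the openstack/releases repo along these lines (a sketch; the field values are assumptions until the paperwork actually lands):

  # openstack/releases: deliverables/_independent/rbd-iscsi-client.yaml
  # (sketch only; values are assumptions until the paperwork lands)
  launchpad: rbd-iscsi-client
  team: cinder
  type: other
  release-model: independent
  repository-settings:
    openstack/rbd-iscsi-client: {}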

actions

  • cinder team - review the above
  • rosmaita - get the paperwork started to make rbd-iscsi-client a cinder project deliverable

review the Brocade FCZM driver situation

The Brocade FCZM driver was not working under Python 3 (Brocade decided not to support anything beyond Python 2.7). Gorka took the initiative to get his hands on a Brocade FC switch so he could test the FCZM driver, and he's got patches up to make it run under Python 3: https://review.opendev.org/#/q/project:openstack/cinder+branch:master+topic:brocade

The driver was marked 'unsupported' in Ussuri and subject to removal in Victoria. We want to backport Gorka's patches to Ussuri (which is Python-3 only) and Train (because that's the release where a lot of people were making the transition to running in Python 3 even though 2.7 was still supported there).

We discussed what to do about the driver in Victoria. We adjusted the 'unsupported' driver removal policy about a year ago so that we don't remove unsupported drivers at the earliest opportunity; the point is to give driver maintainers more time to get their third-party CI working (which has been the biggest problem). The Brocade situation is a bit different, though, because the vendor has announced no interest in supporting the FCZM driver past Train.

Gorka proposed that he could run CI tests with Victoria RC-1 to verify the driver. We can make the situation clear in documentation and release notes. Historically, it's been a stable driver (except for the Python 3 business), so this is at least reasonable. We can revisit the situation at the Wallaby mid-cycle.

As a side note, the team announced the impending removal of the driver at the end of June: http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015692.html We were hoping to get some feedback from users about the desirability of maintaining this thing, but didn't hear back from anyone.

actions

  • rosmaita - put up a patch for docs and a release note as outlined above
  • geguileo - run CI for the Brocade FCZM with cinder RC-1
  • rosmaita - reply to ML posting announcing our decision

support volume re-image

We had a quick discussion of rambo-li's proposal to implement the volume re-image feature, which had been approved in Stein, re-targeted to Train, but never implemented.

  • the cinder spec needs to be re-proposed for Wallaby
    • include the proposal to use the Nova external events API to notify Nova of the result of the operation (see the sketch after this list)
    • will need to include cinder-tempest-plugin tests to make sure everything works as expected
    • revise the volume statuses for which a re-image will be attempted
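
A rough sketch of what that notification might look like, modeled on how cinder/compute/nova.py already sends 'volume-extended' events; the 'volume-reimaged' event name is hypothetical, and defining it on the Nova side would be part of the re-proposed spec:

  # Illustrative sketch, modeled on cinder's existing 'volume-extended'
  # notification.  The 'volume-reimaged' event name is hypothetical and
  # would have to be defined on the Nova side as part of the spec.
  def notify_reimage_result(nova_client, server_ids, volume_id, ok):
      events = [{'name': 'volume-reimaged',   # hypothetical event name
                 'server_uuid': server_id,
                 'tag': volume_id,
                 'status': 'completed' if ok else 'failed'}
                for server_id in server_ids]
      nova_client.server_external_events.create(events)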

Eric mentioned that the code will probably need explicit testing for NFS; we can't assume it will work the same way as it does for other drivers.

actions

  • rosmaita - get ^^ onto the current patch
  • rambo-li - revise the spec

Revisit specifying availability_zone or volume_type for backup restore

Alan Bishop brought up an unimplemented Newton spec that would allow specifying an availability zone and/or volume type for the backup restore API, mainly to let people know he's interested in working on it and to make sure the community still thinks it's a good idea. He pointed out some recent Launchpad bugs that indicate there's user interest in this topic. The consensus was that this is still a feature worth implementing and that Alan is just the person to do it!
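
From the user's point of view, this would look something like the following (hypothetical CLI; the flag names are illustrative, since the feature is unimplemented):

  # Hypothetical CLI -- flag names are illustrative, nothing is
  # implemented yet
  cinder backup-restore <backup-id> --volume-type <type> \
      --availability-zone <az>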

actions

  • rosmaita - remove current assignee from spec and re-target it to Wallaby

in-flight image encryption effort update

Luzi brought us up to date on what's been going on with the in-flight encryption effort. The Barbican Secret Consumer API, which is a key part of the scheme, is nearly complete. With the Victoria os-brick release only 3 weeks away, we agreed that this is looking like a Wallaby feature at this point.
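
For context, the Secret Consumer API (per the in-progress Barbican spec) registers who is using a secret, roughly like this; the exact paths and field values may differ in the final implementation:

  # Rough sketch per the in-progress Barbican spec; details may change.
  POST /v1/secrets/{secret_id}/consumers
  {
      "service": "block-storage",
      "resource_type": "volume",
      "resource_id": "<volume uuid>"
  }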

The WIP os-brick patch is https://review.opendev.org/#/c/709432

actions

  • rosmaita - re-target spec for Wallaby

Sizing encrypted volumes (continued)

The issue (roughly): an encrypted volume must have a header, which takes up some space; when you re-type a "full" volume from an unencrypted volume type to an encrypted volume type, there's not enough room for the header and so the retype fails.
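
A worked example of the arithmetic, assuming a LUKS1 header of roughly 2 MiB (the exact overhead depends on the encryptor configuration):

  # Worked example; the ~2 MiB LUKS1 header size is an assumption and
  # the exact overhead depends on the encryptor configuration.
  GiB = 1024 ** 3
  MiB = 1024 ** 2

  volume_size = 1 * GiB      # size of the unencrypted source volume
  luks_header = 2 * MiB      # space claimed by encryption metadata
  usable = volume_size - luks_header

  # A full 1 GiB of user data can't fit into `usable` bytes, so the
  # retype fails unless the volume grows to at least:
  needed = volume_size + luks_header
  # ...which, since cinder sizes volumes in whole GiB, means rounding
  # the volume up to 2 GiB.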

Sofia reported on her efforts to implement a scheme that was worked out at the cinder weekly meetings, namely, to allow a new size to be specified on retype (or an "allow-expansion" flag or something). The problem is that drivers optimize migration in different ways, and since we previously didn't have a new-size parameter, there's no easy way to get this info into the drivers. The consensus was that Sofia will write up a spec to change the driver API to enable resize on migration, and suggest how (or whether) to handle the displayed size of a volume vs. its actual size. She'll also put together an etherpad outlining what this would look like.

actions

  • enriquetaso - spec and etherpad as described above

NFS online extend

actions

os-brick filters

actions

backup drivers

actions