CinderCaracalMidCycleSummary
Revision as of 22:09, 14 February 2024
Introduction
Welcome to the Cinder 2024.1 (Caracal) midcycle summary page!
We conduct two midcycles during the OpenStack development cycle (six months); they act as checkpoints for the following:
- Revisiting/following up on the topics discussed at the PTG
- Discussing topics that were missed at the PTG due to the author's unavailability, lack of time, or any other reason
- Reviewing the status of work items based on the milestone
There could be more reasons, but the above are the major ones.
For 2024.1 (Caracal), the midcycles will happen at:
- R-17: 6th December, 2023 (Wednesday) 1400-1600 UTC
- R-7: 14th February, 2024 (Wednesday) 1400-1600 UTC
Etherpad: https://etherpad.opendev.org/p/cinder-caracal-midcycles
Session Two: R-7: 14 February 2024
recordings
- Recording for Midcycle 2 (YouTube):
Session One: R-17: 06 December 2023
recordings
- Recording for Midcycle 1 (YouTube): https://youtu.be/QSKWA1St97A
We held the first midcycle of the 2024.1 (Caracal) development cycle on 6th December (week R-17) between 1400-1600 UTC.
- Retiring cinderlib
- Etherpad: https://etherpad.opendev.org/p/cinderlib-retirement
- The major consumers of cinderlib are Ember-CSI and oVirt
- They are happy with the old releases of cinderlib and don't require new development work on it
- We will not transition cinderlib to 2024.1 (Caracal) development
- We will not accept any new patches in cinderlib
- We will support the last 3 stable releases, i.e. 2023.2, 2023.1 and Zed (approximately 18 months)
- The EM branches will transition to Unmaintained (Victoria -> Yoga)
- #action: rosmaita to follow up on work associated with the deprecation
- Rework of JovianDSS Driver
- Patch: https://review.opendev.org/c/openstack/cinder/+/889284
- Refactoring of an old driver
- #action: Review the patch
- NFS online extend
- Blueprint: https://blueprints.launchpad.net/cinder/+spec/extend-volume-completion-action
- Patches in order of dependency
- https://review.opendev.org/c/openstack/cinder/+/873557
- Add the os-extend-volume-completion volume action
- https://review.opendev.org/c/openstack/python-cinderclient/+/873558
- Support the new volume action in python-cinderclient
- https://review.opendev.org/c/openstack/nova/+/873560
- Make Nova use the new volume action when handling "volume-extended" events
- https://review.opendev.org/c/openstack/cinder/+/891602
- Make the new feature available for volume drivers
- https://review.opendev.org/c/openstack/cinder/+/873686
- Add support in the NFS driver
- https://review.opendev.org/c/openstack/cinder/+/873889
- Add support in the Netapp NFS driver
- https://review.opendev.org/c/openstack/devstack-plugin-nfs/+/896196
- Enable attached volume extend tests in devstack-plugin-nfs-tempest-full
- https://review.opendev.org/c/openstack/cinder/+/873557
- Slides explaining the current workflow and the changes the feature will introduce
- #action: review the patches (at least ones that are blocking nova side reviews)
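As a rough illustration of the handshake the patches above implement, here is a minimal conceptual sketch in plain Python (not Cinder source; the class, method, and state names are illustrative assumptions): Cinder grows the backend volume, Nova receives the "volume-extended" external event and rescans the device, and Nova then calls the new os-extend-volume-completion action so Cinder can finalize or roll back.

```python
# Conceptual sketch of the extend-volume-completion handshake described
# above. All names here are illustrative assumptions, not Cinder code.

class VolumeExtendFlow:
    """Models the flow: Cinder extends the backend volume, Nova handles
    the 'volume-extended' event, then completes or fails the extend."""

    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.status = "in-use"
        self._pending_size = None

    def start_extend(self, new_size_gb):
        # Cinder side: the backend grows the volume but, for online NFS
        # extends, waits for the compute host before declaring success.
        self.status = "extending"
        self._pending_size = new_size_gb
        return {"event": "volume-extended"}  # notification sent to Nova

    def nova_handles_event(self):
        # Nova side: rescan the attached device, then report back via
        # the os-extend-volume-completion action.
        return self.complete_extend(error=False)

    def complete_extend(self, error):
        # Cinder side: the os-extend-volume-completion action. On
        # success the new size becomes effective; on error the volume
        # is left in an error state instead of silently wrong.
        if error:
            self.status = "error_extending"
        else:
            self.size_gb = self._pending_size
            self.status = "in-use"
        return self.status

flow = VolumeExtendFlow(size_gb=10)
flow.start_extend(20)
flow.nova_handles_event()
```

The point of the new action is the explicit completion step: without it, Cinder has no way to learn whether the compute host actually picked up the new size of an attached NFS volume.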
- acceptable usage of `__init__()` in os-brick
- Patch: https://review.opendev.org/c/openstack/os-brick/+/887576
- Eric's concern is that this code in __init__ shouldn't affect other brick connectors
- by failing initialization of other connectors
- by holding the initialization for too long (a heavy weight call to backend)
- The above concerns shouldn't be an issue in this case
- #action: document standards for working on os-brick connectors
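To make the standard under discussion concrete, here is a hedged sketch (the class and method names are invented for illustration, not real os-brick API) of keeping a connector's `__init__` cheap and deferring any heavy or fallible backend call until the connector is first used:

```python
import functools

class ExampleConnector:
    """Illustrative connector (not an os-brick class) showing the
    pattern discussed above: __init__ only records configuration, so a
    broken or slow backend cannot fail or delay the initialization of
    other connectors loaded in the same process."""

    def __init__(self, backend_url):
        # Cheap and infallible: store configuration, no I/O here.
        self.backend_url = backend_url

    @functools.cached_property
    def _client(self):
        # Heavy/fallible work happens lazily, on first use, and the
        # result is cached for subsequent calls.
        return self._connect_backend()

    def _connect_backend(self):
        # Stand-in for a slow backend handshake.
        return {"url": self.backend_url, "connected": True}

    def connect_volume(self, connection_properties):
        # Only here does the backend connection actually get created.
        return self._client["connected"]
```

With this shape, instantiating every available connector (as os-brick consumers may do) stays fast, and a backend failure surfaces only for the connector that is actually exercised.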
- CI Monitoring
- Etherpad: https://etherpad.opendev.org/p/cinder-caracal-ci-tracking
- Lately we have been facing too many CI issues
- There are also issues reported by the Nova and Glance teams for some Cinder-related failures
- The actions to fix the CI have mostly been reactive, so it would be better to monitor the CI proactively
- Jon Bernard volunteered to actively monitor our CI and integrate it in weekly bug report status
- The cinder-tempest-plugin-cbak-s3 job has also been failing consistently
- A swap space increase patch was proposed to make the gate stable
- #action rosmaita - get info from infra team about the nodepool nodes and what kind of control we have over configuration from the zuul side
- #action whoami-rajat: check the blocker for concurrency effort - talk to Luigi as he was working on it some time ago
- Supporting AND operation on time comparison filters
- original spec: https://specs.openstack.org/openstack/cinder-specs/specs/ussuri/query-cinder-resources-filter-by-time-comparison-operators.html
- Patch: https://review.opendev.org/c/openstack/cinder/+/740146
- The original feature's use of the OR operator instead of AND appears to be a bug, which this patch fixes
- We need to verify that no user relies on the OR filter as a feature, so that we don't break backward compatibility
- #action: Verify that the patch takes the right approach to fix this as a bugfix
- #action: check whether this is also the case for other APIs (apart from the one mentioned on the ML), such as backups, messages, etc.
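For context, a small self-contained sketch of why AND semantics are what a user asking for a time range expects (illustrative only; the real Cinder API parses these operators from query-string filters such as `?created_at=gt:<timestamp>`):

```python
from datetime import datetime

def matches(resource_time, filters, combine="and"):
    """Apply time-comparison filters (gt/lt/gte/lte) to a timestamp.
    Illustrative helper, not Cinder code: 'combine' selects whether
    multiple filters on the same field are ANDed or ORed together."""
    ops = {
        "gt": lambda t, v: t > v,
        "lt": lambda t, v: t < v,
        "gte": lambda t, v: t >= v,
        "lte": lambda t, v: t <= v,
    }
    results = [ops[op](resource_time, value) for op, value in filters]
    return all(results) if combine == "and" else any(results)

# A user asking for "created in January 2024" supplies both bounds:
window = [("gt", datetime(2024, 1, 1)), ("lt", datetime(2024, 1, 31))]
inside = datetime(2024, 1, 15)
outside = datetime(2024, 2, 5)
```

With OR (the behavior being treated as a bug), `outside` still matches because it satisfies the `gt` bound alone, so nearly every resource matches; with AND, only timestamps inside the window match.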
- Several patches to the StorPool Cinder driver
- The StorPool driver reports multiple pools under one backend
- This can be problematic if we are using image volume cache or cinder as glance backend
- When using the optimized clone path to create a bootable volume from an image, we check whether the image-volume exists on the same host (host@backend#pool) before cloning from it
- The pool part can differ between the image-volume and the new volume to be created, which causes the optimized path to be skipped
- Since the StorPool driver supports cross-pool cloning, this can be reported as a capability and leveraged to perform cross-pool volume cloning in the optimized path
- The only concern is that this shouldn't be added to the support matrix: it is a special case that provides no benefit in general operations, and listing it might lead end users to ask other vendors for the feature
- #action Review the storpool patch
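To make the host-matching discussion concrete, here is a hedged sketch (the function name and the capability flag are assumptions for illustration, not the actual driver or scheduler code) of comparing `host@backend#pool` strings while relaxing the pool check when a backend reports cross-pool cloning:

```python
def can_use_optimized_clone(image_volume_host, new_volume_host,
                            cross_pool_clone=False):
    """Compare two Cinder host strings of the form host@backend#pool.
    Illustrative sketch of the check described above: normally the full
    string (including the pool) must match; a backend reporting a
    cross-pool-clone capability could match on host@backend alone."""
    def split(host):
        # 'host@backend#pool' -> ('host@backend', 'pool')
        host_backend, _, pool = host.partition("#")
        return host_backend, pool

    src_hb, src_pool = split(image_volume_host)
    dst_hb, dst_pool = split(new_volume_host)
    if src_hb != dst_hb:
        # Different host or backend: never eligible for the fast path.
        return False
    # Same host@backend: pools must match unless the backend can clone
    # across pools.
    return cross_pool_clone or src_pool == dst_pool
```

Under this sketch, a StorPool backend reporting the capability would keep the optimized image-volume clone path even when the cached image-volume lives in a different pool of the same backend.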