CinderZedPTGSummary

Introduction

The fifth virtual PTG, covering the Zed cycle of Cinder, was held from Tuesday, 5 April 2022 to Friday, 8 April 2022, four hours each day (1300-1700 UTC). This page provides a summary of all the topics discussed throughout the PTG.

This document aims to give a summary of each session. More context is available on the cinder Zed PTG etherpad:


The sessions were recorded, so to get all the details of any discussion, you can watch/listen to the recording. Links to the recordings are located at appropriate places below.

Tuesday 05 April

recordings


For the benefit of people who haven't attended, this is the way the cinder team works at the PTG:

  • sessions are recorded
  • please sign in on the "Attendees" section of this etherpad for each day
  • all notes, questions, etc. happen in the etherpad; try to remember to preface your comment with your irc nick
  • anyone present can comment or ask questions in the etherpad
  • also, anyone present should feel free to ask questions or make comments during any of the discussions
  • we discuss topics in the order listed in the etherpad, making adjustments as we go for sessions that run longer or shorter
  • we stick to the scheduled times for cross-project sessions, but for everything else we are flexible

Release cadence discussion: tick-tock model

https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html

There was a PTL-TC session about this on 4th April, 2022 (Monday) and the following points were discussed:

  • This only affects the upgrade path, not the release model, which remains the same (i.e. 6 months)
  • It proposes a tick-tock release model where, if a release is a tick, the subsequent release will be a tock, and so on
  • This new effort provides the ability to upgrade from tick->tick (skipping one release), but we cannot upgrade directly from tock->tock
  • There is a job in place, grenade-skip-level, that will run on tick releases and check the upgrade from tick->tick (i.e. N-2 to N)


Gorka has a patch up documenting the impact of the new release cadence on Cinder; it will need changes based on the points discussed at the PTG. Patch: https://review.opendev.org/c/openstack/cinder/+/830283


There will be a two-cycle deprecation process, which the following example illustrates: suppose we have a config option "cinder_option_foo" deprecated in AA (tick); the deprecation must continue through BB (tock), and the option can then be removed in CC (tick + 1).
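
A rough sketch of what that deprecation could look like in code, using oslo.config's standard deprecation fields ("cinder_option_foo" is the hypothetical option from the example above, not a real Cinder option):

  from oslo_config import cfg

  # Sketch only: "cinder_option_foo" is the hypothetical option from
  # the example above, not a real Cinder option.
  foo_opt = cfg.StrOpt(
      'cinder_option_foo',
      deprecated_for_removal=True,
      deprecated_since='AA',
      deprecated_reason='Deprecated in AA (tick) and kept through BB '
                        '(tock) so that both tick->tick and tock->tick '
                        'upgrades see the warning before removal in CC.',
      help='Illustration of the two-cycle deprecation process.')

  cfg.CONF.register_opt(foo_opt)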

conclusions

  1. action: geguileo to update the patch with the current discussion points

Best review practices doc

whoami-rajat is working on putting together a review doc that would help new reviewers review changes efficiently, thereby increasing the quality of reviews. Link: https://review.opendev.org/c/openstack/cinder/+/834448 A good point raised in the discussion that should be mentioned in the doc: a reviewer doesn't have to review everything, but mentioning what they did review benefits the other reviewers a lot. E.g. if someone has reviewed the release note, other reviewers save the time of looking at it. There was also a suggestion to add review points specific to the tick/tock release cadence.

conclusions

  1. action: whoami-rajat to update the review doc with the suggested points

Secure RBAC

We made the project ID optional in the URL to support the system-scope use case, with the plan to expand scopes from project level to system level in Zed. System-level personas will deal with system-level resources that are not project specific, e.g. host information. We also have to take into account mixed personas for some resources: e.g. a volume type is a system-level resource, but it acts at project level if it is private, and it also needs to be listable by project members so they can create resources like volumes.
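
For illustration, after that change both request forms below are valid against the Block Storage v3 API, with the project derived from the keystone token's scope rather than from the path:

  # Legacy form: project ID embedded in the URL
  GET /v3/{project_id}/volumes
  # Project ID optional: scope comes from the token
  GET /v3/volumes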

The community goal is divided into different phases and the goals for every phase are defined as follows:

  • Phase 1: project scope support -- COMPLETED
  • Phase 2: project manager and service role
  • Phase 3: (in AA) implement system-member and system-reader personas

The two new roles, i.e. manager and service, are intended to serve the following use cases:

  • Manager: it will have more authority than a member but less than an admin. Currently it is useful for setting the default volume type for a project.
  • Service: useful for service-to-service interaction. E.g. currently the cinder-nova interaction requires an admin token, which allows a service like cinder to do anything in nova as an admin. (See the sketch after this list.)
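
A minimal sketch of how such defaults could be expressed with oslo.policy (the check strings are illustrative assumptions; the actual defaults will be settled by the SRBAC work):

  from oslo_policy import policy

  # Sketch only: check strings are assumptions, not the defaults the
  # SRBAC work will actually ship.
  rules = [
      # Manager: more than a member, less than an admin; e.g. may set
      # the project's default volume type.
      policy.RuleDefault(
          name='volume_extension:default_set_or_update',
          check_str='role:admin or '
                    '(role:manager and project_id:%(project_id)s)'),
      # Service: scopes service-to-service calls (e.g. cinder-nova)
      # without handing the caller full admin rights.
      policy.RuleDefault(
          name='volume:attachment_delete',
          check_str='role:service or role:admin'),
  ]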

There were doubts regarding resource filtering, which we can propose as an extension work item to the current SRBAC goal. Currently our resource filtering has the same functional structure for everyone, i.e. if it doesn't work for non-admins then it doesn't work for admins either. There was another concern regarding attribute-level granularity, e.g. the host field in the volume show response is a system-scope attribute and should not be returned in a response to a project-scoped token.

conclusions

  1. action: rosmaita to update policy matrix
  2. action: consider attributes (system level, like host) associated with personas (for show, list, filtering...)

More "Cloudy" like actions for Cinder

Walt proposed that certain cases should be handled automatically by cinder.

1) Certain actions should automatically invoke a migration when space runs out. For example, when there are multiple pools on the same backend and a user wants to extend a volume that doesn't fit on its existing pool but would fit on another pool of the same backend. There is a concern that if moving between pools takes a considerably long time, the extend operation would become unacceptably slow, so we have the following points to consider (see the sketch after this list):

  • moving between pools with dd will not be efficient
  • A user message could be useful in this case
  • Some backends (like RBD) can do this efficiently
  • It would be better to have a generic mechanism plus an efficient driver-specific way to do it
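
A hypothetical sketch of the "extend with fallback migration" flow being discussed; all names here (do_extend, migrate_within_backend, create_user_message) are invented placeholders, not existing Cinder APIs:

  class NoValidPool(Exception):
      pass

  def extend_volume(volume, new_size, pools):
      """volume has .size, .pool, .backend; pools is a list of dicts
      with 'name', 'backend' and 'free_gb' keys."""
      needed = new_size - volume.size
      current = next(p for p in pools if p['name'] == volume.pool)
      if current['free_gb'] >= needed:
          return do_extend(volume, new_size)

      # Fall back: find another pool on the same backend that can hold
      # the whole extended volume, migrate, then extend.
      target = next((p for p in pools
                     if p['backend'] == volume.backend
                     and p['name'] != volume.pool
                     and p['free_gb'] >= new_size), None)
      if target is None:
          raise NoValidPool('no pool can fit the extended volume')

      # A generic dd-style copy between pools is slow, so emit a user
      # message; backends like RBD can do this move efficiently.
      create_user_message(volume, 'extend requires a pool migration; '
                                  'this may take a while')
      migrate_within_backend(volume, target)
      return do_extend(volume, new_size)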

There was a concern that this could cause a lot of concurrent migrations at the same time, leading to performance issues, but we already do that while migrating volumes and it works fine. There was also a suggestion to not migrate the original volume if it's a large one, and instead migrate a smaller volume to free up the space required for the extend; however, there is a lot to consider in that case, the major point being that the other volume might belong to another project, and any failure during the operation might corrupt that volume as well.

2) Backups sometimes induce a snapshot of the volume, and snapshots must live on the same pool as the original volume. An optimization: volume drivers can say whether they want to use snapshots or clones, depending on what's best for them. The driver can report whether it will require the full space for a temporary resource; if it does, the request goes through the scheduler to check for free space, otherwise we just proceed with the efficient cloning.
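
A minimal sketch of how a driver might report this (the temp_resource_needs_full_space stats key and supports_cow_clones attribute are invented names for illustration, not existing Cinder driver capabilities):

  # Sketch only: the stats key and "supports_cow_clones" attribute are
  # invented for illustration, not existing Cinder capabilities.
  class ExampleDriver(object):
      supports_cow_clones = True  # e.g. an RBD-style COW backend

      def get_volume_stats(self, refresh=False):
          stats = {'pool_name': 'pool-a'}  # plus the usual stats keys
          # COW-capable drivers need no extra space for a temp
          # snapshot/clone; full-copy drivers need the volume's full
          # size, so the scheduler must verify free space first.
          stats['temp_resource_needs_full_space'] = (
              not self.supports_cow_clones)
          return stats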

conclusions

  1. action: Walt to write a spec describing the design and working of it

Unifying and fixing of Capacity factors calculations and reporting

There are some inconsistencies in our scheduler stats. For example, allocated_capacity_gb is created by the volume manager to let the scheduler know what cinder has already allocated against a backend/pool, but this value isn't updated for migrations. It can also go negative, because the init_host calculation currently only accounts for in-use and available volumes. Patch: https://review.opendev.org/c/openstack/cinder/+/826510

We also have an issue where a few places in Cinder try to calculate the virtual free space for a pool, but the Capacity filter and Capacity weigher do it differently. Patch: https://review.opendev.org/c/openstack/cinder/+/831247

The backend's stats may show that there is lots of space free/available, but cinder's view might be different due to:

  • reserved_percentage
  • thin vs. thick provisioning
  • lazy volume creation (space unused until volume is actually created)
  • max_over_subscription_ratio

These calculations need to be corrected so that operators have an accurate idea of which backends are low on space and require attention, and so that we don't face resource creation failures when there actually is available space on the backend.
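
For reference, a simplified sketch of the virtual free space calculation that the filter and weigher should agree on, combining the factors listed above (this follows the general shape of cinder's capacity math, but treat the exact expression as an assumption; unifying it is what the patches are for):

  import math

  def virtual_free_capacity(total_gb, free_gb, provisioned_gb,
                            max_over_subscription_ratio,
                            reserved_percentage, thin=True):
      # Space held back from scheduling per reserved_percentage.
      reserved_gb = math.floor(total_gb * reserved_percentage / 100)
      if thin:
          # Thin provisioning: capacity is multiplied by the
          # over-subscription ratio and reduced by what is already
          # provisioned (even lazily, i.e. not yet physically used).
          return (total_gb * max_over_subscription_ratio
                  - provisioned_gb - reserved_gb)
      # Thick provisioning: only real free space counts.
      return free_gb - reserved_gb

  # e.g. a 1000 GiB pool, 5% reserved, ratio 2.0, 1200 GiB provisioned:
  # thin view -> 1000 * 2.0 - 1200 - 50 = 750 GiB virtually free.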

conclusions

  1. action: review the patches proposed by Walt

Volume affinity/anti-affinity issues