CinderZedPTGSummary

Introduction

The fifth virtual PTG for the Zed cycle of Cinder was conducted from Tuesday, 5th April, 2022 to Friday, 8th April, 2022, 4 hours each day (1300-1700 UTC). This page will provide a summary of all the topics discussed throughout the PTG.

This document aims to give a summary of each session. More context is available on the cinder Zed PTG etherpad:


The sessions were recorded, so to get all the details of any discussion, you can watch/listen to the recording. Links to the recordings are located at appropriate places below.

Tuesday 05 April

recordings


For the benefit of people who haven't attended, this is the way the cinder team works at the PTG:

  • sessions are recorded
  • please sign in on the "Attendees" section of this etherpad for each day
  • all notes, questions, etc. happen in the etherpad; try to remember to preface your comment with your irc nick
  • anyone present can comment or ask questions in the etherpad
  • also, anyone present should feel free to ask questions or make comments during any of the discussions
  • we discuss topics in the order listed in the etherpad, making adjustments as we go for sessions that run longer or shorter
  • we stick to the scheduled times for cross-project sessions, but for everything else we are flexible

Release cadence discussion: tick-tock model

https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html

There was a PTL-TC session about this on 4th April, 2022 (Monday) and the following points were discussed:

  • This only affects the upgrade path and not the release model, which remains the same (i.e. 6 months)
  • It proposes a tick-tock release model: if a release is a tick, the subsequent release will be a tock, and so on
  • This new effort provides the ability to upgrade from a tick to the next tick release (skipping one release), but we cannot upgrade (directly) from one tock release to the next
  • There is a job in place, grenade-skip-level, that will run on tick releases and check the upgrade from tick to tick (i.e. N-2 to N)


There's a patch up by Gorka documenting the impact of the new release cadence on Cinder, and it requires changes based on the PTG discussion of the following points. Patch: https://review.opendev.org/c/openstack/cinder/+/830283


There will be a two-cycle deprecation process, which we can see with the following example: suppose we have a config option "cinder_option_foo" deprecated in AA (tick); we need to continue the deprecation process in BB (tock), and then we can remove that option in CC (the next tick).
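
As a rough, hedged illustration of what such a deprecation could look like with oslo.config (this is not the actual Cinder code; only "cinder_option_foo" comes from the example above, everything else is an assumption):

    # Hedged sketch only; not the actual Cinder code.
    from oslo_config import cfg

    opts = [
        cfg.StrOpt('cinder_option_foo',
                   deprecated_for_removal=True,
                   deprecated_since='AA',
                   deprecated_reason='No longer needed after the AA cycle.',
                   help='Deprecated in AA (tick); must stay through BB (tock); '
                        'can be removed in CC (the next tick).'),
    ]

    cfg.CONF.register_opts(opts)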

conclusions

  1. action: geguileo to update the patch with the current discussion points

Best review practices doc

whoami-rajat is working on putting together a review doc that would help new reviewers review changes efficiently, thereby increasing review quality. link: https://review.opendev.org/c/openstack/cinder/+/834448 A good point from the discussion that should be mentioned in the review doc: a reviewer doesn't have to review everything, but mentioning what they did review benefits the other reviewers a lot. E.g. if someone has reviewed the release note, it saves other reviewers the time of looking at it. There was also a suggestion to add review points specific to the tick/tock release cadence.

conclusions

  1. action: whoami-rajat to update the review doc with the suggested points

Secure RBAC

We made the project ID optional in the URL to support the system-scope use case, with the plan to expand scopes from the project level to the system level in Zed. System-level personas will deal with system-level resources that are not project specific, e.g. host information. We also have to take into account mixed personas for some resources: for example, a volume type is a system-level resource, but it acts at the project level if it is private, and it also needs to be listable by project members so they can create resources like volumes.

The community goal is divided into different phases and the goals for every phase are defined as follows:

  • Phase 1: project scope support -- COMPLETED
  • Phase 2: project manager and service role
  • Phase 3: (in AA) implement system-member and system-reader personas

The two new roles, i.e. manager and service, are intended to serve the following use cases:

  • Manager: it will have more authority than members but less authority than an admin. Currently it is useful for setting the default volume type for a project.
  • Service: useful for service-to-service interaction. E.g. currently we require an admin token for cinder-nova interaction, which lets a service like cinder do anything in nova as an admin (see the policy sketch below).
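
As a rough, hedged sketch of how the service role could be used in a policy default with oslo.policy (the rule name, check string, and API path below are illustrative assumptions, not Cinder's actual policy defaults):

    # Illustrative only: rule name, check string and API path are assumptions.
    from oslo_policy import policy

    ATTACHMENT_COMPLETE = policy.DocumentedRuleDefault(
        name='volume:attachment_complete',
        # Allow the dedicated service role (e.g. nova calling cinder) or a
        # project member, instead of requiring a blanket admin token.
        check_str='role:service or (role:member and project_id:%(project_id)s)',
        description='Mark a volume attachment as completed.',
        operations=[{'path': '/attachments/{attachment_id}/action',
                     'method': 'POST'}],
        scope_types=['project'],
    )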

There were doubts regarding resource filtering, which we can propose as an extended work item of the current SRBAC goal. Currently our resource filtering has the same functional structure for everyone, i.e. if it doesn't work for non-admins then it doesn't work for admins either. There was another concern regarding attribute-level granularity, e.g. the host field in the volume show response is a system-scope attribute which should not be returned in a response to a project-scoped token.

conclusions

  1. action: rosmaita to update policy matrix
  2. action: consider attributes (system level, like host) associated with personas (for show, list, filtering...)

More "Cloudy" like actions for Cinder

Walt discussed that certain cases should be handled automatically by cinder.

1) Certain actions should also invoke an automatic migration due to space limitations. For example, when there are multiple pools against the same backend and a user wants to extend a volume, the volume may not fit on its existing pool even though there is space for it on another pool on the same backend (see the pool-selection sketch after this list). There is a concern that if moving between pools takes a considerably long time, it would not be good for the extend operation to take that long, so we have the following points to consider:

  • moving between pools with dd will not be efficient
  • A user message could be useful in this case
  • Some backends (like RBD) can do this efficiently
  • It would be better to have a generic mechanism plus an efficient way in the driver to do it
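
As a conceptual sketch of the pool-selection step only (the stats layout and helper are illustrative, not Cinder's scheduler code):

    # Illustrative sketch: decide where the volume should live after an extend.
    def pick_pool_for_extend(current_pool, current_size_gb, new_size_gb, pools):
        """Return the pool the volume should be on after the extend, or None."""
        needed_gb = new_size_gb - current_size_gb
        if pools[current_pool]['free_capacity_gb'] >= needed_gb:
            return current_pool                 # extend in place
        for name, stats in pools.items():
            if name != current_pool and stats['free_capacity_gb'] >= new_size_gb:
                return name                     # migrate here first, then extend
        return None                             # no pool on this backend fits

    pools = {'pool_a': {'free_capacity_gb': 5}, 'pool_b': {'free_capacity_gb': 200}}
    print(pick_pool_for_extend('pool_a', 50, 100, pools))   # -> 'pool_b'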

There was a concern that many concurrent migrations happening at the same time due to this could cause performance issues, but we currently do that while migrating volumes and it works fine. There was also a suggestion not to migrate the original volume if it's a large one, and instead migrate a smaller volume to free up the space required for the extend; however, there are a lot of things to consider in this case, the major one being that the other volume might belong to another project and any failure during the operation might corrupt that other volume as well.

2) Backups sometimes induce a snapshot of the volume, and snapshots have to live on the same pool as the original volume. An optimization is for volume drivers to say whether they want to use snapshots or clones, depending on what's best for them. The driver can report whether it will require the full space for a temporary resource: if it does, the request will go through the scheduler to check for free space; otherwise we just proceed with the efficient cloning.
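
A hedged sketch of what such a capability report might look like in the stats a driver returns (the capability key below is an assumption for illustration, not an existing Cinder stats field):

    # 'temp_resource_needs_full_space' is a hypothetical capability name.
    backend_stats = {
        'volume_backend_name': 'example_backend',
        'total_capacity_gb': 1000,
        'free_capacity_gb': 400,
        # False: the backend can clone efficiently, so a temporary backup
        # resource does not consume the volume's full size and the scheduler
        # free-space check can be skipped.
        'temp_resource_needs_full_space': False,
    }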

conclusions

  1. action: Walt to write a spec describing the design and working of it

Unifying and fixing of Capacity factors calculations and reporting

There are some inconsistencies with our scheduler stats. For example, allocated_capacity_gb is created by the volume manager to let the scheduler know what cinder has already allocated against a backend/pool, but this value isn't being updated for migrations. The value can also go negative because the init_host calculations currently only account for in-use and available volumes. Patch: https://review.opendev.org/c/openstack/cinder/+/826510

We also have an issue where a few places in Cinder try to calculate the virtual free space for a pool, but the capacity filter and capacity weigher do it differently. Patch: https://review.opendev.org/c/openstack/cinder/+/831247

The backend's stats may show that there is lots of space free/available, but cinder's view might be different due to:

  • reserved_percentage
  • thin vs. thick provisioning
  • lazy volume creation (space unused until volume is actually created)
  • max_over_subscription_ratio

These calculations need to be corrected so that operators have an accurate idea of which backends are low on space and require attention, and so that we don't face resource creation failures when there actually is available space on the backend.
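
As a simplified sketch of the kind of virtual-free-space calculation involved (not the exact code of the capacity filter or the capacity weigher, whose differing implementations are precisely what the patches above try to unify):

    def virtual_free_gb(total_gb, free_gb, provisioned_gb, reserved_percentage,
                        max_over_subscription_ratio, thin_provisioning):
        # Space set aside by the operator is never considered free.
        reserved_gb = total_gb * reserved_percentage / 100.0
        if thin_provisioning:
            # With thin provisioning the usable space is the oversubscribed
            # total minus what has already been provisioned against the pool.
            return (total_gb * max_over_subscription_ratio
                    - provisioned_gb - reserved_gb)
        # Thick provisioning: only the physically free space counts.
        return free_gb - reserved_gb

    # Example: a 1000 GB thin pool with 2x oversubscription, 5% reserved and
    # 1500 GB already provisioned has 450 GB of virtual free space.
    print(virtual_free_gb(1000, 300, 1500, 5, 2.0, True))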

conclusions

  1. action: review the patches proposed by Walt

Volume affinity/anti-affinity issues

We have the affinity filters, but affinity is only honored when creating volumes, not when migrating them. One way to handle this is to preserve the scheduler hint (for affinity/anti-affinity) for later operations on the volume (an example hint is shown after this list). There are a lot of things to consider with this approach:

  • What happens if the original volume (we kept affinity from) is deleted/migrated?
  • Should we keep it as a UUID or host?
  • Should we consider the scheduler hint only for the volume create operation, or preserve it for all operations for the rest of the life of that volume?
  • How to go about the design: should we store it in the metadata or create a separate table to store the volume UUIDs provided for affinity/anti-affinity?
  • What to do when this requires cascading operations: should we move a lot of resources during the operation while maintaining the affinity/anti-affinity?
  • We also need to think about cases like backup/restore and when replication is enabled
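
As an illustration, affinity/anti-affinity hints are passed today in the volume-create request body, shown here as the JSON payload; the UUID is a placeholder:

    # Illustrative JSON payload for POST /v3/{project_id}/volumes. The
    # "different_host"/"same_host" hints are what the anti-affinity/affinity
    # filters consume at create time.
    create_body = {
        "volume": {"size": 10, "name": "db-data"},
        "OS-SCH-HNT:scheduler_hints": {
            "different_host": ["<uuid-of-existing-volume>"],
        },
    }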

conclusions

  1. action: define in our docs that we only honor hints on creation
  2. action: ask Nova team if they have already solved this problem. Nova has a spec up for a similar case: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/complex-anti-affinity-policies.html
  3. action: continue discussion in upcoming meetings and collect the points gathered in a spec

Wednesday 06 April

recordings

<Placeholder for recordings when they're available>

For Driver Maintainers: How Cinder (the project) Works

Brian provided a quick overview of Cinder's software development cycle, when key deadlines occur in each cycle, the difference between features and bugfixes, when bugfixes are backportable, things you can do to make sure your patches are ready for review, where key information about the project is located, etc.

link: https://etherpad.opendev.org/p/how-cinder-works

Documenting the Driver Interface

During this session, the team reviewed documentation patches for the driver interface class, which was a pending item from the last PTG. We got valuable feedback and we are planning to do it again to get these types of changes in. Patches to review:

conclusions

  1. action: do the review session again and review current patches

Third-party CI: testing

We found out that most of our third-party CI drivers are not testing encryption. A fixed key should be enough to test it. Later this discussion was generalized to what should be tested in third-party CI, and the following is the list:

  • compute API -- attachments, bfv
  • volume API
  • image API -- glance configured to use cinder as the backend (nice to have but not required)
  • scenario tests
  • cinder-tempest-plugin

A suggestion was that a Python script would be helpful to check what the CI promises and which tests are actually running in tempest (the tool would check tempest.conf and cinder.conf); for this we need to require 3rd party CI systems to store these files in a specific location.
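
A minimal sketch of what such a checker could look like, assuming an ini-style tempest.conf in a known location (the file path and the options checked below are illustrative assumptions):

    # Minimal sketch only: path and option names are assumptions.
    import configparser

    def report_tempest_coverage(path='tempest.conf'):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        checks = {
            'volume backup tests enabled':
                cfg.getboolean('volume-feature-enabled', 'backup', fallback=False),
            'volume snapshot tests enabled':
                cfg.getboolean('volume-feature-enabled', 'snapshot', fallback=False),
            'attached volume extend tests enabled':
                cfg.getboolean('volume-feature-enabled', 'extend_attached_volume',
                               fallback=False),
        }
        for name, enabled in checks.items():
            print('%-40s %s' % (name, 'yes' if enabled else 'NO'))

    report_tempest_coverage()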

upstream tempest tests: https://etherpad.opendev.org/p/cinder-community-CI-tests
example downstream tempest tests: https://etherpad.opendev.org/p/cinder-3rd-party-CI-tests-rh

conclusions

  1. action: add the current discussion points to the third-party CI document
  2. action: put together a list of desired tests for comments

Third-party CI: infrastructure

The NetApp team provided a great presentation on Software Factory. <placeholder for link of presentation>

Thursday 07 April

recordings

<Placeholder for recordings when they're available>

Status update on the Quota fix effort

Gorka did a presentation on the new quota system and provided options for choosing which system we would like in cinder. The team was in agreement that we want both drivers (counting and mixed mode), and the following points were discussed:

  • We understand that this increases the code to be maintained and we are ok with it
  • We don't want a deployment to find that the performance is BAD and be stuck on that release until we release the more efficient driver
  • It's acceptable to implement the 2 drivers in 2 different releases, with the counting one first

We need to add a release note mentioning the potential performance problem for large projects and that such deployments may want to skip a release.
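
As a conceptual sketch of the difference only (the counting helper is a placeholder, not the proposed implementation), the counting approach re-derives usage from the resources themselves at check time instead of maintaining separately tracked usage rows:

    # Conceptual sketch; count_project_gigabytes stands in for a query such as
    # SUM(size) over the project's non-deleted volumes.
    class OverQuota(Exception):
        pass

    def check_gigabytes_quota(count_project_gigabytes, project_id,
                              requested_gb, quota_gb):
        used_gb = count_project_gigabytes(project_id)
        if used_gb + requested_gb > quota_gb:
            raise OverQuota('project %s would exceed %d GB' % (project_id, quota_gb))

This avoids the drift and negative-usage problems of tracked counters, at the cost of a potentially expensive count for projects with many resources, which is why the mixed-mode driver was also discussed.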

conclusions

  1. action: Gorka to repropose the spec for dynamic resource counting and try to implement the mixed quota system with a single spec (in the same release if possible, otherwise the next one)

NetApp ONTAP - migration from ZAPI to REST API

The NetApp team presented the solution proposed for migrating from ZAPI to the REST API. We discussed the possibility of backporting this feature to older releases to ensure customers will not have problems with future NetApp ONTAP versions that will not support ZAPI calls.

link: https://etherpad.opendev.org/p/zed-ptg-netapp-rest-migration

There was a concern that we need to provide backward compatibility for customers who are using a particular version of OpenStack but not upgrading their ONTAP cluster. For this we can keep the ZAPI calls around for a long time and have a config option for customers to opt in to either the REST API interface or the ZAPI one.
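
A hedged sketch of the kind of opt-in option discussed (the option name and default below are illustrative assumptions; the real NetApp driver option may be named and defaulted differently):

    # Hypothetical option name; illustrative only.
    from oslo_config import cfg

    netapp_transition_opts = [
        cfg.BoolOpt('netapp_use_rest_api',
                    default=False,   # keep ZAPI as the default for existing deployments
                    help='Opt in to the ONTAP REST API client instead of ZAPI.'),
    ]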

Backporting discussion:

  • Train doesn't seem to be an option
  • Reasons that backporting seems reasonable:
    • It's not a NEW feature that users will have/be able to see
    • It's a feature today (the backend supports both protocols) but next year this will be a bug (OSP won't work with a newly bought system that comes with the latest firmware version).

There was some opposition regarding the backport reasoning:

conclusions

  1. action: vote at the last IRC meeting of April (video) on backporting down to Wallaby


Update on Image Encryption

We are still waiting for Barbican to implement secret consumers, which prevent secrets from being deleted accidentally. Alan has a concern that if barbican has multiple consumers, they need to be from the same users and projects.

  • ACLs should help in this case
  • If you're using an encrypted volume, cinder might allow you to have ownership of it but barbican may not

conclusions

  1. action: Luzi to repropose the cinder spec for Zed

Restoring from cinder backups create thick volumes instead of thin

Gorka explained the problem statement: when we do a migration from thin -> thin, the volume stays thin (dd is smart enough to do it if you ask it to), and when doing a backup it stays thin; but when restoring, the result is a thin volume that behaves like a thick volume (because all the gaps are filled and it takes up the full space of the volume).

There was a suggestion that SCSI has commands to read only the allocated blocks, but we're not sure such commands exist. Eric suggested that we can do this simply by detecting zero-filled blocks while reading and not writing them, while writing the other blocks (a sketch of this idea follows the lists below). Gorka's concern is whether a zero-filled block can be valid data that we do want to write. Example of storage that doesn't return zeroes:

  • There are different types of TRIM defined by SATA Words 69 and 169 returned from an ATA IDENTIFY DEVICE command:
    • Non-deterministic TRIM: Each read command to the logical block address (LBA) after a TRIM may return different data.
    • Deterministic TRIM (DRAT): All read commands to the LBA after a TRIM shall return the same data, or become determinate.
    • Deterministic Read Zero after TRIM (RZAT): All read commands to the LBA after a TRIM shall return zero.

On the other hand, further supporting the point of skipping zero blocks:

  • SCSI thin HAS to return zeroes as per specs with UNMAPed blocks.
  • SCSI will also do it on thick if it has LBPRZ set
  • NVMe seems to be the same with DEALLOCATEd blocks
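
A minimal sketch of the zero-block-skipping idea (the chunk size and function are illustrative; it assumes the destination already exists at full size and that unwritten blocks read back as zeroes, and it is only safe for a new, empty volume, which is exactly the caution in the conclusions below):

    # Illustrative sketch only: read the backup in chunks and seek over
    # all-zero chunks instead of writing them, so the destination stays thin.
    import os

    CHUNK = 4 * 1024 * 1024            # 4 MiB, similar in spirit to qemu-img's -S
    ZERO_CHUNK = b'\0' * CHUNK

    def sparse_restore(src_path, dst_path):
        with open(src_path, 'rb') as src, open(dst_path, 'r+b') as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                if chunk == ZERO_CHUNK[:len(chunk)]:
                    # Leave a hole: relies on the backend returning zeroes for
                    # unwritten/unmapped blocks (see the SCSI/NVMe notes above).
                    dst.seek(len(chunk), os.SEEK_CUR)
                else:
                    dst.write(chunk)
            dst.flush()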

conclusions

  • action discuss this again if anyone is interested in taking this up
    • what's the deadline? - see spec deadline for e.g. Zed
  • action find the cinder code that is currently doing this
    • dd conv=sparse
    • qemu-img convert (-S defaults to 4k)
  • When writing back we have to be CAREFUL:
    • If it's an existing volume we cannot just skip a specific location (since it could have data)
  • action Zaitcev to take a look and report - for sure before next PTG, hopefully in a month!
  • action look for places where rbd sparsify is useful

Cross Project with Manila team

Devstack-plugin-ceph changes