Introduction

The sixth virtual PTG for the 2023.1 Antelope cycle of Cinder was conducted from Tuesday, 18 October 2022 to Friday, 21 October 2022, four hours each day (1300-1700 UTC). This page provides a summary of all the topics discussed throughout the PTG.

Cinder 2023.1 Antelope Virtual PTG 19 October, 2022


This document aims to give a summary of each session. More context is available on the cinder Antelope PTG etherpad:


The sessions were recorded, so to get all the details of any discussion, you can watch/listen to the recording. Links to the recordings are located at appropriate places below.


Tuesday 18 October

recordings

User survey feedback

https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030843.html
https://docs.google.com/spreadsheets/d/1hHC4hg_Zt9FLYYJ7UA9iVomBbExUhhyJd2QpmrriiBQ/edit#gid=0

The user survey feedback comments are summarized in the following 3 sections:

1) Done:


2) Actionable:

  • Document HA deployments
  • Online retyping between different Ceph RBD backends (clusters) ==> Eric will check whether libvirt supports it now
  • Improvements on encryption: key rotation, multiple LUKS keys ==> Could explore some ideas


3) Questions:

  • Real Active/Active ==> What does this mean specifically?
  • Live migration with Pure iSCSI ==> This should work in newer OpenStack releases
  • Error management:
    • Better attach/detach cleanup on failure ==> For example, not leaving volumes in reserved/detaching states?
    • Better error handling when create/mount/delete fails ==> User messages?
  • Better support for cinder-backup services, especially the filesystem drivers ==> Bug in driver?
  • Volume group expansion ==> Extend volumes? Or more operations (which)?


User survey question review

The details provided by operators in the user survey feedback were vague and the team agreed to revise the questions to yield more useful information in the feedback.

The team proposed some good ideas as follows:

  • Ask operators to provide the driver along with the protocol
  • Revise the list so operators select a driver together with its protocol, like NetApp iSCSI, HPE 3PAR FC, etc.
  • Alphabetical ordering would make it easy to find the relevant driver-protocol combination
  • Be specific about the feedback: provide the release and a Launchpad bug link if there is an issue


Based on these points, we've revised the user survey feedback questions in the following etherpad.

https://etherpad.opendev.org/p/antelope-ptg-cinder-user-survey-current-questions

conclusion

  • action: Brian to talk to Allison regarding the revised survey feedback (status after PTG: Done)

SLURP release cadence

The concept of SLURP (Skip Level Upgrade Release Process) was introduced because six-month upgrades are difficult, infeasible, or undesirable for some operators. 2023.1 Antelope will be the first SLURP release of OpenStack. Following are some of the details to keep in mind with respect to SLURP and non-SLURP releases.

  • every other release will be considered to be a “SLURP (Skip Level Upgrade Release Process)” release
  • Upgrades will be supported between “SLURP” releases, in addition to between adjacent major releases
  • Deployments wishing to move to a one year upgrade cycle will synchronize on a “SLURP” release, and then skip the following “not-SLURP” release
  • Testing: test upgrade between SLURP releases
  • Deprecations: deprecation, waiting, and removal can only happen in “SLURP” releases
  • Data migrations: Part of supporting “SLURP to SLURP” upgrades involves keeping a stable (read “compatible” not “unchanging”) database schema from “SLURP to SLURP”
  • Release notes: https://review.opendev.org/c/openstack/project-team-guide/+/843457


For detailed info: https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html

Cinder's well-known encryption problem

Presentation: https://docs.google.com/presentation/d/1HOHnO9T3BD1KO5uk_y34aWhMs_A5i9ANPn6zIujQxCk/edit

This has been a complex issue to handle and has been discussed across multiple PTGs. Another topic discussed, "Allocation size vs requested size for specific storage provider like Dell PowerFlex", has work items that will act as an initial base for the encryption work:

  • Keep two DB fields for the user size and actual size
    • requested size -> user size
    • allocated size -> real size
  • Partition the volume and only the partition with user size should be visible inside the VM


The encryption work will follow up on this initial work to implement the following:

  • Calculate the encryption header size to know how much user-visible space is available in the volume (see the sketch after this list)
  • Start encrypting the volume on creation instead of doing it on first attachment
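
As an illustration of the header-size item above, here is a minimal Python sketch. It assumes a fixed 16 MiB header (the usual LUKS2 default); the constant and the helper name are illustrative only and not part of any agreed design:

 # Illustrative only: 16 MiB is the common LUKS2 default header size, not a
 # value Cinder defines; real code would query the actual header overhead for
 # the chosen encryption format and parameters.
 LUKS2_HEADER_BYTES = 16 * 1024 * 1024

 def user_visible_bytes(allocated_gib):
     """Size (in bytes) usable inside the guest for an encrypted volume
     whose backend allocation is ``allocated_gib`` GiB."""
     return allocated_gib * 1024 ** 3 - LUKS2_HEADER_BYTES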

conclusions

action: Sofia to work on the encryption work after the initial base work is completed

Operator hour

Christian Rohmann joined us and briefed us about their deployment and the current pain points they have with respect to cinder. They were mostly focused on backup-related things, and a number of backup topics were discussed.


1) State of non-rbd cinder-backup drivers such as S3

The current problem is that non-RBD backends are not optimized, as they copy data chunk by chunk. They also don't work very well with different types of volumes, such as thin-provisioned or encrypted ones.

To address this issue, we will need to implement a generic block tracking feature, which can be split into two parts: backend and frontend.

  • action: Gorka agrees to do a brain dump of what he looked into for future reference


2) Encryption layer for backups

We can implement an encryption layer on the backups using barbican or a static key. The key scope could be global or per project.
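
A minimal sketch of what a static-key variant could look like, using the cryptography library's Fernet recipe to encrypt each backup chunk; the function names and the chunk-level granularity are assumptions, not an agreed design:

 # Hypothetical sketch: encrypt/decrypt a backup chunk with a static key.
 # A Barbican-managed key (global or per project) could replace STATIC_KEY.
 from cryptography.fernet import Fernet

 STATIC_KEY = Fernet.generate_key()  # in practice this would come from config
 _cipher = Fernet(STATIC_KEY)

 def encrypt_chunk(data: bytes) -> bytes:
     return _cipher.encrypt(data)

 def decrypt_chunk(token: bytes) -> bytes:
     return _cipher.decrypt(token)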


3) Backup features we currently have

We also discussed the backup features we currently have, so operators can make good use of them:

  • Limit concurrent backup/restore operations
  • Scale backup service vertically and horizontally to improve performance
    • We can configure the worker processes for backup
    • Run cinder backup in Active-Active


4) Some other issues that were mentioned and are pain points for operators:

  • Cannot recover a backup process if the service dies, would be good to have it continue where it left off
  • RBD image has a lock and there is no way to know who/what left it there
  • Interested in 512e/4k support for RBD
  • Resuming operations after restarts
  • auto migrate volumes in pool


Image cache issue when the volume created from the cache is smaller than the cache entry

Image cache is a very useful feature that allows us to clone and extend a volume from the cache instead of downloading the whole image from glance again and again, which is a significant optimization.

The problem we face is that if the first volume created with the image cache enabled is large (say 100GB), then subsequent volumes created from the same image will also be created with the same size as the first volume (i.e. 100GB), even when a smaller size (say 10GB) was requested.

We discussed possible ways to fix it:

  1. Create the first entry with the requested size, if another request comes in with a smaller size then update the cache entry
  2. Create the cache entry with the minimum sized volume required by the image
  3. Use a tuple (image-id, size) to query the cache entries and have multiple cache entries associated to a single image


The solution described in option 2 seems to be the simplest and most straightforward to implement.
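
A rough Python sketch of option 2, assuming (hypothetically) that the image's virtual size is available and is used to size the cache entry instead of the first requested volume size:

 # Hypothetical sketch of option 2: size the cache entry from the image's
 # minimum (virtual) size rather than from the first volume request.
 import math

 def cache_entry_size_gb(image_virtual_size_bytes):
     """Smallest whole-GiB volume size that can hold the image."""
     return max(1, math.ceil(image_virtual_size_bytes / 1024 ** 3))

 # A 3.2 GiB image would produce a 4 GiB cache entry; later requests for
 # larger volumes clone the entry and extend it to the requested size.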

conclusion

Wednesday 19 October

recordings

image encryption work update

The idea of this feature is to provide support for encrypted images. The work is currently dependent on the secret consumer work on the barbican side.

https://review.opendev.org/q/topic:secret-consumers

There is also a patch on os-brick side that can be reviewed: https://review.opendev.org/c/openstack/os-brick/+/709432

conclusion

action: Cinder team to review the os-brick change.

Scenarios where cinder does not check the integrity of image data

The issue is that when the show_multiple_locations config option is set to True in glance, malicious users are able to update the location information of a public/shared image.

This parameter is required by cinder to do certain optimizations when using the glance cinder store, so security is traded off for optimization. Also, we don't do image signature verification in the optimized path.

This is related to OSSN-0090: https://wiki.openstack.org/wiki/OSSN/OSSN-0090

conclusion

  • action: Brian to file a bug about image signature verification to check for certificates with a link to the nova implementation
  • action: check for documentation for optimization vs security tradeoff

Cinder isn't very cloud-like with pools enabled

When a backend reports pools, one or more pools can fill up with volumes. Once a pool is full, operations on volumes in that pool start to fail, while volumes in pools that aren't full are unaffected. Cinder should mitigate this by migrating the volume being operated on to a pool that has space for the operation. Today operators end up doing this manually as the only way to fix the failed operations; that manual intervention doesn't scale and isn't very 'cloud'. From a customer's perspective, operations on their volumes should just work.

Commands that fail on volumes whose pool is full:

  • backup
  • clone
  • snapshot
  • extend

Operations (like cloning a volume) will take more time, since if the selected pool is full we also need to add the time taken to migrate the volume before cloning it.

There was an idea of making the automatic migration configurable:

  • embed it into the volume type so that only volumes with a particular type can be migrated
  • keep it in the volume type vs. a global config option
  • both can be done together

conclusion

  • action: Walt to write a spec for the extend case (it can be updated later when other operations are also ready)

Migrating from cinderclient to OSC

Current OSC and cinderclient gaps: https://docs.openstack.org/python-openstackclient/latest/cli/decoder.html#cinder-cli

Projects like nova, neutron, and keystone are moving towards openstackclient, and glance is planning to as well. A lot of gaps between cinderclient and openstackclient have been bridged. The vision is to unify all the project-specific clients into OpenStackClient to provide a better UX.

Currently there are 2 ways to do an API call:

  • using python bindings in project specific clients
  • openstacksdk

openstacksdk has 3 layers (a brief usage sketch follows the list):

  • resource layer: set and get attributes for resources
  • proxy layer: connects to the project (nova, cinder) APIs
  • cloud layer: combines operations together
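
As referenced above, a minimal openstacksdk example using the proxy layer for a block storage call; the cloud name 'mycloud' is an assumption and would have to exist in clouds.yaml:

 # Minimal openstacksdk usage sketch: the proxy layer exposes per-service
 # attributes (conn.block_storage, conn.compute, ...) on a Connection.
 import openstack

 conn = openstack.connect(cloud='mycloud')  # reads credentials from clouds.yaml

 for volume in conn.block_storage.volumes():
     print(volume.id, volume.name, volume.status)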

conclusion

  • action: Rajat (whoami-rajat) to create a parity doc/sheet for OSC-cinderclient -- check for openstacksdk as well as osc-cinderclient gaps

cinder-backup is blocking nova instance live-migrations

We can't live migrate an instance while a volume backup is in progress, due to the volume status lock ('backing-up').

Existing spec: https://review.opendev.org/c/openstack/cinder-specs/+/818551/

Gorka thinks we could benefit from using the attachments API for internal attachment operations, like attaching during a migration.

conclusion

Thursday, 20 October

recordings

SRBAC update

Based on recent discussions with operators regarding scope, we will confine ourselves to the project scope only, and the personas to be implemented are project admin, project member and project reader. Cinder has already implemented all the personas but is missing the ``scope_type`` restriction in the policies.

The Tempest team is testing policies with enforce_scope and enforce_new_defaults set to True: https://review.opendev.org/c/openstack/tempest/+/614484

Following are the goals for 2023.1 (the first 3 are the important ones):

  • switch enforce scope to True by default
  • switch enforce new defaults to True (maybe, but definitely by 2023.2)
  • add scope type to policies -- for cinder
  • implement service role -- needed in keystone first

conclusion

  • update policy matrix
  • remove previously deprecated stuff -- this is not related to SRBAC but we split one policy into multiple to support granularity and now removing the old one (Eg: create_update policy split into create and update policies)
  • update the policy/base.py file so that generated strings in sample policy.yaml make sense
  • probably also change the names of the "constants" that are defined in the base file and used in all the individual policy files (because those are also misleading)
  • add the scope_type=['project'] to all rules (see the sketch after this list)
  • key thing: legacy admin (role:admin) should do everything in the new policy defaults that they could do in the old defaults
  • add tempest testing : https://review.opendev.org/c/openstack/tempest/+/614484
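
As referenced above, a hedged sketch of what adding scope_types to a rule could look like with oslo.policy; the policy name, check string, and operation are placeholders, not Cinder's actual definitions:

 # Illustrative oslo.policy rule with scope_types; the name, check string and
 # operation below are placeholders, not Cinder's real policy definitions.
 from oslo_policy import policy

 example_rule = policy.DocumentedRuleDefault(
     name="volume:example_action",
     check_str="rule:admin_or_member",  # placeholder check string
     description="Example rule restricted to the project scope.",
     operations=[{"path": "/volumes/{volume_id}", "method": "POST"}],
     scope_types=["project"],
 )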

Assisted volume extend for remotefs drivers

Filesystem-type drivers don't support online extend as of today.

There is an approach being discussed to make the online extend synchronous: https://review.opendev.org/c/openstack/nova-specs/+/855490

There are concerns about network failures, and cinder might have to wait some amount of time for a reply from nova, making the operation slower. Another concern was that we might end up with two code paths for different drivers: one using the extended event and the other using the new synchronous extend.

conclusion

  • action: kgube to write a spec clarifying the design changes needed on the cinder side to support this for FS drivers

Allocation size vs requested size for specific storage provider like Dell PowerFlex

The Dell PowerFlex driver works in a different way: it rounds every volume capacity up to a multiple of 8GB. The problem with this behavior is that when the user creates a volume with a requested size, the size shown in the DB doesn't reflect what is actually created on the backend.
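
For example, this rounding means a 10GB request is allocated as 16GB on the backend while the DB records 10GB. A minimal Python sketch of the arithmetic; the helper name is illustrative, not driver code:

 # Illustrative arithmetic: PowerFlex rounds capacity up to a multiple of 8 GB.
 import math

 def powerflex_allocated_gb(requested_gb, granularity_gb=8):
     return int(math.ceil(requested_gb / float(granularity_gb))) * granularity_gb

 # powerflex_allocated_gb(10) -> 16, while the DB currently records 10.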

The following approach seems to solve the issue:

  • Keep two DB fields for the user size and actual size
    • requested size -> user size
    • allocated size -> real size
  • Partition the volume and only the partition with user size should be visible inside the VM


Only the admin will be able to see the actual size, which will require a new microversion for it to be reported in the response; the real size should also be sent in notifications.

conclusion

action: JP to write a spec detailing the changes required on:

  • os-brick side (only show user visible size partition)
  • cinder DB side (also including the new microversion to get the user visible and actual size)
  • Optimization: Partition on volume create operation instead of the attach operation on os-brick side
  • The partition should always exist (even when an 8GB volume is requested), because otherwise it will not be possible to extend it
  • Use a partitioning method that allows recursive partitioning
  • On extend, Cinder will need to extend that partition
  • os-brick will need to receive a new flag in the connection info to tell it to use the first partition and return it (because users can have volumes with partitions)

os-brick privsep conversion

Nova shifted from rootwrap to privsep many cycles ago, but nova needs to keep the rootwrap files around because of os-brick. A recent security issue related to this was also reported: https://bugs.launchpad.net/os-brick/+bug/1989008

Stephen has proposed a couple of patches to migrate os-brick code from using rootwrap to privsep.
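
For reference, a minimal sketch of the oslo.privsep pattern such a migration moves to; the context name, config section, and capability set are illustrative, not the exact ones used in the proposed patches:

 # Illustrative oslo.privsep usage: a privileged context plus an entrypoint
 # replaces a rootwrap filter entry for the command being run.
 from oslo_concurrency import processutils
 from oslo_privsep import capabilities
 from oslo_privsep import priv_context

 default = priv_context.PrivContext(
     "os_brick_example",                     # illustrative prefix
     cfg_section="privsep_osbrick_example",  # illustrative config section
     pypath=__name__ + ".default",
     capabilities=[capabilities.CAP_SYS_ADMIN],
 )

 @default.entrypoint
 def remove_device(path):
     # Runs inside the privileged daemon instead of via sudo + rootwrap.
     processutils.execute("rm", "-f", path)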

conclusion

action: review the changes proposed by Stephen

  • https://review.opendev.org/c/openstack/os-brick/+/791271
  • https://review.opendev.org/c/openstack/os-brick/+/791272
  • https://review.opendev.org/c/openstack/os-brick/+/791273
  • https://review.opendev.org/c/openstack/os-brick/+/791274
  • https://review.opendev.org/c/openstack/os-brick/+/791275

Configurable soft delete