The Ninth virtual PTG for the 2024.2 (Dalmatian) cycle of Cinder was conducted from Tuesday, 9th April, 2024 to Friday, 12th April, 2024, 4 hours each day (1300-1700 UTC). This page will provide a summary of all the topics discussed throughout the PTG.

Cinder 2024.2 (Dalmatian) Virtual PTG, 9-12 April 2024

This document aims to give a summary of each session. More information is available on the cinder 2024.2 Dalmatian PTG etherpad:

The sessions were recorded, so to get all the details of any discussion, you can watch/listen to the recording. Links to the recordings for each day are below their respective day's heading.


Recording - Day 1, Part 1

Improve driver documentation

When proposing new or updating existing drivers, our interface can be confusing. We agreed on an effort to migrate class documentation to the interface definitions, make clear the mandatory and optional methods for drivers, and maintain this documentation going forward.

User visible information in volume types

Volume type metadata and extra specs are not visible to users, making it difficult to ascertain whether a volume type will lead to encryption or replication. Agreement was reached on metadata fields similar to Glance's metadefs, allowing drivers to report capabilities in a standard way and allowing that metadata to become visible to admins and users.

Optional backup driver dependencies

Although volume driver dependencies are optional, those of our backup drivers are listed in requirements and are therefore always installed irrespective of deployment configuration. Agreement was reached to move these dependencies to driver-requirements. This should simplify efforts of both deployers and packagers.

Simplify deployments

A few ideas were proposed to improve our deployment:

  • When cinder-volume is deployed in an active-active configuration, our backend_host parameter must be updated on every backend to have the same value; we would like to remove this additional step. 
  • Use of predictable names to define tenants, instead of unique IDs.
  • Have a dedicated quota section in our configuration for quota-related options.

Recording - Day 1, Part 2

Response schema validation

A spec to add response schema validation in addition to our existing request validation was proposed. With adequate coverage, clients in various languages could be autogenerated (in theory). No significant objections were raised, and there was agreement to review related patches this cycle.

Documentation of Ceph auth requirements

We do not provide comprehensive and easy-to-find documentation on exactly what authentication/permission expectations there are between different services, leaving deployers to troubleshoot on their own without being able to rely on upstream best practices. Agreement was reached to improve this situation in the current cycle.

Cinder backup improvements

An accumulation of bug fixes and performance improvements have stalled in the review queue. We went through the major ones as a team to attempt to bring awareness and unblock the remaining review requirements. See WIKI notes for specific details.

Migrating backups between multiple backends

There is desire to support multiple backup backends where new backups go to a new backend while backups in an old backend remain readable. We want to avoid needing to create a full backup in the new backend to support incremental snapshots (and the associated charge). A spec will be proposed for review and work towards this goal will proceed during this cycle.

Recording - Day 2, Part 1

Cross-project with Glance

In a cross-project collaboration with the Glance team, an improved method for image migration was proposed. Agreement to introduce a new migration operation in Glance. Cinder context was provided and a consensus on path forward was reached.

Recording - Day 2, Part 2

Cross-project with Nova

In a cross-project collaboration with both Nova and Glance, the topic of image encryption was discussed. Dan Smith from the Nova team provided input from the Nova side. Cinder and Nova expect LUKS formatted images, but Glance currently supports GPG encrypted images - requiring re-encryption prior to use. It was noted that LUKS encrypted images can be created without root permission and the Glance team is now looking to drop GPG support and consolidate around LUKS as our unified format.

Recording - Day 3

Active-Active support for NetApp

A NetApp engineer raised questions about adding active-active support to the driver. Questions were answered and that work should proceed in this cycle.

Performance of parallel clone operations

For the clone operation in cinder we're using a single distributed lock to prevent the source volume from being deleted mid-operation. This causes multiple concurrent clone operations to block. Under the right conditions, this can cause significant performance degradation. Multiple possible solutions were discussed (e.g. read-write locks) and a consensus to use the DB was reached. Some details remain unclear; a spec is awaited before moving forward.

Volume encryption with user defined keys

Cinder does not currently support encryption with a key provided by the user. With such support, users could manage their own keys, and data could be recovered even if the keys were lost on the deployment side. There are several technical challenges to supporting this. Several of these hurdles were raised; more thought and research is needed before we have a spec that could be reviewed.

Recording - Day 4

Tuesday 9 April

Improve driver documentation (whoami-rajat)

User visible information in volume types

  • Spec: https://review.opendev.org/c/openstack/cinder-specs/+/909195
  • many possible ways to realize this are outlined in the spec - please read the options
  • Needs a decision in which direction this should be going
  • Presented in cinder meeting previously
  • use case: whether a volume type leads to encryption or replication in the backend or not
  • need encryption and replication extra specs to be visible for users
  • extra table in DB for extra spec metadata
    • whitelist/blacklist in the API to evaluate whether it should be visible to users or not
  • Alan already worked on user visible extra specs
    • it shows replication
      • Josephine says it is partial info
        • replication can be from cinder side - volume type
        • can be from backend side - this part is not visible currently
  • Brian proposes a metadata field in volume type that shows these properties
    • operators might have to duplicate info - in metadata and extra specs
  • metadefs in glance could be a reference
    • catalog of metadata for various resources
    • encryption - true/false
  • Gorka says it seems to be a 2 part problem
    • show extra specs to admins
    • human operator knows about it but it isn't reported anywhere - like RBD supporting replication
  • we need drivers to report the backend information and it can be shown to users with user visible extra specs
  • we need a standardized way to describe properties like encryption/replication etc. -- it won't be a good idea to do this in the description field
  • define keys and validate in metadefs to maintain standardized key names
  • we can do the metadef things in parts
    • define keys that should be used
    • then start validating + more features
  • example from glance/nova image properties
  • we will be going with the approach of metadefs
  • #action: update the spec to leverage metadefs to achieve this functionality
    • the metadefs can be part 2 (nice to have) for now
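
The metadefs approach agreed above can be sketched roughly as follows. This is an illustrative sketch only, assuming a Glance-metadefs-style catalog of standardized, user-visible volume type properties; the namespace and key names here are hypothetical, not the agreed schema:

```python
# Hypothetical metadefs-style catalog for user-visible volume type
# properties (names are illustrative only).
VOLUME_TYPE_METADEFS = {
    "namespace": "OS::Cinder::VolumeType",
    "properties": {
        "encryption": {
            "type": "boolean",
            "description": "Volumes of this type are encrypted",
        },
        "replication": {
            "type": "boolean",
            "description": "Volumes of this type are replicated",
        },
    },
}


def validate_property(key, value):
    """Validate a reported capability against the metadef catalog.

    Defining the keys first and validating against them later matches
    the 'do the metadef things in parts' plan from the session.
    """
    prop = VOLUME_TYPE_METADEFS["properties"].get(key)
    if prop is None:
        raise KeyError("unknown property: %s" % key)
    if prop["type"] == "boolean" and not isinstance(value, bool):
        raise TypeError("%s must be a boolean" % key)
    return True
```

Standardized key names would let drivers report capabilities uniformly while the API filters what becomes visible to users.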

Make backup driver's dependencies optional

  • Bug: https://bugs.launchpad.net/cinder/+bug/2058601
  • While dependencies of volume drivers are optional, dependencies of backup drivers are mandatory and are therefore always installed
  • Can we apply the same approach to avoid installing unused packages?
  • proposal to move dependencies to driver-requirements
  • will packages like boto3 still be checked by requirements checks?
  • modification to setup.cfg
  • comments to denote which requirements are for volume drivers vs. backup drivers
  • will start with s3 patch as POC, others to follow
  • concerns about centralized checks (licenses, etc) if packages optional
  • Brian is mentioning about: https://review.opendev.org/c/openstack/requirements/+/915165
    • this removes driver-specific dependencies from global requirements and upper constraints (UC)
  • glance_store maintains driver dependencies in test-requirements
  • we should have a centralized way of monitoring these dependencies so we don't end up with a version conflict where glance and cinder cannot both install a package
  • team thinks the content of setup.cfg is not checked by the requirements tooling
  • #action: takashi to revise his patches (separate out the backend driver requirements in setup.cfg by a comment)
    • takashi to check on whether requirements job pays attention to setup.cfg "extras"
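
The proposed separation could take the form of setuptools/pbr "extras" in setup.cfg, similar to how glance_store handles its driver dependencies. A rough sketch under those assumptions; the section names and packages shown are illustrative, not the actual patch:

```ini
[extras]
# Optional backup driver dependencies, installed only on demand,
# e.g. `pip install cinder[backup_s3]` (names here are illustrative).
backup_s3 =
  boto3
backup_swift =
  python-swiftclient
```

Deployers and packagers would then pull in only the dependencies for the backup drivers they actually enable.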

Simplify deployments

Add response schema validation (and fix gaps in our request body and query string validation)

  • Spec: https://review.opendev.org/c/openstack/cinder-specs/+/914543
  • We (the SDK team) would like to generate OpenAPI schemas (with extensions to support e.g. microversions and actions) for core OpenStack services. We'd like these to be stored in-tree to ensure things are complete and up-to-date, to avoid landing another large deliverable on the SDK team, and to allow Cinder to fix their own issues
  • OpenAPI 3.1 is a superset of JSONSchema, which means we can use the same tooling we currently use for this (read: JSON Schema *everywhere*)
  • This will take the form of a glob of new JSON Schema dictionaries in 'cinder/api/schemas' plus decorators for our various (non-deprecated) APIs
    • We will also add decorators to indicate other things that will be useful in spec generation, such as a highlighting removed APIs (HTTP 410 (Gone)) and resource actions (HTTP 400 (Bad Request))
  • Validation of response bodies will be enabled by a new config option and will be opt-in to avoid breaking production. We will however turn it on by default in our unit, functional and integration tests
  • Eventually API documentation will switch from os-api-ref to a new tool developed and owned by the SDK team, but this is a stretch goal. When this happens, only the Sphinx extension itself will live out-of-tree (like os-api-ref today)
  • Advantages:
    • We can start auto-generating API bindings in a load of languages (Go, Python, Rust, ...)
    • We will have a mechanism to avoid accidentally introducing API changes
    • Our API documentation will be (automatically) validated
    • We will likely highlight bugs and issues with the API
  • Disadvantages
    • There will be a lot of "large" reviews to be attended to (but see point about self-validation above)
  • Open questions:
  • Meeting Notes:
    • Rajat to review spec, there is some schema validation in cinder already
    • what we have does not handle responses
      • we do handle responses but not as part of cinder schema validation, it is done with API sample tests (and also validations on tempest side)
    • Stephen will clean up existing patch as POC
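
The opt-in response validation described above can be sketched with a minimal decorator. This is not the actual cinder implementation (the real work uses full JSON Schema documents in 'cinder/api/schemas'); a trivial required-keys check stands in for schema validation here, and all names are hypothetical:

```python
import functools


def validated_response(required_keys, enabled=True):
    """Decorate an API handler to validate its response body.

    `enabled` stands in for the proposed config option: off in
    production by default, on in unit/functional/integration tests.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            body = func(*args, **kwargs)
            if enabled:
                missing = [k for k in required_keys if k not in body]
                if missing:
                    raise ValueError("response missing keys: %s" % missing)
            return body
        return wrapper
    return decorator


@validated_response(["volume"])
def show_volume():
    # stand-in for a real API controller method
    return {"volume": {"id": "fake-id", "size": 1}}
```

With real JSON Schema documents attached this way, the same schemas can drive OpenAPI generation and documentation validation.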

Documentation of Ceph auth caps for RBD clients used by Cinder / Glance / Nova is missing or inconsistent

Cinder Backup improvements

Migrate cinder backup backend transparently

  • use case: migrate a backup backend data (like swift) to another (like S3) without downtime
    • new data should go to s3 but old data should be intact
  • It's not really a migration in the sense of moving data from one backend to the other
    • It's about using a new backup backend and still have the old one readable
    • Ideally they want to be able to do incremental backups in the new backend and restore using both (following the chain)
  • Alan mentions about supporting multiple backup backends as a solution
  • Gorka mentions the need of backup_type (similar to volume_type for volume backends)
  • this is an example of spec template
  • We used to have another spec that tried to do something similar
  • Old patch that started this work: https://review.opendev.org/c/openstack/cinder/+/519355
  • limitation: do incremental backup on new backend (s3) which has parent backup in old backend (swift)
    • could be a followup feature
    • workaround: create a full backup in new backend -- customers will be charged more (2TB full backup vs 200MB incremental backup)
  • Minimum implementation
    • create backup_type and associate it to a particular backup backend
    • In create backup, the scheduler will check if the backup_type is associated with a backend; if so, create it there, else randomize the create between the available backup backends
    • Configuration of enabled_backup_backends and the sections? Maybe this can be optional and we just use [DEFAULT] for now in all of them
    • create, delete, restore etc features should work based on the backup type
  • #action: write a high level spec for discussion based on the above points
  • reach out to us on #openstack-cinder channel on OFTC
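
The minimum implementation's scheduling rule can be sketched as follows. This is a hedged illustration of the logic agreed above, not real cinder code; the mapping, backend names, and function are all hypothetical:

```python
import random

# Hypothetical backup_type -> backend association and the list of
# enabled backup backends (stand-ins for real configuration).
BACKUP_TYPE_BACKENDS = {"s3-type": "s3-backend"}
ENABLED_BACKUP_BACKENDS = ["swift-backend", "s3-backend"]


def schedule_backup(backup_type=None):
    """Pick a backup backend for a new backup.

    A backup_type pinned to a backend goes there; otherwise the
    scheduler randomizes across the enabled backup backends.
    """
    backend = BACKUP_TYPE_BACKENDS.get(backup_type)
    if backend is not None:
        return backend
    return random.choice(ENABLED_BACKUP_BACKENDS)
```

Old backups would remain readable on their original backend, while new backups land on the backend selected here.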

Wednesday 10 April

general info/observations

Continue doc conversation from day before

Cross project with glance about same store image migration (whoami-rajat)

  • Meeting will be in the Cinder "room"
    • https://etherpad.opendev.org/p/apr2024-ptg-glance#L157
    • https://review.opendev.org/c/openstack/glance-specs/+/914639
    • Generic Image migration + Optimization for same store migration
    • WIP Spec (high level details for discussion purposes): https://review.opendev.org/c/openstack/glance-specs/+/914639
    • Description:
      • Currently the preferred way of migration in glance is two step
        • 1. Copy the image from source store to destination store
        • 2. delete the image from source store
      • As I can see, there are two problems with this approach:
        • 1. requires manual intervention from operators after waiting for image copy to finish and then delete the image from source store
        • 2. No way for stores to optimize the operation
      • This can be addressed by introducing a migration operation in glance which will have two features
        • 1. A generic migration workflow where we will perform the image copy and delete in the same API operation
        • 2. Allow an interface for glance store methods to optimize the migration if possible else it will fall back to the generic workflow

Cross project session with nova and glance about in-flight image encryption (rosmaita)

  • https://review.opendev.org/c/openstack/glance-specs/+/609667
  • Nova and Cinder require LUKS format
    • for encryption, cinder gets the binary secret (a byte array) from barbican and converts it into a string of hex digits (which is used as the luks passphrase), Nova doesn't - it generates passphrases directly and stores them in Barbican
  • current proposal/patchset:
  • new proposal (as interpreted by mhen)
    • get rid of GPG encryption and vastly simplify the patchset by using LUKS encryption for images like Cinder and Nova already do when creating images of encrypted disks
      • as proposed by Dan Smith (Nova)
    • figure out which metadata to add to Glance images to properly reflect the new use cases
      • maybe streamline existing attributes like "cinder_encryption_key_id" and rename it to the same as Nova and the (to-be-introduced) user-side are using
    • Cinder would keep its behavior for Cinder-created encrypted volumes
      • Cinder lets Barbican generate a binary key as secret_type=symmetric
      • Cinder uses binascii.hexlify() on the binary key and passes the result as passphrase to LUKS
      • any image created from such volume would keep the reference to the key that is marked as secret_type=symmetric and would trigger the binascii.hexlify() call before use
    • support for Nova- or user-supplied images is to be added to Cinder
      • secret_type=passphrase indicates that the Barbican secret carries the final passphrase (not binary), this instructs Cinder *not* to binascii.hexlify() the secret payload before passing it to LUKS
      • users can use qemu tooling to create a LUKS image and put the passphrase into Barbican by specifying secret_type=passphrase
        • as an alternative users can have Barbican create the key, do hexlify themselves and specify secret_type=symmetric to imitate what Cinder does, if they want the entropy of Barbican (e.g. HSM)
      • Cinder can use the LUKS encryption contained in the image as-is and copy the LUKS-encrypted blocks 1:1 into the volume backend storage, it just needs to differentiate between secret_types to handle the key/passphrase correctly
        • it already does this when restoring images it created itself during the "os-volume_upload_image" Cinder API action from encrypted volumes
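
The secret handling described above can be sketched in a few lines: Cinder hex-encodes the binary symmetric secret from Barbican to obtain the LUKS passphrase, whereas a secret_type=passphrase payload would be used as-is. The helper name is hypothetical; the binascii.hexlify() behavior is as stated in the notes:

```python
import binascii
import os


def luks_passphrase(secret_payload, secret_type):
    """Derive the LUKS passphrase from a Barbican secret payload."""
    if secret_type == "symmetric":
        # Cinder's existing behavior: hex-encode the binary key.
        return binascii.hexlify(secret_payload).decode("utf-8")
    if secret_type == "passphrase":
        # Proposed: the payload already is the final passphrase.
        return secret_payload
    raise ValueError("unsupported secret_type: %s" % secret_type)


# stand-in for a Barbican-generated 256-bit binary key
key = os.urandom(32)
assert luks_passphrase(key, "symmetric") == binascii.hexlify(key).decode()
```

Differentiating on secret_type is what lets Cinder copy LUKS-encrypted image blocks 1:1 while still handling the key or passphrase correctly.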

Thursday 11 April

Add Active-Active support for NetApp iSCSI/FCP drivers

  • As part of this release, we will implement active-active support for NetApp iSCSI and FCP drivers. This will allow users to configure NetApp iSCSI/FCP backends in cinder clustered environments.
  • failover and failover_completed methods will be implemented as proposed in this spec https://specs.openstack.org/openstack/cinder-specs/specs/ocata/ha-aa-replication.html
  • geguileo: Sounds good, and they already have experience since they did it for the NFS driver
  • Mind the release schedule and deadlines (feature freeze)

Discuss bug https://bugs.launchpad.net/cinder/+bug/2060830

  • create volume from volume/snapshot creates a lock with delete_volume
  • operations are serialized due to single lock for clone operations
  • lock prevents source volume from being deleted during operation (same for snapshots)
  • other operations managed by status field to handle this
  • why do we use a shared lock for this particular one?
  • we could update the status as with other operations
  • problem: no reference counting for nested operations, no way to reach original state
  • gorka empathizes
  • a read-write lock would be quite appropriate for this case
  • tooz makes this ^ complicated
  • consensus growing around cinder-specific solution using the DB to implement a rw-lock
  • DB will be mysql/mariadb as it's the one we officially support
  • alternative is to set status field and implement ref counts, not ideal but consistent with code base
  • #action: revisit in upstream meeting if anyone interested can assemble a solution using db locking semantics
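
The reader/writer semantics the DB-backed lock would provide can be illustrated with a minimal in-memory sketch. This is pseudologic only; the real solution discussed would use atomic UPDATE ... WHERE statements against MySQL/MariaDB, and the class and method names here are hypothetical:

```python
class VolumeRWLock:
    """Illustrative rw-lock: clones are readers, delete is the writer."""

    def __init__(self):
        self.readers = 0      # concurrent clone operations in flight
        self.writer = False   # a delete holding the exclusive lock

    def acquire_read(self):
        # Many clones may proceed concurrently, but not during a delete.
        if self.writer:
            raise RuntimeError("volume is being deleted")
        self.readers += 1

    def release_read(self):
        self.readers -= 1

    def acquire_write(self):
        # Delete must wait until no clone is reading the source volume.
        if self.writer or self.readers:
            raise RuntimeError("volume busy (clones in progress)")
        self.writer = True
```

The reference counting on readers is exactly the piece that a simple status-field approach lacks, which is why the status-based alternative was considered less ideal.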

Volume encryption with user defined keys

  • Spec: https://review.opendev.org/c/openstack/cinder-specs/+/914513/1
  • Cinder currently lacks API support to create a volume with a predefined (e.g. already stored in Barbican) encryption key.
  • Meeting Notes:
    • The idea is to create volumes from pre-existing keys from barbican
    • The preferred way is to ask cinder to create an encrypted volume and cinder communicates with barbican to create the key
    • Cinder creates the passphrase by converting the barbican key into a hex value (binascii.hexlify())
    • The user will not be able to decrypt the volume with their own Barbican key unless they mimic Cinder's encryption procedure, since Cinder currently strictly transforms the secret using binascii.hexlify() before using it as the passphrase
    • here's the info about the secret types:
    • The proposal is to have the development in parallel of
      • 1. API change to allow creating volumes with pre-existing secret in barbican
        • implement passing Barbican secret ids during volume creation API call and skip secret order create done by Cinder internally
        • but check received secrets in regards to their metadata (cipher, mode, bit length) to be compatible with the volume type's encryption specification
          • (which Cinder didn't need to do before since it always created secrets itself in a closed ecosystem)
      • 2. support for "passphrase" secret types from Barbican, which will circumvent Cinder's binascii.hexlify() conversion and be used as passphrases as-is in Cinder
        • currently only "symmetric" is supported, which is transformed using binascii.hexlify() by Cinder before passing it to LUKS
    • need to review the documentation around this, particularly, what an encryption type is and what the fields are used for (admin facing), and also what needs to be supplied as end user facing docs
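
The compatibility check mentioned in point 1 can be sketched as follows: a user-supplied Barbican secret's metadata (cipher, mode, bit length) must match the volume type's encryption specification before being accepted. A hedged sketch only; the field names and function are hypothetical, not the spec's API:

```python
def check_secret_compatible(secret_meta, encryption_spec):
    """Reject a user-supplied secret that doesn't match the volume type.

    Cinder didn't need this check before, since it always created
    secrets itself in a closed ecosystem.
    """
    for field in ("cipher", "mode", "bit_length"):
        if secret_meta.get(field) != encryption_spec.get(field):
            raise ValueError(
                "secret %s %r does not match encryption type %r"
                % (field, secret_meta.get(field), encryption_spec.get(field)))
    return True
```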

Review CI updates


Friday 12 April

Review development cycle processes & schedule & documentation --> postpone

Cinder Backup improvements

Just FYI: new Spec for Image Encryption with LUKS