https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Jay+Bryant&feedformat=atomOpenStack - User contributions [en]2024-03-19T05:37:03ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=CinderCaracalMidCycleSummary&diff=184051CinderCaracalMidCycleSummary2023-12-15T15:08:49Z<p>Jay Bryant: /* Session One: R-17: 06 December 2024 */</p>
<hr />
<div>==Introduction==<br />
<br />
Welcome to the Cinder 2024.1 (Caracal) midcycle summary page!<br />
<br />
We conduct two midcycles during each OpenStack development cycle (six months) that act as checkpoints for the following:<br />
<br />
* Revisiting/following up on the topics discussed at the PTG<br />
* Discussing topics that were missed at the PTG due to the author's unavailability, lack of time, or other reasons<br />
* Checking the status of work items against the milestone<br />
<br />
<br />
There may be other reasons, but those highlighted above are the major ones.<br />
<br />
For 2024.1 (Caracal), the midcycles will happen at:<br />
# R-17: 6th December, 2023 (Wednesday) 1400-1600 UTC<br />
# R-7: 14th February, 2024 (Wednesday) 1400-1600 UTC<br />
<br />
<br />
Etherpad: https://etherpad.opendev.org/p/cinder-caracal-midcycles<br />
<br />
==Session One: R-17: 06 December 2023==<br />
===recordings===<br />
* Recording for Midcycle 1 (YouTube): https://youtu.be/QSKWA1St97A<br />
<br /><br />
<br />
We held our first midcycle of the 2024.1 (Caracal) development cycle on 6th December (R-17 week) between 1400-1600 UTC.<br />
<br />
* Retiring cinderlib<br />
* Rework of JovianDSS Driver<br />
* NFS online extend<br />
* Two os-brick patches and acceptable usage of `__init__()`<br />
* CI Monitoring<br />
* Supporting AND operation on time comparison filters<br />
* Several patches to the StorPool Cinder driver</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderAntelopeMidCycleSummary&diff=182545CinderAntelopeMidCycleSummary2023-01-19T16:04:05Z<p>Jay Bryant: /* recordings */</p>
<hr />
<div>==Introduction==<br />
<br />
Welcome to the Cinder Antelope midcycle summary page!<br />
<br />
We conduct two midcycles every OpenStack release (six months) that act as checkpoints for the following:<br />
<br />
* Revisiting/following up on the topics discussed at the PTG<br />
* Discussing topics that were missed at the PTG due to the author's unavailability, lack of time, or other reasons<br />
* Checking the status of work items against the milestone<br />
<br />
There may be other reasons, but those highlighted above are the major ones.<br />
<br />
For Antelope, the midcycles will happen at:<br />
# R-16: 30th November, 2022 (Wednesday) 1400-1600 UTC<br />
# R-9: 18th January, 2023 (Wednesday) 1400-1600 UTC<br />
<br />
Etherpad: https://etherpad.opendev.org/p/cinder-antelope-midcycles<br />
<br />
==Session Two: R-9: 18 January 2023==<br />
===recordings===<br />
* Recording for Midcycle 2: https://bluejeans.com/s/3cDgbhL7LLI<br />
* Recording for Midcycle 2 (YouTube): https://youtu.be/ZYMgvZbi2Dc<br />
<br /><br />
<br />
We held our second midcycle of the 2023.1 (Antelope) development cycle on 18th January (R-9 week) between 1400-1600 UTC.<br />
<br />
Many topics were discussed, but two important Cinder policies for active and new contributors to keep in mind are as follows:<br />
<br />
* Do reviews to get reviews<br />
** The idea is to promote open source contribution in terms of reviews, not just code commits<br />
** Contributors should actively review, and also help their organizations understand the importance of reviews in OpenStack contribution<br />
** For starting out on your code review journey, here is a guide on doing reviews efficiently:<br />
*** https://docs.openstack.org/cinder/latest/contributor/gerrit.html#efficient-review-guidelines<br />
<br />
* Avoid bare rechecks<br />
** Although not intentional, we sometimes skip the reason when rechecking, which in general is not a good practice; see our guidelines regarding rechecks:<br />
*** https://docs.openstack.org/cinder/latest/contributor/gerrit.html#ci-job-rechecks<br />
** There is also a "bare recheck" thread that tracks the number of bare rechecks per project; we should try to keep that number minimal<br />
*** https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031534.html<br />
<br />
<br />
The topics discussed, and their conclusions, are as follows:<br />
<br />
* Driver status: We have 4 new drivers this cycle<br />
** HPE XP driver: https://review.opendev.org/c/openstack/cinder/+/815582<br />
*** Rebranding of the Hitachi driver; once the CI reports are fine, it should be good to merge<br />
** Fungible NVMe TCP: https://review.opendev.org/c/openstack/cinder/+/849143<br />
*** Needs some changes to the clone in-use volume functionality, but looks almost complete<br />
** Lustre: https://review.opendev.org/q/topic:bp%252Fadd-lustre-driver<br />
*** Currently missing CI, but that can be addressed with an upstream CI like the NFS one<br />
*** Might require a new devstack plugin for the CI to work<br />
*** Planned to be postponed to the next cycle<br />
** NetApp NVME TCP: https://review.opendev.org/c/openstack/cinder/+/870004<br />
*** Recently proposed and a big code change<br />
*** Due to a review bandwidth shortage, we might need to postpone it to the next cycle<br />
<br />
* Cinderclient to OSC migration update<br />
** There are a few concerns regarding StoryBoard as the bug tracker, and a review bandwidth issue that may mean things take a long time to get merged in OSC<br />
** We will stop accepting new CLI changes into cinderclient starting with Bobcat<br />
<br />
* tox4 update<br />
** Apart from cinderlib and python-cinderclient, the other Cinder projects have adapted to tox4, and the gate is working fine<br />
** For cinderlib and python-cinderclient, we will pin tox to tox3 until tox4 is stable for those branches<br />
<br />
* python-six<br />
** The last cycle using Python 2 as a runtime was Train, and we still have six compatibility code in the Cinder codebase<br />
** The general consensus of the team is to move forward with removing six<br />
** We also don't have a backport problem, since the last active stable branch is Xena<br />
** Driver vendors are recommended to remove six code from their drivers<br />
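As an illustration (a hypothetical sketch, not code from the discussion), most six shims map directly onto Python 3 built-ins, so removal is largely mechanical:<br />

```python
# Hypothetical example of removing six compatibility shims now that
# Python 2 support is gone (the last Python 2 cycle was Train).
#
# Before (six):
#   import six
#   if isinstance(value, six.string_types): ...
#   text = six.text_type(value)
#   for key, val in six.iteritems(mapping): ...

# After (Python 3 only):
def normalize(value, mapping):
    """Return a text value and a list of mapping items, Python 3 style."""
    if isinstance(value, str):        # replaces six.string_types
        text = value
    else:
        text = str(value)             # replaces six.text_type
    items = list(mapping.items())     # replaces six.iteritems
    return text, items
```

`six.moves` imports are handled the same way, by importing the Python 3 module name directly.<br />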
<br />
==Session One: R-16: 30 November 2022==<br />
===recordings===<br />
* Recording for Midcycle 1 (YouTube): https://youtu.be/fKulY7whZlo<br />
<br />
We held our first midcycle of the 2023.1 (Antelope) development cycle on 30th November (R-16 week) between 1400-1600 UTC.<br />
<br />
We started out with specification status. We currently have 3 active specs:<br />
* Encrypted backups: https://review.opendev.org/c/openstack/cinder-specs/+/862601<br />
* Use assisted volume extend API: https://review.opendev.org/c/openstack/cinder-specs/+/864020<br />
* New backup state: (A new spec will be proposed, old discussion at https://review.opendev.org/c/openstack/cinder-specs/+/818551)<br />
<br />
<br />
We continued with drivers proposed/targeted for this cycle:<br />
<br />
* HPE XP driver: https://review.opendev.org/c/openstack/cinder/+/815582<br />
* Fungible NVMe TCP: https://review.opendev.org/c/openstack/cinder/+/849143<br />
<br />
<br />
Other topics (listed below) can be found in the etherpad as well as in the recording:<br />
<br />
* Remove XenServer image support <br />
* cinderlib Recursion Error<br />
* Infinidat driver improvements</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=181967Meetings/TechnicalCommittee2022-09-22T14:47:42Z<p>Jay Bryant: /* Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: 2022 Sept 22<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
* Location: IRC OFTC network in the #openstack-tc channel<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check<br />
** Bare 'recheck' state<br />
*** https://etherpad.opendev.org/p/recheck-weekly-summary<br />
* Zed cycle tracker checks<br />
** https://etherpad.opendev.org/p/tc-zed-tracker<br />
* 2023.1 cycle PTG Planning<br />
** TC + Leaders interaction sessions<br />
*** https://etherpad.opendev.org/p/tc-leaders-interaction-2023-1<br />
** TC PTG etherpad<br />
*** https://etherpad.opendev.org/p/tc-2023-1-ptg<br />
** Schedule 'operator hours'<br />
*** https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030301.html<br />
* 2023.1 cycle Technical Election & Leaderless projects<br />
** https://governance.openstack.org/election/<br />
** https://etherpad.opendev.org/p/2023.1-leaderless<br />
** https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030437.html<br />
* Meeting time check<br />
** We have new TC members. Let's check if the current meeting time is ok for everyone.<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
<br />
* arne_wiebalck will miss the meeting on Sep 22 (on a whole day training)<br />
* Jay Bryant (jungleboyj) - Out of Office<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=181724Meetings/TechnicalCommittee2022-08-08T17:29:27Z<p>Jay Bryant: /* Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: 2022 Aug 11<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
* Location: IRC OFTC network in the #openstack-tc channel<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check<br />
** Bare 'recheck' state<br />
*** https://etherpad.opendev.org/p/recheck-weekly-summary<br />
* 2023.1 cycle PTG Planning<br />
** https://framadate.org/Migjz5j4SZgx8PTa<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
<br />
* arne_wiebalck (will miss 4 August and 11 August)<br />
* slaweq (will miss 11 August)<br />
* Jay Bryant (jungleboyj) -- Will miss 8/11 due to Out of Office<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=180224Meetings/TechnicalCommittee2021-12-16T13:34:58Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: Dec 16th, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
* Location: IRC #openstack-tc<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check<br />
** Fixing Zuul config error in OpenStack<br />
*** https://etherpad.opendev.org/p/zuul-config-error-openstack<br />
* Skyline as an official project<br />
** https://review.opendev.org/c/openstack/governance/+/814037<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026206.html<br />
* SIG i18n status check<br />
** Xena translation missing<br />
*** http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026244.html<br />
** Translation bug<br />
*** https://review.opendev.org/c/openstack/contributor-guide/+/821371<br />
* Adjutant need PTLs and maintainers<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025555.html<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
* Dan Smith on PTO until 2nd Jan<br />
* Radosław Piliszek <yoctozepto> - med appointment (2021-12-16)<br />
* Jay Bryant <jungleboyj> - illness (2021-12-16)<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=180054Meetings/TechnicalCommittee2021-11-23T17:43:04Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: Nov 25th, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
* Location: IRC #openstack-tc<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check<br />
** Fixing Zuul config error in OpenStack<br />
*** https://etherpad.opendev.org/p/zuul-config-error-openstack<br />
* Updates on community-wide goal<br />
** RBAC goal rework<br />
*** https://review.opendev.org/c/openstack/governance/+/815158<br />
*** https://review.opendev.org/c/openstack/governance/+/818817<br />
** Proposed community goal for FIPS compatibility and compliance<br />
*** https://review.opendev.org/c/openstack/governance/+/816587<br />
* Magnum project health<br />
* Adjutant needs PTLs and maintainers<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025555.html<br />
* Pain Point targeting<br />
** https://etherpad.opendev.org/p/pain-point-elimination<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
* Jay Bryant (jungleboyj) -- Thanksgiving Holiday in the US<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=179206Meetings/TechnicalCommittee2021-08-11T14:27:25Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: Aug 12th, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Required things/steps to push Project Skyline(dashboard) proposal (diablo_rojo)<br />
* Gate health check (dansmith/yoctozepto)<br />
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/<br />
* Wallaby testing runtime for centos8 vs centos8-stream<br />
** https://review.opendev.org/q/I508eceb00d7501ffcfac73d7bc2272badb241494<br />
** centos8 does not work in stable/wallaby, for details, comments in https://review.opendev.org/c/openstack/devstack/+/803039<br />
* Murano project health (gmann)<br />
** No code change in the Xena cycle; last patch merged 4 months ago: https://review.opendev.org/c/openstack/murano/+/783446<br />
** gmann tried to reach out to PTL via direct email but no response<br />
* PTG Planning<br />
** https://etherpad.opendev.org/p/tc-yoga-ptg<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
* Jay Bryant (jungleboyj) -- Out on PTO<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=179015Meetings/TechnicalCommittee2021-07-14T13:47:23Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: July 15th, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check (dansmith/yoctozepto)<br />
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/<br />
* Migration from 'Freenode' to 'OFTC' (gmann)<br />
** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc<br />
* ELK services plan and help status<br />
** Help status<br />
** Reducing the size of the existing system<br />
*** One concern is this service represents an outsized slice of our resources.<br />
*** We are investigating if it is possible to slim this down a bit given the current inputs/demands on the system.<br />
* Open Reviews<br />
** https://review.opendev.org/q/projects:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
* yoctozepto on PTO<br />
* spotz on PTO<br />
* jungleboyj on PTO<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/Oslo&diff=178834Meetings/Oslo2021-06-21T15:23:58Z<p>Jay Bryant: /* Agenda Template */</p>
<hr />
<div>Oslo will hold IRC meetings weekly at the time scheduled below.<br />
<br />
If there's an Oslo topic you think warrants a project meeting, please add it to the agenda section below and notify the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list. Please give everyone at least 24 hours notice.<br />
<br />
'''Revised on:''' {{REVISIONMONTH1}}/{{REVISIONDAY}}/{{REVISIONYEAR}} by {{REVISIONUSER}}<br />
<br />
== Agenda for Next Meeting ==<br />
<br />
See http://eavesdrop.openstack.org/#Oslo_Team_Meeting<br />
<br />
* Path forward for lower-constraints<br />
* Adoption of the cursive library<br />
<br />
=== Agenda Template ===<br />
#startmeeting oslo<br />
Courtesy ping for hberaud, bnemec, johnsom, redrobot, stephenfin, bcafarel, kgiusti, jungleboyj<br />
#link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting<br />
#topic Red flags for/from liaisons<br />
#topic Releases liaison<br />
#topic Security liaison<br />
#topic TaCT SIG liaison<br />
#topic Action items from last meeting<br />
<One-off topics><br />
#topic Weekly Wayward Wallaby Review<br />
#topic Open discussion<br />
#endmeeting<br />
<br />
== General Information ==<br />
=== Regular Meeting Schedule ===<br />
* What day: The first and the third Monday of each month<br />
* What time: <br />
- 3pm UTC [https://www.timeanddate.com/worldclock/converter.html?iso=20180212T150000&p1=1440] when Western Europe DST is active (https://www.timeanddate.com/time/zones/west).<br />
- 4pm UTC [https://www.timeanddate.com/worldclock/converter.html?iso=20180212T160000&p1=1440] when Western Europe DST is not active (https://www.timeanddate.com/time/zones/west).<br />
* Where: #openstack-oslo on OFTC<br />
* Who: All are welcome to participate<br />
<br />
=== Notes from Previous Meetings ===<br />
<br />
'''Current: ''' http://eavesdrop.openstack.org/meetings/oslo<br />
<br />
'''Historical'''<br />
<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-11-16.01.html Jul 11, 2014] - topics: oslo.db exception handling; sprint report<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-27-16.00.html Jun 27, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-20-16.01.html Jun 20, 2014] - topics: oslo.db initial release; oslo.messaging good progress in neutron; alpha releases of 5 libraries next week; oslo.db test bugs reported by devananda<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-13-16.00.html Jun 13, 2014] - topics: oslo.db alpha release; db migration bug; <br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.html Jun 06, 2014] - topics: juno specs, spec approval process<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-30-16.00.html May 30, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-23-16.01.html May 23, 2014] - topics: osprofile (postponed), run_test.sh, juno specs, oslo.test issue in tempest<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-09-16.02.html May 09, 2014] - topics: oslo-specs, oslo.messaging, summit prep, oslo.db, oslo.i18n<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-04-25-16.00.html April 24, 2014] - topics: oslotest, oslo.db, oslo.i18n, creating a specs repo<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-28-14.00.html Feb 28, 2014] - topics: icehouse feature freeze; syncing cinder & nova; uuidutils<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-14-14.01.html Feb 14, 2014] - topics: oslo.db, icehouse-3<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-01-31-14.01.html Jan 31, 2014] - topics: translation, deprecation policy, adopting taskflow, stevedore, and cliff<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-11-15-14.01.html Nov 15, 2013] - topics: translation, pecan/wsme common code, icehouse scheduling<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-25-14.00.html Oct 25, 2013] - topics: deprecated decorator and delayed translation implementation plan<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html Oct 11, 2013] - topics: delayed translations<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html Aug 16, 2013] - topic was new messaging API, message security and reject/requeue/ack<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-07-19-14.00.html July 19, 2013] - topic was new messaging API, message security, qpid/proton messaging driver and removing logging dependency on eventlet<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-06-07-14.00.html June 7, 2013] - topic was new messaging API and message security<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-05-03-14.01.html May 3, 2013] - topic was new messaging API and message security<br />
<br />
(In case the list of notes is not up to date, please consult http://eavesdrop.openstack.org/meetings/oslo/)</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=178735Meetings/TechnicalCommittee2021-06-09T16:20:57Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: June 10th, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Roll call<br />
* Follow up on past action items<br />
* Gate health check (dansmith/yoctozepto)<br />
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/<br />
* Migration from 'Freenode' to 'OFTC' (gmann)<br />
** https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc<br />
* Xena Tracker<br />
** https://etherpad.opendev.org/p/tc-xena-tracker<br />
* Recommendation on moving the meeting channel to project channel<br />
** https://review.opendev.org/c/openstack/project-team-guide/+/794839<br />
* Open Reviews<br />
** https://review.opendev.org/q/project:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
<Please write your name if you are not able to attend the meeting><br />
* Jay S. Bryant (jungleboyj) -- On PTO this week<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/TechnicalCommittee&diff=178022Meetings/TechnicalCommittee2021-03-30T01:06:11Z<p>Jay Bryant: /* Apologies for Absence */</p>
<hr />
<div>__NOTOC__<br />
<br />
The OpenStack Technical Committee is one of the [https://governance.openstack.org governing bodies] of the OpenStack project. You can find more information about it, such as the list of its [https://governance.openstack.org/tc/ current members] or its [https://governance.openstack.org/tc/reference/charter.html governance charter], on the OpenStack TC governance website at https://governance.openstack.org/tc/ .<br />
<br />
In order to include as many people as possible in the discussion, the Technical Committee relies on asynchronous communications as much as possible. We propose and vote on changes through the [http://git.openstack.org/cgit/openstack/governance openstack/governance repository]. Large-impact changes are discussed on the openstack-discuss mailing-list. '''We track current initiatives on the [[Technical_Committee_Tracker]].'''<br />
<br />
We hold office hours at [https://governance.openstack.org/tc/#office-hours various times] during the week on the [http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ #openstack-tc] IRC channel. We meet formally [http://eavesdrop.openstack.org/#Technical_Committee_Meeting each week] in #openstack-tc <br />
<br />
<br />
=== Next Meeting ===<br />
<br />
* Date: April 1st, 2021<br />
* Time: 15.00 UTC: http://eavesdrop.openstack.org/#Technical_Committee_Meeting<br />
* Chair: Ghanshyam Mann<br />
* Agenda to be published on the OpenStack-discuss mailing list before the meeting<br />
<br />
==== Agenda Suggestions ====<br />
<br />
* Follow up on past action items<br />
* PTG<br />
** https://etherpad.opendev.org/p/tc-xena-ptg<br />
* Gate performance and heavy job configs (dansmith)<br />
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/<br />
* PTL assignment for Xena cycle leaderless projects (gmann)<br />
** https://etherpad.opendev.org/p/xena-leaderless<br />
* Election for one Vacant TC seat (gmann)<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021334.html<br />
* Community newsletter: "OpenStack project news" snippets<br />
** https://etherpad.opendev.org/p/newsletter-openstack-news<br />
* Open Reviews<br />
** https://review.opendev.org/q/project:openstack/governance+is:open<br />
<br />
==== Apologies for Absence ====<br />
* Kendall Nelson (diablo_rojo)<br />
* Jay Bryant (jungleboyj)<br />
<br />
=== Past meetings logs ===<br />
<br />
Logs of past TC meetings can be accessed at: http://eavesdrop.openstack.org/meetings/tc<br />
<br />
[[Category: meetings]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/Oslo&diff=176807Meetings/Oslo2020-11-10T14:07:29Z<p>Jay Bryant: /* Courtesy ping for Wallaby */</p>
<hr />
<div>Oslo will hold IRC meetings weekly at the time scheduled below.<br />
<br />
If there's an Oslo topic you think warrants a project meeting, please add it to the agenda section below and notify the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list. Please give everyone at least 24 hours notice.<br />
<br />
'''Revised on:''' {{REVISIONMONTH1}}/{{REVISIONDAY}}/{{REVISIONYEAR}} by {{REVISIONUSER}}<br />
<br />
== Agenda for Next Meeting ==<br />
<br />
See http://eavesdrop.openstack.org/#Oslo_Team_Meeting<br />
<br />
* Commit Audit (https://static.opendev.org/project/opendev.org/gerrit-diffs/openstack/)<br />
<br />
=== Courtesy ping for Wallaby ===<br />
hberaud, stephenfin, moguimar, jungleboyj, <put your name here><br />
<br />
=== Agenda Template ===<br />
#startmeeting oslo<br />
Courtesy ping for bnemec, smcginnis, moguimar, johnsom, stephenfin, bcafarel, kgiusti, jungleboyj, sboyron<br />
#link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting<br />
#topic Red flags for/from liaisons<br />
#topic Releases liaison<br />
#topic Security liaison<br />
#topic TaCT SIG liaison<br />
#topic Action items from last meeting<br />
<One-off topics><br />
#topic Weekly Wayward Wallaby Review<br />
#topic Open discussion<br />
#endmeeting<br />
<br />
== General Information ==<br />
=== Regular Meeting Schedule ===<br />
* What day: Monday<br />
* What time: [https://www.timeanddate.com/worldclock/converter.html?iso=20180212T150000&p1=1440&p2=195&p3=43&p4=4675&p5=224 1500 UTC]<br />
* Where: #openstack-oslo on freenode<br />
* Who: All are welcome to participate<br />
<br />
=== Notes from Previous Meetings ===<br />
<br />
'''Current: ''' http://eavesdrop.openstack.org/meetings/oslo<br />
<br />
'''Historical'''<br />
<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-11-16.01.html Jul 11, 2014] - topics: oslo.db exception handling; sprint report<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-27-16.00.html Jun 27, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-20-16.01.html Jun 20, 2014] - topics: oslo.db initial release; oslo.messaging good progress in neutron; alpha releases of 5 libraries next week; oslo.db test bugs reported by devananda<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-13-16.00.html Jun 13, 2014] - topics: oslo.db alpha release; db migration bug; <br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.html Jun 06, 2014] - topics: juno specs, spec approval process<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-30-16.00.html May 30, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-23-16.01.html May 23, 2014] - topics: osprofile (postponed), run_test.sh, juno specs, oslo.test issue in tempest<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-09-16.02.html May 09, 2014] - topics: oslo-specs, oslo.messaging, summit prep, oslo.db, oslo.i18n<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-04-25-16.00.html April 24, 2014] - topics: oslotest, oslo.db, oslo.i18n, creating a specs repo<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-28-14.00.html Feb 28, 2014] - topics: icehouse feature freeze; syncing cinder & nova; uuidutils<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-14-14.01.html Feb 14, 2014] - topics: oslo.db, icehouse-3<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-01-31-14.01.html Jan 31, 2014] - topics: translation, deprecation policy, adopting taskflow, stevedore, and cliff<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-11-15-14.01.html Nov 15, 2013] - topics: translation, pecan/wsme common code, icehouse scheduling<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-25-14.00.html Oct 25, 2013] - topics: deprecated decorator and delayed translation implementation plan<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html Oct 11, 2013] - topics: delayed translations<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html Aug 16, 2013] - topic was new messaging API, message security and reject/requeue/ack<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-07-19-14.00.html July 19, 2013] - topic was new messaging API, message security, qpid/proton messaging driver and removing logging dependency on eventlet<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-06-07-14.00.html June 7, 2013] - topic was new messaging API and message security<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-05-03-14.01.html May 3, 2013] - topic was new messaging API and message security<br />
<br />
(In case the list of notes is not up to date, please consult http://eavesdrop.openstack.org/meetings/oslo/)</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=OpenStack_Upstream_Institute_Occasions&diff=175906OpenStack Upstream Institute Occasions2020-08-17T19:54:50Z<p>Jay Bryant: /* Virtual Crew */</p>
<hr />
<div>==Virtual Training, 2020==<br />
<br />
During the Open Infrastructure Summit virtual event, October 19-23, 2020<br />
<br />
=== Virtual Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|CST<br />
|OSA, Docs, D&I,Mentoring,Board<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CST<br />
|Cinder, Docs, Mentoring, TC<br />
|<br />
|-<br />
|}<br />
<br />
==Shanghai Training, 2019==<br />
<br />
During the Open Infrastructure Summit Shanghai event, November 2-3, 2019<br />
<br />
=== Shanghai Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Gergely Csatari <br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|EEST (UTC+3)<br />
|Docs, general processes, serving coffee<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central US Time (UTC-6)<br />
|Cinder, Storage, Docs, Oslo<br />
|Will be working with Paul Xu on-site in SH to help coordinate the event.<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
|Australian Eastern Time (UTC+10)<br />
|<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|Central US Time (UTC-6)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|I might be in the BoD meeting in parallel and depends on location.<br />
|-<br />
|Rico Lin<br />
|ricolin<br />
|rico.lin.guanyu@gmail.com<br />
| UTC+8<br />
|Heat, Auto-scaling SIG<br />
|Saturday only<br />
|}<br />
<br />
==Tokyo Training, 2019==<br />
<br />
During the second half of the OpenStack Days Tokyo event, July 23, 2019<br />
<br />
===Tokyo Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-5 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|UTC+9 (JST)<br />
|QA, Infra<br />
|<br />
|-<br />
|Kota Tsuyuzaki<br />
|kota<br />
|kota.tsuyuzaki.pc@hco.ntt.co.jp<br />
|UTC+9 (JST)<br />
|Swift<br />
|<br />
|-<br />
|Rikimaru Honjo<br />
|<br />
|honjo.rikimaru@po.ntt-tx.co.jp<br />
|UTC+9 (JST)<br />
|<br />
|<br />
|}<br />
<br />
==Denver Training, 2019==<br />
<br />
Before the Open Infrastructure Summit Denver, April 28-29, 2019.<br />
<br />
===Denver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Matt Oliver<br />
|mattoliverau<br />
|matt@oliver.net.au<br />
|UTC+10 (AEST)<br />
|Swift, First Contact SIG<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
|UTC+1100/+1000 (AEDT/AEST)<br />
|Requirements, Releases, Extended Maintenance, Nova(ish), Infra<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|UTC+1/2 (CET/CEST)<br />
|Keystone, Infra, Rpm-Packaging, First Contact SIG<br />
|Sunday only<br />
|-<br />
|Jay Bryant<br />
|jsbryant<br />
|jsbryant@electronicjungle.net<br />
|UTC-5 (CDT)<br />
|Cinder, Manila, Docs, First Contact SIG<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-5 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|Will be in Board Meeting in parallel<br />
|}<br />
<br />
==Berlin Training, 2018==<br />
<br />
Before the OpenStack Summit Berlin, November 11-12, 2018.<br />
<br />
===Berlin Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|Will be in Board Meeting on Monday<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have Board Meeting one day<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|CET<br />
|Keystone<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training VM<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
| UTC +10<br />
| Stable, Release Management, Nova<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
| UTC-3<br />
| Manila<br />
|<br />
|-<br />
| Ell Marquez<br />
|<br />
| ellstripes@gmail.com<br />
|<br />
|<br />
|<br />
|-<br />
|Armstrong Foundjem<br />
|armstrong<br />
|foundjem@ieee.org<br />
|UTC-5<br />
|Mentoring<br />
|<br />
|-<br />
|Daniel Abad<br />
|vabada<br />
|d.abad@cern.ch<br />
| UTC +1<br />
| Ironic<br />
|<br />
|}<br />
<br />
==Vancouver Training, 2018==<br />
<br />
Before the OpenStack Summit Vancouver, May 19-20, 2018.<br />
<br />
===Vancouver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|JST<br />
|QA (Tempest, stestr, openstack-health, stackviz, ...)<br />
|still need to figure out my travel<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have board meeting on Sunday afternoon<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|VM image, devstack, CLI, git, vim, etc.<br />
|still need to figure out my travel<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Docs. I18n<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|EDT (UTC-4)<br />
|<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Rich Wellum<br />
|rwellum<br />
|rich.wellum@nokia.com<br />
|EST<br />
|Kolla, Openstack-Helm<br />
|<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
|UTC-3<br />
|Manila<br />
|<br />
|}<br />
<br />
==Sydney Training==<br />
<br />
Before the OpenStack Summit Sydney, November 4-5, 2017.<br />
<br />
===Sydney Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|Eastern Time, US<br />
|QA, Infra, API, Nova, Glance<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OpenStack-Ansible, Docs<br />
|<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central Time, US<br />
|Cinder, Docs<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|ART<br />
|Manila<br />
|<br />
|}<br />
<br />
==Copenhagen Training==<br />
<br />
Before the OpenStack Days Nordic event, October 18, 2017.<br />
<br />
===Copenhagen Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|}<br />
<br />
==London Office Hours==<br />
<br />
During the OpenStack Days UK event, September 26, 2017.<br />
<br />
===London Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|}<br />
<br />
==Beijing Training==<br />
<br />
Before the OPNFV Summit, June 14-15, 2017.<br />
<br />
===Beijing Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Dave Neary<br />
|dneary<br />
|dneary@redhat.com<br />
|Eastern Time, US<br />
|OPNFV and RDO - not directly in OpenStack<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|JiangShan 江姗<br />
|<br />
|jiangshan@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Leo Ma<br />
|<br />
|majiajun@unitedstack.com<br />
|<br />
|<br />
|<br />
|-<br />
|Rossella Sblendido<br />
|rossella_s<br />
|rsblendido@suse.com<br />
|CET<br />
|Neutron<br />
|<br />
|-<br />
|ShangXiao 尚啸<br />
|<br />
|shangxiao@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Boston training ==<br />
<br />
Before the OpenStack Summit, May 6-7, 2017.<br />
<br />
===Boston Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|Ansible<br />
|-<br />
|Flavio Percoco<br />
|flaper87<br />
|flavio@redhat.com<br />
|<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST(UTC+9)<br />
|QA<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST(UTC+9)<br />
|Training Guides + Mentor for I18n<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jungleboyj@gmail.com<br />
|Central Time<br />
|Manila<br />
|-<br />
|Jay Pipes<br />
|jaypipes<br />
|jaypipes@gmail.com<br />
|Eastern US<br />
|-<br />
|KATO Tomoyuki<br />
|katomo<br />
|kato.tomoyuki@jp.fujitsu.com<br />
|<br />
|Docs, Training Guides, I18n<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central US<br />
|Cinder, os-brick, Storyboard<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmARC<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training Guides, Swift, Training VM<br />
|-<br />
|Marton Kiss<br />
|mrmartin<br />
|marton.kiss@gmail.com<br />
|CET<br />
|<br />
|-<br />
|Matt Dorn<br />
|madorn<br />
|madorn@gmail.com<br />
|Central Time, US<br />
|Docs, Training Guides<br />
|-<br />
|Miguel A Lavalle<br />
|mlavalle<br />
|malavall@us.ibm.com<br />
|US Central Time<br />
|Neutron, Tempest<br />
|-<br />
|Samantha Blanco<br />
|blancos<br />
|samantha.blanco@att.com<br />
|Eastern Time, US<br />
|Patrole, Murano<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|-<br />
|Trevor McCasland<br />
|trevormc<br />
|tm2086@att.com<br />
|Central Time, US<br />
|Neutron, Trove<br />
|-<br />
|Victoria Martínez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|<br />
|Manila<br />
|}</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/Oslo&diff=175061Meetings/Oslo2020-06-09T12:27:33Z<p>Jay Bryant: /* Agenda Template */</p>
<hr />
<div>Oslo will hold IRC meetings weekly at the time scheduled below.<br />
<br />
If there's an Oslo topic you think warrants a project meeting, please add it to the agenda section below and notify the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list. Please give everyone at least 24 hours notice.<br />
<br />
'''Revised on:''' {{REVISIONMONTH1}}/{{REVISIONDAY}}/{{REVISIONYEAR}} by {{REVISIONUSER}}<br />
<br />
== Agenda for Next Meeting ==<br />
<br />
See http://eavesdrop.openstack.org/#Oslo_Team_Meeting<br />
<br />
* Ping list update<br />
* Oslo core contact details<br />
<br />
=== Agenda Template ===<br />
'''Ping list for Victoria cycle:''' bnemec, smcginnis, moguimar, johnsom, stephenfin, bcafarel, kgiusti, jungleboyj<br />
#startmeeting oslo<br />
Courtesy ping for bnemec, jungleboyj, moguimar, hberaud, stephenfin, kgiusti, johnsom, e0ne, redrobot, bcafarel, smcginnis<br />
#link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting<br />
#topic Red flags for/from liaisons<br />
#topic Releases<br />
#topic Action items from last meeting<br />
<One-off topics><br />
#topic Weekly Wayward Review<br />
#topic Open discussion<br />
#endmeeting<br />
<br />
== General Information ==<br />
=== Regular Meeting Schedule ===<br />
* What day: Monday<br />
* What time: [https://www.timeanddate.com/worldclock/converter.html?iso=20180212T150000&p1=1440&p2=195&p3=43&p4=4675&p5=224 1500 UTC]<br />
* Where: #openstack-oslo on freenode<br />
* Who: All are welcome to participate<br />
<br />
=== Notes from Previous Meetings ===<br />
<br />
'''Current: ''' http://eavesdrop.openstack.org/meetings/oslo<br />
<br />
'''Historical'''<br />
<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-11-16.01.html Jul 11, 2014] - topics: oslo.db exception handling; sprint report<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-27-16.00.html Jun 27, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-20-16.01.html Jun 20, 2014] - topics: oslo.db initial release; oslo.messaging good progress in neutron; alpha releases of 5 libraries next week; oslo.db test bugs reported by devananda<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-13-16.00.html Jun 13, 2014] - topics: oslo.db alpha release; db migration bug; <br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.html Jun 06, 2014] - topics: juno specs, spec approval process<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-30-16.00.html May 30, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-23-16.01.html May 23, 2014] - topics: osprofile (postponed), run_test.sh, juno specs, oslo.test issue in tempest<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-09-16.02.html May 09, 2014] - topics: oslo-specs, oslo.messaging, summit prep, oslo.db, oslo.i18n<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-04-25-16.00.html April 24, 2014] - topics: oslotest, oslo.db, oslo.i18n, creating a specs repo<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-28-14.00.html Feb 28, 2014] - topics: icehouse feature freeze; syncing cinder & nova; uuidutils<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-14-14.01.html Feb 14, 2014] - topics: oslo.db, icehouse-3<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-01-31-14.01.html Jan 31, 2014] - topics: translation, deprecation policy, adopting taskflow, stevedore, and cliff<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-11-15-14.01.html Nov 15, 2013] - topics: translation, pecan/wsme common code, icehouse scheduling<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-25-14.00.html Oct 25, 2013] - topics: deprecated decorator and delayed translation implementation plan<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html Oct 11, 2013] - topics: delayed translations<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html Aug 16, 2013] - topic was new messaging API, message security and reject/reque/ack<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-07-19-14.00.html July 19, 2013] - topic was new messaging API, message security, qpid/proton messaging driver and removing logging dependency on eventlet<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-06-07-14.00.html June 7, 2013] - topic was new messaging API and message security<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-05-03-14.01.html May 3, 2013] - topic was new messaging API and message security<br />
<br />
(In case the list of notes is not up to date, please consult http://eavesdrop.openstack.org/meetings/oslo/)</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173322CinderUssuriPTGSummary2019-12-04T18:25:21Z<p>Jay Bryant: /* Actions */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility <br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
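To make the backport guidance above concrete, here is a tiny hypothetical illustration (the function names are made up): the py3-only form is fine for new master-only features, while a bugfix likely to be backported should stick to constructs that also run on Python 2.7.<br />

```python
# Hypothetical illustration of the guidance above. The first form uses an
# f-string (Python 3.6+ only), so a backport of this code to a py2 stable
# branch would fail; the second form runs on both Python 2.7 and Python 3.

def describe_volume_py3(name, size_gb):
    # fine for new Ussuri features that will never be backported
    return f"volume {name} ({size_gb} GiB)"

def describe_volume_portable(name, size_gb):
    # str.format() keeps a bugfix backportable to py2 stable branches
    return "volume {} ({} GiB)".format(name, size_gb)
```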
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
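A minimal plain-Python sketch of the "most permissive wins" behavior described above; this deliberately does not use the real oslo.policy API, it just models the idea that a request passes during the deprecation window if either the new or the deprecated default allows it.<br />

```python
# Toy model of oslo.policy's deprecation behavior: while a policy still
# carries a deprecated rule, enforcement ORs the old and new checks
# together, so operators relying on the old default keep working during
# the migration. Names here are illustrative, not the real oslo.policy API.

def check(rule, creds):
    # extremely simplified "check string": require a role membership
    required_role = rule.split(":", 1)[1]
    return required_role in creds.get("roles", [])

def enforce(new_rule, deprecated_rule, creds):
    # most permissive wins during the deprecation period
    return check(new_rule, creds) or check(deprecated_rule, creds)

# old default allowed any admin; new default requires the reader role
allowed = enforce("role:reader", "role:admin", {"roles": ["admin"]})
```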
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Though actually that's not true either; there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now different types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on a compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request: the customer wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
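The caveat in the last action item can be sketched in plain Python (not oslo.config): a driver that captures a config value at initialization never observes a later change, so marking the option mutable is not enough by itself.<br />

```python
# Illustration of the caveat noted above, in plain Python rather than
# oslo.config: the driver copies the credential once at __init__ time,
# so a later change to the option is invisible unless the driver is
# reloaded or re-reads the config on each use.

class FakeConf(object):
    def __init__(self, password):
        self.password = password

class CachingDriver(object):
    def __init__(self, conf):
        # value captured once; a later conf change is invisible here
        self._password = conf.password

    def connect(self):
        return "auth with %s" % self._password

conf = FakeConf("old-secret")
driver = CachingDriver(conf)
conf.password = "new-secret"   # simulated mutable-config update
stale = driver.connect()       # still authenticates with "old-secret"
```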
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and hosting it in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
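One way the selection could work, as a hypothetical fallback chain (these names are illustrative, not the eventual API or spec): per-project default first, then a per-service default when a service token is present, then the existing single configured default.<br />

```python
# Hypothetical resolution order sketched from the discussion above:
# a per-project default wins, then a per-service default (triggered by a
# service token), then the existing single configured default type.

def resolve_default_volume_type(project_defaults, service_defaults,
                                global_default, project_id,
                                service_name=None):
    if project_id in project_defaults:
        return project_defaults[project_id]
    if service_name and service_name in service_defaults:
        return service_defaults[service_name]
    return global_default

# a project without its own default, called with a service token
vt = resolve_default_volume_type(
    {"proj-a": "fast-ssd"}, {"glance": "image-store"},
    "default", "proj-b", service_name="glance")
```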
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it; it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up, and a GitHub repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet.<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend and what they're used for. It would be nice to have drivers report their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
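As a hedged illustration (not the actual spec or driver interface), a driver-reported capabilities payload might look something like this, letting a CLI command list valid extra specs without consulting the vendor manual:<br />

```python
# Illustrative shape of a capabilities payload a driver might report so
# that an operator could discover valid extra specs from the CLI.
# Vendor name, keys, and structure here are hypothetical, not the spec.

capabilities = {
    "vendor_name": "ExampleVendor",
    "properties": {
        "thin_provisioning": {
            "type": "boolean",
            "description": "Enable thin provisioning on new volumes.",
        },
        "compression": {
            "type": "boolean",
            "description": "Enable inline compression.",
        },
    },
}

def list_supported_extra_specs(caps):
    # what a get-capabilities-style CLI command could print per backend
    return sorted(caps["properties"])
```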
<br />
===Cinder Business===<br />
<br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=dwk2oKXxlfw Video Recording]<br />
<br><br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others are seeing it as a check that is used along the way while an upgrade is in process to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
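A minimal sketch of such a pre-check, using only the standard library and assuming the per-backend <code>enable_unsupported_driver</code> option (this is not Cinder's actual cinder-status code):<br />

```python
# Illustrative pre-check: warn when a backend section in cinder.conf uses an
# unsupported driver but has not set the enable_unsupported_driver opt-in flag.
import configparser

def check_unsupported_driver_flag(conf_text, unsupported_backends):
    """Return warnings for unsupported backends missing the opt-in flag."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    warnings = []
    for backend in unsupported_backends:
        if not parser.has_section(backend):
            continue
        enabled = parser.getboolean(backend, 'enable_unsupported_driver',
                                    fallback=False)
        if not enabled:
            warnings.append(
                '%s: driver is unsupported; set enable_unsupported_driver '
                '= True to keep using it, and plan a migration.' % backend)
    return warnings

conf = """
[oldvendor]
volume_driver = cinder.volume.drivers.oldvendor.Driver
"""
print(check_unsupported_driver_flag(conf, ['oldvendor']))
```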
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think that is: a user has a volume that was created from a glance image, and wants to upload it as an image; want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
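A hedged sketch of the proposed behavior; the <code>admin_only</code> flag and <code>is_admin</code> parameter are illustrative names, not Cinder's actual fields:<br />

```python
# Illustrative only: a message created with an admin-only flag is returned
# only when the requesting context is an admin; the response schema itself
# is unchanged, which is why no new microversion is needed.
from dataclasses import dataclass

@dataclass
class UserMessage:
    text: str
    admin_only: bool = False

def visible_messages(messages, is_admin):
    """Filter messages the caller may see based on the admin context."""
    return [m for m in messages if is_admin or not m.admin_only]

msgs = [UserMessage('quota exceeded'),
        UserMessage('backend rejected request', admin_only=True)]
print([m.text for m in visible_messages(msgs, is_admin=False)])  # user view
print([m.text for m in visible_messages(msgs, is_admin=True)])   # admin view
```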
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, so you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
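For illustration, the kind of consistency check those operator scripts perform can be sketched like this (hypothetical data shapes, not Cinder's schema, and not the fix in the patch above):<br />

```python
# Recompute per-project usage from the volumes table and flag quota_usages
# rows that drifted -- e.g. negative in_use counts caused by races.
from collections import Counter

def find_quota_drift(volumes, quota_usages):
    """volumes: list of (project_id, size_gb); quota_usages: {project: in_use_gb}"""
    actual = Counter()
    for project_id, size in volumes:
        actual[project_id] += size
    drift = {}
    for project, recorded in quota_usages.items():
        if recorded < 0 or recorded != actual.get(project, 0):
            drift[project] = (recorded, actual.get(project, 0))
    return drift

volumes = [('p1', 10), ('p1', 5), ('p2', 20)]
usages = {'p1': 15, 'p2': -3}          # p2 went negative after a race
print(find_quota_drift(volumes, usages))
```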
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a backend supports it, but unlike replication, it isn't a feature that a driver itself implements or not. Also, there are options that have to be set in nova in order for it to be useful - nova:libvirt:use_volume_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
<br />
==Wednesday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=VtmmWtkoxx8 Video Recording]<br />
<br><br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they look for the volume going active but don't notice that it's in an error state and should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
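An illustrative fail-fast waiter along the lines Eric suggested (tempest's real waiters differ; this just shows the idea):<br />

```python
# Instead of only polling for 'available' until a long timeout, raise as soon
# as the volume hits an error state, conserving gate time.
def wait_for_volume(get_status, timeout_polls=60):
    for _ in range(timeout_polls):
        status = get_status()
        if status == 'available':
            return status
        if status.startswith('error'):
            raise RuntimeError('volume went to %s; failing fast' % status)
    raise TimeoutError('volume never became available')

statuses = iter(['creating', 'creating', 'error'])
try:
    wait_for_volume(lambda: next(statuses))
except RuntimeError as e:
    print(e)
```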
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
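A sketch of the driver-developer side, based on the SUPPORTS_ACTIVE_ACTIVE class-attribute pattern (simplified; consult the actual base driver class before relying on the details):<br />

```python
# Drivers must opt in to active-active; the service uses the flag to decide
# whether a backend may run clustered.
class BaseVolumeDriver:
    # Defaulting to False means cinder-volume refuses to run this backend
    # in a cluster.
    SUPPORTS_ACTIVE_ACTIVE = False

class RBDLikeDriver(BaseVolumeDriver):
    # A driver should claim active-active support only after it has been
    # verified to avoid races on shared state (connections, provisioning
    # decisions), per the advice above.
    SUPPORTS_ACTIVE_ACTIVE = True

def can_run_clustered(driver_cls):
    return driver_cls.SUPPORTS_ACTIVE_ACTIVE

print(can_run_clustered(RBDLikeDriver))  # True
```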
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173321CinderUssuriPTGSummary2019-12-04T18:25:01Z<p>Jay Bryant: /* Meeting with the Glance team */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and be written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
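The "most permissive wins" semantics can be illustrated in plain Python (a simplification of what oslo.policy does with a deprecated rule; role names below are just examples):<br />

```python
# While a policy is being migrated, a request passes if EITHER the new
# default OR the deprecated default allows it, so operators who have not
# yet adjusted their roles are not broken mid-upgrade.
def check(roles, allowed_roles):
    return bool(set(roles) & set(allowed_roles))

def enforce_with_deprecation(roles, new_default, deprecated_default):
    return check(roles, new_default) or check(roles, deprecated_default)

# New default requires the 'reader' role; the deprecated rule allowed 'admin'.
print(enforce_with_deprecation(['admin'], ['reader'], ['admin']))   # True
print(enforce_with_deprecation(['member'], ['reader'], ['admin']))  # False
```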
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Actually, that's not quite true either; there is a lot of background work that needs to be done before we can remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be possible.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to V4 once enough microversions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
Currently there are various types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 us, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage side, this requires adding support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that mutable config currently works for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about going NFS-only and keeping it in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
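A sketch of one plausible lookup order (the exact precedence is for the spec to decide; type names are invented for the example):<br />

```python
# A per-user default wins over a per-project default, which wins over the
# cloud-wide default_volume_type from cinder.conf.
def effective_default_type(user_defaults, project_defaults,
                           conf_default, user_id, project_id):
    if user_id in user_defaults:
        return user_defaults[user_id]
    if project_id in project_defaults:
        return project_defaults[project_id]
    return conf_default

print(effective_default_type({'u1': 'fast-ssd'}, {'p1': 'bulk-hdd'},
                             'lvm-default', 'u1', 'p1'))  # fast-ssd
print(effective_default_type({}, {'p1': 'bulk-hdd'},
                             'lvm-default', 'u2', 'p1'))  # bulk-hdd
```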
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force flag to os-brick when detaching a volume and when rebooting an instance booted from a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
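If the REST API layer does handle this, it would be via standard HTTP language negotiation. As a rough illustration of how best-match selection against an "Accept-Language" header works, here is a stdlib sketch; the function name and fallback behavior are invented for the example and are not Cinder's actual code:

```python
# Toy sketch of Accept-Language best-match selection. Not Cinder's actual
# implementation; names and the "en" fallback are assumptions.

def best_match(accept_language, available):
    """Pick the best available locale for an Accept-Language header."""
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip().split(";q=")
        lang = piece[0].strip().lower()
        # A missing q-value means q=1.0 per HTTP semantics.
        q = float(piece[1]) if len(piece) > 1 else 1.0
        prefs.append((q, lang))
    # Try languages in order of preference (highest q first).
    for _, lang in sorted(prefs, reverse=True):
        if lang in available:
            return lang
    return "en"  # server default when nothing matches

print(best_match("zh-cn;q=0.9, en;q=0.8", {"en", "zh-cn"}))  # zh-cn
```

If the existing code does not already do something equivalent, this kind of selection would then feed into the translation machinery that renders the user message.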
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it; it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priority changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a GitHub repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet.<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI systems appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can set for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator discover this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
<br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=dwk2oKXxlfw Video Recording]<br />
<br><br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check used along the way, while an upgrade is in process, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to the cinder-status command to run some pre-checks.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts to occasionally clean up the database, but it would be better to fix this in Cinder.<br />
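The classic lost-update shape of this bug is a read-modify-write on the usage rows. As a hedged sketch (the table and column names below are invented for illustration, not Cinder's real schema), the fix direction is to let the database apply the increment atomically rather than computing it in Python:

```python
import sqlite3

# Illustrative only: table/column names are invented, not Cinder's schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE quota_usages (project TEXT PRIMARY KEY, in_use INT)")
db.execute("INSERT INTO quota_usages VALUES ('demo', 0)")

# Racy pattern: read the value, compute in Python, write it back. Two
# concurrent requests can both read 0 and both write 1 (a lost update).
def racy_reserve(conn):
    (n,) = conn.execute(
        "SELECT in_use FROM quota_usages WHERE project='demo'").fetchone()
    conn.execute(
        "UPDATE quota_usages SET in_use=? WHERE project='demo'", (n + 1,))

# Safer pattern: the database applies the increment atomically, so
# concurrent reservations cannot overwrite each other.
def atomic_reserve(conn):
    conn.execute(
        "UPDATE quota_usages SET in_use = in_use + 1 WHERE project='demo'")

for _ in range(100):
    atomic_reserve(db)
db.commit()
print(db.execute("SELECT in_use FROM quota_usages").fetchone()[0])  # 100
```

This only sketches one of the failure modes; the duplicate-rows problem mentioned above needs separate constraints or locking.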
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a backend supports it, but unlike replication, it's not a feature that a driver either implements or doesn't. Also, there are options that have to be set in Nova in order for it to be useful (the libvirt volume_use_multipath option). So there doesn't seem to be a point in adding this to the support matrix.<br />
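For reference, the Nova-side setting mentioned above is configured in nova.conf; the option name here is per Nova's libvirt driver, so verify it against the release you are running:

```ini
[libvirt]
# Enable multipath for volume attachments handled by the libvirt driver.
volume_use_multipath = true
```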
<br />
==Wednesday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=VtmmWtkoxx8 Video Recording]<br />
<br><br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
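The second phase (allowing installation only on py36 or later) is typically enforced through packaging metadata. As a sketch, OpenStack projects using pbr express this in setup.cfg roughly like so (plain setuptools projects would instead use python_requires under [options]):

```ini
[metadata]
# Refuse installation on interpreters older than Python 3.6.
python-requires = >=3.6
```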
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state, in which case they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
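The flag mentioned above is a class attribute on the driver. A minimal sketch, with the Cinder base class stubbed out here (in the real tree the attribute is SUPPORTS_ACTIVE_ACTIVE on the volume driver base class; treat the surrounding scaffolding as an assumption):

```python
# Stub standing in for Cinder's volume driver base class -- not real code.
class BaseVD:
    # Defaults to False: a driver must opt in explicitly, and should only
    # do so after verifying it is actually safe to run active-active.
    SUPPORTS_ACTIVE_ACTIVE = False

class MyDriver(BaseVD):
    # Opting in: the service will then allow running this backend in an
    # active-active cluster.
    SUPPORTS_ACTIVE_ACTIVE = True

print(MyDriver.SUPPORTS_ACTIVE_ACTIVE)  # True
```

The point of the docs work is that setting the flag is necessary but not sufficient; the driver also has to behave correctly under concurrent services.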
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>
Jay Bryant
https://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173320 CinderUssuriPTGSummary 2019-12-04T18:24:38Z
<p>Jay Bryant: /* Cinder Business */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and be written for py2 compatibility <br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
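A toy illustration of the "most permissive wins" behavior during the deprecation window (this is deliberately not oslo.policy's actual API; every name below is invented for the sketch):

```python
# Conceptual sketch only -- invented names, not oslo.policy's real API.

def check(rule, context):
    """Evaluate a tiny made-up policy check string against a context."""
    if rule == "role:admin":
        return "admin" in context["roles"]
    if rule == "role:reader":
        return "reader" in context["roles"] or "admin" in context["roles"]
    return False

def enforce(new_rule, deprecated_rule, context):
    # During the deprecation window, passing EITHER the new default or the
    # old (deprecated) default is enough: the most permissive wins, so
    # operators who haven't adapted yet are not broken.
    return check(new_rule, context) or check(deprecated_rule, context)

# New default opens the API to readers; old default required admin.
ctx = {"roles": ["reader"]}
print(enforce("role:reader", "role:admin", ctx))  # True
```

Once the deprecation window ends, only the new default would be evaluated.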
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita to look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove the V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Though actually that's not true either; there is a lot of background work that needs to be done before we can remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to V4 once enough microversions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now various types of fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2–3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI/RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage-team side, support needs to be added in os-brick.<br />
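To make the caching idea concrete, here is a toy read cache for a slow "remote volume"; this is purely conceptual (the real mechanism would sit below the block layer via os-brick, not in Python), and every name is invented:

```python
from collections import OrderedDict

# Toy LRU read cache standing in for "fast local SSD in front of a slow
# remote volume". Conceptual only; all names are invented for the sketch.
class BlockCache:
    def __init__(self, backing, capacity=2):
        self.backing = backing        # dict: block number -> data ("remote")
        self.cache = OrderedDict()    # local cache, LRU order
        self.capacity = capacity
        self.remote_reads = 0         # how often we hit the slow path

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)     # refresh LRU position
            return self.cache[block]
        self.remote_reads += 1                # cache miss: go to the backend
        data = self.backing[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recently-used
        return data

vol = BlockCache({0: b"a", 1: b"b"})
vol.read(0); vol.read(0); vol.read(1)
print(vol.remote_reads)  # 2
```

The consensus caveats above still apply: for backends with no host-side mount point (e.g. Ceph/RBD) there is nothing to put such a cache in front of.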
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and keeping it in the core - scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
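As a rough illustration of the "easy part" mentioned above, the selection could be a most-specific-wins lookup: a per-project default, if one was created, beats the deployment-wide default from cinder.conf. All names here are illustrative sketches, not from the (yet unwritten) spec:<br />

```python
# Hypothetical sketch: picking which default volume type applies when a
# create request names none. A project-scoped default wins over the
# deployment-wide cinder.conf default. Names are illustrative only.
def effective_default_type(project_id, project_defaults, conf_default):
    """Return the volume type to use for this project."""
    return project_defaults.get(project_id, conf_default)

project_defaults = {"tenant-a": "fast-ssd"}
print(effective_default_type("tenant-a", project_defaults, "lvm"))  # fast-ssd
print(effective_default_type("tenant-b", project_defaults, "lvm"))  # lvm
```

The harder work noted above is everything around this lookup: the CRUD API, microversion, client, and Horizon support.<br />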
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is also not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
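If the REST layer does handle it, a client would just send the standard HTTP header; a minimal sketch (the endpoint, project ID, and token are placeholders, and whether Cinder actually honors the header is exactly the open question above):<br />

```python
# Sketch of requesting localized user messages via the standard
# "Accept-Language" header. URL and token are placeholders; this only
# shows how a client would pass the header, not confirmed Cinder behavior.
from urllib.request import Request

req = Request(
    "https://cinder.example:8776/v3/PROJECT_ID/messages",
    headers={"X-Auth-Token": "TOKEN", "Accept-Language": "zh-CN"},
)
# urllib stores header names in capitalized form.
print(req.get_header("Accept-language"))  # zh-CN
```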
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
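For context, the capabilities payload a backend reports already has roughly this shape (an abbreviated, illustrative example, not output from a real driver); surfacing something like it is what would let operators discover extra-spec keys from the CLI:<br />

```python
# Abbreviated, illustrative capabilities payload in the style of the
# os-capabilities response; vendor and property names are made up.
capabilities = {
    "vendor_name": "ExampleVendor",
    "properties": {
        "thin_provisioning": {
            "title": "Thin Provisioning",
            "description": "Create thin-provisioned volumes.",
            "type": "boolean",
        },
        "compression": {
            "title": "Compression",
            "description": "Enable backend compression.",
            "type": "boolean",
        },
    },
}

# Render an operator-friendly listing of usable extra-spec keys.
lines = [f"{key} ({spec['type']}): {spec['description']}"
         for key, spec in sorted(capabilities["properties"].items())]
print("\n".join(lines))
```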
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
<br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=dwk2oKXxlfw Video Recording]<br />
<br><br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check used while an upgrade is in progress, to ensure that things are ready before operators start their services back up. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
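For reference, the flag in question is set in the backend's section of cinder.conf; a sketch like the following could go in that messaging (the backend name and driver path are made-up examples, but enable_unsupported_driver is the real option):<br />

```ini
[myvendor-backend]
# Example backend section; driver path is illustrative.
volume_driver = cinder.volume.drivers.myvendor.MyVendorDriver
# Required to keep using a driver that has been marked 'unsupported':
enable_unsupported_driver = true
```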
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
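The flag-plus-context idea from the consensus above could look roughly like this (a hypothetical sketch; names are illustrative, not Cinder's actual code):<br />

```python
# Hypothetical sketch: messages created for admin-oriented actions carry
# a flag, and the request context decides whether they are shown.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    admin_only: bool = False

def visible_messages(messages, is_admin):
    # Admins see everything; regular users only see non-admin messages.
    return [m for m in messages if is_admin or not m.admin_only]

msgs = [Message("volume resize failed"),
        Message("backend pool exhausted", admin_only=True)]
print([m.text for m in visible_messages(msgs, is_admin=False)])
# ['volume resize failed']
```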
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a driver supports it, but it's not the kind of thing, like replication, that a driver either does or doesn't do. Also, there are options that have to be set in Nova for it to be useful (the volume_use_multipath option in Nova's libvirt section). So there doesn't seem to be a point in adding this to the support matrix.<br />
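For reference, the Nova side is a nova.conf option in the libvirt section (shown as commonly documented; verify the option name against your Nova release):<br />

```ini
[libvirt]
volume_use_multipath = true
```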
<br />
==Wednesday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=VtmmWtkoxx8 Video Recording]<br />
<br><br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
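The second phase (installable only on at least py36) is typically enforced through packaging metadata; for a pbr-based project that looks roughly like this setup.cfg fragment (illustrative, not the actual patch):<br />

```ini
[metadata]
name = cinder
# Refuse to install on interpreters older than 3.6:
python-requires = >=3.6
```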
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state and should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
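One concrete piece for the operator-facing docs: volume services are grouped into an active-active cluster via the cluster option in cinder.conf, set to the same value on every cinder-volume node that should form the cluster (the value below is illustrative):<br />

```ini
[DEFAULT]
# Same value on every cinder-volume node that should join the A/A cluster.
cluster = mycluster
```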
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2?), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together, since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173319CinderUssuriPTGSummary2019-12-04T18:24:17Z<p>Jay Bryant: /* Wednesday (Virtual) */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy has a way to define a default policy plus a deprecated default policy; during the deprecation period, the most permissive of the two wins. This will allow operators to migrate easily to the new policies.<br />
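A plain-Python model (not the real oslo.policy API) of the "most permissive wins" behaviour during deprecation: the effective check is the logical OR of the old and new rules. The rule contents below are illustrative only.<br />

```python
# Model of a deprecated default ("admin or owner") alongside a new default
# ("reader role on the project"): during deprecation, either passing grants
# access. Credential dicts and rule bodies are invented for this sketch.
def old_rule(creds):
    # legacy default: admin or owner of the target project
    return creds.get('is_admin', False) or creds['user_project'] == creds['target_project']

def new_rule(creds):
    # new default: reader role, scoped to the target project
    return 'reader' in creds.get('roles', ()) and creds['user_project'] == creds['target_project']

def effective_rule(creds):
    # most permissive wins while the old rule is deprecated but not removed
    return old_rule(creds) or new_rule(creds)

owner = {'is_admin': False, 'roles': [], 'user_project': 'p1', 'target_project': 'p1'}
reader = {'is_admin': False, 'roles': ['reader'], 'user_project': 'p1', 'target_project': 'p1'}
outsider = {'is_admin': False, 'roles': [], 'user_project': 'p2', 'target_project': 'p1'}

print(effective_rule(owner), effective_rule(reader), effective_rule(outsider))
```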
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove the V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Though actually that's not true either: there is a lot of background work that needs to be done before we can remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to just have people switch the endpoint and have it work. It would be nice if we could simply update the catalog, but that doesn't appear to be possible.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
Currently there are several types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI/RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team side, we need to add support in os-brick.<br />
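A toy read-through cache sketch of the idea above, in pure Python; the real work would live in os-brick, and all names here are invented for illustration.<br />

```python
# Toy read-through cache: a fast local device (a dict standing in for the
# local NVMe SSD) in front of a slow remote volume (another dict standing in
# for the iSCSI/RBD backend). Purely illustrative.
class CachedVolume:
    def __init__(self, remote):
        self.remote = remote       # slow backing store (ms-level latency)
        self.local_cache = {}      # fast local SSD stand-in (~10 us)
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.local_cache:
            self.hits += 1
            return self.local_cache[block]
        self.misses += 1
        data = self.remote[block]  # expensive remote read
        self.local_cache[block] = data
        return data

vol = CachedVolume(remote={0: b'boot', 1: b'data'})
vol.read(0); vol.read(0); vol.read(1)
print(vol.hits, vol.misses)  # 1 2
```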
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
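The driver-reload problem can be seen in a few lines of plain Python (everything here is an invented stand-in, not oslo.config or a real driver):<br />

```python
# A driver that snapshots its credentials at __init__ time never sees a
# later "mutable" config update; the config layer changing is not enough.
conf = {'san_password': 'old-secret'}

class ExampleDriver:
    def __init__(self, conf):
        # Many drivers copy config values once, at load time.
        self.password = conf['san_password']

driver = ExampleDriver(conf)
conf['san_password'] = 'new-secret'  # config layer mutated at runtime

print(driver.password)  # still 'old-secret': driver must re-read or be reloaded
```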
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose; it would be better to implement Active-Active than to refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about going NFS-only and keeping it in the core, a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
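The "easy" selection part could be sketched as a simple precedence lookup: user default, then project default, then the config-level default. This is illustrative only, not the eventual implementation; all names and structures are invented.<br />

```python
# Resolve the default volume type with per-user > per-project > config
# precedence. Dicts stand in for whatever storage the real feature uses.
def effective_default_type(user_id, project_id, user_defaults,
                           project_defaults, conf_default):
    return (user_defaults.get(user_id)
            or project_defaults.get(project_id)
            or conf_default)

user_defaults = {'alice': 'fast-ssd'}
project_defaults = {'proj1': 'bulk-hdd'}

print(effective_default_type('alice', 'proj1', user_defaults, project_defaults, 'lvm'))  # fast-ssd
print(effective_default_type('bob', 'proj1', user_defaults, project_defaults, 'lvm'))    # bulk-hdd
print(effective_default_type('bob', 'proj2', user_defaults, project_defaults, 'lvm'))    # lvm
```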
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and also it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
<br><br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=dwk2oKXxlfw Video Recording]<br />
<br><br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others are seeing it as a check used along the way while an upgrade is in progress, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to Glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a Glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
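The flag-plus-context idea amounts to a filter like the following sketch; the message shape and the flag name are invented for illustration.<br />

```python
# Filter user messages so admin-oriented ones are visible only in an admin
# context; regular users see only non-flagged messages.
def visible_messages(messages, is_admin):
    return [m for m in messages if is_admin or not m.get('admin_only', False)]

messages = [
    {'id': '1', 'text': 'schedule allocation failed'},
    {'id': '2', 'text': 'backend forced down by admin', 'admin_only': True},
]

print([m['id'] for m in visible_messages(messages, is_admin=False)])  # ['1']
print([m['id'] for m in visible_messages(messages, is_admin=True)])   # ['1', '2']
```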
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be used to occasionally clean up the database, but it would be better to fix this in Cinder.<br />
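The kind of lost-update race described above, written as a deterministic hand-interleaved model in plain Python (illustrative only, not real Cinder quota code):<br />

```python
# Two concurrent reservations both read the usage counter before either
# writes it back, so one increment is lost; the later (correct) decrements
# then drive the counter negative -- the "negative quota" symptom.
usage = 0
a = usage      # request A reads 0
b = usage      # request B reads 0 before A commits
usage = a + 1  # A records its volume -> 1
usage = b + 1  # B's stale write clobbers A's -> 1 (should be 2)
usage -= 1     # volume A deleted
usage -= 1     # volume B deleted
print(usage)   # -1
```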
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know if drivers do it, but it's not the kind of thing like replication that they do or not. Also, there are options that have to be set in nova in order for it to be useful - nova:libvirt:use_volume_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
<br />
==Wednesday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=VtmmWtkoxx8 Video Recording]<br />
<br><br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First, remove the python 2 check and gate jobs so we aren't depending on py27 anywhere in the gate. Second, make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so, once any other project that needs to install Cinder under py27 for its own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
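As a sketch of what the second phase amounts to (the exact metadata key is our assumption based on how pbr-based projects typically declare this), a setup.cfg change makes pip refuse to install the project under Python 2:

```ini
# setup.cfg (sketch): pbr passes this through to setuptools, so a
# `pip install` attempted on a py27 interpreter fails with a clear
# "requires a different Python" error instead of breaking at runtime.
[metadata]
python-requires = >=3.6
```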
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice when it's in an error state, so they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and may actually already be a requirement). We need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for a per-user/project/service-token default volume type; the basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2?), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and falls at week R-15, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173318CinderUssuriPTGSummary2019-12-04T18:23:40Z<p>Jay Bryant: /* Monday (Virtual) */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches -- because otherwise backports become a big problem (a patch to master won't backport cleanly if it uses any py3-only language features). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility <br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
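For illustration (this example is ours, not from the discussion), the kind of py3-only construct in question is an f-string, which a stable-branch backport still tested under py27 would fail to even parse:

```python
# py3-only spelling: f-strings (Python 3.6+) are a syntax error on
# Python 2, so a backport containing one breaks stable-branch testing.
def describe_volume_py3(name, size_gb):
    return f"volume {name} is {size_gb} GiB"


# py2-compatible spelling of the same thing; safer for bugfixes that
# are likely to be backported to stable branches.
def describe_volume_py2(name, size_gb):
    return "volume {} is {} GiB".format(name, size_gb)
```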
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy has a way to define a default policy plus a deprecated default policy; during the deprecation period, the most permissive of the two wins. This will give operators an easy migration path to the new policies.<br />
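A minimal stdlib-only model of that "most permissive wins" behavior (this is a sketch of the semantics, not the oslo.policy API; the role names are hypothetical):

```python
# During the deprecation window, a request is authorized if EITHER the
# old (deprecated) default check or the new default check passes, so
# existing deployments keep working while operators migrate.
def transitional_check(old_check, new_check):
    def check(creds):
        return old_check(creds) or new_check(creds)
    return check


# Hypothetical defaults: the old rule allowed any admin; the new rule
# also allows a project-scoped reader role.
old = lambda c: "admin" in c["roles"]
new = lambda c: "reader" in c["roles"] and c.get("project_scoped", False)

effective = transitional_check(old, new)
```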
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Actually, that's not quite true either: there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to have people simply switch endpoints and have everything work. It would be nice if we could just update the catalog, but that doesn't appear to be possible.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to a V4 once enough micro-versions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / rbd). These fast SSDs can therefore be mounted locally on a compute node and used as a cache for remote volumes. On the storage side, support needs to be added in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that mutable config currently works for the REST API, but doesn't extend beyond that. Further, changing driver credentials is more involved, since it may require reloading the driver or adding a mechanism in all drivers to recognize and handle the change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS-only storage and keeping it in the core, which is a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
The problem is that the detach is going to fail; we need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume, and we don't currently have a force-detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
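As a sketch of what client-side usage might look like if the REST layer does honor the standard header (the endpoint, path, and token here are illustrative only, not the confirmed Cinder URL layout):

```python
import urllib.request


def build_messages_request(endpoint, token, language):
    # Ask the user-messages API for translated message text via the
    # standard HTTP Accept-Language header; if the API layer already
    # honors it, no new Cinder API change would be needed.
    req = urllib.request.Request(endpoint + "/messages")
    req.add_header("X-Auth-Token", token)
    req.add_header("Accept-Language", language)
    return req


req = build_messages_request(
    "http://controller:8776/v3/my-project", "a-token", "zh-CN")
```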
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project, but most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can set for a particular backend and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator get this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
<br><br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugin<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
<br><br />
[https://www.youtube.com/watch?v=dwk2oKXxlfw Video Recording]<br />
<br><br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check that is used along the way, while an upgrade is in process, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
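For reference, the cinder.conf flag mentioned above looks roughly like this (the backend section name is a placeholder; enable_unsupported_driver is the real option, to the best of my knowledge):<br />

```ini
# cinder.conf -- "lvm-backend" is a placeholder backend section name
[lvm-backend]
# Required to keep running a driver that has been marked unsupported;
# otherwise an unsupported driver will refuse to load.
enable_unsupported_driver = true
```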
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think that is: a user has a volume that was created from a glance image, and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be the same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
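A minimal sketch of the flag-based visibility idea discussed above (class and field names are hypothetical, not Cinder's actual models):<br />

```python
# Hypothetical sketch of admin-only user messages: each message carries a
# flag set at creation time, and listing filters on the caller's context.
# Names here are illustrative, not Cinder's real schema.
from dataclasses import dataclass


@dataclass
class Context:
    project_id: str
    is_admin: bool = False


@dataclass
class UserMessage:
    project_id: str
    text: str
    only_admin_visible: bool = False  # set when the message is admin-oriented


def visible_messages(ctx, messages):
    """Messages this caller may see: admins see everything in their
    project; regular users never see admin-oriented messages."""
    return [m for m in messages
            if m.project_id == ctx.project_id
            and (ctx.is_admin or not m.only_admin_visible)]
```

Since the filtering happens server-side and the response body for visible messages is unchanged, this is consistent with Ivan's point that no new microversion would be needed.<br />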
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts to be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
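One standard way to avoid the negative-quota symptom (illustrative only, not Eric's actual patch) is to replace the read-modify-write pattern with an atomic conditional UPDATE, so a reservation can never drive usage below zero or above the limit:<br />

```python
# Illustrative only: an atomic guarded UPDATE avoids the read-modify-write
# race that can leave quota usage negative. This is not Cinder's real code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota_usage (project_id TEXT PRIMARY KEY,"
             " in_use INTEGER, hard_limit INTEGER)")
conn.execute("INSERT INTO quota_usage VALUES ('p1', 9, 10)")


def reserve(conn, project_id, delta):
    """Atomically adjust usage; the WHERE guard makes over- and
    under-commit impossible even with concurrent callers."""
    cur = conn.execute(
        "UPDATE quota_usage SET in_use = in_use + ? "
        "WHERE project_id = ? AND in_use + ? >= 0 "
        "AND in_use + ? <= hard_limit",
        (delta, project_id, delta, delta))
    return cur.rowcount == 1  # False -> reservation rejected


assert reserve(conn, "p1", 1)        # 9 -> 10, exactly at the limit
assert not reserve(conn, "p1", 1)    # would exceed hard_limit
assert not reserve(conn, "p1", -11)  # would go negative
```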
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether drivers support it, but unlike replication, it isn't something a driver either implements or doesn't. Also, there are options that have to be set in nova in order for it to be useful - volume_use_multipath in the [libvirt] section. So there doesn't seem to be a point in adding this to the support matrix.<br />
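For reference, the compute-side setting lives in nova.conf (option name per my understanding of Nova's libvirt options):<br />

```ini
# nova.conf on the compute node -- without this, attachments stay
# single-path even when the Cinder backend supports multipath
[libvirt]
volume_use_multipath = true
```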
<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state, so they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
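Eric's fail-fast idea can be sketched roughly like this (a toy waiter, not the actual tempest code):<br />

```python
# Sketch of the fail-fast idea: when polling for a resource to become
# "available", treat "error" as an immediate failure instead of burning
# the whole timeout. Names are illustrative, not the tempest waiter API.
import time


def wait_for_status(get_status, timeout=10, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "available":
            return status
        if status == "error":          # fail fast, save gate time
            raise RuntimeError("resource went to error state")
        time.sleep(interval)
    raise TimeoutError("timed out waiting for 'available'")
```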
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, and 3 volume services)<br />
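For driver developers, the opt-in itself is a single class attribute; a schematic sketch (the base class here is a stand-in for Cinder's real driver base):<br />

```python
# Schematic sketch: a Cinder driver declares active-active support via the
# SUPPORTS_ACTIVE_ACTIVE class attribute. Setting the flag is the easy part;
# as noted above, the driver must actually be safe to run from multiple
# c-vol services. The classes here are stand-ins, not Cinder's real code.
class BaseVolumeDriver:              # stand-in for the real Cinder base driver
    SUPPORTS_ACTIVE_ACTIVE = False   # default: no HA claim


class MyHADriver(BaseVolumeDriver):
    SUPPORTS_ACTIVE_ACTIVE = True    # opt in; RBD is the in-tree example so far
```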
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173317CinderUssuriPTGSummary2019-12-04T18:22:51Z<p>Jay Bryant: /* Cinder Business */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
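The "most permissive wins" behavior during deprecation can be modeled in a few lines (this models the idea only, not oslo.policy's real API):<br />

```python
# Plain-Python model of the deprecation behavior described above: while a
# policy is in its deprecation window, access is granted if EITHER the new
# default rule or the old (deprecated) rule would grant it. Illustrative
# only; not oslo.policy's actual interface.
def check(new_rule_ok, deprecated_rule_ok, in_deprecation_window=True):
    if in_deprecation_window:
        return new_rule_ok or deprecated_rule_ok
    return new_rule_ok
```

This is what lets operators migrate gradually: their existing policy files keep working until they switch to the new defaults.<br />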
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Though actually that's not entirely true either: there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are currently several types of fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos think about having NFS-only and have it in the core - scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
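The "easy" selection side can be sketched as a simple fallback chain (the precedence shown here is an assumption for illustration; the spec will pin it down):<br />

```python
# Illustrative fallback chain for resolving a default volume type when a
# create request names no type: prefer a per-project default, otherwise
# fall back to the conf-wide default. The ordering is an assumption.
def resolve_default_type(project_id, project_defaults, conf_default):
    """Return the default volume type for a create request with no
    explicit type."""
    return project_defaults.get(project_id, conf_default)
```

The hard part, as noted above, is the CRUD machinery around those per-project defaults, not the lookup itself.<br />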
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
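The header idea amounts to passing one extra hint with the upload request; a purely hypothetical sketch (the header name is invented for illustration, not from the spec):<br />

```python
# Purely hypothetical sketch: the volume-upload-to-image request carries a
# hint naming the image the volume was originally created from, so Glance
# can place the new image in the same store. The header name is invented.
def upload_headers(base_image_id=None):
    headers = {"Content-Type": "application/json"}
    if base_image_id:  # volume was created from a Glance image
        headers["X-Image-Base-Ref"] = base_image_id  # hypothetical header
    return headers
```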
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it; it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priority changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up, and a GitHub repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet.<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI systems appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
<br><br><br />
[https://www.youtube.com/watch?v=8YQ4TQyIzkw Video Recording Part 2]<br />
<br><br><br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check that is used along the way while an upgrade is in progress, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
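As a sketch of the flag referred to above (the backend section name here is hypothetical; the per-backend option is enable_unsupported_driver):<br />

```ini
[backend-1]
# Acknowledge that this backend's driver has been marked unsupported and
# may be removed in a future release; without this, the driver won't load.
enable_unsupported_driver = true
```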
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be used to occasionally clean up the database, but it would be better to fix this in Cinder.<br />
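As a generic illustration of the lost-update class of race described above (this is not Cinder's actual schema, nor Eric's patch), a conditional SQL update can at least keep a usage counter from going negative under racing writers:<br />

```python
# Generic sketch: guard a quota decrement with a WHERE clause so a
# concurrent writer cannot drive the counter below zero.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota_usages (project_id TEXT, in_use INTEGER)")
conn.execute("INSERT INTO quota_usages VALUES ('p1', 1)")

def release(conn, project_id, amount):
    # The "in_use >= ?" guard rejects the update instead of going negative.
    cur = conn.execute(
        "UPDATE quota_usages SET in_use = in_use - ? "
        "WHERE project_id = ? AND in_use >= ?",
        (amount, project_id, amount))
    return cur.rowcount == 1  # False means the guard rejected the update

print(release(conn, "p1", 1))  # True: first release succeeds, in_use -> 0
print(release(conn, "p1", 1))  # False: a second release would go negative
```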
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a driver supports it, but it's not a capability like replication that a driver either implements or doesn't. Also, there are options that have to be set in Nova in order for it to be useful (the volume_use_multipath option in nova.conf's [libvirt] section). So there doesn't seem to be a point in adding this to the support matrix.<br />
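For reference, the Nova-side setting mentioned above lives in nova.conf on the compute node; a minimal fragment with multipath enabled would be:<br />

```ini
[libvirt]
# Attach Cinder volumes through the host's multipath device when available.
volume_use_multipath = true
```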
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they look for the volume going active but don't notice when it's in an error state, so they could fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
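The flag mentioned above is, by convention, a class attribute on the driver; a minimal sketch (the driver class here is hypothetical -- real drivers inherit from cinder.volume.driver.VolumeDriver):<br />

```python
# Sketch of a driver opting in to active-active support. This standalone
# class only illustrates the class-level flag; it is not a working driver.
class ExampleVolumeDriver:
    # Declares that the driver has been written and tested to run safely
    # with multiple cinder-volume services in the same cluster.
    SUPPORTS_ACTIVE_ACTIVE = True
```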
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173316CinderUssuriPTGSummary2019-12-04T18:21:49Z<p>Jay Bryant: /* Friday (Shanghai) */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches -- because otherwise backports become a big problem (a backport won't apply cleanly if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
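A tiny illustration of the trade-off in the reminders above (hypothetical snippet): the first form is Python-3.6+-only, the second also runs on Python 2.7, so a bugfix written the second way backports cleanly.<br />

```python
name = "cinder"

# py3.6+ only: the py2 parser rejects f-strings outright, so a backport
# containing this line would fail to even import on stable branches.
py3_only = f"service: {name}"

# py2/py3 compatible: behaves identically and backports without changes.
portable = "service: {}".format(name)

print(py3_only == portable)  # True
```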
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy has a way to define a default policy plus a deprecated default policy; during deprecation, the most permissive wins. This will allow operators to migrate easily to new policies.<br />
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove the V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Actually, that's not quite true either; there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to V4 once enough microversions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
Currently there are different types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that the current mutable-config support works for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work, since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about going NFS-only, with the storage in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single default volume type is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
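The "easy" selection part could be sketched like this (a hedged toy example with made-up names; the real resolution order was still to be specified in the spec): the most specific default wins.

```python
# Illustrative default-type resolution (names are hypothetical): a
# per-project default, set via the proposed new API, overrides the
# deployment-wide default_volume_type from cinder.conf.
def resolve_default_type(project_id, project_defaults, conf_default):
    return project_defaults.get(project_id, conf_default)

project_defaults = {'project-a': 'fast-ssd'}
assert resolve_default_type('project-a', project_defaults, 'lvm') == 'fast-ssd'
assert resolve_default_type('project-b', project_defaults, 'lvm') == 'lvm'
```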
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug, though we don't have a bug report open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
<br><br><br />
[https://www.youtube.com/watch?v=p-cEMSj44Nc Video Recording Part 1]<br />
<br><br><br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project, but most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check used along the way while an upgrade is in process, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
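The flag idea could work roughly like this (a toy sketch with invented names; a real implementation would filter at the DB/API layer using the request context):

```python
# Hypothetical visibility rule for user messages: messages flagged as
# admin-only are returned only when the request comes from an admin
# context; regular users never see them.
def visible_messages(messages, is_admin):
    return [m for m in messages if is_admin or not m['admin_only']]

msgs = [
    {'id': 1, 'admin_only': False},  # normal user-facing message
    {'id': 2, 'admin_only': True},   # created with the special admin flag
]
assert [m['id'] for m in visible_messages(msgs, is_admin=False)] == [1]
assert [m['id'] for m in visible_messages(msgs, is_admin=True)] == [1, 2]
```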
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, so you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted scripts that can be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether drivers support it, but it's not, like replication, something a driver either does or doesn't do. Also, there are options that have to be set in nova in order for it to be useful - nova:libvirt:use_volume_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice when it's in an error state, so they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
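For driver developers, the opt-in itself is just a class attribute (the stand-in class below is illustrative; in Cinder the flag is set on the driver class), but setting it is a claim that the driver has actually been made safe under active-active:

```python
# Stand-in driver class (illustrative, not the real Cinder base class):
# a driver advertises that it can run under a clustered (active-active)
# volume service by setting SUPPORTS_ACTIVE_ACTIVE = True. The flag is
# only honest if the driver has been reviewed and tested for the
# race conditions A/A introduces (e.g. on connections).
class ExampleVolumeDriver:
    SUPPORTS_ACTIVE_ACTIVE = True

assert ExampleVolumeDriver.SUPPORTS_ACTIVE_ACTIVE
```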
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2?), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together, since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173315CinderUssuriPTGSummary2019-12-04T18:21:04Z<p>Jay Bryant: /* Discuss the latest User Survey Results */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches -- because otherwise backports become a big problem (we won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
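To make the backport concern above concrete, here is a hedged sketch (the function name is hypothetical) of py3-only language features that would break a patch when cherry-picked to a stable branch still tested under py2:<br />

```python
# Hypothetical helper illustrating Python-3-only constructs; a patch to
# master using these would not run under py2 on a stable-branch backport.

def summarize_volume(name, *, size_gb):      # keyword-only args: py3 only
    label: str = f"{name}: {size_gb} GiB"    # f-string + annotation: py3.6+
    return label

print(summarize_volume("vol-1", size_gb=10))
```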
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy has a way to define a default policy plus a deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
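The "most permissive wins" behavior during deprecation can be illustrated with a toy check (a simplified sketch, not the actual oslo.policy API):<br />

```python
# Toy illustration (NOT the oslo.policy API): while a policy rule is
# deprecated, a request is authorized if EITHER the new rule OR the
# deprecated rule passes, so operators can migrate at their own pace.

def is_authorized(user_roles, new_rule_roles, deprecated_rule_roles):
    roles = set(user_roles)
    passes_new = bool(roles & set(new_rule_roles))
    passes_old = bool(roles & set(deprecated_rule_roles))
    return passes_new or passes_old  # most permissive outcome wins

# A legacy "admin"-based rule keeps working while "reader" rolls out
print(is_authorized(["admin"], new_rule_roles=["reader"],
                    deprecated_rule_roles=["admin"]))  # True
```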
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Though actually that's not true either, there is a lot of background work that needs to be done before we remove V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to V4 once enough microversions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now several types of fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2-3 GB/s with latency around 10 us, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, support needs to be added in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that the current mutable config support covers the REST API but doesn't extend beyond that. Further, changing driver credentials is a little more work, since it may require reloading the driver or having a mechanism in all drivers to recognize and handle the change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and keeping it in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br><br><br />
[https://www.youtube.com/watch?v=r4zURMzJpbA Video Recording Part 5]<br />
<br><br><br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
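For illustration, here is a hedged sketch of how a client would request a message in a preferred language via that header (the endpoint URL, path placeholders, and token are hypothetical; whether the API actually honors the header is exactly the open question above):<br />

```python
# Sketch only: URL and token are made-up placeholders. Shows a client
# asking for a user message in German via the Accept-Language header.
import urllib.request

req = urllib.request.Request(
    "http://cinder.example.com/v3/{project_id}/messages/{message_id}",
    headers={"Accept-Language": "de", "X-Auth-Token": "<token>"},
)
# urllib stores header keys with only the first letter capitalized
print(req.get_header("Accept-language"))  # de
```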
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice to have drivers report their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team had been thinking of these as pre-checks; others see them as checks run while an upgrade is in progress, to ensure things are ready before operators start their services back up. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver reinstated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
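For reference, the opt-in looks roughly like this in cinder.conf (the backend section name and driver path below are made up for illustration; check the release notes for the exact option name in your release):<br />

```ini
# Illustrative cinder.conf fragment: section name and driver path are
# placeholders. The per-backend flag opts in to an unsupported driver.
[my_vendor_backend]
volume_driver = cinder.volume.drivers.vendor.VendorDriver
enable_unsupported_driver = true
```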
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is this: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether the message gets shown or not. Agreed that the message content will be the same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
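A minimal sketch of the flag-plus-context idea discussed above (all names here are hypothetical; the real change would live in the user-messages API layer):<br />

```python
# Hypothetical sketch: messages created for admin-oriented actions carry
# a flag, and are filtered out unless the request context is an admin.

def visible_messages(messages, is_admin):
    """Return the subset of messages a caller may see given their context."""
    return [m for m in messages if is_admin or not m.get("admin_only")]

msgs = [
    {"id": "1", "text": "schedule allocation failed"},
    {"id": "2", "text": "backend failover detail", "admin_only": True},
]
print([m["id"] for m in visible_messages(msgs, is_admin=False)])  # ['1']
```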
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be used occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
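The race described above is the classic read-modify-write pattern. A hedged sketch with sqlite3 (table and column names are illustrative, not Cinder's actual schema) shows why a single atomic UPDATE avoids it:<br />

```python
# Illustrative only: table/column names do not match Cinder's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota_usages (project TEXT PRIMARY KEY, in_use INT)")
conn.execute("INSERT INTO quota_usages VALUES ('p1', 0)")

# Racy pattern: read, compute in Python, then write. Two concurrent workers
# can both read 0 and both write 1, losing an update (or driving the counter
# negative on the decrement path).
(current,) = conn.execute(
    "SELECT in_use FROM quota_usages WHERE project = 'p1'").fetchone()
conn.execute("UPDATE quota_usages SET in_use = ? WHERE project = 'p1'",
             (current + 1,))

# Safer pattern: let the database apply the delta atomically.
conn.execute(
    "UPDATE quota_usages SET in_use = in_use + 1 WHERE project = 'p1'")

(total,) = conn.execute(
    "SELECT in_use FROM quota_usages WHERE project = 'p1'").fetchone()
print(total)  # 2
```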
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether drivers support it, but it's not the kind of thing, like replication, that a driver either does or does not do. Also, there are options that have to be set in nova in order for it to be useful - nova:libvirt:use_volume_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state and should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
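A fail-fast wait loop of the sort Eric describes could look roughly like this (a hypothetical sketch, not the actual tempest waiter; all names here are made up):<br />

```python
import time

def wait_for_volume_status(get_status, target="available",
                           failure_states=("error", "error_deleting"),
                           timeout=300, interval=2):
    """Poll a volume's status, failing fast on known error states.

    ``get_status`` is any callable returning the current status string.
    Instead of waiting out the full timeout, raise as soon as the volume
    lands in a failure state.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status in failure_states:
            # Fail fast: no point burning gate time waiting for a
            # timeout once we already know the operation failed.
            raise RuntimeError("volume entered %s state" % status)
        time.sleep(interval)
    raise TimeoutError("volume never reached %s" % target)
```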
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting, and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
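For driver developers, the opt-in itself is a class-level flag; a simplified model of that pattern (only the SUPPORTS_ACTIVE_ACTIVE attribute name reflects the in-tree drivers, the rest is hypothetical):<br />

```python
class BaseDriver:
    # Mirrors the pattern Cinder uses: drivers must explicitly opt in
    # to running under an active-active (clustered) volume service.
    SUPPORTS_ACTIVE_ACTIVE = False

class RBDLikeDriver(BaseDriver):
    SUPPORTS_ACTIVE_ACTIVE = True  # e.g. the RBD driver enables this

def can_run_clustered(driver_cls, cluster_name):
    """A volume service configured with a cluster name should refuse to
    start unless its driver claims active-active support."""
    return not cluster_name or driver_cls.SUPPORTS_ACTIVE_ACTIVE
```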
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2?), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together, since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173314CinderUssuriPTGSummary2019-12-04T18:20:16Z<p>Jay Bryant: /* EOL some of the currently open branches */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and write for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
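The "most permissive wins" behavior during deprecation can be modeled in miniature (a simplified sketch of the idea, not the real oslo.policy code; `check_role` and `enforce` are made-up names):<br />

```python
def check_role(check_str, roles):
    """Evaluate a toy check string like "role:admin" against a role set."""
    _, _, required = check_str.partition(":")
    return required in roles

def enforce(new_check, deprecated_check, roles):
    """During a deprecation window, oslo.policy effectively ORs the new
    default with the deprecated one, so the most permissive wins and
    operators still relying on the old roles are not broken mid-upgrade."""
    return check_role(new_check, roles) or check_role(deprecated_check, roles)
```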
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Though actually that's not quite true either; there is a lot of background work that needs to be done before we can remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
Currently there are different kinds of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2.x~3.x GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that mutable config currently works for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work, since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
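The distinction can be sketched with a toy config object (purely illustrative; this is not oslo.config's API):<br />

```python
class ConfigOpts:
    """Toy model of mutable config: on reload, only options registered as
    mutable pick up new values; everything else keeps the old value until
    the service restarts. Even for mutable options, something (e.g. a
    driver hook) still has to react -- a new credential value is useless
    if the driver never re-establishes its backend session."""

    def __init__(self, opts):
        # opts: name -> (value, mutable)
        self._opts = dict(opts)

    def __getitem__(self, name):
        return self._opts[name][0]

    def reload(self, new_values):
        applied = []
        for name, value in new_values.items():
            old, mutable = self._opts[name]
            if mutable and value != old:
                self._opts[name] = (value, True)
                applied.append(name)
        return applied  # names whose changes actually took effect
```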
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active than to refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and keeping it in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
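The selection logic itself really is the easy part; a sketch of one possible precedence order (hypothetical, pending Gorka's spec):<br />

```python
def resolve_default_volume_type(user_id, project_id,
                                user_defaults, project_defaults,
                                config_default="__DEFAULT__"):
    """Pick the most specific default volume type available:
    per-user first, then per-project, then the config-wide default."""
    if user_id in user_defaults:
        return user_defaults[user_id]
    if project_id in project_defaults:
        return project_defaults[project_id]
    return config_default
```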
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=VHyCKbqMJb4 Video Recording Part 4]<br />
<br><br><br />
<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
The problem is that detach is going to fail. We need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume, and we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force flag to os-brick when detaching a volume and when rebooting from a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
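If it turns out the REST layer has to handle it itself, the header parsing is straightforward; a rough sketch (illustrative only, not Cinder's code):<br />

```python
def pick_language(accept_language, available, fallback="en"):
    """Choose a message translation from an Accept-Language header.

    Parses entries like "de;q=0.8, fr" and returns the highest-q
    language we actually have a translation for, else the fallback.
    """
    prefs = []
    for part in accept_language.split(","):
        part = part.strip()
        if not part:
            continue
        lang, _, qpart = part.partition(";")
        try:
            q = float(qpart.split("=", 1)[1]) if qpart else 1.0
        except (IndexError, ValueError):
            q = 1.0  # malformed q-value: treat as full preference
        prefs.append((q, lang.strip()))
    for _, lang in sorted(prefs, reverse=True):
        if lang in available:
            return lang
    return fallback
```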
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it; it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice to have drivers report their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others see it as a check that is used along the way while an upgrade is in process, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
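The shape of such a pre-check, modeled loosely on the oslo.upgradecheck pattern of graded results (the specific check shown is hypothetical):<br />

```python
import enum

class Code(enum.IntEnum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2

def check_unsupported_drivers(conf):
    """Example pre-check: flag when an unsupported driver is in use and
    the operator has not set the opt-in flag in cinder.conf."""
    if conf.get("driver_unsupported") and not conf.get("enable_unsupported_driver"):
        return Code.FAILURE
    if conf.get("driver_unsupported"):
        return Code.WARNING  # opted in, but vendor support is gone
    return Code.SUCCESS

def run_checks(conf, checks):
    """cinder-status style runner: the overall result is the worst one."""
    return max(check(conf) for check in checks)
```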
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify the use case more carefully. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
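The filtering idea amounts to something like this (an illustrative sketch, not the proposed implementation):<br />

```python
def visible_messages(messages, is_admin):
    """Filter user messages for a request context: entries flagged
    admin-only are returned only to admins. The response schema is
    unchanged, which is why no new microversion is needed."""
    return [m for m in messages if is_admin or not m.get("admin_only")]
```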
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
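To illustrate the kind of inconsistency involved (toy schema, not Cinder's real one), an operator-style consistency check might look like:<br />

```python
# Illustrative only: a toy quota_usages table showing the kind of
# consistency check a cleanup script might run to spot race-condition
# fallout -- negative usage counters and duplicated rows.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE quota_usages (
    project_id TEXT, resource TEXT, in_use INTEGER)""")
conn.executemany(
    "INSERT INTO quota_usages VALUES (?, ?, ?)",
    [
        ("proj-a", "volumes", 3),
        ("proj-b", "volumes", -2),   # negative usage from a race
        ("proj-c", "gigabytes", 10),
        ("proj-c", "gigabytes", 7),  # duplicate row for one project
    ],
)

# Usage counters should never go negative.
negative = conn.execute(
    "SELECT project_id FROM quota_usages WHERE in_use < 0").fetchall()

# Each (project, resource) pair should appear at most once.
duplicates = conn.execute(
    """SELECT project_id, resource FROM quota_usages
       GROUP BY project_id, resource HAVING COUNT(*) > 1""").fetchall()
```

The real fix, of course, is to stop the races from producing these rows in the first place rather than sweeping them up afterwards.<br />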
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a backend supports it, but it's not a driver feature like replication that a driver either implements or doesn't. Also, options have to be set on the nova side for it to be useful (the libvirt volume_use_multipath option). So there doesn't seem to be a point in adding this to the support matrix.<br />
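For reference, the Nova side is enabled roughly like this in nova.conf on the compute node (worth verifying the option name against the Nova release in use):<br />

```ini
[libvirt]
# Use multipath device handling when attaching volumes on this
# compute node.
volume_use_multipath = true
```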
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message: they wait for the volume to go active but don't notice when it has gone into an error state, so they don't fail as soon as they could. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
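A minimal sketch of that opt-in flag (class names here are illustrative; the attribute name matches what Cinder drivers set):<br />

```python
# Illustrative sketch: how a Cinder volume driver opts in to running
# in an active-active (clustered) deployment. The class names are
# made up for illustration; the SUPPORTS_ACTIVE_ACTIVE attribute name
# is the one Cinder drivers use.

class BaseDriverSketch:
    # Defaults to False: a driver is assumed NOT cluster-safe until it
    # has been vetted for races between concurrent volume services.
    SUPPORTS_ACTIVE_ACTIVE = False

class ClusterSafeDriverSketch(BaseDriverSketch):
    # A driver that has verified active-active operation sets the flag.
    SUPPORTS_ACTIVE_ACTIVE = True
```

As the notes say, setting the flag is the easy part; the hard, driver-specific part is making sure the backend actually behaves correctly with multiple services running.<br />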
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173313CinderUssuriPTGSummary2019-12-04T18:19:07Z<p>Jay Bryant: /* Actions */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<Br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and be written for py2 compatibility <br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
The oslo.policy code has a way to define a default policy plus a deprecated default policy; during the deprecation period, the most permissive of the two wins. This will allow operators to migrate to the new policies easily.<br />
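The "most permissive wins" behavior can be sketched in plain Python (real code would use oslo.policy's rule classes; this just models the evaluation logic, with a trivially simplified check-string syntax):<br />

```python
# Toy model of oslo.policy's deprecation window: a rule carries both a
# new default and a deprecated old default, and during the transition a
# request passes if EITHER check passes ("most permissive wins").
# The check() syntax here is a deliberate simplification.

def check(expr, creds):
    """Evaluate a simplified check string like 'role:admin'."""
    kind, value = expr.split(":", 1)
    return value in creds.get(kind + "s", set())

def enforce(new_default, old_default, creds):
    # During the deprecation period, either default is sufficient.
    return check(new_default, creds) or check(old_default, creds)

creds_member = {"roles": {"member"}}
creds_reader = {"roles": {"reader"}}

# New default requires 'reader'; the deprecated default required
# 'member'. Both keep working until the deprecation window closes,
# so operators can update their policy files at their own pace.
```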
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove the V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Actually, that's not quite true either; there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about moving to a V4 API once enough microversions have piled up.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now several types of fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be attached locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=4BRXtQ41gJw Video Recording Part 3]<br />
<br><br><br />
<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only, hosted in the core, which is a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
(recording 4 starts here)<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
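A sketch of what handling this at the REST layer could look like (not Cinder's actual implementation; the catalog and function names are illustrative):<br />

```python
# Illustrative sketch: pick a translated user message based on the
# request's Accept-Language header, falling back to English when no
# translation matches. Real code would use oslo.i18n's translation
# machinery; this just shows the header-driven selection.

TRANSLATIONS = {  # hypothetical message catalog
    "en": "Volume creation failed.",
    "de": "Volume-Erstellung fehlgeschlagen.",
}

def pick_language(accept_language, available, default="en"):
    """Return the first acceptable language we have a catalog for."""
    for part in accept_language.split(","):
        # Strip any quality value ("de;q=0.9") and region ("de-DE").
        lang = part.split(";")[0].strip().split("-")[0].lower()
        if lang in available:
            return lang
    return default

def translate(accept_language):
    return TRANSLATIONS[pick_language(accept_language, TRANSLATIONS)]
```

If the API layer already does this via "Accept-Language", Horizon would only need to forward the header; otherwise this is roughly the logic that would live there.<br />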
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI systems appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their lives easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as pre-check. Others are seeing it as a check that is used along the way while an upgrade is in process to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
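For reference, the flag in question is set per backend in cinder.conf. A minimal sketch (the backend section name and driver are illustrative, not from the discussion):<br />

```ini
[lvmdriver-1]
# Illustrative backend section; use your actual backend's name/driver.
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Acknowledge that the driver is unsupported and subject to removal;
# without this flag the backend will refuse to start.
enable_unsupported_driver = true
```

Check the Cinder admin documentation for your release for the exact option behavior.<br />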
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image, and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be the same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
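The idea can be sketched in plain Python (hypothetical names, not Cinder's actual message API): tag a message as admin-only when it is created, then filter on the request context when listing.<br />

```python
# Hypothetical sketch of the flag-at-creation idea discussed above.
# Function and field names are illustrative, not Cinder's real API.

def create_message(messages, text, admin_only=False):
    # The special flag is set once, when the message is created.
    messages.append({"text": text, "admin_only": admin_only})

def list_messages(messages, is_admin):
    # Non-admin users never see admin-oriented messages. The response
    # shape is unchanged, which is why no new microversion is needed.
    return [m for m in messages if is_admin or not m["admin_only"]]

msgs = []
create_message(msgs, "volume create failed: quota exceeded")
create_message(msgs, "backend re-initialization failed", admin_only=True)
```

Admins see both messages; a regular user sees only the first.<br />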
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, so you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be used occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
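To illustrate the kind of fix involved (a generic sketch using sqlite3, not Cinder's actual schema or quota code): pushing the check and the update into a single SQL statement removes the read-modify-write window that lets concurrent requests drive usage negative or past the limit.<br />

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy stand-in for a quota usage table (illustrative, not Cinder's schema).
conn.execute(
    "CREATE TABLE quota_usage "
    "(project TEXT PRIMARY KEY, in_use INTEGER, hard_limit INTEGER)"
)
conn.execute("INSERT INTO quota_usage VALUES ('demo', 9, 10)")

def reserve(conn, project, delta):
    # Atomic compare-and-update: the WHERE clause enforces both the upper
    # bound and that usage never goes negative, so two concurrent callers
    # cannot interleave a stale read with the write.
    cur = conn.execute(
        "UPDATE quota_usage SET in_use = in_use + ? "
        "WHERE project = ? AND in_use + ? <= hard_limit AND in_use + ? >= 0",
        (delta, project, delta, delta),
    )
    return cur.rowcount == 1  # False means the reservation was refused
```

`reserve(conn, "demo", 1)` succeeds once (9 → 10) and is then refused, instead of racing past the limit.<br />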
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether a backend supports it, but unlike replication it isn't something a driver simply does or doesn't do. Also, there are options that have to be set in nova in order for it to be useful (the [libvirt] volume_use_multipath option). So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state and should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
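The flag referred to above is the driver's SUPPORTS_ACTIVE_ACTIVE class attribute; a minimal self-contained sketch with a stand-in base class (not Cinder's actual cinder.volume.driver base class):<br />

```python
class BaseVolumeDriver:
    # Stand-in for Cinder's volume driver base class: drivers default to
    # NOT claiming active-active support.
    SUPPORTS_ACTIVE_ACTIVE = False

class MyHADriver(BaseVolumeDriver):
    # A driver opts in by overriding the class attribute. As noted above,
    # setting the flag is the easy part; the real work is making sure the
    # backend behaves correctly with multiple cinder-volume services.
    SUPPORTS_ACTIVE_ACTIVE = True
```

This is exactly why the driver-developer docs matter: the flag is one line, but verifying the backend is safe under active-active is driver-specific.<br />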
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2 hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173312CinderUssuriPTGSummary2019-12-04T18:18:17Z<p>Jay Bryant: /* Thursday (Shanghai) */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
<br><br><br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br><br><br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
<br />
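The "most permissive wins" behavior during the deprecation window can be sketched in plain Python (this illustrates the oslo.policy semantics described above, not its actual API; the role checks are made up for the example):<br />

```python
def enforce_with_deprecation(roles, new_check, deprecated_check):
    # During the deprecation period both the new default and the
    # deprecated default are evaluated; access is granted if EITHER
    # passes, so existing deployments keep working until operators
    # migrate to the new policies.
    return new_check(roles) or deprecated_check(roles)

# Illustrative checks: the new default requires the reader role,
# while the deprecated default allowed member/admin style roles.
new_check = lambda roles: "reader" in roles
old_check = lambda roles: "member" in roles or "admin" in roles
```

A user with only the old "member" role still passes during deprecation, even though the new default alone would deny them.<br />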
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Though actually that's not true either: there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to just have people switch the endpoint and have it work. It would be nice if we could just update the catalog but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now various types of fast NVMe SSDs, such as the Intel Optane SSD, whose r/w throughput can reach 2-3 GB/s with latency around 10 us, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). So these fast SSDs can be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
(recording 3 starts here)<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos think about having NFS-only and have it in the core - scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
(recording 4 starts here)<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force flag to os-brick to detach a volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
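If the "Accept-Language" header is indeed honored at the REST API layer, exposing translated user messages would only require the client to send that header. A minimal sketch of such a request follows; the endpoint URL, project ID, and token are placeholders, not real values, and the request is built but not sent:<br />

```python
import urllib.request

# Hypothetical endpoint and project ID, for illustration only.
CINDER_ENDPOINT = "http://controller:8776/v3/my-project-id"

# The user messages API is GET /v3/{project_id}/messages; if translation
# is handled at the REST layer, a client like Horizon would only need to
# add an Accept-Language header alongside its auth token.
req = urllib.request.Request(
    CINDER_ENDPOINT + "/messages",
    headers={
        "X-Auth-Token": "gAAAA-placeholder-token",
        "Accept-Language": "de",  # ask for German message text
    },
)

# The request is constructed but not sent here; a real client would call
# urllib.request.urlopen(req) and read the JSON body.
print(req.full_url)
```

(urllib.request normalizes header names, so the stored key is "Accept-language".)<br />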
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====sqlalchemy-migrate to Alembic migration====
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
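Cinder already has a capabilities API (GET /v3/{project_id}/capabilities/{hostname}) whose "properties" map describes extra-spec keys a backend understands; the enhancement would build on that. The sketch below uses an illustrative payload in that shape (the backend name, keys, and values are made up, not output from a real driver) to show the kind of CLI-friendly summary an operator wants:<br />

```python
# Illustrative payload shaped like the response of
# GET /v3/{project_id}/capabilities/{hostname}; all values are made up.
capabilities = {
    "namespace": "OS::Storage::Capabilities::fake",
    "driver_version": "1.0",
    "properties": {
        "compression": {
            "title": "Compression",
            "description": "Enables compression on the backend.",
            "type": "boolean",
        },
        "thin_provisioning": {
            "title": "Thin Provisioning",
            "description": "Enables thin provisioning.",
            "type": "boolean",
        },
    },
}

# What an operator ultimately wants from the CLI: the extra-spec keys
# this backend understands, with a human-readable description.
rows = []
for key, info in sorted(capabilities["properties"].items()):
    rows.append(f"{key}: {info['title']} ({info['type']}) - {info['description']}")

for row in rows:
    print(row)
```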
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others are seeing it as a check that is used along the way while an upgrade is in process, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
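For the first point, the flag in question is the per-backend enable_unsupported_driver option in cinder.conf. A minimal sketch (the backend section name and driver path below are hypothetical, for illustration only):<br />

```ini
[DEFAULT]
enabled_backends = mybackend

# Hypothetical backend section; the driver class path is illustrative.
[mybackend]
volume_driver = cinder.volume.drivers.some_vendor.SomeVendorDriver
# Required to keep using a driver after it has been marked unsupported;
# without it the volume service will refuse to load the driver.
enable_unsupported_driver = true
```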
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think that is: a user has a volume that was created from a glance image, and wants to upload it as an image; want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be the same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
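A rough sketch of the flag-plus-context idea discussed above; the "only_for_admin" field name is hypothetical and would be settled in the spec, and the message texts are made up:<br />

```python
# Sketch of the proposed behavior, not Cinder code. A flag is stored on
# the message at creation time, and the caller's context decides whether
# admin-oriented messages are included in a list response.
messages = [
    {"id": "m1", "text": "volume creation failed: quota exceeded",
     "only_for_admin": False},
    {"id": "m2", "text": "scheduling failed: no weighed backend found",
     "only_for_admin": True},
]

def visible_messages(messages, is_admin):
    """Filter out admin-oriented messages for non-admin contexts."""
    return [m for m in messages if is_admin or not m["only_for_admin"]]

print([m["id"] for m in visible_messages(messages, is_admin=False)])  # ['m1']
print([m["id"] for m in visible_messages(messages, is_admin=True)])   # ['m1', 'm2']
```

Since the response schema is unchanged (non-admins simply see fewer rows), this matches Ivan's point that no new microversion should be needed.<br />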
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts to be used occasionally clean up the database, but it would be better to fix this in Cinder.<br />
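The operator cleanup scripts aren't reproduced here, but the kind of consistency check they perform can be sketched as below. The table and column names (quota_usages with project_id, resource, in_use) are modeled on Cinder's schema as an assumption; this uses an in-memory SQLite table rather than a real Cinder database:<br />

```python
import sqlite3

# Minimal stand-in for Cinder's quota_usages table; columns are assumed
# from the schema, and the rows are fabricated examples of the symptoms.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE quota_usages (project_id TEXT, resource TEXT, in_use INTEGER)")
con.executemany(
    "INSERT INTO quota_usages VALUES (?, ?, ?)",
    [
        ("proj-a", "volumes", 3),
        ("proj-a", "volumes", 1),     # duplicate row for one project/resource
        ("proj-b", "gigabytes", -5),  # negative usage left by a lost race
    ],
)

# Symptom 1: duplicate (project, resource) usage rows.
dupes = con.execute(
    "SELECT project_id, resource, COUNT(*) FROM quota_usages "
    "GROUP BY project_id, resource HAVING COUNT(*) > 1"
).fetchall()

# Symptom 2: negative usage counters.
negatives = con.execute(
    "SELECT project_id, resource, in_use FROM quota_usages WHERE in_use < 0"
).fetchall()

print(dupes)
print(negatives)
```

Fixing this properly in Cinder means preventing the races, not just detecting their aftermath.<br />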
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether backends do it, but it's not like replication, which a driver either implements or doesn't. Also, there are options that have to be set in Nova in order for it to be useful (the [libvirt] volume_use_multipath option). So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they look for the volume going active but don't notice that it's in an error state, so they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern Anastasiya (anastzhyr) who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
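The driver-side flag mentioned above is the SUPPORTS_ACTIVE_ACTIVE class attribute on Cinder volume drivers. The sketch below uses simplified stand-in classes to illustrate the opt-in pattern; it is not actual Cinder code (real drivers subclass cinder.volume.driver.BaseVD):<br />

```python
# Simplified stand-ins for illustration only; not Cinder code.
class BaseDriver:
    # Defaults to False: drivers must explicitly opt in to active-active.
    SUPPORTS_ACTIVE_ACTIVE = False

class LegacyDriver(BaseDriver):
    """A driver that has never been validated for clustered operation."""

class ClusterReadyDriver(BaseDriver):
    # Opting in means the maintainer has verified the driver is safe to
    # run from multiple cinder-volume services at once (locking, races on
    # connections, etc.), not merely flipped the flag.
    SUPPORTS_ACTIVE_ACTIVE = True

def can_run_clustered(driver_cls):
    """Mimic the service-side check before allowing a clustered deployment."""
    return driver_cls.SUPPORTS_ACTIVE_ACTIVE

print(can_run_clustered(LegacyDriver))        # False
print(can_run_clustered(ClusterReadyDriver))  # True
```

The docs for driver developers would need to spell out what "verified" means here, since as noted above the required work is very driver dependent.<br />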
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173311CinderUssuriPTGSummary2019-12-04T18:17:48Z<p>Jay Bryant: /* Cinder REST API V4 */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
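The "most permissive wins" behavior during a deprecation window can be illustrated with a toy model. This is deliberately not the real oslo.policy API (which defines rules via DocumentedRuleDefault/DeprecatedRule objects); it only shows the evaluation semantics, with made-up rules and credentials:<br />

```python
# Toy model of oslo.policy's deprecation behavior, not its real API:
# while a rule is deprecated, a request passes if EITHER the new default
# OR the old deprecated default allows it.

def enforce_during_deprecation(new_check, old_check, creds):
    """During the deprecation window the most permissive result wins."""
    return new_check(creds) or old_check(creds)

# Hypothetical legacy default: any admin may act.
old_check = lambda creds: creds.get("role") == "admin"
# Hypothetical new default: a member of the owning project may act.
new_check = lambda creds: creds.get("project_id") == creds.get("target_project_id")

admin = {"role": "admin", "project_id": "x", "target_project_id": "y"}
owner = {"role": "member", "project_id": "p1", "target_project_id": "p1"}
other = {"role": "member", "project_id": "p1", "target_project_id": "p2"}

print(enforce_during_deprecation(new_check, old_check, admin))  # True (old rule)
print(enforce_during_deprecation(new_check, old_check, owner))  # True (new rule)
print(enforce_during_deprecation(new_check, old_check, other))  # False
```

Once the deprecation window closes, only the new default is evaluated, so operators get a gradual migration path.<br />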
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; they had to set up all the users in the DB each time, and they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Though actually that's not true either, there is a lot of background work that needs to be done before we remove V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be possible.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br><br><br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br><br><br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now several types of fast NVMe SSDs, such as Intel Optane, whose read/write throughput can reach 2-3 GB/s with latency around 10 µs, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI/RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, support needs to be added in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a request from a NetApp customer who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose; it would be better to implement Active-Active than to refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
(recording 3 starts here)<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and keeping it in the core, which is a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
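The selection logic itself is simple, as noted above; a minimal sketch (hypothetical names, not the actual Cinder code) of falling back from a project-level default to the cloud-wide default from cinder.conf:<br />

```python
# Hypothetical sketch of per-project default volume type resolution:
# prefer a default registered for the project, otherwise fall back to
# the system-wide default. Names and types are illustrative only.

SYSTEM_DEFAULT = "lvm-thin"

# Per-project defaults an operator would manage through the new API.
project_defaults = {
    "project-a": "ceph-fast",
}

def resolve_default_volume_type(project_id):
    """Return the default volume type to use for a create request."""
    return project_defaults.get(project_id, SYSTEM_DEFAULT)

print(resolve_default_volume_type("project-a"))  # project-level default
print(resolve_default_volume_type("project-b"))  # falls back to system default
```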
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
(recording 4 starts here)<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
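As a rough illustration of what handling this at the REST API layer involves (a simplified sketch, not Cinder's actual code; real WSGI stacks do proper quality-value handling), the server would parse the Accept-Language header and pick the best available translation:<br />

```python
# Sketch: pick the best available message translation from an
# Accept-Language header. Simplified: entries are assumed to be listed
# in preference order, and q-values are stripped rather than sorted.

AVAILABLE = {"en", "de", "fr"}  # locales we have message catalogs for

def pick_language(accept_language, default="en"):
    """Return the first requested language we have a translation for."""
    for part in accept_language.split(","):
        # Each entry looks like "de-DE;q=0.8"; keep the primary tag.
        lang = part.split(";")[0].strip().split("-")[0].lower()
        if lang in AVAILABLE:
            return lang
    return default

print(pick_language("de-DE;q=0.9, fr;q=0.8"))  # picks "de"
print(pick_language("es, pt"))                 # falls back to the default
```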
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CI appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as pre-check. Others are seeing it as a check that is used along the way while an upgrade is in process to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
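For reference, the flag in question is set per backend in cinder.conf; a sketch (the backend section name and driver path are illustrative):<br />

```ini
[myvendor-backend]
# Hypothetical backend section; the driver path is illustrative.
volume_driver = cinder.volume.drivers.myvendor.driver.MyVendorDriver
# Without this flag, a driver marked unsupported will not initialize.
enable_unsupported_driver = true
```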
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, so you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts to occasionally clean up the database, but it would be better to fix this in Cinder.<br />
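One standard way to avoid this kind of race (a sketch of the general technique, not Eric's actual patch) is to make the reservation a single conditional UPDATE, so the limit check and the increment happen atomically inside the database rather than as a separate read-then-write:<br />

```python
# Sketch: atomic quota reservation via a conditional UPDATE, so two
# concurrent requests cannot both pass a stale "quota available" check
# and drive the counter negative or over the limit.
# sqlite3 is used for the demo; the technique works on any SQL backend.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE quotas (project TEXT PRIMARY KEY,"
           " in_use INTEGER, hard_limit INTEGER)")
db.execute("INSERT INTO quotas VALUES ('proj', 9, 10)")

def reserve(conn, project, amount):
    """Reserve quota; returns False rather than going negative/over-limit."""
    cur = conn.execute(
        "UPDATE quotas SET in_use = in_use + ? "
        "WHERE project = ? AND in_use + ? BETWEEN 0 AND hard_limit",
        (amount, project, amount))
    return cur.rowcount == 1

print(reserve(db, "proj", 1))    # succeeds: 9 + 1 <= 10
print(reserve(db, "proj", 1))    # refused: would exceed the limit
print(reserve(db, "proj", -20))  # refused: would go negative
```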
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether drivers support it, but it's not the kind of thing, like replication, that a driver either does or doesn't do. Also, there are options that have to be set in Nova in order for it to be useful (the [libvirt] volume_use_multipath option). So there doesn't seem to be a point in adding this to the support matrix.<br />
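For reference, the Nova-side setting mentioned above lives in nova.conf (shown here as a minimal fragment):<br />

```ini
[libvirt]
# Enable multipath for volumes attached on this compute node.
volume_use_multipath = true
```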
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they look for the volume going active but don't notice that it's in an error state, so they could fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
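The flag itself is a class-level attribute on the driver; a minimal sketch (the base class here is a simplified stand-in for Cinder's driver base class, but the attribute name matches what in-tree drivers use):<br />

```python
# Sketch: how a Cinder driver declares active-active support via a
# class attribute (SUPPORTS_ACTIVE_ACTIVE). The base class below is a
# simplified stand-in, not the real cinder.volume.driver base class.

class BaseVolumeDriver:
    # Defaults to False: the service will not run this driver in an
    # active-active (clustered) deployment.
    SUPPORTS_ACTIVE_ACTIVE = False

class RBDDriver(BaseVolumeDriver):
    # RBD has been tested to work safely with multiple cinder-volume
    # services managing the same backend, so it opts in.
    SUPPORTS_ACTIVE_ACTIVE = True

print(RBDDriver.SUPPORTS_ACTIVE_ACTIVE)
```

Setting the flag is only the claim, though; as noted above, the driver actually has to behave correctly (e.g. no races on connections) when multiple services run against the same backend.<br />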
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173310CinderUssuriPTGSummary2019-12-04T18:17:18Z<p>Jay Bryant: /* Cinder REST API V4 */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility <br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them.<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
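To make the "most permissive wins" behavior concrete, here's a stdlib-only toy model of the deprecation semantics (illustrative only, not the actual oslo.policy API):<br />

```python
# Minimal model of oslo.policy's deprecation window: while an old
# (deprecated) check string and a new default coexist, enforcement
# passes if EITHER rule passes, i.e. the most permissive one wins.
# This is an illustrative sketch, not the real oslo.policy API.

def check(rule, creds):
    """Evaluate a toy check string like 'role:admin' against creds."""
    key, _, value = rule.partition(':')
    return value in creds.get(key, [])

def enforce_with_deprecation(new_rule, deprecated_rule, creds):
    # During the deprecation period the union of both rules is enforced,
    # so operators still relying on the old policy are not broken.
    return check(new_rule, creds) or check(deprecated_rule, creds)

# New default requires the 'reader' role; old default required 'admin':
# a user with only 'admin' still passes during the deprecation period.
print(enforce_with_deprecation('role:reader', 'role:admin',
                               {'role': ['admin']}))  # True
```

Once the deprecation period ends, only the new rule would be enforced (unless the operator has overridden the policy in their policy file).<br />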
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3) but we can remove access to the API. Though actually that's not true either; there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be possible.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br />
[https://www.youtube.com/watch?v=a-kq_EYrkq0 Video Recording Part 2]<br />
<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
Currently there are several types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 us, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
Problem is that the current mutable config can be done for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
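As a rough stdlib-only model of what "mutable" means here (loosely inspired by oslo.config's `mutable=True` flag and `mutate_config_files()`; not Cinder's actual implementation): on a config reload, only options registered as mutable pick up the new value, and everything else keeps the value from service start-up.<br />

```python
# Toy model of mutable config options: on reload, only options that
# were registered with mutable=True take the new value; immutable
# options (e.g. ones shared across drivers) keep their original value.
# Illustrative sketch only -- the real mechanism is oslo.config's
# mutate_config_files(), plus driver support for re-reading the value.

class Conf:
    def __init__(self):
        self._values = {}
        self._mutable = set()

    def register(self, name, default, mutable=False):
        self._values[name] = default
        if mutable:
            self._mutable.add(name)

    def reload(self, new_values):
        """Apply a new config file; ignore changes to immutable options."""
        for name, value in new_values.items():
            if name in self._mutable:
                self._values[name] = value

    def __getitem__(self, name):
        return self._values[name]

conf = Conf()
conf.register('san_password', 'old-secret', mutable=True)
conf.register('volume_backend_name', 'backend1')   # not mutable
conf.reload({'san_password': 'new-secret',
             'volume_backend_name': 'backend2'})
print(conf['san_password'])         # new-secret
print(conf['volume_backend_name'])  # backend1
```

Even with this in place, a driver would still need a mechanism to notice the new credential value and re-establish its backend session, which is the driver-dependent part discussed above.<br />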
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
(recording 3 starts here)<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about going NFS-only and keeping it in the core - a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
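The selection part really is simple; a hypothetical sketch of the fallback chain (all names here are illustrative, the real design belongs in the spec):<br />

```python
# Hypothetical sketch of default volume-type selection: the most
# specific default wins, falling back to the cloud-wide
# default_volume_type config option. Names are illustrative only;
# the actual API and storage of per-project defaults is spec work.

def pick_default_volume_type(project_defaults, project_id,
                             service_default=None,
                             config_default='__DEFAULT__'):
    if project_id in project_defaults:   # per-project default
        return project_defaults[project_id]
    if service_default is not None:      # per-service-token default
        return service_default
    return config_default                # cloud-wide default

defaults = {'proj-a': 'fast-ssd'}
print(pick_default_volume_type(defaults, 'proj-a'))  # fast-ssd
print(pick_default_volume_type(defaults, 'proj-b'))  # __DEFAULT__
```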
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
(recording 4 starts here)<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we failover in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
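If the REST layer doesn't already handle it, the selection is essentially standard Accept-Language negotiation; a stdlib-only sketch of picking the best supported locale (a hypothetical helper for illustration, not Cinder code):<br />

```python
# Sketch of Accept-Language negotiation for user messages: parse the
# header's q-values and pick the highest-weighted locale we can serve.
# Hypothetical helper -- in practice this would live in (or already be
# handled by) the API / i18n layer.

def best_locale(accept_language, supported, default='en'):
    prefs = []
    for part in accept_language.split(','):
        piece = part.strip()
        if not piece:
            continue
        lang, _, qpart = piece.partition(';')
        try:
            q = float(qpart.split('=', 1)[1]) if qpart else 1.0
        except (IndexError, ValueError):
            q = 0.0
        prefs.append((q, lang.strip()))
    for _, lang in sorted(prefs, reverse=True):
        if lang in supported:
            return lang
    return default

# 'fr' is preferred but unsupported, so the next choice 'de' wins.
print(best_locale('de;q=0.8, fr;q=0.9, en;q=0.1', {'en', 'de'}))  # de
```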
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it, it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others are seeing it as a check that is used along the way while an upgrade is in process to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
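For reference, the flag in question is set per backend section in cinder.conf (the section and driver values below are illustrative):<br />

```ini
# cinder.conf -- opting in to keep using a driver that has been marked
# unsupported. The backend section name here is just an example.
[myvendor_backend]
volume_driver = ...
enable_unsupported_driver = True
```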
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think that it is: a user has a volume that was created from a glance image, and wants to upload it as an image; we want to give Glance info so that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
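The proposed visibility rule is simple enough to sketch (field and flag names here are hypothetical, pending the spec):<br />

```python
# Sketch of the proposed visibility rule for admin-oriented user
# messages: a message flagged admin-only is returned only when the
# requesting context is an admin. Field names are hypothetical.

def visible_messages(messages, is_admin):
    return [m for m in messages if is_admin or not m.get('admin_only')]

msgs = [{'id': 1}, {'id': 2, 'admin_only': True}]
print([m['id'] for m in visible_messages(msgs, is_admin=False)])  # [1]
print([m['id'] for m in visible_messages(msgs, is_admin=True)])   # [1, 2]
```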
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts that can be used to occasionally clean up the database, but it would be better to fix this in Cinder.<br />
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know if drivers do it, but it's not the kind of thing, like replication, that they either do or don't do. Also, there are options that have to be set in nova in order for it to be useful - the nova.conf option [libvirt]/volume_use_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
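Phase two then amounts to little more than a metadata change; a sketch of the setup.cfg form (assuming pbr picks this up the same way as in other OpenStack projects):<br />

```ini
# setup.cfg fragment (sketch): once merged, pip refuses to install
# the project on interpreters older than Python 3.6.
[metadata]
python-requires = >=3.6
```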
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be the "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they look for the volume going active but don't notice that it's in an error state, so they should fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
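The fail-fast idea can be sketched like this (illustrative code, not the actual tempest waiter):<br />

```python
import time

def wait_for_volume_status(get_status, desired='available',
                           timeout=300, interval=1,
                           error_statuses=('error', 'error_deleting')):
    """Poll a volume's status, failing fast on error states.

    get_status is a callable returning the current status string.
    Instead of burning the whole timeout waiting for a volume that
    will never go active, raise as soon as an error state is seen.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == desired:
            return status
        if status in error_statuses:  # fail fast, conserve gate time
            raise RuntimeError('volume entered %s state' % status)
        time.sleep(interval)
    raise TimeoutError('volume never reached %s' % desired)
```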
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist in this by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting, and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
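The flag in question is a class attribute on the driver; a minimal sketch (the class here is illustrative, though SUPPORTS_ACTIVE_ACTIVE is the attribute drivers actually set):<br />

```python
class ExampleBackendDriver:
    # In Cinder this would subclass cinder.volume.driver.VolumeDriver.
    # Setting this declares that the driver is written (and, ideally,
    # tested) to run with clustered, active-active cinder-volume
    # services; it defaults to False.
    SUPPORTS_ACTIVE_ACTIVE = True
```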
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2), Eric pointed out that if we were going to use the model that we used for this Virtual PTG, namely, 2-hour sessions spread over a couple of days, there's no reason why the sessions have to be close together, since we don't need to arrange any physical facilities. So we can be flexible and have 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderUssuriPTGSummary&diff=173309CinderUssuriPTGSummary2019-12-04T18:16:09Z<p>Jay Bryant: /* Thursday (Shanghai) */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Ussuri PTG held in Shanghai, China, November 7-8, 2019.<br />
It also contains a summary of the Virtual PTG held November 25 and 27, 2019.<br />
<br />
The sessions were recorded. Links to the recordings will be added here when they are available.<br />
<br />
The full etherpad and all associated notes may be found here:<br />
* Shanghai: https://etherpad.openstack.org/p/shanghai-ptg-cinder<br />
* Virtual: https://etherpad.openstack.org/p/cinder-ussuri-virtual-ptg-planning<br />
<br />
==Thursday (Shanghai)==<br />
===Cinder project onboarding and "meet the Cinder developers" session===<br />
Everyone who attended was 100% satisfied and very complimentary about Cinder. Unfortunately, no one attended, so we spent the time figuring out how to get the recording equipment connected and positioned properly.<br />
<br />
[https://www.youtube.com/watch?v=QQgiokX_EiI Video Recording Part 1]<br />
<br />
===Python 2 support & Work remaining to remove Py27 support===<br />
I came into this arguing that we need to keep Python 2 testing in master for a while -- at least while we are still supporting Python 2 in stable branches, because otherwise backports become a big problem (won't have a clean backport if any py3-only language features are used in a patch to master). Pretty much no one agreed with this.<br />
<br />
Sean pointed out that as libraries drop py2 support, we won't be able to use them in py2 testing anyway. Ivan and Sean can't wait to start ripping out py2-compatibility code. Gorka didn't think that the extra effort to modify backports would be that big a deal, and that if we're going to start using py3 for real, we might as well start now.<br />
====Actions====<br />
* Reminder to reviewers: we need to be checking the code coverage of test cases very carefully so that new code has excellent coverage and will be likely to fail when tested with py2 in stable branches when a backport is proposed<br />
* Reminder to committers: be patient with reviewers when they ask for more tests!<br />
* Reminder to community: new features can use py3 language constructs; bugfixes likely to be backported should be more conservative and written for py2 compatibility<br />
* Reminder to driver maintainers: ^^<br />
* Ivan and Sean have a green light to start removing py2 compatibility<br />
===Policy migration===<br />
Some background: https://etherpad.openstack.org/p/policy-migration-steps<br />
Keystone has added a default read-only role and service-scoped roles, but they don't do anything until projects write policies that use them<br />
oslo.policy code has a way to define a default policy + deprecated default policy; during deprecation, the most permissive wins. This will allow easy migration to new policies for operators.<br />
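The "most permissive wins" behavior can be modeled in a few lines (a pure-Python sketch of the semantics, not the oslo.policy API):<br />

```python
def enforce_during_deprecation(roles, new_check, old_check):
    """Model oslo.policy behavior while a default is deprecated:
    the request is allowed if EITHER the new default or the
    deprecated one allows it (the most permissive wins), so
    operators can migrate their policy files gradually."""
    return new_check(roles) or old_check(roles)

# Hypothetical defaults: the new one requires the 'reader' role,
# the deprecated one allowed any admin.
new_default = lambda roles: 'reader' in roles
old_default = lambda roles: 'admin' in roles
```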
<br />
There are still questions about how to set up testing for these. Keystone did only unit tests, but the tests were very heavyweight; had to set up all the users in the DB each time; they wish they had done things in tempest. But there is a concern that it may not be practical to do in tempest, either. (It may depend on the project.)<br />
<br />
A pop-up team is going to be started to help get the larger projects moved to the new Policy Code.<br />
====Actions====<br />
* rosmaita will investigate the different testing approaches. Note: It's possible that tempest will add the methods to create different users and the different projects will have to do their own testing using those.<br />
* rosmaita To look at the scoping options and understand what the impact on Cinder will be.<br />
** Need to create a matrix of our policies and different scopes.<br />
** Need to figure out how the administrative context fits in<br />
** Need to check current test coverage and where testing needs to be enhanced.<br />
** Don't have to have one person do it. It is possible to split up the work.<br />
* rosmaita and e0ne (and anyone else interested in this) to join the pop-up team to get more info and help get Cinder started.<br />
===Cinder V2 API removal===<br />
We can't just remove V2 code right now (e.g., the V2 extensions need to be moved to V3), but we can remove access to the API. Though actually that's not quite true either; there is a lot of background work that needs to be done before we remove the V2 API:<br />
* Tempest still assumes that the V2 API will be there. Need to fix it.<br />
* OpenStack Client also has some V2 API assumptions.<br />
* Devstack also will not work.<br />
<br />
Sean has a patch to see how badly things break with V2 removal: https://review.opendev.org/554372<br />
<br />
V3 is pretty much exactly the same as V2. We should be able to make the change and just have people switch the endpoint and have it work. It would be nice if we could just update the catalog, but that doesn't appear to be the case.<br />
====Actions====<br />
* follow up on this for the virtual PTG. What did we find with Sean's patch?<br />
** Create a list of the specific work items that need to be completed.<br />
** At that point we may be able to split the work up to an intern (if we have an intern).<br />
===Cinder REST API V4===<br />
We had talked in Vancouver about getting to a point after we have enough micro-versions piling up to move to V4.<br />
====Actions====<br />
* We don't need to do that in this release (let's get rid of V2 first), but it is something we need to keep in mind as a future goal.<br />
<br />
(recording 2 starts here)<br />
===Volume local cache===<br />
Requires both Cinder and Nova work:<br />
* Cinder spec: https://review.opendev.org/#/c/684556/<br />
* Nova spec: https://review.opendev.org/689070<br />
<br />
There are now various types of fast NVMe SSDs, such as the Intel Optane SSD, whose read/write throughput can reach 2-3 GB/s with latency around 10 us, while a typical remote volume for a VM delivers hundreds of MB/s with millisecond-level latency (iSCSI / RBD). These fast SSDs can therefore be mounted locally on the compute node and used as a cache for remote volumes. On the storage team's side, we need to add support in os-brick.<br />
<br />
Consensus was: there are some storage solutions this cannot be done for (Ceph, no mount point on host machine), some that might not require this (some vendors already have super-fast caching), and some it's worth doing for, so the overall feeling was supportive for this effort.<br />
<br />
See the PTG etherpad for details. Picture of the flip chart used during the discussion: https://twitter.com/jungleboyj/status/1192323512238776320<br />
====Actions====<br />
* Liang Fang to continue working on this<br />
===Mutable options ===<br />
The context for this is a NetApp customer request who wants to be able to change backend credentials without restarting any services.<br />
<br />
The problem is that the current mutable config support works for the REST API, but doesn't extend beyond that. Further, changing driver credentials is a little more work since it may require reloading the driver or having a mechanism in all drivers to recognize and handle that change. Also, we don't want config options that are shared across drivers to be mutable.<br />
<br />
Gorka pointed out that a driver supporting Active-Active would not need mutable options for this purpose. It would be better to implement Active-Active instead of refresh credentials this way. A/A HA support has been ready for several releases now, but so far RBD has been the only driver to test and enable it.<br />
<br />
The team feels that using Active/Active is the best way to go.<br />
====Actions====<br />
* Gorka volunteered to support the NetApp team if they choose to implement A/A<br />
* need to add to the developer docs that just making an option mutable in oslo.config does not solve the problem for drivers (more info on the etherpad)<br />
<br />
(recording 3 starts here)<br />
===Cross Project Discussion with Edge Working Group===<br />
Apparently the next version of TripleO will support storage at the edge. They were wondering if we knew anything about that. We don't.<br />
<br />
As far as edge persistent storage goes, telcos are thinking about having NFS only and hosting it in the core, which is a scary concept.<br />
<br />
In considering the edge use case, it is important to understand the physical limitations of what people have in mind. For example, one small telco rack, or a smaller DC with air conditioning, or a bigger DC with AC and bigger storage unit, etc. You really can't talk about "the edge" (insert U2 joke here).<br />
<br />
See the etherpad for more.<br />
===Default volume types depending on project or user===<br />
Having a single volume type default is too restrictive for bigger clouds with multiple AZs and many tenants/projects. Operators want more defaults to use in particular situations.<br />
<br />
The selection of which default to use is easy; the hard part of this will be the code enabling creation of the default at the end user/project level. Will need:<br />
* new API calls (create, show, list, update, delete), new microversion<br />
* client support<br />
* tell horizon about it<br />
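The selection logic itself could be as simple as the following (a sketch assuming per-user beats per-project beats the cinder.conf default; all names are hypothetical since the spec hasn't been written yet):<br />

```python
def resolve_default_volume_type(user_id, project_id,
                                user_defaults, project_defaults,
                                conf_default):
    """Return the most specific default volume type available."""
    if user_id in user_defaults:        # per-user override
        return user_defaults[user_id]
    if project_id in project_defaults:  # per-project override
        return project_defaults[project_id]
    return conf_default                 # cinder.conf default_volume_type
```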
====Actions====<br />
* geguileo - write the spec<br />
** request from Glance: we may also want a per-service default (triggered when a service token is passed)<br />
===Cinder retype doesn't use driver assisted migration===<br />
Gorka thinks this doesn't depend on the driver; he thinks it's broken for all drivers. There is code in the manager that prevents the efficient path from being taken:<br />
https://github.com/openstack/cinder/blob/ca5c2ce4e8ae9fbc92181ac4ba09cec3429a71e6/cinder/volume/manager.py#L2490<br />
There was a reason for it; we need to review and see if it still holds.<br />
<br />
Ivan thinks this is just a bug. Though we don't have a bug open for it.<br />
====Actions====<br />
* e0ne to investigate and fix it if he can verify that it is broken.<br />
===EOL some of the currently open branches===<br />
We have 8 open branches plus master (ussuri). Sent an email to the ML asking for data so we can make a good decision about this:<br />
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010385.html<br />
<br />
Got zero responses, so this apparently isn't seen as a big deal by the community.<br />
<br />
The policy is that we need to announce 6 months ahead of time the fact that we are planning to EOL a branch. This allows time for a vendor to come in and pick it up if necessary.<br />
So, if we want to drop branches we just need to announce that we are planning to EOL branches and then we can do it in 6 months.<br />
<br />
The driverfixes branches have not been used in quite a while.<br />
* Should we delete those? No, we don't really want to lose that history of commits.<br />
* Could we re-name them? Put 'archived' in the title or something to make it clear that it doesn't still take code. (Or just document that they are an archive of old driver fixes.)<br />
* When we EOL a driver we should probably make it a driverfixes branch. (Not clear on exactly what's being proposed here, need to follow up at VPTG.)<br />
====Actions====<br />
* rosmaita - find out about renaming branches from infra team; also, about read-only branches (change to gerrit so no patches can be proposed to the branch)?<br />
** proposal: EOL o, p and rename them archived-ocata, archived-pike<br />
* rosmaita - send proposal to ML that o, p are due to exit EM status in 6 months<br />
* revisit this at the Virtual PTG<br />
** the EOL policy was revised recently, no longer requires the 6 month waiting period<br />
** want to reconsider whether not deleting the EOL branches is a good idea if we're not going to merge anything into them<br />
<br />
(recording 4 starts here)<br />
===Discuss the latest User Survey Results===<br />
Here's a handy compiled list of only the Cinder responses: https://etherpad.openstack.org/p/cinder-2019-user-survey-question-responses<br />
====Actions====<br />
* replication needs better documentation so that people know we can failover and fail back correctly<br />
* ivan is planning to continue the generic backup driver work<br />
<br />
===Meeting with the Nova team===<br />
When we fail over in Cinder, volumes are no longer usable in Nova, but we don't tell Nova that the failover has occurred. Any procedure in Nova to correct the situation needs to be done manually. It would be better if we let Nova know that a failover has occurred so they can do something.<br />
<br />
A complication is that Nova can't simply detach and attach the volume because data that is in flight would be lost.<br />
<br />
How about boot from volume? In that case the instance is dead anyway because access to the volume has been lost. Could go through the shutdown, detach, attach, reboot path.<br />
Problem is that detach is going to fail. Need to force it or handle the failure. But we aren't sure that Nova will allow a detach of a boot volume. And we don't currently have a force detach API.<br />
<br />
Also discussed a possible Nova bug for images created from encrypted volumes: https://bugs.launchpad.net/nova/+bug/1852106 , though it's not clear that the scenario described in the bug can actually happen<br />
====Actions====<br />
* need to figure out how to pass the force to os-brick to detach volume and when rebooting a volume<br />
* rosmaita to investigate Bug #1852106<br />
<br />
==Friday (Shanghai)==<br />
===Meeting with the Glance team===<br />
====Support for Glance multiple stores in Cinder====<br />
References: (cinder spec) https://review.openstack.org/#/c/641267/<br />
<br />
The Cinder team is still OK with this idea (which was approved for Train).<br />
=====Actions=====<br />
* retarget spec for Ussuri<br />
* get Abhishek's patch reviewed<br />
====Image snapshot co-location====<br />
For the Edge use case, Glance is planning to use info provided by Nova about what image a server was booted from to co-locate snapshots of that server in the same store as the original image. Would like to do the same with Cinder volumes uploaded as images. Just need a header that specifies the "base" image of the volume being uploaded as an image. We agreed that this is a separate use case from the above.<br />
=====Actions=====<br />
* Abhishek will write the spec for Cinder<br />
====Glance Cinder driver is very limited====<br />
We think it uses only the default volume type, and also, it is not very well tested. We all agreed that this is a sad state of affairs.<br />
=====Actions=====<br />
* somebody should do something<br />
===Meet with Horizon about their proposed implementation of Cinder user messages===<br />
Horizon is interested in exposing the User Messages API. We agreed that this is a great idea.<br />
<br />
There's a question about having the message displayed in a requested language. It's possible that this is already handled at the REST API layer via the "Accept-Language" header. If it's not, that's probably the place to support this.<br />
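A rough sketch of what matching on that header involves (simplified; Cinder's API layer would do this via webob/oslo.i18n rather than hand-rolled parsing):<br />

```python
def best_match_language(accept_language, available=('en', 'de', 'es')):
    """Pick the first language from an Accept-Language header that
    we have a message translation for; fall back to English.

    Simplification: ignores q-weights and matches region subtags
    by stripping them (so 'de-DE' matches 'de').
    """
    for entry in accept_language.split(','):
        lang = entry.split(';')[0].strip().split('-')[0]
        if lang in available:
            return lang
    return 'en'
```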
====Actions====<br />
* rosmaita determine whether this would require a change to the API code, or whether existing code handles this already<br />
===Attach/Detach speed===<br />
Gorka was wondering whether there are any complaints about attach/detach speed in OpenStack, particularly since people are now using Cinder to provide volumes for Kubernetes (cinder in-tree driver, Cinder-CSI, Ember-CSI) and may be seeing a lot more attach/detach requests.<br />
<br />
Everybody seems to be OK with it; it's only geguileo who's complaining.<br />
====Actions====<br />
* not a concern at the moment<br />
===Topics from Train mid-cycle: status and carry-over to Ussuri===<br />
Notes about the Train mid-cycle: https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary<br />
<br />
Mid-cycle etherpad: https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning<br />
====Multiattach====<br />
All items need followup. Goals are:<br />
* short-term: document some guidance for how this feature should be tested<br />
* long-term: get some new tests into the cinder-tempest-plugin for this<br />
=====Actions=====<br />
* rosmaita draft the short-term document<br />
====iSCSI Ceph driver====<br />
Due to some downstream priorities changes, Walt is having trouble finding time to work on this.<br />
Ivan suggested that we encourage Walt to post whatever he's got, even if it's not working, so what he's learned isn't lost.<br />
There are some patches up and a github repo for some code Walt had to write that doesn't have a home in OpenStack or Ceph yet<br />
=====Actions=====<br />
* rosmaita: follow up with Walt<br />
* rosmaita: put together an etherpad with links to the work done so far<br />
====3rd Party CI Irregularities====<br />
Third-party testing by backend vendors of their driver code is very important to the project. But most of the 3rd Party CIs appear to be pretty unstable.<br />
<br />
For most vendors, updating their 3rd Party CI to run python 3.7 in Train was not a simple task. It would be good if we could offer them better guidance about how to set up & maintain their 3rd Party CI. Would also like vendors to be running the cinder-tempest-plugin, but don't want to make it a demand unless we can make the path easier. (BTW, Datera is running the cinder-tempest-plugin in their CI!)<br />
<br />
Third Party CI Docs (partial list)<br />
* https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
* https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers<br />
* https://docs.openstack.org/infra/system-config/third_party.html<br />
* https://docs.openstack.org/cinder/latest/contributor/drivers.html<br />
<br />
=====Actions=====<br />
* Luigi has some ideas about using RDO Software Factory as a basis for 3rd Party CI; need to follow up with him on that<br />
* Gorka: will check about what RDO has available <br />
* e0ne: will look to see who's using cinder-tempest-plugin<br />
* the team: after gorka and e0ne report back, reorganize & update the 3rd party CI docs<br />
<br />
====Improve Automated Test Coverage====<br />
We want to do this via the cinder-tempest-plugin. Sophia (enriquetaso) is mentoring an Outreachy intern who has begun some work on this. Eric has been writing bugs to suggest test cases that need to be addressed.<br />
<br />
====SQLAlchemy to Alembic migration====<br />
No progress on this. Put in a proposal for a summer intern to work on this; maybe we'll get lucky.<br />
<br />
See https://etherpad.openstack.org/p/cinder-train-ptg-planning (line #247) for more info.<br />
<br />
====Capabilities Reporting====<br />
Operators need to read the vendor's manual to figure out which extra specs they can write for a particular backend, and what they're used for. It would be nice if drivers reported their capabilities in a way that lets the operator figure out this info from the CLI.<br />
<br />
Everyone agreed that we still want to do this. It will require an API change and there's already a spec for this:<br />
https://review.opendev.org/#/c/655939/1/specs/train/backend_capabilities.rst<br />
=====Actions=====<br />
* revisit at the Virtual PTG and figure out who's interested in working on it<br />
<br />
===Cinder Business===<br />
====Cinder Ussuri Priorities====<br />
We will finalize this after the Virtual PTG, but here's the initial list:<br />
* Increase testing coverage<br />
* Increase number of CIs running cinder-tempest-plugins<br />
* Better support for third party CIs: Make their life easier by having a way to deploy a robust system<br />
* Volume types per user/project/service-token<br />
** better documentation<br />
* Generic Backups<br />
* Improve HA Active-Active documentation<br />
** want to make it easier to test it<br />
* remove V2 API<br />
* remove python 2 support<br />
====Cinder-core update====<br />
See http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010519.html<br />
We are at roughly the same review strength we had in Train.<br />
====Meeting Time Change update====<br />
We were holding off on this until after the Summit so that new contributors could participate in a poll.<br />
We'll consider the options from Liang Fang's original proposal at the Cinder weekly meeting: http://eavesdrop.openstack.org/meetings/cinder/2019/cinder.2019-10-23-16.00.log.html#l-166<br />
These are to move the meeting 1 or 2 hours earlier.<br />
There has also been some discussion on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010328.html<br />
=====Actions=====<br />
* rosmaita put together a community poll<br />
====Virtual PTG====<br />
We discussed what the format should be. Consensus was to do it over 2 consecutive days, using 2 hours each day. This should make it easier for people to participate in at least part of the meeting. We want to do it soon; consensus was the week after KubeCon to avoid conflicts. So that would be the last week in November.<br />
=====Actions=====<br />
* rosmaita put together a community poll to determine days/times<br />
====Virtual Mid-Cycle====<br />
There is interest in having a midcycle. Although everyone recognizes that face-to-face is the best, contributors have been having trouble getting travel support. So we decided to do a completely Virtual Mid-Cycle meetup for Ussuri. We decided to figure out the format after we see how the Virtual PTG works out.<br />
==Monday (Virtual)==<br />
===Forum session recap: Are You Using Upgrade Checks?===<br />
* https://etherpad.openstack.org/p/PVG-upgrade-check-forum<br />
Jay gave a quick recap of the Forum session. There are a number of action items in the etherpad above. They are assigned to jungleboyj right now as a TC action.<br />
<br />
There are still some questions about (a) how operators are using these, and (b) what kind of checks we should be providing from the development side. The Cinder team was seeing this as a pre-check. Others are seeing it as a check that is used along the way while an upgrade is in progress, to ensure that things are ready before operators start up their services. Pre-checks seem to make sense for us; Sean noted that we could add an option to do some pre-checks to the cinder-status command.<br />
<br />
So what should the Cinder team do during the Ussuri cycle (before we have the above issues settled)? At the very least, we should still add them when a driver is unsupported and subject to removal:<br />
* inform operators that in order to use an unsupported driver, a flag has to be set in cinder.conf<br />
* inform operators that they need to contact the vendor about whether they have plans to have the driver re-instated; otherwise, the operator needs to prepare to migrate the affected volumes to a backend with a supported driver for the next Cinder release<br />
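For reference, the flag the message would point operators at is a per-backend option in cinder.conf (a sketch; the backend section name is illustrative):<br />

```ini
[mybackend]
# Required to keep running this backend's driver after it has been
# marked unsupported; without it the driver will not initialize.
enable_unsupported_driver = true
```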
====Actions====<br />
* jungleboyj - Start a discussion on the mailing list to find out if anyone is actually using or has used the upgrade checks in production<br />
* need to figure out where the documentation for this goes<br />
===Snapshot co-location===<br />
The spec for this is: https://review.opendev.org/#/c/695630/<br />
<br />
This is related to glance multi-store support in Cinder, but the spec needs to specify more carefully what the use case is. We think it is: a user has a volume that was created from a glance image and wants to upload it as an image; we want to give Glance enough info that it can put the new image in the same store as the original image. (So use of the term "snapshot" here may be inaccurate.)<br />
<br />
This feature depends on the implementation of the other glance multistore spec: https://review.opendev.org/#/c/661676/<br />
====Actions====<br />
* Rajat, Abhishek - update the spec<br />
===Python 2 support removal===<br />
Gave a quick summary of what we discussed in Shanghai (see above) so that we're all on the same page.<br />
<br />
There's a patch up now removing py2 testing from Cinder: https://review.opendev.org/695317 . Once that's approved, will do the same for the other components.<br />
<br />
General advice to Cinder developers about using Python 3 language features: https://wiki.openstack.org/wiki/CinderUssuriPTGSummary#Actions<br />
<br />
====Actions====<br />
* rosmaita - get the testing/gate patches merged, then let the good times roll<br />
===User messages===<br />
Quick discussion of the admin action "leakage" issue discussed on https://review.opendev.org/#/c/694954/<br />
<br />
Consensus was that it would be useful to expose admin-oriented actions in user messages that only admins would be able to view. Maybe set a special flag when the message is created, and then use the admin context to decide whether this gets shown or not. Agreed that the message content will be same as we have currently (that is, don't expose any sensitive information even to admins). We can wait until admin-facing user messages are being used and get feedback about whether more info is required or not.<br />
<br />
Ivan pointed out that this change should not require a new microversion, since there's no change to the user message API and no change to the current response.<br />
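The flag-plus-context idea can be sketched like this. The field and class names are hypothetical; the real mechanism would live in Cinder's user-messages API code.<br />

```python
# Sketch of admin-only user-message filtering as discussed above.
# Hypothetical attribute names; illustrative only.

from dataclasses import dataclass

@dataclass
class Message:
    text: str
    only_for_admin: bool = False  # set when created from an admin action

@dataclass
class Context:
    is_admin: bool = False

def visible_messages(ctx, messages):
    """Admins see everything; regular users only non-admin messages."""
    if ctx.is_admin:
        return list(messages)
    return [m for m in messages if not m.only_for_admin]

msgs = [
    Message("quota exceeded"),
    Message("backend rejected request", only_for_admin=True),
]
print(len(visible_messages(Context(is_admin=False), msgs)))  # 1
```

Because only the set of returned messages changes, and not the shape of any response body, this matches Ivan's observation that no new microversion should be needed.<br />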
====Actions====<br />
* rosmaita - write up a spec<br />
===3rd Party CI irregularities===<br />
The issue we want to address is that the 3rd Party CI systems seem pretty unstable. We'd like to be able to provide some more support to make the infrastructures more reliable. Luigi suggested using RDO Software Factory as a basis for 3rd Party CI.<br />
<br />
References:<br />
* https://opendev.org/x/third-party-ci-tools<br />
* https://zuul-ci.org/docs/zuul/admin/quick-start.html<br />
====Actions====<br />
* Luigi - follow up with RDO team and get some feedback on how plausible this scenario is<br />
* e0ne - will look to see who's using cinder-tempest-plugin<br />
===Extending default volume type support for tenants===<br />
Quick recap of the Shanghai discussion (see above). Simon had mentioned that he might have a developer at Pure who'd be interested in doing the implementation. Rajat volunteered to help support the implementation.<br />
====Actions====<br />
* Gorka - write up the spec<br />
* rosmaita - follow up with Simon<br />
===Quotas!===<br />
Eric has a patch up that may fix one of many problems: https://review.opendev.org/#/c/695096/ <br />
Eric thinks the patch could be optimized if someone is interested.<br />
<br />
The general problem is that we update multiple tables and there can be (are) race conditions, and you wind up with strange situations like negative quota values or multiple quotas for the same project. Operators have posted some scripts to be run occasionally to clean up the database, but it would be better to fix this in Cinder.<br />
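The shape of the race, and the usual fix, can be shown with a small sketch: instead of reading the usage and then writing it back in a separate step, make the reservation a single conditional update. This is illustrative only; Cinder's real quota code works through SQLAlchemy against several tables.<br />

```python
# Sketch: a quota reservation done as one atomic conditional update,
# so concurrent reservations can never push usage past the limit.
# A lock stands in for the database's row-level atomicity here.

import threading

class QuotaTable:
    def __init__(self, in_use, limit):
        self.in_use = in_use
        self.limit = limit
        self._lock = threading.Lock()

    def reserve_atomic(self, amount):
        # Analogous (conceptually) to:
        #   UPDATE quota_usages SET in_use = in_use + :amount
        #   WHERE in_use + :amount <= :limit
        with self._lock:
            if self.in_use + amount <= self.limit:
                self.in_use += amount
                return True
            return False

q = QuotaTable(in_use=0, limit=10)
threads = [threading.Thread(target=q.reserve_atomic, args=(1,))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(q.in_use)  # 10 -- never exceeds the limit
```

With a naive read-then-write, two requests can both read the same usage value and both commit, which is how counts drift into the negative or duplicate-row states operators have reported.<br />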
===EOL for driverfixes/{m,n} and stable/{o,p}===<br />
Since the Shanghai discussion, a patch has merged that removes the 6 month waiting period for the transition from EM -> EOL: https://review.opendev.org/#/c/682381/<br />
<br />
There was a discussion about this in #openstack-tc last week: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-11-22.log.html#t2019-11-22T15:35:01<br />
<br />
Consensus is that we should go ahead and do this.<br />
====Actions====<br />
* rosmaita - send notice on the ML that we are going to do this in one week<br />
** http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011136.html<br />
* rosmaita - put up a release patch to EOL the branches<br />
** https://review.opendev.org/#/c/696173/<br />
===Driver support matrix===<br />
Follow-up from the discussion in Shanghai. A suggestion was made that multipath should be a specific category in the support matrix.<br />
<br />
Consensus is that multipath is more a feature of the backend than of the driver. It is useful to know whether drivers support it, but unlike replication, it isn't something a driver simply does or doesn't do. Also, there are options that have to be set in nova in order for it to be useful - nova:libvirt:use_volume_multipath. So there doesn't seem to be a point in adding this to the support matrix.<br />
==Wednesday (Virtual)==<br />
===v2 API removal update===<br />
Outward facing issues:<br />
* Sean's patch that removes v2 from the 'versions' response had some strange failures (but Rajat thinks there may be a quick fix).<br />
* Right now, devstack expects both v2 and v3 to be available and creates endpoints for both in the service catalog: https://opendev.org/openstack/devstack/src/branch/master/lib/cinder<br />
* On the other hand, it looks like tempest is v3 ready: https://review.opendev.org/#/c/530702/<br />
* Also, Ivan is pretty sure that Horizon uses v3 only.<br />
* We should notify Nova and Glance to make sure they don't rely on v2 for anything.<br />
<br />
Internal issues:<br />
* we should be able to clean up the stuff that v3 inherited from v2<br />
* we may not be able to clean up the v2/contrib stuff yet because of microversion reliance<br />
====Actions====<br />
* Rajat take a shot at fixing Sean's patch<br />
* rosmaita - work on the devstack stuff (service catalog)<br />
* we'll be optimistic about tempest<br />
* rosmaita - send a general email to the ML saying that we plan to do this<br />
===Forum session recap: How are you using Cinder's Volume Types?===<br />
Session etherpad: https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types<br />
<br />
Sean mentioned some highlights of the Forum session.<br />
* NECTAR is running a patch that allows a volume type to be assigned to an AZ: https://github.com/NeCTAR-RC/cinder/commit/d5a3d938a8e0934d31b5a3c568846b3d32843866<br />
* There were some questions about whether RBD supports volume online migration<br />
* Operators are interested in the "Support filter backend based on operation type" spec that was implemented in Rocky, but need some documentation explaining how to use it. The implementation is https://github.com/openstack/cinder/commit/e1ec4b4c2e1f0de512f09e38824c1d7e2fa38617<br />
====Actions====<br />
* e0ne is planning to do some testing around the RBD volume migration<br />
* need someone to pick up the documentation of "Support filter backend based on operation type"<br />
===Ceph iSCSI work===<br />
This is an important feature for Ironic.<br />
There's an etherpad gathering some of the work Walt's done on this: https://etherpad.openstack.org/p/cinder-ceph-iscsi-driver<br />
====Actions====<br />
* rosmaita - follow up with Walt about his bandwidth and ask him to add missing stuff to the etherpad<br />
===Ussuri community goals ===<br />
Goal 1: Drop Python 2.7 Support -- we're going to do this in 2 phases. First is to get the python 2 check and gate jobs removed so we aren't depending on any py27 in the gate. Second will be to make the changes that will only allow Cinder to be installed with at least py36. That will follow in January or so when any other project that needs to install Cinder in py27 for their own testing has removed that dependency. <br />
<br />
The cinder patch to drop py2 testing is https://review.opendev.org/#/c/695317/ -- once that's merged, we'll do the same for os-brick, cinderclient, the brick-client-ext, and cinderlib.<br />
<br />
Goal 2: Project Specific New Contributor & PTL Docs -- the goal is not yet approved; it's at the formal vote stage: https://review.opendev.org/#/c/691737/. From comments on the patch, expectations are that the current PTL will do this with help from former PTLs. Luckily, we have 2 former PTLs who are still very active with the project. The open issue right now is that the docs are supposed to be consistent across projects, but there isn't a template for this yet.<br />
<br />
There's a pre-selected goal for V to migrate all legacy zuul jobs: https://review.opendev.org/#/c/691278/. gmann has a patch up moving our legacy jobs (grenade) to the cinder repo and making them py3: https://review.opendev.org/#/c/695787/. At some point, we'll need to convert them to bona fide Zuul v3 jobs. Luigi left a bunch of info on the etherpad about what the moving parts for this are, and he pointed out that reviews are welcome.<br />
<br />
It looks like another V goal is going to be "Consistent and secure default policies" goal, which was floated on the ML: http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010291.html. A "Policy Popup Team" is being organized now to do some work during this cycle: https://review.opendev.org/#/c/695993/<br />
====Actions====<br />
* everyone - keep an eye on the "remove py2 support" patches<br />
* everyone - reviews welcome on https://review.opendev.org/#/q/status:open+branch:master+topic:grenade_zuulv3<br />
===Backup Service Testing===<br />
This was a follow-up from the Train mid-cycle. The backup tests fail intermittently with timeouts. The situation now is that the cinder backup tests have been removed from tempest-full, but they are being run with tempest-integrated-storage (which basically means that failures will hit us but shouldn't block other projects). The issue could still use some investigation; Eric has a suggestion on the etherpad to write an elasticsearch query to look in the c-bak log for a specific IOError instead of looking for the timeout.<br />
<br />
Eric also noted that the test jobs run a long time and fail with an unhelpful message -- they wait for the volume to go active but don't notice that it's in an error state and could fail sooner. Failing fast could conserve some gate resources. Maybe someone wants to follow up with something like https://review.opendev.org/#/c/565766/ ?<br />
====Actions====<br />
* anyone who's interested - follow up on this<br />
* rosmaita - (came up during this discussion but not related to this topic) update etherpads with nondestructive translation instructions (get the read-only link to the etherpad and use translations tools there instead of in the writeable etherpad)<br />
===Cycle Priorities===<br />
====Increase testing coverage====<br />
We want to get more thorough tests into the cinder-tempest-plugin. Sofia (enriquetaso) is mentoring an Outreachy intern, Anastasiya (anastzhyr), who will focus on this during her internship (3 Dec 2019 to 3 March 2020). The Cinder community can assist by (A) writing bugs tagged 'test-coverage' with specific ideas for tests that can be added, and (B) timely reviews of Anastasiya's patches.<br />
====Increase number of 3rd party CIs running cinder-tempest-plugin====<br />
This '''should be easy™''' for current 3rd party CIs (and actually may already be a requirement). Need to get a baseline for how many CIs are running it now.<br />
====Better support for 3rd Party CIs====<br />
We'd like to make their lives easier by having a standard way to deploy a robust system. This may be possible using RDO Software Factory. Luigi has already brought this up at the RDO meeting, and RDO (and the SF people) are supportive of this idea: http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_11_27/2019/rdo_meeting___2019_11_27.2019-11-27-15.00.log.html#l-76<br />
<br />
WIP 3rd party CI doc section for SoftwareFactory:<br />
* https://softwarefactory-project.io/r/#/c/17097/<br />
* https://softwarefactory-project.io/logs/97/17097/2/check/sf-docs-build/ae24483/docs-html/guides/third_party_ci.html <br />
====Default volume-type enhancement====<br />
Gorka's working on a spec for having per user/project/service-token default volume-type; basic idea is what was discussed at the PTG (see above).<br />
Related to this is improving the documentation around volume types.<br />
====Generic Backups====<br />
Related patches:<br />
* https://review.opendev.org/#/c/620881/<br />
* https://review.opendev.org/#/c/630305/<br />
====Improve Active-Active (HA) Documentation====<br />
We're anticipating that operators will want to run Cinder in HA mode (Cinder active-active). Currently RBD supports running in HA.<br />
<br />
A driver has to set a flag in order for the service to run in active-active. But in addition to setting the flag, it would be good for the feature to actually work with that driver. The problem is that what you need to do is very driver dependent. We don't have tempest tests that verify that a driver can run in HA (but it should be possible to add tests such that if they fail, then you know HA is not happening).<br />
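For driver developers, the opt-in is a class-level flag; the sketch below mirrors that mechanism with simplified stand-in classes (`SUPPORTS_ACTIVE_ACTIVE` is the attribute Cinder checks, but everything else here is illustrative, not the real driver base class).<br />

```python
# Simplified stand-ins for Cinder's volume driver classes; only the
# SUPPORTS_ACTIVE_ACTIVE flag mirrors the real mechanism.

class BaseVolumeDriver:
    # Drivers that have been written and tested for active-active
    # deployments override this to True.
    SUPPORTS_ACTIVE_ACTIVE = False

class LegacyDriver(BaseVolumeDriver):
    pass

class ClusteredDriver(BaseVolumeDriver):
    SUPPORTS_ACTIVE_ACTIVE = True

def can_join_cluster(driver_cls, cluster_name):
    """A clustered service should refuse to start with a non-HA driver."""
    return cluster_name is None or driver_cls.SUPPORTS_ACTIVE_ACTIVE

print(can_join_cluster(ClusteredDriver, "cluster1"))  # True
print(can_join_cluster(LegacyDriver, "cluster1"))     # False
```

As the notes say, setting the flag is the easy part; the documentation work is about what a driver must actually verify (races on connections, locking, etc.) before claiming it.<br />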
<br />
We need docs aimed at two audiences:<br />
* driver developers: want to clarify what you need to do to implement active-active and claim HA support. Should be able to give some general advice (like "watch out for race conditions on connection") to help in implementing and testing that their driver supports active-active.<br />
* operators: need to provide some advice about how to deploy Cinder in active-active mode (for example, you should have 3 API nodes, 3 scheduler nodes, 3 volume services)<br />
====Remove the v2 API====<br />
Should at least be able to get it out of the service catalog and remove the option to run it (and remove the option to not run v3), so that from an external point of view, all that's available is the v3 API. The refactoring to remove all v2 code from the API doesn't have to happen immediately.<br />
====Remove Python 2 support====<br />
This is a community goal (and it looks like we're in good shape to make this happen very early in the cycle).<br />
====Move away from sqlalchemy-migrate to alembic====<br />
We're getting closer to the point where we will have no choice.<br />
===Should we hold a Virtual Mid-Cycle?===<br />
Cycle Schedule: https://releases.openstack.org/ussuri/schedule.html<br />
<br />
The consensus was that we should have a Ussuri Mid-Cycle and that it should be virtual. While we were discussing the timing (have it close to the spec freeze? or closer to M-2?), Eric pointed out that if we use the model from this Virtual PTG, namely 2-hour sessions spread over a couple of days, there's no reason the sessions have to be close together, since we don't need to arrange any physical facilities. So we can be flexible and hold 2-hour sessions whenever it makes sense.<br />
<br />
Right now, the sensible times to have 2-hour virtual meetings are:<br />
* around the Cinder Spec Freeze. The spec freeze is the last week in January and is at 15 weeks, which is exactly the middle of the cycle. Maybe have the Virtual meet-up the week before so unmerged specs can have some discussion if necessary?<br />
* at the "Cinder New Feature Status Checkpoint" (week of 16 March 2020), which is 3 weeks before the final release for client libraries.<br />
====Actions====<br />
* rosmaita - get feedback from the wider team at the weekly meeting and organize polls to determine the day of week and time</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Forum/Shanghai2019&diff=172975Forum/Shanghai20192019-11-04T09:08:23Z<p>Jay Bryant: /* Tuesday November 5 */</p>
<hr />
<div>== Etherpads ==<br />
The grand list of all of the Shanghai 2019 [[Forum]] etherpads. Please add links to etherpads below!<br />
(You might use the prior Forum entries for ideas: https://wiki.openstack.org/wiki/Forum/Berlin2018 or https://wiki.openstack.org/wiki/Forum/Denver2019)<br />
<br />
At the Forum the entire OpenStack community (users and developers) gathers to brainstorm the requirements for the next release, gather feedback on the past version and have strategic discussions that go beyond just one release cycle. The Berlin Forum was the start of the planning phase for the '''U''' development cycle. Please prepare session ideas with feedback from the '''Train''' release in mind.<br />
<br />
=== Monday November 4 ===<br />
* [10:50-11:30] [https://etherpad.openstack.org/p/PVG-billing-openstack How should we do billing for OpenStack deployments?]<br />
* [11:40-12:20] [https://etherpad.openstack.org/p/PVG-Deletion-of-resources Project Resource Cleanup]<br />
* [11:40-12:20] [https://etherpad.openstack.org/p/shanghai-ptg-ops-war-stories Ops War Stories]<br />
* [13:20-14:00] [https://etherpad.openstack.org/p/placeholder Placeholder]<br />
* [13:20-14:00] [https://etherpad.openstack.org/p/PVG-forum-qa-ops-user-feedback Users/Operators adoption of QA tools/plugins]<br />
* [14:10-14:50] [https://etherpad.openstack.org/p/PVG-how-using-cinder-volume-types How are you using Cinder's Volume Types?]<br />
* [14:10-14:50] [https://etherpad.openstack.org/p/PVG-ironic-operator-feedback Ironic Operator Feedback]<br />
* [15:00-15:40] [https://etherpad.openstack.org/p/PVG-ironic-snapshot-support Ironic - Snapshots?]<br />
* [16:00-16:40] [https://etherpad.openstack.org/p/PVG-edge-wg-forum Edge Use Cases and Reference Architectures Discussion]<br />
* [16:50-17:30] [https://etherpad.openstack.org/p/PVG-keystone-forum-policy Next steps for policy in OpenStack]<br />
<br />
=== Tuesday November 5 ===<br />
* [09:00-09:40] [https://etherpad.openstack.org/p/placeholder Placeholder]<br />
* [09:50-10:30] [https://etherpad.openstack.org/p/PVG-upgrade-check-forum Are You Using Upgrade Checks?]<br />
* [10:50-11:30] [https://etherpad.openstack.org/p/placeholder Placeholder]<br />
* [11:40-12:20] [https://etherpad.openstack.org/p/placeholder Placeholder]<br />
<br />
=== Wednesday November 6 ===<br />
<br />
==List of Brainstorming Etherpads==<br />
<br />
===Catch-alls===<br />
If you want to post an idea, but aren't working with a specific team or working group, you can use these:<br />
*[https://etherpad.openstack.org/p/PVG-UC-brainstorming UC Catch All ]<br />
<br />
===Etherpads from Teams and Working Groups===<br />
* Auto-scaling SIG: https://etherpad.openstack.org/p/PVG-auto-scaling-sig<br />
* Charms: https://etherpad.openstack.org/p/shanghai-ptg-openstack-charms<br />
* Cinder: https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals<br />
* Cyborg: https://etherpad.openstack.org/p/PVG-forum-cyborg <br />
* First Contact SIG: https://etherpad.openstack.org/p/shanghai-forum-fc-sig-brainstorming<br />
* Heat: https://etherpad.openstack.org/p/PVG-heat<br />
* Manila: https://etherpad.openstack.org/p/manila-shanghai-forum-brainstorming<br />
* Meta SIG: https://etherpad.openstack.org/p/PVG-meta-sig<br />
* Neutron: https://etherpad.openstack.org/p/neutron-shanghai-forum-brainstorming<br />
* Oslo: https://etherpad.openstack.org/p/oslo-shanghai-topics<br />
* QA: https://etherpad.openstack.org/p/PVG-forum-qa-brainstorming<br />
* Self-healing SIG: https://etherpad.openstack.org/p/SHA-self-healing-SIG<br />
* StoryBoard: https://etherpad.openstack.org/p/storyboard-shanghai-ptg-planning<br />
* TC: https://etherpad.openstack.org/p/PVG-TC-brainstorming<br />
* Keystone: https://etherpad.openstack.org/p/PVG-keystone-forum<br />
* Public Cloud SIG: https://etherpad.openstack.org/p/PVG-PublicCloud-SIG-brainstorming<br />
* OpsMeetups Team: https://etherpad.openstack.org/p/PVG-OPS-Forum-Brainstorming<br />
<br />
===Etherpads from Pilot projects===<br />
* StarlingX: https://etherpad.openstack.org/p/PVG-StarlingX-brainstorming</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Meetings/Oslo&diff=172529Meetings/Oslo2019-09-23T19:47:34Z<p>Jay Bryant: /* Agenda Template */</p>
<hr />
<div>Oslo will hold IRC meetings weekly at the time scheduled below.<br />
<br />
If there's an Oslo topic you think warrants a project meeting, please add it to the agenda section below and notify the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list. Please give everyone at least 24 hours notice.<br />
<br />
'''Revised on:''' {{REVISIONMONTH1}}/{{REVISIONDAY}}/{{REVISIONYEAR}} by {{REVISIONUSER}}<br />
<br />
== Agenda for Next Meeting ==<br />
<br />
See http://eavesdrop.openstack.org/#Oslo_Team_Meeting<br />
<br />
* PTG - https://etherpad.openstack.org/p/oslo-shanghai-topics<br />
* Ping list for Ussuri<br />
* PTL for V<br />
<br />
=== Agenda Template ===<br />
Courtesy ping list for Ussuri: bnemec, <br />
<br />
#startmeeting oslo<br />
Courtesy ping for bnemec, jungleboyj, moguimar, hberaud, kgiusti, redrobot, stephenfin, johnsom, gsantomaggio<br />
#link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting<br />
#topic Red flags for/from liaisons<br />
#topic Releases<br />
#topic Action items from last meeting<br />
<One-off topics><br />
#topic Weekly Wayward Review<br />
#topic Open discussion<br />
#endmeeting<br />
<br />
== General Information ==<br />
=== Regular Meeting Schedule ===<br />
* What day: Monday<br />
* What time: [https://www.timeanddate.com/worldclock/converter.html?iso=20180212T150000&p1=1440&p2=195&p3=43&p4=4675&p5=224 1500 UTC]<br />
* Where: #openstack-oslo on freenode<br />
* Who: All are welcome to participate<br />
<br />
=== Notes from Previous Meetings ===<br />
<br />
'''Current: ''' http://eavesdrop.openstack.org/meetings/oslo<br />
<br />
'''Historical'''<br />
<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-11-16.01.html Jul 11, 2014] - topics: oslo.db exception handling; sprint report<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-27-16.00.html Jun 27, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-20-16.01.html Jun 20, 2014] - topics: oslo.db initial release; oslo.messaging good progress in neutron; alpha releases of 5 libraries next week; oslo.db test bugs reported by devananda<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-13-16.00.html Jun 13, 2014] - topics: oslo.db alpha release; db migration bug; <br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.html Jun 06, 2014] - topics: juno specs, spec approval process<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-30-16.00.html May 30, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-23-16.01.html May 23, 2014] - topics: osprofile (postponed), run_test.sh, juno specs, oslo.test issue in tempest<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-09-16.02.html May 09, 2014] - topics: oslo-specs, oslo.messaging, summit prep, oslo.db, oslo.i18n<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-04-25-16.00.html April 24, 2014] - topics: oslotest, oslo.db, oslo.i18n, creating a specs repo<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-28-14.00.html Feb 28, 2014] - topics: icehouse feature freeze; syncing cinder & nova; uuidutils<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-14-14.01.html Feb 14, 2014] - topics: oslo.db, icehouse-3<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-01-31-14.01.html Jan 31, 2014] - topics: translation, deprecation policy, adopting taskflow, stevedore, and cliff<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-11-15-14.01.html Nov 15, 2013] - topics: translation, pecan/wsme common code, icehouse scheduling<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-25-14.00.html Oct 25, 2013] - topics: deprecated decorator and delayed translation implementation plan<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html Oct 11, 2013] - topics: delayed translations<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html Aug 16, 2013] - topic was new messaging API, message security and reject/reque/ack<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-07-19-14.00.html July 19, 2013] - topic was new messaging API, message security, qpid/proton messaging driver and removing logging dependency on eventlet<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-06-07-14.00.html June 7, 2013] - topic was new messaging API and message security<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-05-03-14.01.html May 3, 2013] - topic was new messaging API and message security<br />
<br />
(In case the list of notes is not up to date, please consult http://eavesdrop.openstack.org/meetings/oslo/)</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=OpenStack_Upstream_Institute_Occasions&diff=172203OpenStack Upstream Institute Occasions2019-08-27T18:47:24Z<p>Jay Bryant: /* Shanghai Crew */</p>
<hr />
<div>==Shanghai Training, 2019==<br />
<br />
During the Open Infrastructure Summit Shanghai event, November 2-3, 2019<br />
<br />
=== Shanghai Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Gergely Csatari <br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|EST (GMT+3)<br />
|Docs, general processes, serving coffee<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central US Time (UTC-6)<br />
|Cinder, Storage, Docs, Oslo<br />
|Will be working with Paul Xu on-site in SH to help coordinate the event.<br />
|}<br />
<br />
==Tokyo Training, 2019==<br />
<br />
During the OpenStack Days Tokyo event, second half of July 23, 2019<br />
<br />
===Tokyo Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|UTC+9 (JST)<br />
|QA, Infra<br />
|<br />
|-<br />
|Kota Tsuyuzaki<br />
|kota<br />
|kota.tsuyuzaki.pc@hco.ntt.co.jp<br />
|UTC+9 (JST)<br />
|Swift<br />
|<br />
|-<br />
|Rikimaru Honjo<br />
|<br />
|honjo.rikimaru@po.ntt-tx.co.jp<br />
|UTC+9 (JST)<br />
|<br />
|<br />
|}<br />
<br />
==Denver Training, 2019==<br />
<br />
Before the Open Infrastructure Summit Denver, April 28-29, 2019.<br />
<br />
===Denver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Matt Oliver<br />
|mattoliverau<br />
|matt@oliver.net.au<br />
|UTC+11 (AEST)<br />
|Swift, First Contact SIG<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
|UTC+1100/+1000 (AEDT/AEST)<br />
|Requirements, Releases, Extended Maintenance, Nova(ish), Infra<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|UTC+1/2 (CET/CEST)<br />
|Keystone, Infra, Rpm-Packaging, First Contact SIG<br />
|Sunday only<br />
|<br />
|-<br />
|Jay Bryant<br />
|jsbryant<br />
|jsbryant@electronicjungle.net<br />
|UTC-6 (CDT)<br />
|Cinder, Manila, Docs, First Contact SIG<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|Will be in Board Meeting in parallel<br />
|}<br />
<br />
==Berlin Training, 2018==<br />
<br />
Before the OpenStack Summit Berlin, November 11-12, 2018.<br />
<br />
===Berlin Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|Will be in Board Meeting on Monday<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs. Community, Users<br />
|Will have Board Meeting one day<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|CET<br />
|Keystone<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training VM<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
| UTC +10<br />
| Stable, Release Management, Nova<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
| UTC-3<br />
| Manila<br />
|<br />
|-<br />
| Ell Marquez<br />
|<br />
| ellstripes@gmail.com<br />
|<br />
|<br />
|<br />
|-<br />
|Armstrong Foundjem<br />
|armstrong<br />
|foundjem@ieee.org<br />
|UTC-5<br />
|Mentoring<br />
|-<br />
|Daniel Abad<br />
|vabada<br />
|d.abad@cern.ch<br />
| UTC +1<br />
| Ironic<br />
|<br />
|}<br />
<br />
==Vancouver Training, 2018==<br />
<br />
Before the OpenStack Summit Vancouver, May 19-20, 2018.<br />
<br />
===Vancouver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|JST<br />
|QA(Tempest, stestr, openstack-health, stackviz, ...)<br />
|still need to figure out my travel<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs. Community, Users<br />
|Will have board meeting on Sunday afternoon<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|VM image,devstack,cli git vim, etc.<br />
|still need to figure out my travel<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Docs. I18n<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|EDT (UTC-4)<br />
|<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Rich Wellum<br />
|rwellum<br />
|rich.wellum@nokia.com<br />
|EST<br />
|Kolla, Openstack-Helm<br />
|<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
|UTC-3<br />
|Manila<br />
|<br />
|}<br />
<br />
==Sydney Training==<br />
<br />
Before the OpenStack Summit Sydney, November 4-5, 2017.<br />
<br />
===Sydney Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|Eastern Time, US<br />
|QA, Infra, API, Nova, Glance<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OpenStack-Ansible, Docs<br />
|<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central Time, US<br />
|Cinder, Docs<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|ART<br />
|Manila<br />
|<br />
|}<br />
<br />
==Copenhagen Training==<br />
<br />
Before the OpenStack Days Nordic event, October 18, 2017.<br />
<br />
===Copenhagen Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|}<br />
<br />
==London Office Hours==<br />
<br />
During the OpenStack Days UK event, September 26, 2017.<br />
<br />
===London Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|}<br />
<br />
==Beijing Training==<br />
<br />
Before the OPNFV Summit, June 14-15, 2017.<br />
<br />
===Beijing Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Dave Neary<br />
|dneary<br />
|dneary@redhat.com<br />
|Eastern Time, US<br />
|OPNFV and RDO - not directly in OpenStack<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|JiangShan 江姗<br />
|<br />
|jiangshan@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Leo Ma<br />
|<br />
|majiajun@unitedstack.com<br />
|<br />
|<br />
|<br />
|-<br />
|Rossella Sblendido<br />
|rossella_s<br />
|rsblendido@suse.com<br />
|CET<br />
|Neutron<br />
|<br />
|-<br />
|ShangXiao 尚啸<br />
|<br />
|shangxiao@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Boston training ==<br />
<br />
Before the OpenStack Summit, May 6-7, 2017.<br />
<br />
===Boston Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|Ansible<br />
|-<br />
|Flavio Percoco<br />
|flaper87<br />
|flavio@redhat.com<br />
|<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST (UTC+9)<br />
|QA<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Training Guides + Mentor for I18n<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jungleboyj@gmail.com<br />
|Central Time<br />
|Manila<br />
|-<br />
|Jay Pipes<br />
|jaypipes<br />
|jaypipes@gmail.com<br />
|Eastern US<br />
|-<br />
|KATO Tomoyuki<br />
|katomo<br />
|kato.tomoyuki@jp.fujitsu.com<br />
|<br />
|Docs, Training Guides, I18n<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central US<br />
|Cinder, os-brick, Storyboard<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmARC<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training Guides, Swift, Training VM<br />
|-<br />
|Marton Kiss<br />
|mrmartin<br />
|marton.kiss@gmail.com<br />
|CET<br />
|<br />
|-<br />
|Matt Dorn<br />
|madorn<br />
|madorn@gmail.com<br />
|Central Time, US<br />
|Docs, Training Guides<br />
|-<br />
|Miguel A Lavalle<br />
|mlavalle<br />
|malavall@us.ibm.com<br />
|US Central Time<br />
|Neutron, Tempest<br />
|-<br />
|Samantha Blanco<br />
|blancos<br />
|samantha.blanco@att.com<br />
|Eastern Time, US<br />
|Patrole, Murano<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|-<br />
|Trevor McCasland<br />
|trevormc<br />
|tm2086@att.com<br />
|Central Time, US<br />
|Neutron, Trove<br />
|-<br />
|Victoria Martínez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|<br />
|Manila<br />
|}</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=OpenStack_Upstream_Institute_Occasions&diff=172202OpenStack Upstream Institute Occasions2019-08-27T18:47:07Z<p>Jay Bryant: /* Shanghai Crew */</p>
<hr />
<div>==Shanghai Training, 2019==<br />
<br />
During the Open Infrastructure Summit Shanghai event, November 2-3, 2019<br />
<br />
=== Shanghai Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Gergely Csatari <br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|EST (GMT+3)<br />
|Docs, general processes, serving coffee<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central US Time (UTC-6)<br />
|Cinder, Storage, Docs, Oslo<br />
|Will be working with Paul XU on-site in SH to help coordinate the event.<br />
|}<br />
<br />
==Tokyo Training, 2019==<br />
<br />
During the OpenStack Days Tokyo event, second half of July 23, 2019<br />
<br />
===Tokyo Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|UTC+9 (JST)<br />
|QA, Infra<br />
|<br />
|-<br />
|Kota Tsuyuzaki<br />
|kota<br />
|kota.tsuyuzaki.pc@hco.ntt.co.jp<br />
|UTC+9 (JST)<br />
|Swift<br />
|<br />
|-<br />
|Rikimaru Honjo<br />
|<br />
|honjo.rikimaru@po.ntt-tx.co.jp<br />
|UTC+9 (JST)<br />
|<br />
|<br />
|}<br />
<br />
==Denver Training, 2019==<br />
<br />
Before the Open Infrastructure Summit Denver, April 28-29, 2019.<br />
<br />
===Denver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Matt Oliver<br />
|mattoliverau<br />
|matt@oliver.net.au<br />
|UTC+11 (AEST)<br />
|Swift, First Contact SIG<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
|UTC+1100/+1000 (AEDT/AEST)<br />
|Requirements, Releases, Extended Maintenance, Nova(ish), Infra<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|UTC+1/2 (CET/CEST)<br />
|Keystone, Infra, Rpm-Packaging, First Contact SIG<br />
|Sunday only<br />
|<br />
|-<br />
|Jay Bryant<br />
|jsbryant<br />
|jsbryant@electronicjungle.net<br />
|UTC-6 (CDT)<br />
|Cinder, Manila, Docs, First Contact SIG<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|Will be in Board Meeting in parallel<br />
|}<br />
<br />
==Berlin Training, 2018==<br />
<br />
Before the OpenStack Summit Berlin, November 11-12, 2018.<br />
<br />
===Berlin Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|Will be in Board Meeting on Monday<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have Board Meeting one day<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|CET<br />
|Keystone<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training VM<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
| UTC +10<br />
| Stable, Release Management, Nova<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
| UTC-3<br />
| Manila<br />
|<br />
|-<br />
| Ell Marquez<br />
|<br />
| ellstripes@gmail.com<br />
|<br />
|<br />
|<br />
|-<br />
|Armstrong Foundjem<br />
|armstrong<br />
|foundjem@ieee.org<br />
|UTC-5<br />
|Mentoring<br />
|-<br />
|Daniel Abad<br />
|vabada<br />
|d.abad@cern.ch<br />
| UTC +1<br />
| Ironic<br />
|<br />
|}<br />
<br />
==Vancouver Training, 2018==<br />
<br />
Before the OpenStack Summit Vancouver, May 19-20, 2018.<br />
<br />
===Vancouver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|JST<br />
|QA (Tempest, stestr, openstack-health, stackviz, ...)<br />
|still need to figure out my travel<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have board meeting on Sunday afternoon<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|VM image, devstack, cli, git, vim, etc.<br />
|still need to figure out my travel<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Docs, I18n<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|EDT (UTC-4)<br />
|<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Rich Wellum<br />
|rwellum<br />
|rich.wellum@nokia.com<br />
|EST<br />
|Kolla, Openstack-Helm<br />
|<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
|UTC-3<br />
|Manila<br />
|<br />
|}<br />
<br />
==Sydney Training==<br />
<br />
Before the OpenStack Summit Sydney, November 4-5, 2017.<br />
<br />
===Sydney Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|Eastern Time, US<br />
|QA, Infra, API, Nova, Glance<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OpenStack-Ansible, Docs<br />
|<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central Time, US<br />
|Cinder, Docs<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|ART<br />
|Manila<br />
|<br />
|}<br />
<br />
==Copenhagen Training==<br />
<br />
Before the OpenStack Days Nordic event, October 18, 2017.<br />
<br />
===Copenhagen Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|}<br />
<br />
==London Office Hours==<br />
<br />
During the OpenStack Days UK event, September 26, 2017.<br />
<br />
===London Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|}<br />
<br />
==Beijing Training==<br />
<br />
Before the OPNFV Summit, June 14-15, 2017.<br />
<br />
===Beijing Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Dave Neary<br />
|dneary<br />
|dneary@redhat.com<br />
|Eastern Time, US<br />
|OPNFV and RDO - not directly in OpenStack<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|JiangShan 江姗<br />
|<br />
|jiangshan@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Leo Ma<br />
|<br />
|majiajun@unitedstack.com<br />
|<br />
|<br />
|<br />
|-<br />
|Rossella Sblendido<br />
|rossella_s<br />
|rsblendido@suse.com<br />
|CET<br />
|Neutron<br />
|<br />
|-<br />
|ShangXiao 尚啸<br />
|<br />
|shangxiao@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Boston training ==<br />
<br />
Before the OpenStack Summit, May 6-7, 2017.<br />
<br />
===Boston Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|Ansible<br />
|-<br />
|Flavio Percoco<br />
|flaper87<br />
|flavio@redhat.com<br />
|<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST (UTC+9)<br />
|QA<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Training Guides + Mentor for I18n<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jungleboyj@gmail.com<br />
|Central Time<br />
|Manila<br />
|-<br />
|Jay Pipes<br />
|jaypipes<br />
|jaypipes@gmail.com<br />
|Eastern US<br />
|-<br />
|KATO Tomoyuki<br />
|katomo<br />
|kato.tomoyuki@jp.fujitsu.com<br />
|<br />
|Docs, Training Guides, I18n<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central US<br />
|Cinder, os-brick, Storyboard<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmARC<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training Guides, Swift, Training VM<br />
|-<br />
|Marton Kiss<br />
|mrmartin<br />
|marton.kiss@gmail.com<br />
|CET<br />
|<br />
|-<br />
|Matt Dorn<br />
|madorn<br />
|madorn@gmail.com<br />
|Central Time, US<br />
|Docs, Training Guides<br />
|-<br />
|Miguel A Lavalle<br />
|mlavalle<br />
|malavall@us.ibm.com<br />
|US Central Time<br />
|Neutron, Tempest<br />
|-<br />
|Samantha Blanco<br />
|blancos<br />
|samantha.blanco@att.com<br />
|Eastern Time, US<br />
|Patrole, Murano<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|-<br />
|Trevor McCasland<br />
|trevormc<br />
|tm2086@att.com<br />
|Central Time, US<br />
|Neutron, Trove<br />
|-<br />
|Victoria Martínez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|<br />
|Manila<br />
|}</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=OpenStack_Upstream_Institute_Occasions&diff=172201OpenStack Upstream Institute Occasions2019-08-27T18:46:46Z<p>Jay Bryant: /* Shanghai Crew */</p>
<hr />
<div>==Shanghai Training, 2019==<br />
<br />
During the Open Infrastructure Summit Shanghai event, November 2-3, 2019<br />
<br />
=== Shanghai Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Gergely Csatari <br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|EST (GMT+3)<br />
|Docs, general processes, serving coffee<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central US Time (UTC-6)<br />
|Cinder, Storage, Docs, Oslo<br />
|Will be working with Paul XU on-site in SH to help coordinate the event.<br />
|}<br />
<br />
==Tokyo Training, 2019==<br />
<br />
During the OpenStack Days Tokyo event, second half of July 23, 2019<br />
<br />
===Tokyo Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|UTC+9 (JST)<br />
|QA, Infra<br />
|<br />
|-<br />
|Kota Tsuyuzaki<br />
|kota<br />
|kota.tsuyuzaki.pc@hco.ntt.co.jp<br />
|UTC+9 (JST)<br />
|Swift<br />
|<br />
|-<br />
|Rikimaru Honjo<br />
|<br />
|honjo.rikimaru@po.ntt-tx.co.jp<br />
|UTC+9 (JST)<br />
|<br />
|<br />
|}<br />
<br />
==Denver Training, 2019==<br />
<br />
Before the Open Infrastructure Summit Denver, April 28-29, 2019.<br />
<br />
===Denver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Matt Oliver<br />
|mattoliverau<br />
|matt@oliver.net.au<br />
|UTC+11 (AEST)<br />
|Swift, First Contact SIG<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
|UTC+1100/+1000 (AEDT/AEST)<br />
|Requirements, Releases, Extended Maintenance, Nova(ish), Infra<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|UTC+1/2 (CET/CEST)<br />
|Keystone, Infra, Rpm-Packaging, First Contact SIG<br />
|Sunday only<br />
|<br />
|-<br />
|Jay Bryant<br />
|jsbryant<br />
|jsbryant@electronicjungle.net<br />
|UTC-6 (CDT)<br />
|Cinder, Manila, Docs, First Contact SIG<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|UTC-6 (CDT)<br />
|QA, Nova, API, Infra, First Contact SIG<br />
|Will be in Board Meeting in parallel<br />
|}<br />
<br />
==Berlin Training, 2018==<br />
<br />
Before the OpenStack Summit Berlin, November 11-12, 2018.<br />
<br />
===Berlin Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US (UTC-8)<br />
|Storyboard, Docs, First Contact SIG, Technical Elections, Mentoring & Outreach<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|Will be in Board Meeting on Monday<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have Board Meeting one day<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Colleen Murphy<br />
|cmurphy<br />
|colleen@gazlene.net<br />
|CET<br />
|Keystone<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training VM<br />
|<br />
|-<br />
|Tony Breeds<br />
|tonyb<br />
|tony@bakeyournoodle.com<br />
| UTC +10<br />
| Stable, Release Management, Nova<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
| UTC-3<br />
| Manila<br />
|<br />
|-<br />
| Ell Marquez<br />
|<br />
| ellstripes@gmail.com<br />
|<br />
|<br />
|<br />
|-<br />
|Armstrong Foundjem<br />
|armstrong<br />
|foundjem@ieee.org<br />
|UTC-5<br />
|Mentoring<br />
|-<br />
|Daniel Abad<br />
|vabada<br />
|d.abad@cern.ch<br />
| UTC +1<br />
| Ironic<br />
|<br />
|}<br />
<br />
==Vancouver Training, 2018==<br />
<br />
Before the OpenStack Summit Vancouver, May 19-20, 2018.<br />
<br />
===Vancouver Crew===<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Name !! IRC !! Mail !! Time Zone !! Projects/Areas !! Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Cinder, Nova, Telemetry<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Masayuki Igawa<br />
|masayukig<br />
|masayuki@igawa.io<br />
|JST<br />
|QA (Tempest, stestr, openstack-health, stackviz, ...)<br />
|still need to figure out my travel<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|gmann@ghanshyammann.com<br />
|JST (UTC+9)<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OSA, Docs, Community, Users<br />
|Will have board meeting on Sunday afternoon<br />
|-<br />
|Mark Korondi<br />
|kmarc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|VM image, devstack, cli, git, vim, etc.<br />
|still need to figure out my travel<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST (UTC+9)<br />
|Docs, I18n<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|EDT (UTC-4)<br />
|<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|CDT (UTC-5)<br />
|Cinder, Docs, Oslo, Manila<br />
|Will be there Saturday and Sunday<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Rich Wellum<br />
|rwellum<br />
|rich.wellum@nokia.com<br />
|EST<br />
|Kolla, Openstack-Helm<br />
|<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|vkmc@redhat.com<br />
|UTC-3<br />
|Manila<br />
|<br />
|}<br />
<br />
==Sydney Training==<br />
<br />
Before the OpenStack Summit Sydney, November 4-5, 2017.<br />
<br />
===Sydney Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Pacific Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|-<br />
|Matthew Treinish<br />
|mtreinish<br />
|mtreinish@kortar.org<br />
|Eastern Time, US<br />
|QA, Infra, API, Nova, Glance<br />
|<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST<br />
|QA, Nova, API<br />
|<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|OpenStack-Ansible, Docs<br />
|<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jsbryant@electronicjungle.net<br />
|Central Time, US<br />
|Cinder, Docs<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs<br />
|<br />
|-<br />
|Victoria Martinez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|ART<br />
|Manila<br />
|<br />
|}<br />
<br />
==Copenhagen Training==<br />
<br />
Before the OpenStack Days Nordic event, October 18, 2017.<br />
<br />
===Copenhagen Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmArc<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Docs, Upstream Institute<br />
|<br />
|}<br />
<br />
==London Office Hours==<br />
<br />
During the OpenStack Days UK event, September 26, 2017.<br />
<br />
===London Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|}<br />
<br />
==Beijing Training==<br />
<br />
Before the OPNFV Summit, June 14-15, 2017.<br />
<br />
===Beijing Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|Notes<br />
|-<br />
|Dave Neary<br />
|dneary<br />
|dneary@redhat.com<br />
|Eastern Time, US<br />
|OPNFV and RDO - not directly in OpenStack<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|<br />
|-<br />
|JiangShan 江姗<br />
|<br />
|jiangshan@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central Time, US<br />
|Cinder, os-brick, Storyboard<br />
|<br />
|-<br />
|Leo Ma<br />
|<br />
|majiajun@unitedstack.com<br />
|<br />
|<br />
|<br />
|-<br />
|Rossella Sblendido<br />
|rossella_s<br />
|rsblendido@suse.com<br />
|CET<br />
|Neutron<br />
|<br />
|-<br />
|ShangXiao 尚啸<br />
|<br />
|shangxiao@ctsi.com.cn<br />
|<br />
|<br />
|<br />
|}<br />
<br />
== Boston training ==<br />
<br />
Before the OpenStack Summit, May 6-7, 2017.<br />
<br />
===Boston Crew===<br />
<br />
{| class="wikitable"<br />
|Name<br />
|IRC<br />
|Mail<br />
|Time Zone<br />
|Projects/Areas<br />
|-<br />
|Amy Marrich<br />
|spotz<br />
|amy@demarco.com<br />
|Central Time, US<br />
|Ansible<br />
|-<br />
|Flavio Percoco<br />
|flaper87<br />
|flavio@redhat.com<br />
|<br />
|<br />
|-<br />
|Gergely Csatari<br />
|csatari<br />
|gergely.csatari@nokia.com<br />
|CET<br />
|Docs, api-refs, Training Guides<br />
|-<br />
|Ghanshyam Mann<br />
|gmann<br />
|ghanshyammann@gmail.com<br />
|JST(UTC+9)<br />
|QA<br />
|-<br />
|Ian Y. Choi<br />
|ianychoi<br />
|ianyrchoi@gmail.com<br />
|KST(UTC+9)<br />
|Training Guides + Mentor for I18n<br />
|-<br />
|Ildiko Vancsa<br />
|ildikov<br />
|ildiko@openstack.org<br />
|CET<br />
|Docs, Telemetry, Cinder, Nova<br />
|-<br />
|Jay Bryant<br />
|jungleboyj<br />
|jungleboyj@gmail.com<br />
|Central Time<br />
|Manila<br />
|-<br />
|Jay Pipes<br />
|jaypipes<br />
|jaypipes@gmail.com<br />
|Eastern US<br />
|<br />
|-<br />
|KATO Tomoyuki<br />
|katomo<br />
|kato.tomoyuki@jp.fujitsu.com<br />
|<br />
|Docs, Training Guides, I18n<br />
|-<br />
|Kendall Nelson<br />
|diablo_rojo<br />
|knelson@openstack.org<br />
|Central US<br />
|Cinder, os-brick, Storyboard<br />
|-<br />
|Mars Toktonaliev<br />
|marst<br />
|mars.toktonaliev@nokia.com<br />
|CST<br />
|<br />
|-<br />
|Mark Korondi<br />
|kmARC<br />
|korondi.mark@gmail.com<br />
|CET<br />
|Training Guides, Swift, Training VM<br />
|-<br />
|Marton Kiss<br />
|mrmartin<br />
|marton.kiss@gmail.com<br />
|CET<br />
|<br />
|-<br />
|Matt Dorn<br />
|madorn<br />
|madorn@gmail.com<br />
|Central Time, US<br />
|Docs, Training Guides<br />
|-<br />
|Miguel A Lavalle<br />
|mlavalle<br />
|malavall@us.ibm.com<br />
|US Central Time<br />
|Neutron, Tempest<br />
|-<br />
|Samantha Blanco<br />
|blancos<br />
|samantha.blanco@att.com<br />
|Eastern Time, US<br />
|Patrole, Murano<br />
|-<br />
|Sean McGinnis<br />
|smcginnis<br />
|sean.mcginnis@gmail.com<br />
|Central Time, US<br />
|Cinder<br />
|-<br />
|Trevor McCasland<br />
|trevormc<br />
|tm2086@att.com<br />
|Central Time, US<br />
|Neutron, Trove<br />
|-<br />
|Victoria Martínez de la Cruz<br />
|vkmc<br />
|victoria@redhat.com<br />
|<br />
|Manila<br />
|}</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainMidCycleSummary&diff=172199CinderTrainMidCycleSummary2019-08-27T17:45:07Z<p>Jay Bryant: Created page with "=== Introduction === This page contains a summary of the subjects covered during the Train Mid-Cycle held in Morrisville, North Carolina, USA, August 21 and 22, 2019. The ful..."</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Mid-Cycle held in Morrisville, North Carolina, USA, August 21 and 22, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train Mid-Cycle Summary=<br />
<br />
==Wednesday 8/21/2019==<br />
[https://www.youtube.com/watch?v=x2zuFy3stR8 Video Recording Part 1]<br />
<br />
=== Python-Cinderclient major version bump work===<br />
*'''Summary:''' Reviewed the open patches that were appropriate to release with a major version bump. Agreed that we should work on getting the patches merged and released in time to ship with Train.<br />
<br><br />
*'''Action (whoami-rajat):''' To fix up his patch for options being sent to --sort.<br />
*'''Action (team):''' To review outstanding patches and get them merged before 9/9/19.<br />
<br><br />
<br />
=== Cinder PDF Documentation Creation===<br />
*'''Summary:''' Walt has been working towards the Train goal of getting PDFs generated for our documentation. Has patches up for each of the Cinder projects and had some issues to work through for Cinder due to the size of the documentation.<br />
<br><br />
*'''Action (hemna):''' To remove WIP from the patches that are ready to be reviewed/merged.<br />
*'''Action (rosmaita):''' To look at problems building cinder-lib documentation and try to fix it up. <br />
<br><br />
<br />
=== Multi-Attach===<br />
*'''Summary:''' Multi-attach has been in place for a while now but we are not requiring any 3rd Party CI for it and are not sure how well this is actually working in the wild. Red Hat is planning to test it for their next release as it is one of the highlighted features.<br />
<br><br />
*'''Action (rosmaita):''' Going to draft documentation as to what tests should be run before submitting a multi-attach enablement patch.<br />
*'''Action (eharney):''' Concerned about the read only functionality for multi-attach. He is going to check into what, if any, support there is for this in libvirt.<br />
*'''Action (jungleboyj):''' To take a look at whether the drivers that have support enabled are running the multi-attach test case and whether it is passing.<br />
*'''Action (jungleboyj):''' To open bugs in the case that they aren't testing multi-attach. The flag will be removed if they don't respond to the bug.<br />
<br />
=== Discussion of Stable Backport Policies===<br />
*'''Summary:''' Concerns have been raised by the stable branch team about some of the things we have allowed for backport. The problem is that with longer lived stable branches and the removal of the driver fixes branches we are missing a method for backporting some changes that distributors need. We could not come up with another way to resolve this issue.<br />
<br><br />
*'''Action (jungleboyj):''' To update our documentation to explain why our backport policies are looser than other projects.<br />
*'''Action (jungleboyj):''' To communicate this to the stable release team and work to address/resolve any concerns. <br />
<br><br />
<br />
[https://www.youtube.com/watch?v=mMYLhOXjIqg Video Recording Part 2]<br />
<br />
=== Review of the Support Matrix===<br />
*'''Summary:''' In the last couple of summits it has been useful to review the support matrix for accuracy and needed additions. Once again this proved to be a useful exercise.<br />
<br><br />
*'''Action (jungleboyj):''' Remove the old matrix from the Wiki and move it to a location that is clearly marked as outdated.<br />
*'''Action (jungleboyj):''' Add manage/unmanage support.<br />
*'''Action (jungleboyj):''' Add manage/unmanage snapshot<br />
*'''Action (jungleboyj):''' Investigate whether list manageable is needed as a separate item.<br />
*'''Action (jungleboyj):''' For replication, check to see if we need to add failover/failback support as a separate item.<br />
<br><br />
<br />
=== iSCSI Ceph Driver Update===<br />
*'''Summary:''' The driver has been created and appears to work properly. The problem is that it has proven very hard to get a CI to work for the Ceph iSCSI driver. There aren't good pypi packages to support it and Ubuntu doesn't come with the right level of support. SuSE's LEAP supports it but is not supported by devstack. So, due to many complications, the driver missed Train but may be able to make it into the U release.<br />
<br><br />
*'''Action (hemna):''' To continue working on the driver and CI with the hope of solving issues for the U release.<br />
*'''Action (hemna):''' To work to see if there is a better way to implement deploying Ceph for Devstack. Ceph-Ansible may be a better fit.<br />
<br><br />
<br />
=== How to Deal with 3rd Party CI Testing Irregularities===<br />
*'''Summary:''' We continue to have issues getting many vendors to consistently run 3rd Party CI. The difficulty getting systems moved over to using Py3.7 recently has just once again highlighted the issues here. We were happy that some vendors have responded to the notes about Py3.7 but many still haven't. We will have to unsupport them as they will not work by the end of the U release.<br />
<br><br />
*'''Action (jungleboyj):''' Start unsupporting all the drivers that are not testing Py3.<br />
*'''Action (jungleboyj):''' To unsupport the IBM drivers as they have been out of compliance for quite a while now.<br />
*'''Action (jungleboyj):''' Review/update the Third Party CI requirements page.<br />
*'''Action (HELP NEEDED):''' Someone needs to check into what tests are being run by each vendor to make sure that they appear correct.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=30uwamc6FYk Video Recording Part 3]<br />
<br />
<br><br />
=== Appropriate Upgrade Checks===<br />
*'''Summary:''' There was some confusion/disagreement as to which changes should have upgrade checks created for them. There was also some confusion as to why the checks are backported to the previous release; this is done so that they can be run before upgrading the environment in question.<br />
<br><br />
*'''Action (jungleboyj):''' Ensure that we are checking for all the right drivers that have been removed in Train. Make any updates to things that aren't accurate.<br />
*'''Action (jungleboyj):''' Check to see what happens for drivers that are added to the check and then removed. Don't think we have an issue here, but need to ensure that is the case.<br />
*'''Action (jungleboyj):''' Look into adding an option to check for unsupported drivers and then ensure that the right flag is set for using unsupported drivers.<br />
*'''Action (jungleboyj):''' To create a forum topic proposal for this to find out how people are using this in their environments and what, if anything, should be done to improve the functionality.<br />
<br><br />
<br />
=== Improve Automated Test Coverage===<br />
*'''Summary:''' There is no disagreement that we have gaps in our automated test coverage. There are definitely Tempest API tests that need to be added. This would be something good for an Outreachy intern to help with perhaps.<br />
<br><br />
*'''Action (eharney):''' To put together a list of tests that need to be written. Perhaps open bugs to document all of them.<br />
*'''Action (eharney):''' Determine a tag to use in LaunchPad to use for all the bugs. 'test-coverage' maybe?<br />
<br><br />
<br />
=== Review of open reviews for client and non-client libraries===<br />
*'''Summary:''' Wanted to make sure that we were on track for our client and non-client library freeze dates. A scan of the open reviews showed that we were more or less in good shape.<br />
<br><br />
*'''Action (team):''' To do reviews and stay on top of changes as they come in.<br />
<br><br />
<br />
=== Deprecate the Back-up Service===<br />
*'''Summary:''' Deprecation was not really the goal of the discussion; the topic was designed to get our attention that backup may not be tested anymore. Backup testing has been removed from many check/gate jobs due to frequent failures. It wasn't clear whether it was still being tested anywhere. It appears that some of the tests are being run in check jobs, but not all of them.<br />
<br><br />
*'''Action (eharney):''' To dig into the failures that are being seen to figure out if there is any pattern to the failures.<br />
*'''Action (team):''' Need to figure out where to run these tests long term. We don't really have a good job right now that is focused on testing Cinder. Do we need to create one?<br />
<br><br />
<br />
==Thursday 8/22/2019==<br />
<br><br />
[https://www.youtube.com/watch?v=nkKMbz01WY8 Video Recording Part 1]<br />
<br><br />
=== v2 API Removal===<br />
*'''Summary:''' Been working for some time on getting the V2 API removed as it is a subset of the V3 API. Sean has a patch out there to propose the removal but it is failing the checks. We have decided that we don't want to try to get this into Train but it is something that we want to get fixed soon.<br />
<br><br />
*'''Action (jungleboyj):''' To follow up with Sean to find out if he is still working on this.<br />
*'''Action (HELP NEEDED):''' Devstack needs to be updated to not use V2 anymore.<br />
<br><br />
<br />
=== Active/Active HA Support===<br />
*'''Summary:''' Discussion about who, if anyone, is using this. Red Hat is interested as they are wanting to ship this with their next release. Currently Macrosan and Ceph are the only drivers that list it as supported. Unfortunately testing it is hard. Jon Bernard has done some verification with Ceph and will continue to do so.<br />
<br><br />
*'''Action (HELP NEEDED):''' Really should get some automated testing for this in place.<br />
<br><br />
<br />
=== Default Volume Type Change===<br />
*'''Summary:''' Agreed that we want to try to get this into Train and still have time. There was some discussion as to whether we needed to add a check to ensure that the new default type isn't deleted. The type, however, can't be deleted if it is in use so we agreed to not change the default behavior as it covers the concern raised.<br />
<br><br />
*'''Action (team):''' To review and work to merge the patch.<br />
*'''Action (jungleboyj):''' Schedule a Forum Session to understand how people are using Volume Types.<br />
<br><br />
<br />
=== Backup Test Leaking Notifications===<br />
*'''Summary:''' Had a ToDo from Denver to follow up on this issue. It appears that the patches Eric created to avoid having notifications impact other tests have worked. So, we can take this one off the list.<br />
<br><br />
<br />
=== Dependency Install Mechanism for Containers===<br />
*'''Summary:''' Had a follow-up from the Denver PTG to discuss this. Walt merged a patch that moved the code from the driver requirements file into setup.py. This will help to get containers properly configured for drivers. There are a few requirements that couldn't be included as they would not pass the global requirements checks.<br />
<br><br />
<br />
=== SQLAlchemy to Alembic Migration===<br />
*'''Summary:''' We have gotten the database migrations collapsed down which was one of the goals after Denver. There isn't documentation we are aware of on how to do this but there are examples in what Glance and Manila have done. We should get this done in the U release. Could possibly be a good work item for an Outreachy person.<br />
<br><br />
*'''Action (jungleboyj):''' Find out if there is a Deadline that this has to happen by.<br />
<br><br />
<br />
=== IPv6 Impact on Drivers===<br />
*'''Summary:''' Some unexpected issues were found in the LVM driver when using IPv6. Wanted to start discussion to make sure other drivers don't have an issue. There isn't a requirement for IPv6 but it is considered good practice to support it. Walt's experience has been that Cinder works with IPv6 and can even be deployed by Devstack to use IPv6.<br />
<br><br />
*'''Action (jungleboyj):''' Add a note to the driver development documentation that we strongly encourage IPv6 support and testing. Should also encourage people to use the host address option for IP addresses rather than a string or IP option.<br />
*'''Action (HELP NEEDED):''' We should look at driver config options to make sure that they are using the host address option.<br />
*'''Action (HELP NEEDED):''' Test os-brick with IPv6.<br />
<br><br />
[https://www.youtube.com/watch?v=YEqhdeiTlXw Video Recording Part 2]<br />
<br><br />
<br />
=== Replication===<br />
*'''Summary:''' Not really clear what the state of this function is. Did a lot of work to get it included but it isn't clear how well it works now. Gorka tested it with RBD a while ago and made some fixes. Not sure how well this works for other drivers.<br />
<br><br />
*'''Action (jungleboyj):''' Follow up with the vendors and find out if they are using this and how well it is working.<br />
<br><br />
<br />
=== Capabilities Reporting===<br />
*'''Summary:''' Continue to discuss this and the fact that it is something that we need to do for the good of Cinder. We have gotten beyond the blocks that, we believe, prevented this from happening in the past. Need to move forward to make this happen.<br />
<br><br />
*'''Action (eharney and hemna):''' Restore the old specs to restart this effort. The existing specs might be able to be combined into one.<br />
*'''Action (jungleboyj):''' Create a pointer to the links from the Denver Summit/PTG.<br />
<br><br />
[https://www.youtube.com/watch?v=rdAQau5oTTc Video Recording Part 3]<br />
<br><br />
<br />
=== Train Spec Review===<br />
*'''Summary:''' We did a review of the specs that we were shooting to get into Train and made updates in the etherpad. Details can be seen there. <br />
<br><br />
*'''Action (team):''' Review the etherpad and help to review code changes that need to go into Train.<br />
*'''Action (jungleboyj):''' Clean up specs that aren't going to make it as planned.<br />
*'''Action (jungleboyj):''' Create an untargeted folder for specs that are approved but not assigned to a release.<br />
<br><br />
<br />
=== Cinder Mutable Options===<br />
*'''Summary:''' Continued discussion around this feature. Something we have been discussing for quite some time. Doesn't appear that it has ever been fully implemented but there is continued interest in it. <br />
<br><br />
*'''Action (jungleboyj):''' Move the spec as it did not land in the release where it was targeted.<br />
*'''Action (jungleboyj):''' Follow up with NetApp to see if they are still working on this.<br />
<br><br />
<br />
=== Future Mid-Cycles===<br />
*'''Summary:''' Brian and the team had mixed feelings on this topic. The face to face meetings have been very productive but it is hard to get everyone to a physical location for the meetings. If not everyone can make it, it is better to just have an all virtual event. We weren't really able to reach an agreement so there will be follow-up.<br />
<br><br />
*'''Action (rosmaita):''' Put together a google survey to get a feeling from the team as to how they would like to proceed.<br />
<br></div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Cinder&diff=172177Cinder2019-08-26T19:48:31Z<p>Jay Bryant: /* PTG and Summit Meeting Summaries */</p>
<hr />
<div>'''Note:''' The wiki.openstack.org pages are for development team collaboration and documentation. If you are looking for official project documentation, please go to https://docs.openstack.org/cinder/latest/.<br />
<br />
'''Official Title:''' OpenStack Block Storage Cinder<br /><br />
<br />
'''PTL:''' Jay Bryant <jsbryant at electronicjungle d0t net><br /><br />
<br />
'''Mission Statement:''' <blockquote>To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.</blockquote><br />
<br />
== Description ==<br />
Cinder is a Block Storage service for OpenStack. It's designed to present storage resources to end users that can be consumed by the OpenStack Compute Project (Nova). This is done through the use of either a reference implementation (LVM) or plugin drivers for other storage. The short description of Cinder is that it virtualizes the management of block storage devices and provides end users with a self service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device.<br />
<br />
== Documentation ==<br />
See https://docs.openstack.org/cinder<br />
<br />
== Core Team ==<br />
See [https://review.openstack.org/#/admin/groups/83,members current members].<br />
<br />
== Project Meetings ==<br />
See [[CinderMeetings|Meetings/Cinder]].<br />
<br />
== Getting in Touch ==<br />
We use the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list for discussions using subjects with the prefix "[cinder]".<br />
* Mailing list archive: http://lists.openstack.org/pipermail/openstack-discuss/<br />
* For discussions prior to Mon Nov 19 00:04:26 UTC 2018, see the old "dev list" archive: http://lists.openstack.org/pipermail/openstack-dev/<br />
<br />
<br />
We also hang out on IRC in #openstack-cinder on freenode.<br />
* IRC logs are available in: [http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/ http://eavesdrop.openstack.org/irclogs/#openstack-cinder/]<br />
<br />
== Related projects ==<br />
* [https://github.com/openstack/python-cinderclient Python Cinder client]<br />
* [https://wiki.openstack.org/wiki/CinderBrick Brick]<br />
<br />
== Core Volume Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://docs.openstack.org/cinder/latest/reference/support-matrix.html<br />
<br />
== Contributing Code ==<br />
For any new features, significant code changes, new drivers, or major bug fixes, please add a release note along with your patch. See the [http://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes Reno Documentation] for details on how to generate new release notes.<br />
<br />
=== How To Contribute A Driver ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver How to contribute a driver]<br />
<br />
NOTE: For people working on getting their CI to handle Python 3, see [https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update Cinder Third Party CI update to Python 3.7]<br />
<br />
=== How To Contribute A New Feature ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-new-feature How to contribute a new feature]<br />
<br />
== Sample cinder.conf ==<br />
The cinder.conf.sample file is no longer maintained and tested in the source tree. Currently you can obtain a copy by running the command 'tox -e genconfig' in a cloned version of the Cinder project and then looking in etc/cinder/ for the cinder.conf.sample file. <br />
<br />
The newly generated file will have all options in the Cinder project, driver options included.<br />
<br />
For more information about the generation of the file, please see: [https://github.com/openstack/cinder/blob/master/doc/source/devref/genconfig.rst Cinder Sample Configuration Devref]<br />
<br />
== Resources ==<br />
===Etherpads===<br />
====Active====<br />
*[https://etherpad.openstack.org/p/cinder-spec-review-tracking Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/cinder-outreachy-project-ideas Outreachy Project Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-default-iscsihelper-lio Default iscsihelper LIO]<br />
<br />
<br />
====Historic====<br />
*[https://etherpad.openstack.org/p/cinder-nova-api-changes Cinder/Nova API Changes]<br />
*[https://etherpad.openstack.org/p/newton-cinder-midcycle Newton Midcycle]<br />
*[https://etherpad.openstack.org/p/newton-cinder-summit-ideas Newton Summit Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-mataka-release-final-push Mitaka Final Push]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking Mitaka Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-midcycle Mitaka Midcycle Meetup- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-mitaka-summit-topics Mitaka Summit- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-meetup-summer-2015 Liberty Midcycle Meetup- Notes]<br />
*[https://etherpad.openstack.org/p/cinder-liberty-midcycle-meetup Liberty Midcycle Meetup- Planning]<br />
<br />
=== Review Links ===<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib%29+status%3Aopen&title=Cinder+Priorities+Dashboard&High+Priority+Changes=label%3AReview%2DPriority%3D2&Priority+Changes=label%3AReview%2DPriority%3D1&Blocked+Reviews=label%3AReview%2DPriority%3D%2D1 Cinder Priority Reviews Dashboard]<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fcinder%2Dspecs%29+status%3Aopen&title=Cinder+Review+Dashboard&Cinder+Specs=project%3Aopenstack%2Fcinder%2Dspecs&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2+NOT+reviewedby%3Aself&Small+Patches=NOT+label%3ACode%2DReview%3C%3D%2D1%2Ccinder%2Dcore+delta%3A%3C%3D10&Bug+Fixes+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+NOT+owner%3Aself+limit%3A50+branch%3Amaster+topic%3A%5Ebug.%2A+NOT+reviewedby%3Aself&Blueprints+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+NOT+topic%3A%5Ebug.%2A+NOT+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&5+Days+Without+Feedback=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+NOT+is%3Areviewed+age%3A5d&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself&Stable+Branches=branch%3A%5Estable%2F.%2A+NOT+reviewedby%3Aself Cinder Projects Review Inbox]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= In progress bugs]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= New bugs]<br />
* Stable Branches Reviews<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3A%5Estable%2F.%2A%29&title=Cinder+Project%3A+All+Stable+and+Driverfix+Branches&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself all stable and driverfix branches]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fstein&title=Cinder+stable%2Fstein+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/stein only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Frocky&title=Cinder+stable%2Frocky+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/rocky only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fqueens&title=Cinder+stable%2Fqueens+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/queens only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3Astable%2Focata+OR%0Abranch%3Astable%2Fpike%29&title=Cinder+Extended+Maintenance+Branches+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself extended maintenance (including driverfixes) only]<br />
<br />
=== PTG and Summit Meeting Summaries ===<br />
*[[CinderTrainMidCycleSummary|Train Mid-Cycle Summary]]<br />
*[[CinderTrainSummitandPTGSummary|Train Summit and PTG Summary]]<br />
*[[CinderSteinMidCycleSummary|Stein Mid-Cycle Summary]]<br />
*[[CinderSteinPTGSummary|Stein PTG Summary]]<br />
*[[VancouverSummit2018Summary|Vancouver Summit 2018 Summary]]<br />
*[[CinderRockyPTGSummary|Rocky PTG Summary]]<br />
*[[CinderQueensPTGSummary|Queens PTG Summary]]<br />
*[[CinderPikePTGSummary|Pike PTG Summary]]<br />
<br />
=== Cinder YouTube Channel ===<br />
* [https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ/videos Midcycle/PTG Videos and Related Content]<br />
<br />
[[Category: Cinder]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=170006CinderTrainSummitandPTGSummary2019-05-14T19:52:39Z<p>Jay Bryant: /* Train PTG Summary */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
*'''[https://www.dropbox.com/sh/fydqjehy9h5y728/AABBJVYSTddOKYFP4XIUyATea/Cinder?dl=0&subfolder_nav_tracking=1 Team Photos] '''<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now when we clone volumes we just make a copy of the encryption key. Seems like this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. Can work around the issue by ignoring the notifications but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them, make sure they do a proper try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
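As an illustration of the guarded-import pattern discussed above, here is a minimal sketch; "vendor_sdk" and the setup-check function are hypothetical stand-ins, not an actual Cinder driver or its dependency:

```python
# Sketch of the guarded optional-dependency import pattern: the driver
# module stays importable even when its external package is missing, and
# setup fails with a clear message instead of a raw ImportError.
try:
    import vendor_sdk  # hypothetical external package a driver might need
except ImportError:
    vendor_sdk = None


def check_for_setup_error():
    """Fail driver setup clearly when the optional package is absent."""
    if vendor_sdk is None:
        raise RuntimeError(
            "The 'vendor_sdk' package is required for this driver; "
            "install it to enable the backend.")
```

This keeps container images that omit the vendor package from breaking at import time; the error only surfaces if the driver is actually enabled.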
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting cleartext passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from SQL-Alchemy to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things in privsep as we are still using the rootwrap style of commands instead of breaking things down to using python functions to be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place and Ivan has continued to work on it. Need to review the patches that are out there to help this.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess and it would be good to clean up. There is also a mess in the code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.<br />
<br><br />
<br />
==Friday 5/3/2019==<br />
<br />
[https://www.youtube.com/watch?v=d6QYQTOzJRM Video Recording Part 5]<br />
<br />
===Cinderclient design discussion===<br />
*'''Summary:''' There are issues in cinderclient when it comes to filtering. There are bugs and unexpected behavior that should be fixed.<br />
<br><br />
*'''Action (whoami-rajat):''' Update patch to allow multiple '--filters' to be specified. https://review.opendev.org/#/c/587610/<br />
*'''Action (whoami-rajat):''' Fix the bugs documented in the etherpad by Eric.<br />
<br><br />
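The repeatable '--filters' behavior discussed above can be sketched with standard argparse; this is an illustrative example of the general technique, not the actual cinderclient implementation:

```python
# Sketch of accepting a repeatable --filters KEY=VALUE option and
# collecting the pairs into a single dict, as a generic CLI pattern.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--filters', action='append', metavar='KEY=VALUE',
                    help='may be specified multiple times')


def parse_filters(argv):
    """Merge every --filters occurrence into one {key: value} dict."""
    args = parser.parse_args(argv)
    # args.filters is None when the flag was never given.
    return dict(item.split('=', 1) for item in args.filters or [])
```

With `action='append'`, each `--filters name=vol1 --filters status=available` pair accumulates instead of the last one silently winning.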
<br />
===Default behavior of listing (volume|group) types===<br />
*'''Summary:''' The default behavior between the client and the API is inconsistent. We should understand this and fix it.<br />
<br><br />
*'''Action (whoami-rajat):''' Fix the bug where the server hides private types from the user that has access to them.<br />
*'''Action (team):''' Review the patch that is out there and decide if we want to merge it: https://review.opendev.org/#/c/641698/<br />
<br><br />
<br />
===Avoiding untyped volumes update===<br />
*'''Summary:''' Despite additional design discussion, we landed back on the original design proposal of creating a new default type that is unlikely to clash with anything an administrator previously created.<br />
<br><br />
*'''Action (eharney):''' Write a bug to evaluate whether our default volume type should be public.<br />
*'''Action (e0ne):''' Write a bug to deal with the fact that default_volume_type can be set in cinder.conf but the type doesn't exist. You can create with no type, but it should fail.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=r3NMEuReQ-Q Video Recording Part 6]<br />
<br />
=== Leverage Hardware acceleration in Cinder===<br />
*'''Summary:''' Intel is interested in speeding up image compression/decompression using accelerators. The Cinder team is OK with this but thinks it should be implemented in a generic manner via oslo.utils or something similar.<br />
<br><br />
*'''Action (lixiaoy1):''' Propose new functionality to Oslo.<br />
*'''Action (lixiaoy1):''' Work with Cinder to get use of the new library function implemented. Same will need to be done with Glance.<br />
*'''Action (lixiaoy1):''' Finally work with Nova to get the new way of compressing images and new format supported.<br />
<br><br />
<br />
===Cross Project time with Nova===<br />
*'''Summary:''' Discussed a number of different topics but there were no significant work items to come out of the discussion for Cinder. Details can be seen here: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder<br />
<br><br />
<br />
===Status of multiattach===<br />
*'''Summary:''' Red Hat has started testing in their environment and they are seeing some race conditions, etc. They are addressing issues as they find them.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=STYpJ5GwmeY Video Recording Part 7]<br />
<br />
===Cinder mutable options===<br />
*'''Summary:''' This is a community goal that we need to get appropriately implemented. Not totally clear as to where our implementation is at and we would like to address that.<br />
<br><br />
*'''Action (erlon):''' Update this spec https://review.opendev.org/656011 to be current and include results of our discussion.<br />
*'''Action (erlon):''' Make sure that we have the right plumbing in place to support this functionality. It sounds like we may already have it there.<br />
*'''Action (erlon):''' Come up with a way to mark in our sample config output which options are reloadable and which are not.<br />
<br><br />
<br />
===Cinder support for Glance multi-store feature===<br />
*'''Summary:''' Glance supports multiple stores and it should be possible to decide which store is used when Cinder interacts with Glance.<br />
<br><br />
*'''Action (rosmaita):''' Work with abhishek to update the spec that is out there to initially implement this support via volume_type. If the approach isn't well received, other solutions can be considered.<br />
<br><br />
<br />
===Supporting fast copy volume to image===<br />
*'''Summary:''' Storage backends like Datera and Ceph can copy images faster using backend support, but there isn't currently a code path for this. We should correct this.<br />
<br><br />
*'''Action (_alastor_):''' Propose the work he has done in Glance locally to upstream.<br />
*'''Action (_alastor_):''' Work with rosmaita to get other details for implementing this worked out.<br />
<br><br />
<br />
===Cinder handling image-associated metadata===<br />
*'''Summary:''' We need to come up with a better framework for Cinder to handle Glance's image metadata. There are fields like those for the image signature that we don't want to be saving.<br />
<br><br />
*'''Action (rosmaita):''' Will write up a spec with a proposal for the fields that shouldn't be saved in Cinder.<br />
<br><br />
<br />
===Capabilities reporting update===<br />
*'''Summary:''' HP actually started implementing a lot of this work back in Liberty. We just need to figure out where that work was left and keep pushing things forward.<br />
<br><br />
*'''Action (_alastor_):''' Help the team document how this should be utilized by vendor's drivers.<br />
*'''Action (eharney):''' Understand the capabilities support that was merged back in Liberty.<br />
*'''Action (eharney):''' Write a new spec that references the old spec and then makes appropriate updates on what the functionality really is.<br />
*'''Action (team):''' Need to get the reference architectures updated to report capabilities properly.<br />
*'''Action (team):''' If the reference architectures go well then we should probably add this as a requirement for drivers.<br />
*'''Action (jungleboyj):''' Set up a bi-weekly meeting to discuss this. <br />
<br><br />
<br />
===python-cinderclient major version release===<br />
*'''Summary:''' There are a number of changes we want to get in before doing a Cinderclient release with a major version change. We agreed that we want to change the way we are handling MV to default to the highest available and then downgrade to the highest version that the server supports.<br />
<br><br />
*'''Action (eharney):''' Update https://review.opendev.org/#/c/647871/ to implement the support agreed upon above.<br />
*'''Action (rosmaita):''' Will wait to do a cinderclient release until after all the changes that require a major version bump are in place.<br />
<br><br />
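The negotiation scheme agreed above -- default to the client's highest known microversion, then downgrade to the highest the server supports -- can be sketched as follows; the function name and version numbers are illustrative, not the real python-cinderclient API:

```python
# Sketch of API-microversion negotiation: start from the client's
# maximum and downgrade to the server's ceiling, failing only when the
# ranges do not overlap at all.

def negotiate(client_max, server_min, server_max):
    """Pick the highest microversion both sides support, or fail.

    Versions are (major, minor) tuples, which compare correctly with
    plain tuple ordering.
    """
    if client_max >= server_max:
        chosen = server_max      # downgrade to what the server offers
    else:
        chosen = client_max      # server is newer than this client
    if chosen < server_min:
        raise ValueError("no mutually supported microversion")
    return chosen
```

For example, a client that knows up to 3.59 talking to a server capped at 3.44 would settle on 3.44 (the exact numbers here are hypothetical).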
<br />
===cinderclient integration with OpenStack===<br />
*'''Summary:''' Our current approach to client development is not great, as things are going into python-cinderclient and then eventually being picked up for openstackclient without any Cinder core review. We would like to look at making 'openstack volume' commands aliases to Cinder commands made available via a plugin.<br />
<br><br />
*'''Action (abishop):''' Will reach out to Dean Troyer and find out if they would be supportive of this approach.<br />
*'''Action (abishop):''' Investigate the questions and concerns raised during our discussion to get answers and bring them back to the team.<br />
<br><br />
<br />
=== Continued Storyboard Discussion?===<br />
*'''Summary:''' Team is still not in a hurry to make this change. Manila isn't planning to move during Train. Don't know that anyone else is either so we are not going to push this further at this point in time.<br />
<br><br />
<br />
===py37 failures===<br />
*'''Summary:''' The team can't agree on whether to add a py37 job to our tests because right now it is always failing. Sean thinks we should add it, as we know OpenStack is going to be moving to newer Python versions and we need to get test coverage in place.<br />
<br><br />
*'''Action (jungleboyj):''' To reach out to Helen Walsh asking her team to investigate why their driver fails and to please address.<br />
<br />
===Placement Discussion===<br />
*'''Summary:''' Placement is agnostic as to what goes into the service. If we have information that can be of use to them, they will use it. No one, however, is asking for this right now so it was felt that our efforts could be better utilized elsewhere. So, no action is needed right now.<br />
<br />
<br><br />
<br />
=OpenInfra Summit Forum Sessions=<br />
<br><br />
==Cinder Capability Reporting==<br />
We held a forum session to discuss how Cinder can better report capabilities from drivers. The goal is to make it possible for administrators to better understand the capabilities available from their storage backends. If this information is more readily accessible, it should then be easier to create volume types that more completely utilize those backends.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=avZgbk8hh2s https://www.youtube.com/watch?v=avZgbk8hh2s]<br />
<br><br />
The outcome of this session was the discovery that much of the work for this had already been done by HP back in the Liberty release and that we really just needed to come to agreement on completing the work. We added further discussion to our PTG agenda, and you can see the results of that discussion above.<br />
<br><br><br />
==Cinder User Feedback Session==<br />
This forum session was a second attempt to give users of Cinder the opportunity to share their concerns about Cinder and to ask questions.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=UVN42jsUq-U https://www.youtube.com/watch?v=UVN42jsUq-U]<br />
<br><br />
As with past attempts, the session was lightly attended. The administrators in the room all indicated that Cinder generally works pretty well. There were requests for some functionality that had already gone into recent releases, and some questions about limits on the number of volumes that can be attached. That problem appears to be a limitation of virtio.<br />
<br><br />
==Cinder Project Update==<br />
[https://www.slideshare.net/JayBryant2/cinder-project-update-denver-summit-2019 https://www.slideshare.net/JayBryant2/cinder-project-update-denver-summit-2019]<br />
<br><br />
==Cinder On-Boarding Education==<br />
[https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019 https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=170005CinderTrainSummitandPTGSummary2019-05-14T19:52:05Z<p>Jay Bryant: /* Train PTG Summary */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
*'''[https://www.dropbox.com/sh/fydqjehy9h5y728/AABBJVYSTddOKYFP4XIUyATea/Cinder?dl=0&subfolder_nav_tracking=1 Team Photos] '''<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now when we clone volumes we just make a copy of the encryption key. Seems like this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. Can work around the issue by ignoring the notifications but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them, make sure they do a proper try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting cleartext passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from SQL-Alchemy to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things in privsep as we are still using the rootwrap style of commands instead of breaking things down to using python functions to be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
[https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019 https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=170004CinderTrainSummitandPTGSummary2019-05-14T19:46:22Z<p>Jay Bryant: /* OpenInfra Summit Forum Sessions */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and the priority dashboard we added was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Check out the dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was deemed acceptable as long as we make it very clear that the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now, when we clone a volume, we just make a copy of the encryption key. It seems like the clone should really get a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key.<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. We can work around the issue by ignoring the notifications, but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them make sure they do a good try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
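As a rough illustration of the guarded-import pattern discussed above, a driver with an external SDK dependency might look like the following sketch. The package and class names are hypothetical, not an actual Cinder driver or dependency.<br />

```python
# Hedged sketch: "examplesdk" is a hypothetical external package, not a
# real Cinder dependency. The point is the try/except import guard.
try:
    import examplesdk  # vendor SDK that may not be installed
except ImportError:
    examplesdk = None


class ExampleDriver:
    """Toy stand-in for a volume driver with an optional dependency."""

    def do_setup(self, context):
        # Fail at setup time with a clear message, rather than crashing
        # at import time, so the service can still load other backends.
        if examplesdk is None:
            raise RuntimeError(
                "The 'examplesdk' package is required for this driver; "
                "install it before enabling the backend.")
```

With this guard in place, importing the driver module always succeeds, and the missing package is only reported when the backend is actually enabled.<br />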
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting cleartext passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from sqlalchemy-migrate to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things in privsep, as we are still using the rootwrap style of commands instead of breaking things down into Python functions, which would be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place and Ivan has continued to work on it. We need to review the patches that are out there to help move this along.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess and it would be good to clean it up. There is also messy code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.<br />
<br><br />
<br />
==Friday 5/3/2019==<br />
<br />
[https://www.youtube.com/watch?v=d6QYQTOzJRM Video Recording Part 5]<br />
<br />
===Cinderclient design discussion===<br />
*'''Summary:''' There are issues in cinderclient when it comes to filtering. There are bugs and unexpected behavior that should be fixed.<br />
<br><br />
*'''Action (whoami-rajat):''' Update patch to allow multiple '--filters' to be specified. https://review.opendev.org/#/c/587610/<br />
*'''Action (whoami-rajat):''' Fix the bugs documented in the etherpad by Eric.<br />
<br><br />
<br />
===Default behavior of listing {volume|group} types===<br />
*'''Summary:''' The default behavior between the client and the API is inconsistent. We should understand this and fix it.<br />
<br><br />
*'''Action (whoami-rajat):''' Fix the bug where the server hides private types from the user that has access to them.<br />
*'''Action (team):''' Review the patch that is out there and decide if we want to merge it: https://review.opendev.org/#/c/641698/<br />
<br><br />
<br />
===Avoiding untyped volumes update===<br />
*'''Summary:''' Despite additional design discussion we landed back on the fact that we want to continue with the original design proposal of creating a new default type that is unlikely to clash with anything that administrator previously created.<br />
<br><br />
*'''Action (eharney):''' Write a bug to evaluate whether our default volume type should be public.<br />
*'''Action (e0ne):''' Write a bug to deal with the fact that default_volume_type can be set in cinder.conf to a type that doesn't exist. Currently you can still create a volume with no type, but the create should fail.<br />
<br><br />
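A minimal sketch of the create-time check the second bug asks for might look like the following; the function and names are illustrative stand-ins, not Cinder's actual internals.<br />

```python
# Hedged sketch of the desired behavior: a configured default_volume_type
# that doesn't exist should cause creation to fail instead of silently
# producing an untyped volume. All names here are illustrative only.
def resolve_volume_type(requested, conf_default, known_types):
    if requested is not None:
        if requested not in known_types:
            raise ValueError("unknown volume type: %s" % requested)
        return requested
    if conf_default is not None and conf_default not in known_types:
        # Today this case can fall through to "no type"; per the
        # discussion, it should be an error instead.
        raise ValueError(
            "default_volume_type %r is set in cinder.conf but no such "
            "type exists" % conf_default)
    return conf_default
```

With a check like this, a typo in cinder.conf surfaces at create time rather than quietly producing untyped volumes.<br />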
<br />
[https://www.youtube.com/watch?v=r3NMEuReQ-Q Video Recording Part 6]<br />
<br />
=== Leverage Hardware acceleration in Cinder===<br />
*'''Summary:''' Intel is interested in speeding up image compression/decompression using accelerators. The Cinder team is ok with this but thinks it should be implemented in a generic manner via oslo-util or something similar.<br />
<br><br />
*'''Action (lixiaoy1):''' Propose new functionality to Oslo.<br />
*'''Action (lixiaoy1):''' Work with Cinder to get use of the new library function implemented. Same will need to be done with Glance.<br />
*'''Action (lixiaoy1):''' Finally work with Nova to get the new way of compressing images and new format supported.<br />
<br><br />
<br />
===Cross Project time with Nova===<br />
*'''Summary:''' Discussed a number of different topics but there were no significant work items to come out of the discussion for Cinder. Details can be seen here: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder<br />
<br><br />
<br />
===Status of multiattach===<br />
*'''Summary:''' Red Hat has started testing in their environment and they are seeing some race conditions, etc. They are addressing issues as they find them.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=STYpJ5GwmeY Video Recording Part 7]<br />
<br />
===Cinder mutable options===<br />
*'''Summary:''' This is a community goal that we need to get appropriately implemented. It is not totally clear where our implementation stands, and we would like to address that.<br />
<br><br />
*'''Action (erlon):''' Update this spec https://review.opendev.org/656011 to be current and include results of our discussion.<br />
*'''Action (erlon):''' Make sure that we have the right plumbing in place to support this functionality. It sounds like we may already have it there.<br />
*'''Action (erlon):''' Come up with a way to add to our sample config output an indication of which options are reloadable and which are not.<br />
<br><br />
<br />
===Cinder support for Glance multi-store feature===<br />
*'''Summary:''' Glance supports multiple stores and it should be possible to decide which store is used when Cinder interacts with Glance.<br />
<br><br />
*'''Action (rosmaita):''' Work with abhishek to update the spec that is out there to initially implement this support via volume_type. If the approach isn't well received, other solutions can be considered.<br />
<br><br />
<br />
===Supporting fast copy volume to image===<br />
*'''Summary:''' Storage backends like Datera and Ceph can copy images faster using backend support, but there isn't currently a code path for this. We should correct this.<br />
<br><br />
*'''Action (_alastor_):''' Propose the work he has done in Glance locally to upstream.<br />
*'''Action (_alastor_):''' Work with rosmaita to get other details for implementing this worked out.<br />
<br><br />
<br />
===Cinder handling image-associated metadata===<br />
*'''Summary:''' We need to come up with a better framework for Cinder to handle Glance's image metadata. There are fields like those for the image signature that we don't want to be saving.<br />
<br><br />
*'''Action (rosmaita):''' Will write up a spec with a proposal for the fields that shouldn't be saved in Cinder.<br />
<br><br />
<br />
===Capabilities reporting update===<br />
*'''Summary:''' HP actually started implementing a lot of this work back in Liberty. We just need to figure out where that work was left and keep pushing things forward.<br />
<br><br />
*'''Action (_alastor_):''' Help the team document how this should be utilized by vendor's drivers.<br />
*'''Action (eharney):''' Understand the capabilities support that was merged back in Liberty.<br />
*'''Action (eharney):''' Write a new spec that references the old spec and then makes appropriate updates on what the functionality really is.<br />
*'''Action (team):''' Need to get the reference architectures updated to report capabilities properly.<br />
*'''Action (team):''' If the reference architectures go well then we should probably add this as a requirement for drivers.<br />
*'''Action (jungleboyj):''' Set up a bi-weekly meeting to discuss this. <br />
<br><br />
<br />
===python-cinderclient major version release===<br />
*'''Summary:''' There are a number of changes we want to get in before doing a cinderclient release with a major version change. We agreed that we want to change the way we handle microversions to default to the highest available and then downgrade to the highest version that the server supports.<br />
<br><br />
*'''Action (eharney):''' Update https://review.opendev.org/#/c/647871/ to implement the support agreed upon above.<br />
*'''Action (rosmaita):''' Will wait to do a cinderclient release until after all the changes that require a major version bump are in place.<br />
<br><br />
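The agreed negotiation can be sketched roughly as follows; the function name, signature, and version strings are illustrative only, not python-cinderclient's real constants or API.<br />

```python
# Hedged sketch of the agreed microversion handling: request the highest
# version the client knows, and if the server is older, downgrade to the
# server's maximum. Version strings like "3.59" are illustrative only.
def negotiate_microversion(client_max, server_max, client_min="3.0"):
    def key(version):
        # Compare "major.minor" strings numerically, not lexically.
        return tuple(int(part) for part in version.split("."))

    if key(server_max) >= key(client_max):
        return client_max  # the server understands everything we know
    if key(server_max) >= key(client_min):
        return server_max  # fall back to the server's ceiling
    raise ValueError("server supports no microversion this client knows")
```

For example, a client that knows up to "3.59" talking to a server capped at "3.44" would transparently use "3.44".<br />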
<br />
===cinderclient integration with OpenStack===<br />
*'''Summary:''' Our current approach to client development is not great, as things are going into python-cinderclient and then eventually being picked up for openstackclient without any Cinder core review. We would like to look at making 'openstack volume' commands an alias to Cinder commands made available via plugin.<br />
<br><br />
*'''Action (abishop):''' Will reach out to Dean Troyer and find out if they would be supportive of this approach.<br />
*'''Action (abishop):''' Investigate the questions and concerns raised during our discussion to get answers and bring them back to the team.<br />
<br><br />
<br />
=== Continued Storyboard Discussion?===<br />
*'''Summary:''' The team is still not in a hurry to make this change. Manila isn't planning to move during Train, and we don't know that anyone else is either, so we are not going to push this further at this point in time.<br />
<br><br />
<br />
===py37 failures===<br />
*'''Summary:''' The team can't agree whether to add a py37 job to our tests because right now it is always failing. Sean thinks that we should add it, as we know that OpenStack is going to be moving and we need to get test coverage in place.<br />
<br><br />
*'''Action (jungleboyj):''' To reach out to Helen Walsh asking her team to investigate why their driver fails and to please address.<br />
<br />
===Placement Discussion===<br />
*'''Summary:''' Placement is agnostic as to what goes into the service. If we have information that can be of use to them, they will use it. No one, however, is asking for this right now so it was felt that our efforts could be better utilized elsewhere. So, no action is needed right now.<br />
<br />
<br><br />
=OpenInfra Summit Forum Sessions=<br />
<br><br />
==Cinder Capability Reporting==<br />
We held a forum session to discuss how Cinder can better report capabilities from drivers. The goal is to make it possible for administrators to better understand the capabilities available from their storage backends. If this information is more readily accessible, it should then be easier to create volume types that more completely utilize their storage backends.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=avZgbk8hh2s https://www.youtube.com/watch?v=avZgbk8hh2s]<br />
<br><br />
The outcome of this session was the discovery that much of the work for this had already been done by HP back in the Liberty release and that we really just needed to come to agreement on completing the work. We added further discussion to our PTG agenda and you can see the results of that discussion above.<br />
<br><br><br />
==Cinder User Feedback Session==<br />
This forum session was a second attempt to give users of Cinder the opportunity to share their concerns about Cinder and to ask questions.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=UVN42jsUq-U https://www.youtube.com/watch?v=UVN42jsUq-U]<br />
<br><br />
As with past attempts, the session was lightly attended. The administrators in the room all indicated that Cinder generally works pretty well. There were requests for some functionality that had already gone into recent releases, and some questions about the number of volumes that can be attached. That problem appears to be a limitation of virtio.<br />
<br><br />
==Cinder Project Update==<br />
[https://www.slideshare.net/JayBryant2/cinder-project-update-denver-summit-2019 https://www.slideshare.net/JayBryant2/cinder-project-update-denver-summit-2019]<br />
<br><br />
==Cinder On-Boarding Education==<br />
[https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019 https://www.slideshare.net/JayBryant2/cinder-project-onboarding-openinfra-summit-denver-2019]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=170002CinderTrainSummitandPTGSummary2019-05-14T19:37:02Z<p>Jay Bryant: /* OpenInfra Summit Forum Sessions */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now when we clone volumes we just make a copy of the encryption key. Seems like this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. Can work around the issue by ignoring the notifications but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them make sure they do a good try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
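The guarded-import approach mentioned above can be sketched as follows. This is illustrative only: "vendor_sdk_example" is a made-up package name standing in for a driver's optional external dependency, and the driver class is a toy, not actual Cinder code.

```python
# The import failure is caught so the module still loads; the driver only
# errors out when someone actually tries to use the backend.
try:
    import vendor_sdk_example  # hypothetical optional vendor SDK
except ImportError:
    vendor_sdk_example = None


class ExampleDriver:
    """Toy driver that depends on the optional SDK."""

    def __init__(self):
        if vendor_sdk_example is None:
            raise RuntimeError(
                "The 'vendor_sdk_example' package is required for this "
                "driver; install it to enable this backend.")
        self.client = vendor_sdk_example
```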
<br><br />
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting clear passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
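The general idea under discussion can be sketched in a few lines. This is illustrative only -- the real work would integrate Castellan with oslo.config; the "secret://" scheme and the in-memory store below are both hypothetical stand-ins.

```python
# The config file stores an opaque reference instead of a cleartext
# password, and the value is resolved from a secret store at load time.
SECRET_STORE = {"cinder/san_password": "s3cret"}  # stand-in for a real backend


def resolve_option(raw_value):
    """Return the literal value, or look it up if it is a secret reference."""
    prefix = "secret://"
    if raw_value.startswith(prefix):
        return SECRET_STORE[raw_value[len(prefix):]]
    return raw_value
```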
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from sqlalchemy-migrate to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things in privsep as we are still using the rootwrap style of commands instead of breaking things down to using python functions to be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place and Ivan has continued to work on it. Need to review the patches that are out there to help this.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess and it would be good to clean up. There is also a mess in the code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.<br />
<br><br />
<br />
==Friday 5/3/2019==<br />
<br />
[https://www.youtube.com/watch?v=d6QYQTOzJRM Video Recording Part 5]<br />
<br />
===Cinderclient design discussion===<br />
*'''Summary:''' There are issues in cinderclient when it comes to filtering. There are bugs and unexpected behavior that should be fixed.<br />
<br><br />
*'''Action (whoami-rajat):''' Update patch to allow multiple '--filters' to be specified. https://review.opendev.org/#/c/587610/<br />
*'''Action (whoami-rajat):''' Fix the bugs documented in the etherpad by Eric.<br />
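The repeated '--filters' behavior being proposed can be sketched with argparse. The flag name matches cinderclient's; the parsing code itself is hypothetical, not the client's actual implementation.

```python
import argparse

# Accumulate every '--filters' occurrence instead of keeping only the last.
parser = argparse.ArgumentParser()
parser.add_argument("--filters", action="append", metavar="KEY=VALUE",
                    help="may be specified multiple times")
args = parser.parse_args(["--filters", "name=vol1",
                          "--filters", "status=available"])
filters = dict(f.split("=", 1) for f in args.filters)
```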
<br><br />
<br />
===Default behavior of listing (volume|group) types===<br />
*'''Summary:''' The default behavior between the client and the API is inconsistent. We should understand this and fix it.<br />
<br><br />
*'''Action (whoami-rajat):''' Fix the bug where the server hides private types from the user that has access to them.<br />
*'''Action (team):''' Review the patch that is out there and decide if we want to merge it: https://review.opendev.org/#/c/641698/<br />
<br><br />
<br />
===Avoiding untyped volumes update===<br />
*'''Summary:''' Despite additional design discussion we landed back on the original design proposal: create a new default type that is unlikely to clash with anything an administrator previously created.<br />
<br><br />
*'''Action (eharney):''' Write a bug to evaluate whether our default volume type should be public.<br />
*'''Action (e0ne):''' Write a bug to deal with the fact that default_volume_type can be set in cinder.conf even when that type doesn't exist. Currently a volume can still be created with no type, but the create should fail.<br />
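The check behind the second action item can be sketched as below. This is a hypothetical illustration: 'known_types' stands in for the volume-type table, and the function name is made up.

```python
# Fail creation when no type is resolvable, or when the configured
# default_volume_type does not actually exist.
def validate_default_type(default_volume_type, known_types):
    if default_volume_type is None:
        raise ValueError("no volume type given and no default configured")
    if default_volume_type not in known_types:
        raise ValueError(
            "default_volume_type %r is configured but does not exist"
            % default_volume_type)
    return default_volume_type
```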
<br><br />
<br />
[https://www.youtube.com/watch?v=r3NMEuReQ-Q Video Recording Part 6]<br />
<br />
=== Leverage Hardware acceleration in Cinder===<br />
*'''Summary:''' Intel is interested in speeding image compression/decompression using accelerators. Cinder team is ok with this but thinks it should be implemented in a generic manner via oslo-util or something similar.<br />
<br><br />
*'''Action (lixiaoy1):''' Propose new functionality to Oslo.<br />
*'''Action (lixiaoy1):''' Work with Cinder to get use of the new library function implemented. Same will need to be done with Glance.<br />
*'''Action (lixiaoy1):''' Finally work with Nova to get the new way of compressing images and new format supported.<br />
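The generic-interface idea from the discussion can be sketched as a pluggable compressor lookup. Everything here uses the stdlib; the "qat" hardware path is shown only as a commented-out placeholder, not a real API.

```python
import gzip
import zlib

# Callers ask for a compressor by name and get an accelerated implementation
# when one is available, falling back to software.
def get_compressor(name="gzip"):
    compressors = {
        "gzip": gzip.compress,
        "zlib": zlib.compress,
        # "qat": qat_accelerated_compress,  # hypothetical accelerator hook
    }
    return compressors[name]


data = b"image data " * 100
compressed = get_compressor("gzip")(data)
assert gzip.decompress(compressed) == data
```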
<br><br />
<br />
===Cross Project time with Nova===<br />
*'''Summary:''' Discussed a number of different topics but there were no significant work items to come out of the discussion for Cinder. Details can be seen here: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder<br />
<br><br />
<br />
===Status of multiattach===<br />
*'''Summary:''' Red Hat has started testing in their environment and they are seeing some race conditions, etc. They are addressing issues as they find them.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=STYpJ5GwmeY Video Recording Part 7]<br />
<br />
===Cinder mutable options===<br />
*'''Summary:''' This is a community goal that we need to get appropriately implemented. Not totally clear as to where our implementation is at and we would like to address that.<br />
<br><br />
*'''Action (erlon):''' Update this spec https://review.opendev.org/656011 to be current and include results of our discussion.<br />
*'''Action (erlon):''' Make sure that we have the right plumbing in place to support this functionality. It sounds like we may already have it there.<br />
*'''Action (erlon):''' Come up with a way to annotate our sample config output to indicate which options are reloadable and which are not.<br />
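A pure-stdlib sketch of the sample-config annotation idea: record per option whether it may be reloaded at runtime, so the generator can say so. (oslo.config's real mechanism is the mutable=True option attribute plus mutate_config_files(); the classes below are illustrative stand-ins, not that API.)

```python
# Toy option object carrying a mutability flag.
class Opt:
    def __init__(self, name, default, mutable=False):
        self.name, self.default, self.mutable = name, default, mutable


def sample_config_lines(opts):
    # Emit one annotated sample-config entry per option.
    return ["# {} (reloadable: {})\n{} = {}".format(
                o.name, "yes" if o.mutable else "no", o.name, o.default)
            for o in opts]


opts = [Opt("debug", False, mutable=True), Opt("san_ip", "10.0.0.1")]
```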
<br><br />
<br />
===Cinder support for Glance multi-store feature===<br />
*'''Summary:''' Glance supports multiple stores and it should be possible to decide which store is used when Cinder interacts with Glance.<br />
<br><br />
*'''Action (rosmaita):''' Work with abhishek to update the spec that is out there to initially implement this support via volume_type. If the approach isn't well received, other solutions can be considered.<br />
<br><br />
<br />
===Supporting fast copy volume to image===<br />
*'''Summary:''' Storage backends like Datera and Ceph can copy images faster using backend support, but there isn't currently a code path for this. We should correct this.<br />
<br><br />
*'''Action (_alastor_):''' Propose the work he has done in Glance locally to upstream.<br />
*'''Action (_alastor_):''' Work with rosmaita to get other details for implementing this worked out.<br />
<br><br />
<br />
===Cinder handling image-associated metadata===<br />
*'''Summary:''' We need to come up with a better framework for Cinder to handle Glance's image metadata. There are fields like those for the image signature that we don't want to be saving.<br />
<br><br />
*'''Action (rosmaita):''' Will write up a spec with a proposal for the fields that shouldn't be saved in Cinder.<br />
<br><br />
<br />
===Capabilities reporting update===<br />
*'''Summary:''' HP actually started implementing a lot of this work back in Liberty. We just need to figure out where that work was left and keep pushing things forward.<br />
<br><br />
*'''Action (_alastor_):''' Help the team document how this should be utilized by vendor's drivers.<br />
*'''Action (eharney):''' Understand the capabilities support that was merged back in Liberty.<br />
*'''Action (eharney):''' Write a new spec that references the old spec and then makes appropriate updates on what the functionality really is.<br />
*'''Action (team):''' Need to get the reference architectures updated to report capabilities properly.<br />
*'''Action (team):''' If the reference architectures go well then we should probably add this as a requirement for drivers.<br />
*'''Action (jungleboyj):''' Set up a bi-weekly meeting to discuss this. <br />
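An illustrative example of the kind of report a driver returns from its stats/capabilities call. The exact keys a conforming driver must report are what the reference-architecture work above is meant to pin down; the values here are made up.

```python
# Toy stand-in for a driver's stats report.
def get_volume_stats():
    return {
        "volume_backend_name": "example_backend",
        "vendor_name": "Example Vendor",
        "driver_version": "1.0.0",
        "storage_protocol": "iSCSI",
        "total_capacity_gb": 1000,
        "free_capacity_gb": 800,
        # capability flags the scheduler can match against volume-type
        # extra specs:
        "thin_provisioning_support": True,
        "compression_support": False,
    }
```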
<br><br />
<br />
===python-cinderclient major version release===<br />
*'''Summary:''' There are a number of changes we want to get in before doing a Cinderclient release with a major version change. We agreed that we want to change the way we are handling microversions to default to the highest available and then downgrade to the highest version that the server supports.<br />
<br><br />
*'''Action (eharney):''' Update https://review.opendev.org/#/c/647871/ to implement the support agreed upon above.<br />
*'''Action (rosmaita):''' Will wait to do a cinderclient release until after all the changes that require a major version bump are in place.<br />
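The agreed negotiation behavior can be sketched as: start from the highest microversion the client knows and downgrade to the highest one the server supports. Versions are (major, minor) tuples; the specific numbers are illustrative, not the client's real bounds.

```python
CLIENT_MAX = (3, 59)  # illustrative client maximum


def negotiate(server_min, server_max, client_max=CLIENT_MAX):
    # Downgrade to whichever maximum is lower, then sanity-check overlap.
    chosen = min(client_max, server_max)
    if chosen < server_min:
        raise RuntimeError("no mutually supported microversion")
    return chosen
```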
<br><br />
<br />
===cinderclient integration with OpenStack===<br />
*'''Summary:''' Our current approach to doing client development is not great as things are going into python-cinderclient and then eventually being picked up for openstackclient without any Cinder core review action. Would like to look at making 'openstack volume' commands an alias to Cinder commands made available via plugin.<br />
<br><br />
*'''Action (abishop):''' Will reach out to Dean Troyer and find out if they would be supportive of this approach.<br />
*'''Action (abishop):''' Investigate the questions and concerns raised during our discussion to get answers and bring them back to the team.<br />
<br><br />
<br />
=== Continued Storyboard Discussion?===<br />
*'''Summary:''' Team is still not in a hurry to make this change. Manila isn't planning to move during Train. Don't know that anyone else is either so we are not going to push this further at this point in time.<br />
<br><br />
<br />
===py37 failures===<br />
*'''Summary:''' The team can't agree whether or not to add a py37 job to our tests because right now it is always failing. Sean thinks that we should add it as we know that OpenStack is going to be moving to py37 and we need to get test coverage in place.<br />
<br><br />
*'''Action (jungleboyj):''' To reach out to Helen Walsh, asking her team to investigate why their driver fails and address it.<br />
<br />
===Placement Discussion===<br />
*'''Summary:''' Placement is agnostic as to what goes into the service. If we have information that can be of use to them, they will use it. No one, however, is asking for this right now so it was felt that our efforts could be better utilized elsewhere. So, no action is needed right now.<br />
<br />
<br><br />
=OpenInfra Summit Forum Sessions=<br />
<br><br />
==Cinder Capability Reporting==<br />
We held a forum session to discuss how Cinder can better report capabilities from drivers. The goal is to make it possible for administrators to better understand the capabilities available from their storage backends. If this information is more readily accessible, it should then be easier to create volume types that more fully utilize those backends.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=avZgbk8hh2s https://www.youtube.com/watch?v=avZgbk8hh2s]<br />
<br><br />
The outcome of this session was the discovery that much of the work for this had already been done by HP back in the Liberty release and that we really just needed to come to agreement on completing the work. We added further discussion to our PTG agenda and you can see the results of that discussion above.<br />
<br><br><br />
==Cinder User Feedback Session==<br />
This forum session was a second attempt to give users of Cinder the opportunity to share their concerns about Cinder and to ask questions.<br />
<br><br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=UVN42jsUq-U https://www.youtube.com/watch?v=UVN42jsUq-U]<br />
<br><br />
As with past attempts, the session was lightly attended. The administrators in the room all indicated that Cinder generally works pretty well. There were requests for some functionality that had already gone into recent releases, and some questions about the number of volumes that can be attached. That problem appears to be a limitation of virtio.</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=170001CinderTrainSummitandPTGSummary2019-05-14T19:35:58Z<p>Jay Bryant: /* Train PTG Summary */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now when we clone volumes we just make a copy of the encryption key. Seems like this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. Can work around the issue by ignoring the notifications but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them make sure they do a good try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting clear passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from sqlalchemy-migrate to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things in privsep as we are still using the rootwrap style of commands instead of breaking things down to using python functions to be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place and Ivan has continued to work on it. Need to review the patches that are out there to help this.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess and it would be good to clean up. There is also a mess in the code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.<br />
<br><br />
<br />
==Friday 5/3/2019==<br />
<br />
[https://www.youtube.com/watch?v=d6QYQTOzJRM Video Recording Part 5]<br />
<br />
===Cinderclient design discussion===<br />
*'''Summary:''' There are issues in cinderclient when it comes to filtering. There are bugs and unexpected behavior that should be fixed.<br />
<br><br />
*'''Action (whoami-rajat):''' Update patch to allow multiple '--filters' to be specified. https://review.opendev.org/#/c/587610/<br />
*'''Action (whoami-rajat):''' Fix the bugs documented in the etherpad by Eric.<br />
<br><br />
<br />
===Default behavior of listing (volume|group) types===<br />
*'''Summary:''' The default behavior between the client and the API is inconsistent. We should understand this and fix it.<br />
<br><br />
*'''Action (whoami-rajat):''' Fix the bug where the server hides private types from the user that has access to them.<br />
*'''Action (team):''' Review the patch that is out there and decide if we want to merge it: https://review.opendev.org/#/c/641698/<br />
<br><br />
<br />
===Avoiding untyped volumes update===<br />
*'''Summary:''' Despite additional design discussion we landed back on the fact that we want to continue with the original design proposal: creating a new default type that is unlikely to clash with anything an administrator previously created.<br />
<br><br />
*'''Action (eharney):''' Write a bug to evaluate whether our default volume type should be public.<br />
*'''Action (e0ne):''' Write a bug to deal with the fact that default_volume_type can be set in cinder.conf to a type that doesn't exist. Currently volume creation still succeeds with no type, but it should fail.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=r3NMEuReQ-Q Video Recording Part 6]<br />
<br />
=== Leverage Hardware acceleration in Cinder===<br />
*'''Summary:''' Intel is interested in speeding up image compression/decompression using accelerators. The Cinder team is ok with this but thinks it should be implemented in a generic manner via oslo.utils or something similar.<br />
<br><br />
*'''Action (lixiaoy1):''' Propose new functionality to Oslo.<br />
*'''Action (lixiaoy1):''' Work with Cinder to get use of the new library function implemented. Same will need to be done with Glance.<br />
*'''Action (lixiaoy1):''' Finally work with Nova to get the new way of compressing images and new format supported.<br />
<br><br />
<br />
===Cross Project time with Nova===<br />
*'''Summary:''' Discussed a number of different topics but there were no significant work items to come out of the discussion for Cinder. Details can be seen here: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder<br />
<br><br />
<br />
===Status of multiattach===<br />
*'''Summary:''' Red Hat has started testing in their environment and they are seeing some race conditions, etc. They are addressing issues as they find them.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=STYpJ5GwmeY Video Recording Part 7]<br />
<br />
===Cinder mutable options===<br />
*'''Summary:''' This is a community goal that we need to get appropriately implemented. It is not totally clear where our implementation stands, and we would like to address that.<br />
<br><br />
*'''Action (erlon):''' Update this spec https://review.opendev.org/656011 to be current and include results of our discussion.<br />
*'''Action (erlon):''' Make sure that we have the right plumbing in place to support this functionality. It sounds like we may already have it there.<br />
*'''Action (erlon):''' Come up with a way for our sample config output to indicate which options are reloadable and which are not.<br />
<br><br />
<br />
===Cinder support for Glance multi-store feature===<br />
*'''Summary:''' Glance supports multiple stores and it should be possible to decide which store is used when Cinder interacts with Glance.<br />
<br><br />
*'''Action (rosmaita):''' Work with abhishek to update the spec that is out there to initially implement this support via volume_type. If the approach isn't well received, other solutions can be considered.<br />
<br><br />
<br />
===Supporting fast copy volume to image===<br />
*'''Summary:''' Storage backends like Datera and Ceph can copy images faster using backend support, but there isn't currently a code path for this. We should correct this.<br />
<br><br />
*'''Action (_alastor_):''' Propose the work he has done in Glance locally to upstream.<br />
*'''Action (_alastor_):''' Work with rosmaita to get other details for implementing this worked out.<br />
<br><br />
<br />
===Cinder handling image-associated metadata===<br />
*'''Summary:''' We need to come up with a better framework for Cinder to handle Glance's image metadata. There are fields like those for the image signature that we don't want to be saving.<br />
<br><br />
*'''Action (rosmaita):''' Will write up a spec with a proposal for the fields that shouldn't be saved in Cinder.<br />
<br><br />
<br />
===Capabilities reporting update===<br />
*'''Summary:''' HP actually started implementing a lot of this work back in Liberty. We just need to figure out where that work was left and keep pushing things forward.<br />
<br><br />
*'''Action (_alastor_):''' Help the team document how this should be utilized by vendor's drivers.<br />
*'''Action (eharney):''' Understand the capabilities support that was merged back in Liberty.<br />
*'''Action (eharney):''' Write a new spec that references the old spec and then makes appropriate updates on what the functionality really is.<br />
*'''Action (team):''' Need to get the reference architectures updated to report capabilities properly.<br />
*'''Action (team):''' If the reference architectures go well then we should probably add this as a requirement for drivers.<br />
*'''Action (jungleboyj):''' Set up a bi-weekly meeting to discuss this. <br />
<br><br />
<br />
===python-cinderclient major version release===<br />
*'''Summary:''' There are a number of changes we want to get in before doing a Cinderclient release with a major version change. We agreed that we want to change the way we are handling microversions to default to the highest available and then downgrade to the highest version that the server supports.<br />
<br><br />
*'''Action (eharney):''' Update https://review.opendev.org/#/c/647871/ to implement the support agreed upon above.<br />
*'''Action (rosmaita):''' Will wait to do a cinderclient release until after all the changes that require a major version bump are in place.<br />
<br><br />
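A minimal sketch of the agreed fallback order (hypothetical function and names, not the actual cinderclient implementation):<br />

```python
# Hypothetical sketch of the agreed behavior: default to the highest
# microversion the client knows, then downgrade to the highest version
# the server supports. Versions are (major, minor) tuples.

def negotiate_microversion(client_max, server_min, server_max):
    """Pick the API microversion to request from the server."""
    if client_max >= server_max:
        # Client is as new or newer: downgrade to the server's maximum.
        return server_max
    if client_max >= server_min:
        # Server is newer but still supports the client's best version.
        return client_max
    raise ValueError("no mutually supported microversion")

# e.g. a client built against 3.59 talking to a server capped at 3.55
# would settle on 3.55.
```

The real logic lives in the patch linked above; this only illustrates the negotiation order that was agreed.<br />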
<br />
===cinderclient integration with OpenStack===<br />
*'''Summary:''' Our current approach to doing client development is not great, as things are going into python-cinderclient and then eventually being picked up for openstackclient without any Cinder core review. We would like to look at making 'openstack volume' commands an alias to Cinder commands made available via plugin.<br />
<br><br />
*'''Action (abishop):''' Will reach out to Dean Troyer and find out if they would be supportive of this approach.<br />
*'''Action (abishop):''' Investigate the questions and concerns raised during our discussion to get answers and bring them back to the team.<br />
<br><br />
<br />
=== Continued Storyboard Discussion?===<br />
*'''Summary:''' The team is still not in a hurry to make this change. Manila isn't planning to move during Train, and we don't know that anyone else is either, so we are not going to push this further at this point.<br />
<br><br />
<br />
===py37 failures===<br />
*'''Summary:''' The team can't agree whether or not to add a py37 job to our tests because right now it is always failing. Sean thinks that we should add it, as we know that OpenStack is going to be moving to py37 and we need to get test coverage in place.<br />
<br><br />
*'''Action (jungleboyj):''' To reach out to Helen Walsh asking her team to investigate why their driver fails and to please address.<br />
<br />
===Placement Discussion===<br />
*'''Summary:''' Placement is agnostic as to what goes into the service. If we have information that can be of use to them, they will use it. No one, however, is asking for this right now so it was felt that our efforts could be better utilized elsewhere. So, no action is needed right now.<br />
<br />
=OpenInfra Summit Forum Sessions=<br />
<br />
==Cinder Capability Reporting==<br />
We held a forum session to discuss how Cinder can better report capabilities from drivers. The goal is to make it possible for administrators to better understand the capabilities available from their storage backends. If this information is more readily accessible, it should then be easier to create volume types that more completely utilize their storage backends.<br />
<br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=avZgbk8hh2s https://www.youtube.com/watch?v=avZgbk8hh2s]<br />
<br />
The outcome of this session was the discovery that much of the work for this had already been done by HP back in the Liberty release and that we really just needed to come to agreement on completing the work. We added further discussion to our PTG agenda and you can see the results of that discussion above.<br />
<br />
==Cinder User Feedback Session==<br />
This forum session was a second attempt to give users of Cinder the opportunity to share their concerns about Cinder and to ask questions.<br />
<br />
*'''Etherpad:''' [https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback]<br />
*'''YouTube Recording:''' [https://www.youtube.com/watch?v=UVN42jsUq-U https://www.youtube.com/watch?v=UVN42jsUq-U]<br />
<br />
As with past times we have tried this, the session was lightly attended. The administrators in the room all indicated that Cinder generally works pretty well. There were requests for some functionality that had already gone into recent releases. There were also some questions about limits on the number of volumes that can be attached; that problem appears to be a limitation of virtio.</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=169971CinderTrainSummitandPTGSummary2019-05-13T19:08:16Z<p>Jay Bryant: /* Thursday 5/2/2019 */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now when we clone volumes we just make a copy of the encryption key. Seems like this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
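For illustration, the snapshot-vs-clone key rule discussed above could look like this (a hypothetical sketch, not Cinder or Castellan code):<br />

```python
# Illustrative sketch (not Cinder/Castellan code) of the rule above:
# a snapshot keeps the source volume's encryption key, while a clone
# gets a freshly generated key of its own.
import secrets

def snapshot_volume(volume):
    snap = dict(volume)
    # Snapshot data is encrypted with the same key as the source.
    snap["encryption_key"] = volume["encryption_key"]
    return snap

def clone_volume(volume):
    clone = dict(volume)
    # A clone is an independent volume, so it gets its own key; its
    # data would be re-encrypted under the new key during the copy.
    clone["encryption_key"] = secrets.token_hex(32)
    return clone
```

Because snapshots share the source key, the investigation of snapshot behavior above is what determines how much of this rule can actually be applied.<br />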
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. We can work around the issue by ignoring the notifications, but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them make sure they do a good try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
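The guarded import mentioned above is the usual pattern; a minimal sketch (the package name acme_storage_sdk is made up for illustration):<br />

```python
# Sketch of the guarded-import pattern: the driver module must import
# cleanly even when the optional vendor package is absent, and only
# fail later during driver setup. "acme_storage_sdk" is a made-up name.
try:
    import acme_storage_sdk
except ImportError:
    acme_storage_sdk = None

def check_for_setup_error():
    # Cinder drivers conventionally report missing optional
    # dependencies at setup time rather than crashing on import.
    if acme_storage_sdk is None:
        raise RuntimeError(
            "acme_storage_sdk is required by this driver but is not "
            "installed")
```

This keeps the driver importable in containers that lack the vendor package; the licensing check mentioned above still has to happen at review time.<br />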
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting clear passwords in the config files. There is a better way to do things and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from sqlalchemy-migrate to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases as far as getting libraries created before the release, etc. We need to do better. Hopefully creating a checklist will help. https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things with privsep, as we are still using the rootwrap style of commands instead of breaking things down into Python functions, which would be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code but people have concerns with going straight to the DB. Need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place and Ivan has continued to work on it. Need to review the patches that are out there to help this.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess and it would be good to clean it up. There is also a mess in the code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.<br />
<br><br />
<br />
==Friday 5/3/2019==<br />
<br />
[https://www.youtube.com/watch?v=d6QYQTOzJRM Video Recording Part 5]<br />
<br />
===Cinderclient design discussion===<br />
*'''Summary:''' There are issues in cinderclient when it comes to filtering. There are bugs and unexpected behavior that should be fixed.<br />
<br><br />
*'''Action (whoami-rajat):''' Update patch to allow multiple '--filters' to be specified. https://review.opendev.org/#/c/587610/<br />
*'''Action (whoami-rajat):''' Fix the bugs documented in the etherpad by Eric.<br />
<br><br />
<br />
===Default behavior of listing (volume|group) types===<br />
*'''Summary:''' The default behavior between the client and the API is inconsistent. We should understand this and fix it.<br />
<br><br />
*'''Action (whoami-rajat):''' Fix the bug where the server hides private types from the user that has access to them.<br />
*'''Action (team):''' Review the patch that is out there and decide if we want to merge it: https://review.opendev.org/#/c/641698/<br />
<br><br />
<br />
===Avoiding untyped volumes update===<br />
*'''Summary:''' Despite additional design discussion we landed back on the fact that we want to continue with the original design proposal: creating a new default type that is unlikely to clash with anything an administrator previously created.<br />
<br><br />
*'''Action (eharney):''' Write a bug to evaluate whether our default volume type should be public.<br />
*'''Action (e0ne):''' Write a bug to deal with the fact that default_volume_type can be set in cinder.conf to a type that doesn't exist. Currently volume creation still succeeds with no type, but it should fail.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=r3NMEuReQ-Q Video Recording Part 6]<br />
<br />
=== Leverage Hardware acceleration in Cinder===<br />
*'''Summary:''' Intel is interested in speeding up image compression/decompression using accelerators. The Cinder team is ok with this but thinks it should be implemented in a generic manner via oslo.utils or something similar.<br />
<br><br />
*'''Action (lixiaoy1):''' Propose new functionality to Oslo.<br />
*'''Action (lixiaoy1):''' Work with Cinder to get use of the new library function implemented. Same will need to be done with Glance.<br />
*'''Action (lixiaoy1):''' Finally work with Nova to get the new way of compressing images and new format supported.<br />
<br><br />
<br />
===Cross Project time with Nova===<br />
*'''Summary:''' Discussed a number of different topics but there were no significant work items to come out of the discussion for Cinder. Details can be seen here: https://etherpad.openstack.org/p/ptg-train-xproj-nova-cinder<br />
<br><br />
<br />
===Status of multiattach===<br />
*'''Summary:''' Red Hat has started testing in their environment and they are seeing some race conditions, etc. They are addressing issues as they find them.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=STYpJ5GwmeY Video Recording Part 7]<br />
<br />
===Cinder mutable options===<br />
*'''Summary:''' This is a community goal that we need to get appropriately implemented. It is not totally clear where our implementation stands, and we would like to address that.<br />
<br><br />
*'''Action (erlon):''' Update this spec https://review.opendev.org/656011 to be current and include results of our discussion.<br />
*'''Action (erlon):''' Make sure that we have the right plumbing in place to support this functionality. It sounds like we may already have it there.<br />
*'''Action (erlon):''' Come up with a way for our sample config output to indicate which options are reloadable and which are not.<br />
<br><br />
<br />
===Cinder support for Glance multi-store feature===<br />
*'''Summary:''' Glance supports multiple stores and it should be possible to decide which store is used when Cinder interacts with Glance.<br />
<br><br />
*'''Action (rosmaita):''' Work with abhishek to update the spec that is out there to initially implement this support via volume_type. If the approach isn't well received, other solutions can be considered.<br />
<br><br />
<br />
===Supporting fast copy volume to image===<br />
*'''Summary:''' Storage backends like Datera and Ceph can copy images faster using backend support, but there isn't currently a code path for this. We should correct this.<br />
<br><br />
*'''Action (_alastor_):''' Propose the work he has done in Glance locally to upstream.<br />
*'''Action (_alastor_):''' Work with rosmaita to get other details for implementing this worked out.<br />
<br><br />
<br />
===Cinder handling image-associated metadata===<br />
*'''Summary:''' We need to come up with a better framework for Cinder to handle Glance's image metadata. There are fields like those for the image signature that we don't want to be saving.<br />
<br><br />
*'''Action (rosmaita):''' Will write up a spec with a proposal for the fields that shouldn't be saved in Cinder.<br />
<br><br />
<br />
===Capabilities reporting update===<br />
*'''Summary:''' HP actually started implementing a lot of this work back in Liberty. We just need to figure out where that work was left and keep pushing things forward.<br />
<br><br />
*'''Action (_alastor_):''' Help the team document how this should be utilized by vendor's drivers.<br />
*'''Action (eharney):''' Understand the capabilities support that was merged back in Liberty.<br />
*'''Action (eharney):''' Write a new spec that references the old spec and then makes appropriate updates on what the functionality really is.<br />
*'''Action (team):''' Need to get the reference architectures updated to report capabilities properly.<br />
*'''Action (team):''' If the reference architectures go well then we should probably add this as a requirement for drivers.<br />
*'''Action (jungleboyj):''' Set up a bi-weekly meeting to discuss this. <br />
<br><br />
<br />
===python-cinderclient major version release===<br />
*'''Summary:''' There are a number of changes we want to get in before doing a Cinderclient release with a major version change. We agreed that we want to change the way we are handling microversions to default to the highest available and then downgrade to the highest version that the server supports.<br />
<br><br />
*'''Action (eharney):''' Update https://review.opendev.org/#/c/647871/ to implement the support agreed upon above.<br />
*'''Action (rosmaita):''' Will wait to do a cinderclient release until after all the changes that require a major version bump are in place.<br />
<br><br />
<br />
===cinderclient integration with OpenStack===<br />
*'''Summary:''' Our current approach to doing client development is not great, as things are going into python-cinderclient and then eventually being picked up for openstackclient without any Cinder core review. We would like to look at making 'openstack volume' commands an alias to Cinder commands made available via plugin.<br />
<br><br />
*'''Action (abishop):''' Will reach out to Dean Troyer and find out if they would be supportive of this approach.<br />
*'''Action (abishop):''' Investigate the questions and concerns raised during our discussion to get answers and bring them back to the team.<br />
<br><br />
<br />
=== Continued Storyboard Discussion?===<br />
*'''Summary:''' The team is still not in a hurry to make this change. Manila isn't planning to move during Train, and we don't know that anyone else is either, so we are not going to push this further at this point.<br />
<br><br />
<br />
===py37 failures===<br />
*'''Summary:''' The team can't agree whether or not to add a py37 job to our tests because right now it is always failing. Sean thinks that we should add it, as we know that OpenStack is going to be moving to py37 and we need to get test coverage in place.<br />
<br><br />
*'''Action (jungleboyj):''' To reach out to Helen Walsh asking her team to investigate why their driver fails and to please address.<br />
<br />
===Placement Discussion===<br />
*'''Summary:''' Placement is agnostic as to what goes into the service. If we have information that can be of use to them, they will use it. No one, however, is asking for this right now so it was felt that our efforts could be better utilized elsewhere. So, no action is needed right now.</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=169966CinderTrainSummitandPTGSummary2019-05-13T16:58:35Z<p>Jay Bryant: /* 3rd Party CI */</p>
<hr />
<div>=== Introduction ===<br />
This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019.<br />
<br />
The full etherpad and all associated notes may be found [https://etherpad.openstack.org/p/cinder-train-ptg-planning here.]<br />
<br /><br />
<br /><br />
<br /><br />
<br />
=Train PTG Summary=<br />
<br />
==Thursday 5/2/2019==<br />
[https://www.youtube.com/watch?v=cg8gYLjjjyI Video Recording Part 1]<br />
<br />
=== Stein Retrospective===<br />
*'''Summary:''' The release went relatively well, though we had issues getting releases out. We kept our deadlines, which was good, and added the priority dashboard, which was also beneficial.<br />
<br><br />
*'''Action (jungleboyj):''' Clean up the Cinder Wiki pages.<br />
*'''Action (team):''' Checkout dashboards and see if anything needs to be updated.<br />
*'''Action (smcginnis):''' Add release notes for the cinderclient backports to call out the changes that upgrading will bring.<br />
<br><br />
<br />
===Ceph iSCSI Support===<br />
*'''Summary:''' There is great interest in getting this support in place from multiple consumers. We should continue to push trying to get this in place.<br />
<br><br />
*'''Action (jungleboyj):''' Get the spec that Lenovo has started updated and pushed up for review.<br />
*'''Action (eharney):''' Check with internal Red Hat teams to make sure that there are not others already working on this.<br />
*'''Action (hemna):''' To reach out to the Ceph community and see how receptive they would be to client changes to support iSCSI use cases.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=N6D6ib7T9Io&feature=em-lbcastemail Video Recording Part 2]<br />
<br />
=== Cinder/Glance creating image from volume with Ceph===<br />
*'''Summary:''' Determined that the problem really needs to be handled on the Glance side so there was no action required by Cinder.<br />
<br><br />
<br />
===Glance image properties and Cinder encrypted volume key management===<br />
*'''Summary:''' Keys are not getting deleted from the HSM like they are supposed to be. Glance should be deleting keys when they are done being used. This was acceptable if we make it very clear the key is one that may be deleted.<br />
<br><br />
*'''Action (rosmaita):''' Draft up a spec proposing this functionality.<br />
*'''Action (eharney):''' Will write a Cinder spec to handle the transition from the old to the new approach of key management.<br />
<br><br />
<br />
===Encryption key manage on volume clone===<br />
*'''Summary:''' Right now, when we clone volumes, we just make a copy of the encryption key. It seems this really should be a new key.<br />
<br><br />
*'''Action (eharney):''' Go through how this works with snapshots -- snapshots keep the old key, volumes get a new key.<br />
*'''Action (eharney):''' Write a spec based on the results of the investigation.<br />
<br><br />
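As a rough illustration of the proposal above, a clone would receive a freshly generated key of the same length rather than a copy of the source volume's key (the function name and key handling here are hypothetical, not Cinder's actual implementation):<br />

```python
import os

def key_for_clone(source_key: bytes) -> bytes:
    """Return the encryption key a newly cloned volume should use.

    Current behavior copies source_key verbatim; the proposal is to
    mint a brand-new random key of the same length for every clone.
    """
    return os.urandom(len(source_key))

source_key = os.urandom(32)            # 256-bit key for the source volume
clone_key = key_for_clone(source_key)  # distinct key for the clone
```

Snapshots, by contrast, would keep the old key, per the action item above.<br />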
<br />
===Continued discussion of old CG API removal===<br />
*'''Summary:''' It appears that all drivers that still have the old functions in place actually route to the appropriate generic code. So, it should be safe to remove the old API.<br />
<br><br />
*'''Action (smcginnis):''' To clean up the old code in the volume manager.<br />
*'''Action (smcginnis):''' To clean up database tables for CGs (ConsistencyGroup and CGSnapshot)<br />
*'''Action (smcginnis):''' Encourage drivers that still have the old code in place to remove the code.<br />
<br><br />
<br />
===Backup tests notifications leaking===<br />
*'''Summary:''' The leak of backup.createprogress notifications has been causing gate failures for some time. We can work around the issue by ignoring the notifications, but we really should fix it.<br />
<br><br />
*'''Action (eharney):''' Continue to work with Rajat to find the source of the problem.<br />
<br><br />
<br />
===Optional dependency install mechanism===<br />
*'''Summary:''' Drivers that require externally available packages don't work easily in containers. This isn't good given the general movement to the use of containers.<br />
<br><br />
*'''Action (team):''' Watch new drivers for such dependencies. If they have them, make sure they do a proper try/except import of the package. Also ensure the licensing is appropriate.<br />
*'''Action (hemna):''' Work on verifying the dependent packages and determining a way to resolve the problem for containers.<br />
<br><br />
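The guarded import mentioned in the action items might look something like this in a driver module (`vendor_sdk` is a hypothetical package name, and `check_for_setup_error` is only a sketch of the failure path):<br />

```python
# Optional dependency: import it if available, otherwise fall back to a
# sentinel so the module still loads in containers missing the package.
try:
    import vendor_sdk  # hypothetical third-party storage SDK
except ImportError:
    vendor_sdk = None

def check_for_setup_error():
    """Fail clearly at driver setup time instead of at import time."""
    if vendor_sdk is None:
        raise RuntimeError("vendor_sdk is not installed; "
                           "install it to use this driver")
```

This way the driver can be listed and inspected even where the SDK is absent, and the operator gets an actionable error only when the backend is actually configured.<br />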
<br />
===Passwords in cinder.conf/oslo.config with Castellan driver===<br />
*'''Summary:''' Consumers don't want to be putting cleartext passwords in the config files. There is a better way to do things, and we should move to supporting it.<br />
<br><br />
*'''Action (eharney):''' To investigate how we can implement this for Cinder.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=HaWmybpkloI&feature=em-lbcastemail Video Recording Part 3]<br />
<br />
===Fall mid-cycle planning===<br />
*'''Summary:''' No one had objections to doing the mid-cycle at Lenovo again. So, we will plan for 8/21 to 8/23/2019 at the Lenovo Campus in Morrisville, NC.<br />
<br><br />
*'''Action (eharney):''' To get more East Coast Cinder people involved.<br />
*'''Action (jungleboyj):''' To confirm with Lenovo and make sure to keep Tom Barron in the loop.<br />
<br><br />
<br />
===Migration from sqlalchemy-migrate to Alembic===<br />
*'''Summary:''' We need to do this at some point since sqlalchemy-migrate is going away. Glance has already done the work, and we can learn from them.<br />
<br><br />
*'''Action (smcginnis):''' Will look into compacting our DB upgrades again. Currently back at the Ocata level.<br />
*'''Action (smcginnis):''' Try to get guidance on how to proceed from zzzeek.<br />
<br><br />
<br />
===Syncing scheduler stats in an HA environment===<br />
*'''Summary:''' As more people are running an active/active HA environment, we need to think about this more to make sure that scheduler instances stay in sync.<br />
<br><br />
*'''Action (e0ne):''' Update the old spec he wrote about this to address comments: https://review.opendev.org/#/c/556529/2<br />
*'''Action (geguileo):''' To help out Ivan as he starts reworking the spec.<br />
<br><br />
<br />
===Pre-release checklist===<br />
*'''Summary:''' We have made mistakes over the last couple of releases, such as not getting libraries created before the release. We need to do better. Hopefully creating a checklist will help: https://docs.openstack.org/cinder/latest/contributor/releasecycle.html<br />
<br><br />
*'''Action (jungleboyj):''' To do follow-up patches to add additional details to the checklist.<br />
*'''Action (eharney):''' Has additional comments on the content to merge.<br />
<br><br />
<br />
===Privsep and rootwrap===<br />
*'''Summary:''' We haven't really improved things with privsep: we are still using the rootwrap style of commands instead of breaking things down into Python functions, which would be more secure. We should improve this.<br />
<br><br />
*'''Action (eharney):''' To take a look at his privsep patch for LIO and see if it could be pushed up.<br />
*'''Action (eharney):''' Try to get better granularity on privileges<br />
<br><br />
<br />
===abc removal===<br />
*'''Summary:''' ABC has never worked the way we intended. At this point it would be best to just remove it. We should plan to work on this in the U release as we are moving to Py3.<br />
<br><br />
<br />
===Delete volume from DB===<br />
*'''Summary:''' Support organizations would like a way to delete volumes that doesn't go through the driver code, but people have concerns about going straight to the DB. We need to find a middle ground.<br />
<br><br />
*'''Action (eharney):''' Going to look at the patch that was proposed and suggest that unmanage have an --ignore-state option added.<br />
<br><br />
<br />
===Quiesced snapshots===<br />
*'''Summary:''' People are still surprised we can't do this. We should work with Nova to see if we can make this happen.<br />
<br><br />
<br />
===Talk about merging the RSD driver===<br />
*'''Summary:''' 3rd Party CI looks good and the team has been responsive to comments. Close to being ready to merge.<br />
<br><br />
*'''Action (team):''' To review the driver and try to get it in place.<br />
*'''Action (e0ne):''' To make sure to review it given that he had experience with the PoC for the NVMe driver.<br />
<br><br />
<br />
===3rd Party CI===<br />
*'''Summary:''' We need to make sure that all 3rd Party CIs are running py3 by milestone 2 and that they have resolved all issues with the change in repo names.<br />
<br><br />
*'''Action (jungleboyj):''' Make sure that all CIs are running py3 testing by milestone-2 in Train.<br />
*'''Action (jungleboyj):''' Propose unsupported patches for those that fail to meet the requirement.<br />
*'''Action (jungleboyj):''' Need to also ensure that all 3rd Party CIs are running the Cinder Tempest Plugin.<br />
<br><br />
<br />
[https://www.youtube.com/watch?v=5rdyrccCqWw Video Recording Part 4]<br />
<br />
===Generic Backup Driver Discussion===<br />
*'''Summary:''' The team would still like to get this in place, and Ivan has continued to work on it. We need to review the patches that are out there to help this along.<br />
<br><br />
*'''Action (team):''' Review patches that are currently out there: https://review.opendev.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/backup-host-selection-algorigthm and https://review.opendev.org/#/c/500094/<br />
*'''Action (e0ne):''' To continue to push the team to review the patches.<br />
<br><br />
<br />
===Driver Folder Clean-up/Refactoring===<br />
*'''Summary:''' The volume driver folder is a bit of an inconsistent mess, and it would be good to clean it up. There is also a mess in the code to deal with.<br />
<br><br />
*'''Action (smcginnis):''' Going to work on creating a patch to bring consistency to the subfolders in volume/drivers.<br />
*'''Action (hemna):''' Will work to move exceptions for individual drivers out of cinder/exceptions.py into individual drivers.<br />
*'''Action (jungleboyj):''' There is old code left around from previously removed drivers in os-brick. Review os-brick and remove what is appropriate.<br />
<br><br />
<br />
===Optimized Backup drivers===<br />
*'''Summary:''' NetApp indicated interest in creating an optimized backup driver for their storage backend. The team didn't have a concern with this idea.<br />
<br><br />
*'''Action (erlon):''' To propose a review to implement this for the team to review.</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=CinderTrainSummitandPTGSummary&diff=169949CinderTrainSummitandPTGSummary2019-05-10T20:01:42Z<p>Jay Bryant: Created page with "=== Introduction === This page contains a summary of the subjects covered during the Train Summit and PTG held in Denver, Colorado, USA, April 28 to May 4, 2019. The full eth..."</p>
Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Cinder&diff=169948Cinder2019-05-10T18:39:47Z<p>Jay Bryant: /* PTG and Summit Meeting Summaries */</p>
<hr />
<div>'''Note:''' The wiki.openstack.org pages are for development team collaboration and documentation. If you are looking for official project documentation, please go to https://docs.openstack.org/cinder/latest/.<br />
<br />
'''Official Title:''' OpenStack Block Storage Cinder<br /><br />
<br />
'''PTL:''' Jay Bryant <jsbryant at electronicjungle d0t net><br /><br />
<br />
'''Mission Statement:''' <blockquote>To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.</blockquote><br />
<br />
== Description ==<br />
Cinder is a Block Storage service for OpenStack. It is designed to present storage resources to end users, which can be consumed by the OpenStack Compute project (Nova). This is done through the use of either a reference implementation (LVM) or plugin drivers for other storage. The short description of Cinder is that it virtualizes the management of block storage devices and provides end users with a self-service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device.<br />
<br />
== Documentation ==<br />
See https://docs.openstack.org/cinder<br />
<br />
== Core Team ==<br />
See [https://review.openstack.org/#/admin/groups/83,members current members].<br />
<br />
== Project Meetings ==<br />
See [[CinderMeetings|Meetings/Cinder]].<br />
<br />
== Getting in Touch ==<br />
We use the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list for discussions using subjects with the prefix "[cinder]".<br />
* Mailing list archive: http://lists.openstack.org/pipermail/openstack-discuss/<br />
* For discussions prior to Mon Nov 19 00:04:26 UTC 2018, see the old "dev list" archive: http://lists.openstack.org/pipermail/openstack-dev/<br />
<br />
<br />
We also hang out on IRC in #openstack-cinder on freenode.<br />
* IRC logs are available in: [http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/ http://eavesdrop.openstack.org/irclogs/#openstack-cinder/]<br />
<br />
== Related projects ==<br />
* [https://github.com/openstack/python-cinderclient Python Cinder client]<br />
* [https://wiki.openstack.org/wiki/CinderBrick Brick]<br />
<br />
== Core Volume Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://docs.openstack.org/cinder/latest/reference/support-matrix.html<br />
<br />
== Contributing Code ==<br />
For any new features, significant code changes, new drivers, or major bug fixes, please add a release note along with your patch. See the [http://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes Reno Documentation] for details on how to generate new release notes.<br />
<br />
=== How To Contribute A Driver ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver How to contribute a driver]<br />
<br />
=== How To Contribute A New Feature ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-new-feature How to contribute a new feature]<br />
<br />
== Sample cinder.conf ==<br />
The cinder.conf.sample file is no longer maintained and tested in the source tree. Currently, you can obtain a copy by running the command 'tox -e genconfig' in a cloned copy of the Cinder project and then looking in etc/cinder/ for the cinder.conf.sample file. <br />
<br />
The newly generated file will have all options in the Cinder project, driver options included.<br />
<br />
For more information about the generation of the file, please see: [https://github.com/openstack/cinder/blob/master/doc/source/devref/genconfig.rst Cinder Sample Configuration Devref]<br />
<br />
== Resources ==<br />
===Etherpads===<br />
====Active====<br />
*[https://etherpad.openstack.org/p/cinder-spec-review-tracking Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/cinder-outreachy-project-ideas Outreachy Project Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-default-iscsihelper-lio Default iscsihelper LIO]<br />
<br />
<br />
====Historic====<br />
*[https://etherpad.openstack.org/p/cinder-nova-api-changes Cinder/Nova API Changes]<br />
*[https://etherpad.openstack.org/p/newton-cinder-midcycle Newton Midcycle]<br />
*[https://etherpad.openstack.org/p/newton-cinder-summit-ideas Newton Summit Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-mataka-release-final-push Mitaka Final Push]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking Mitaka Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-midcycle Mitaka Midcycle Meetup- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-mitaka-summit-topics Mitaka Summit- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-meetup-summer-2015 Liberty Midcycle Meetup- Notes]<br />
*[https://etherpad.openstack.org/p/cinder-liberty-midcycle-meetup Liberty Midcycle Meetup- Planning]<br />
<br />
=== Review Links ===<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib%29+status%3Aopen&title=Cinder+Priorities+Dashboard&High+Priority+Changes=label%3AReview%2DPriority%3D2&Priority+Changes=label%3AReview%2DPriority%3D1&Blocked+Reviews=label%3AReview%2DPriority%3D%2D1 Cinder Priority Reviews Dashboard]<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fcinder%2Dspecs%29+status%3Aopen&title=Cinder+Review+Dashboard&Cinder+Specs=project%3Aopenstack%2Fcinder%2Dspecs&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2+NOT+reviewedby%3Aself&Small+Patches=NOT+label%3ACode%2DReview%3C%3D%2D1%2Ccinder%2Dcore+delta%3A%3C%3D10&Bug+Fixes+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+NOT+owner%3Aself+limit%3A50+branch%3Amaster+topic%3A%5Ebug.%2A+NOT+reviewedby%3Aself&Blueprints+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+NOT+topic%3A%5Ebug.%2A+NOT+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&5+Days+Without+Feedback=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+NOT+is%3Areviewed+age%3A5d&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself&Stable+Branches=branch%3A%5Estable%2F.%2A+NOT+reviewedby%3Aself Cinder Projects Review Inbox]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= In progress bugs]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= New bugs]<br />
* Stable Branches Reviews<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3A%5Estable%2F.%2A%29&title=Cinder+Project%3A+All+Stable+and+Driverfix+Branches&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself all stable and driverfix branches]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fstein&title=Cinder+stable%2Fstein+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/stein only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Frocky&title=Cinder+stable%2Frocky+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/rocky only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fqueens&title=Cinder+stable%2Fqueens+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/queens only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3Astable%2Focata+OR%0Abranch%3Astable%2Fpike%29&title=Cinder+Extended+Maintenance+Branches+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself extended maintenance (including driverfixes) only]<br />
<br />
=== PTG and Summit Meeting Summaries ===<br />
*[[CinderTrainSummitandPTGSummary|Train Summit and PTG Summary]]<br />
*[[CinderSteinMidCycleSummary|Stein Mid-Cycle Summary]]<br />
*[[CinderSteinPTGSummary|Stein PTG Summary]]<br />
*[[VancouverSummit2018Summary|Vancouver Summit 2018 Summary]]<br />
*[[CinderRockyPTGSummary|Rocky PTG Summary]]<br />
*[[CinderQueensPTGSummary|Queens PTG Summary]]<br />
*[[CinderPikePTGSummary|Pike PTG Summary]]<br />
<br />
=== Cinder YouTube Channel ===<br />
* [https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ/videos Midcycle/PTG Videos and Related Content]<br />
<br />
[[Category: Cinder]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Cinder&diff=169814Cinder2019-05-02T22:00:39Z<p>Jay Bryant: /* Etherpads */</p>
<hr />
<div>'''Note:''' The wiki.openstack.org pages are for development team collaboration and documentation. If you are looking for official project documentation, please go to https://docs.openstack.org/cinder/latest/.<br />
<br />
'''Official Title:''' OpenStack Block Storage Cinder<br /><br />
<br />
'''PTL:''' Jay Bryant <jsbryant at electronicjungle d0t net><br /><br />
<br />
'''Mission Statement:''' <blockquote>To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.</blockquote><br />
<br />
== Description ==<br />
Cinder is the Block Storage service for OpenStack. It is designed to present storage resources to end users, which can then be consumed by the OpenStack Compute project (Nova). This is done through either a reference implementation (LVM) or plugin drivers for other storage backends. In short, Cinder virtualizes the management of block storage devices and provides end users with a self-service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device.<br />
<br />
== Documentation ==<br />
See https://docs.openstack.org/cinder<br />
<br />
== Core Team ==<br />
See [https://review.openstack.org/#/admin/groups/83,members current members].<br />
<br />
== Project Meetings ==<br />
See [[CinderMeetings|Meetings/Cinder]].<br />
<br />
== Getting in Touch ==<br />
We use the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss openstack-discuss@lists.openstack.org] mailing list for discussions using subjects with the prefix "[cinder]".<br />
* Mailing list archive: http://lists.openstack.org/pipermail/openstack-discuss/<br />
* For discussions prior to Mon Nov 19 00:04:26 UTC 2018, see the old "dev list" archive: http://lists.openstack.org/pipermail/openstack-dev/<br />
<br />
<br />
We also hang out on IRC in #openstack-cinder on freenode.<br />
* IRC logs are available in: [http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/ http://eavesdrop.openstack.org/irclogs/#openstack-cinder/]<br />
<br />
== Related projects ==<br />
* [https://github.com/openstack/python-cinderclient Python Cinder client]<br />
* [https://wiki.openstack.org/wiki/CinderBrick Brick]<br />
<br />
== Core Volume Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://docs.openstack.org/cinder/latest/reference/support-matrix.html<br />
<br />
== Contributing Code ==<br />
For any new features, significant code changes, new drivers, or major bug fixes, please add a release note along with your patch. See the [http://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes Reno Documentation] for details on how to generate new release notes.<br />
<br />
=== How To Contribute A Driver ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver How to contribute a driver]<br />
<br />
=== How To Contribute A New Feature ===<br />
See [https://wiki.openstack.org/wiki/Cinder/how-to-contribute-new-feature How to contribute a new feature]<br />
<br />
== Sample cinder.conf ==<br />
The cinder.conf.sample file is no longer maintained or tested in the source tree. You can currently obtain a copy by running the command 'tox -e genconfig' in a clone of the Cinder project and then looking in etc/cinder/ for the cinder.conf.sample file. <br />
<br />
The newly generated file will contain every option in the Cinder project, driver options included.<br />
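As a rough sketch, the steps above look like the following (assuming git and tox are already installed; the repository location matches the GitHub mirror linked elsewhere on this page):<br />

```shell
# Hypothetical sketch: generate cinder.conf.sample from a Cinder checkout
git clone https://github.com/openstack/cinder
cd cinder
tox -e genconfig                       # writes etc/cinder/cinder.conf.sample
less etc/cinder/cinder.conf.sample     # inspect the generated options
```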
<br />
For more information about the generation of the file, please see: [https://github.com/openstack/cinder/blob/master/doc/source/devref/genconfig.rst Cinder Sample Configuration Devref]<br />
<br />
== Resources ==<br />
===Etherpads===<br />
====Active====<br />
*[https://etherpad.openstack.org/p/cinder-spec-review-tracking Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/cinder-outreachy-project-ideas Outreachy Project Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-default-iscsihelper-lio Default iscsihelper LIO]<br />
<br />
<br />
====Historic====<br />
*[https://etherpad.openstack.org/p/cinder-nova-api-changes Cinder/Nova API Changes]<br />
*[https://etherpad.openstack.org/p/newton-cinder-midcycle Newton Midcycle]<br />
*[https://etherpad.openstack.org/p/newton-cinder-summit-ideas Newton Summit Ideas]<br />
*[https://etherpad.openstack.org/p/cinder-mataka-release-final-push Mitaka Final Push]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking Mitaka Spec Review Tracking]<br />
*[https://etherpad.openstack.org/p/mitaka-cinder-midcycle Mitaka Midcycle Meetup- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-mitaka-summit-topics Mitaka Summit- Planning]<br />
*[https://etherpad.openstack.org/p/cinder-meetup-summer-2015 Liberty Midcycle Meetup- Notes]<br />
*[https://etherpad.openstack.org/p/cinder-liberty-midcycle-meetup Liberty Midcycle Meetup- Planning]<br />
<br />
=== Review Links ===<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib%29+status%3Aopen&title=Cinder+Priorities+Dashboard&High+Priority+Changes=label%3AReview%2DPriority%3D2&Priority+Changes=label%3AReview%2DPriority%3D1&Blocked+Reviews=label%3AReview%2DPriority%3D%2D1 Cinder Priority Reviews Dashboard]<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext+OR%0Aproject%3Aopenstack%2Fcinder%2Dspecs%29+status%3Aopen&title=Cinder+Review+Dashboard&Cinder+Specs=project%3Aopenstack%2Fcinder%2Dspecs&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2+NOT+reviewedby%3Aself&Small+Patches=NOT+label%3ACode%2DReview%3C%3D%2D1%2Ccinder%2Dcore+delta%3A%3C%3D10&Bug+Fixes+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+NOT+owner%3Aself+limit%3A50+branch%3Amaster+topic%3A%5Ebug.%2A+NOT+reviewedby%3Aself&Blueprints+without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50+branch%3Amaster+NOT+topic%3A%5Ebug.%2A+NOT+topic%3A%5Ebp.%2A+NOT+reviewedby%3Aself&5+Days+Without+Feedback=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+NOT+is%3Areviewed+age%3A5d&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself&Stable+Branches=branch%3A%5Estable%2F.%2A+NOT+reviewedby%3Aself Cinder Projects Review Inbox]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=INPROGRESS&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= In progress bugs]<br />
* [https://bugs.launchpad.net/cinder/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=-drivers&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field.has_blueprints.used=&field.has_no_blueprints.used= New bugs]<br />
* Stable Branches Reviews<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3A%5Estable%2F.%2A%29&title=Cinder+Project%3A+All+Stable+and+Driverfix+Branches&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself all stable and driverfix branches]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fstein&title=Cinder+stable%2Fstein+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/stein only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Frocky&title=Cinder+stable%2Frocky+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/rocky only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen%0Abranch%3Astable%2Fqueens&title=Cinder+stable%2Fqueens+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself stable/queens only]<br />
** [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29+status%3Aopen%0A%28branch%3A%5Edriverfixes%2F.%2A+OR%0Abranch%3Astable%2Focata+OR%0Abranch%3Astable%2Fpike%29&title=Cinder+Extended+Maintenance+Branches+Reviews&Needs+Final+%2B2=label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D2&Without+Negative+Feedback=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1+NOT+owner%3Aself+NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D2+limit%3A50&Probably+Not=NOT+label%3AWorkflow%3E%3D1+NOT+label%3AWorkflow%3C%3D%2D1+label%3ACode%2Dreview%3C%3D%2D1+NOT+owner%3Aself+limit%3A50&Own+Patches=owner%3Aself&Patches+I+%2D2%27d=label%3ACode%2DReview%3C%3D%2D2%2Cself extended maintenance (including driverfixes) only]<br />
<br />
=== PTG and Summit Meeting Summaries ===<br />
*[[CinderSteinMidCycleSummary|Stein Mid-Cycle Summary]]<br />
*[[CinderSteinPTGSummary|Stein PTG Summary]]<br />
*[[VancouverSummit2018Summary|Vancouver Summit 2018 Summary]]<br />
*[[CinderRockyPTGSummary|Rocky PTG Summary]]<br />
*[[CinderQueensPTGSummary|Queens PTG Summary]]<br />
*[[CinderPikePTGSummary|Pike PTG Summary]]<br />
<br />
=== Cinder YouTube Channel ===<br />
* [https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ/videos Midcycle/PTG Videos and Related Content]<br />
<br />
[[Category: Cinder]]</div>Jay Bryanthttps://wiki.openstack.org/w/index.php?title=Forum/Denver2019&diff=169615Forum/Denver20192019-04-24T13:20:18Z<p>Jay Bryant: /* Wednesday May 1 */</p>
<hr />
<div>== Etherpads ==<br />
The grand list of all of the Denver 2019 [[Forum]] etherpads. Please add links to etherpads below!<br />
(You might use the prior Forum entries for ideas: https://wiki.openstack.org/wiki/Forum/Berlin2018 )<br />
<br />
At the Forum the entire OpenStack community (users and developers) gathers to brainstorm the requirements for the next release, gather feedback on the past version, and have strategic discussions that go beyond just one release cycle. The Denver Forum is the start of the planning phase for the '''Train''' development cycle. Please prepare session ideas with feedback from the '''Stein''' release in mind.<br />
<br />
=== Monday April 29 ===<br />
[11:10-11:50] [https://etherpad.openstack.org/p/DEN-keystone-forum-sessions-app-creds Keystone Application Credentials: Status and Planning]<br />
<br />
[12:00-12:40] [https://etherpad.openstack.org/p/DEN-keystone-forum-sessions-operator-feedback Keystone Operator Feedback]<br />
<br />
[15:50-16:30] [https://etherpad.openstack.org/p/storyboard-pain-points Ibuprofen for Your StoryBoard Pain Points]<br />
<br />
=== Tuesday April 30 ===<br />
[9:50-10:30] [https://etherpad.openstack.org/p/DEN-deployment-tools-capabilities Deployment tools: define common capabilities]<br />
<br />
[10:50-11:30] [https://etherpad.openstack.org/p/DEN-ECG-MVP-feedback Edge Computing Group MVP Architecture feedback]<br />
<br />
[11:40-12:20] [https://etherpad.openstack.org/p/DEN-ECG-use-cases-discssion-feedback Edge Computing use cases discussion and feedback]<br />
<br />
[13:40-14:20] [https://etherpad.openstack.org/p/DEN-update-on-placement-extraction-from-nova Update on placement extraction from nova]<br />
<br />
[14:30-15:10] [https://etherpad.openstack.org/p/DEN-ECG-roadmap-and-feedback What is the Edge Computing Group and what it should be doing in the next 6 months?]<br />
<br />
[14:30-15:10] [https://etherpad.openstack.org/p/DEN-ptl-tips-and-tricks PTL Tips and Tricks]<br />
<br />
[16:20-17:00] [https://etherpad.openstack.org/p/new-contribs-state-and-deduplication Welcoming New Contributors State of the Union and Deduplication of Efforts] <br />
<br />
[17:10-17:50] [https://etherpad.openstack.org/p/DEN-ops-war-stories-LT Ops War Stories/Architecture Show and Tell Lightning Talks]<br />
<br />
=== Wednesday May 1 ===<br />
[13:40-14:20] [https://etherpad.openstack.org/p/DEN-osc-compute-api-gaps Closing compute API feature gaps in the openstack CLI]<br />
<br />
[13:40-14:20] [https://etherpad.openstack.org/p/denver-forum-cinder-improving-drvr-cap-rep Improving Cinder Driver Capability Reporting]<br />
<br />
[14:30-15:10] [https://etherpad.openstack.org/p/denver-forum-cinder-direct-user-feedback Cinder Opportunity for Direct User Feedback]<br />
<br />
[14:50-15:30] [https://etherpad.openstack.org/p/consumption-models Consumption Models for Service Projects]<br />
<br />
[16:20-17:00] [https://etherpad.openstack.org/p/DEN-drive-common-goals OpenStack: how to drive common goals]<br />
<br />
==List of Brainstorming Etherpads==<br />
<br />
===Catch-alls===<br />
If you want to post an idea but aren't working with a specific team or working group, you can use these:<br />
* [https://etherpad.openstack.org/p/DEN-Train-TC-brainstorming Technical Committee Catch-all]<br />
* [https://etherpad.openstack.org/p/DEN-Train-UC-brainstorming User Committee Catch-all]<br />
<br />
===Etherpads from Teams and Working Groups===<br />
* [https://etherpad.openstack.org/p/DEN-auto-scaling-SIG Auto scaling SIG]<br />
* [https://etherpad.openstack.org/p/DEN-Train-EWG-brainstorming Enterprise Working Group (EWG)]<br />
* [https://etherpad.openstack.org/p/Denver-2019-Forum-DCN-Brainstorming DCN]<br />
* [https://etherpad.openstack.org/p/DEN-fenix-forum-brainstorming Fenix]<br />
* [https://etherpad.openstack.org/p/FC_SIG_Denver_forum_topics First Contact SIG]<br />
* [https://etherpad.openstack.org/p/DEN-train-ironic-brainstorming Ironic]<br />
* [https://etherpad.openstack.org/p/kayobe-train-forum Kayobe]<br />
* [https://etherpad.openstack.org/p/DEN-keystone-forum-sessions Keystone]<br />
* [https://etherpad.openstack.org/p/DEN-train-forum-manila-brainstorming Manila]<br />
* [https://etherpad.openstack.org/p/DEN-train-nova-brainstorming Nova]<br />
* [https://etherpad.openstack.org/p/edge-wg-forum-preparation-denver-2019 OSF Edge Computing Group]<br />
* [https://etherpad.openstack.org/p/oslo-train-topics Oslo]<br />
* [https://etherpad.openstack.org/p/DEN-Train-PublicCloudWG-brainstorming Public Cloud WG]<br />
* [https://etherpad.openstack.org/p/DEN-train-forum-qa-brainstorming QA]<br />
* [https://etherpad.openstack.org/p/DEN-self-healing-SIG Self healing SIG]<br />
* [https://etherpad.openstack.org/p/SB_train_forum_brainstorming StoryBoard]<br />
* [https://etherpad.openstack.org/p/DEN-Train-TC-brainstorming Technical Committee]<br />
* [https://etherpad.openstack.org/p/tripleo-train-topics TripleO]<br />
<br />
===Etherpads from Pilot projects===<br />
* [https://etherpad.openstack.org/p/stx-forum-preparation-denver-2019 StarlingX]</div>Jay Bryant