https://wiki.openstack.org/w/api.php?action=feedcontributions&user=John-griffith&feedformat=atom
OpenStack - User contributions [en] | 2024-03-19T03:40:59Z | MediaWiki 1.28.2
https://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=77504 | Cinder/tested-3rdParty-drivers | 2015-04-14T20:04:23Z
<p>John-griffith: /* FAQ */</p>
<hr />
<div>= Driver Testing =<br />
<br />
=== Description ===<br />
<br />
The Cinder community (and other OpenStack projects) have agreed that if a vendor wishes to submit a driver for their particular storage device, that vendor is also required to set up a third-party CI system in their lab which runs [https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F Tempest volume tests] against their storage device for every Cinder commit, and provides feedback into Gerrit.<br />
<br />
=== Deadlines ===<br />
All [https://github.com/openstack/cinder/tree/master/cinder/volume/drivers volume drivers] need to have a CI by the end of K-3, March 19th 2015. Failure will result in removal in the Kilo release. This was discussed in the #openstack-meeting IRC room during the Cinder meeting. [http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21 Read the discussion logs]<br />
<br />
=== Third Party CI Requirements ===<br />
* See the official [http://ci.openstack.org/third_party.html Third Party Testing] wiki.<br />
* Test all volume drivers your company has integrated in Cinder.<br />
* Test all fabrics your solution uses.<br />
<br />
<br />
For example, if your company has two volume drivers in Cinder and they both use iSCSI and Fibre Channel, you would need a CI that tests against four backends and reports the results for each backend, for every Cinder upstream patch.<br />
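<br />
In that scenario, the CI might report four separate jobs, one per driver/fabric combination (these job names are purely illustrative):<br />
<pre><br />
foo-driver1-iscsi-dsvm-tempest<br />
foo-driver1-fc-dsvm-tempest<br />
foo-driver2-iscsi-dsvm-tempest<br />
foo-driver2-fc-dsvm-tempest<br />
</pre><br />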
<br />
=== Existing CI Solutions ===<br />
* Puppet modules for deploying OpenStack CI<br />
** [https://github.com/rasselin/os-ext-testing Git repo]<br />
** [https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample Sample Jenkins Job Builder Job Template for Cinder drivers]<br />
** A fork of Jay Pipes' external testing repo, and more up to date. Uses nodepool.<br />
<br />
* Simple OpenStack Continuous Integration (sos-ci)<br />
** [https://github.com/j-griffith/sos-ci Git repo]<br />
** Builds Devstack virtual machines with Ansible.<br />
<br />
* Jay Pipes' external testing series<br />
** '''Note:''' Jay's repo is outdated, but the articles are useful to read. A more up-to-date fork exists (see the Puppet modules above).<br />
** [https://github.com/jaypipes/os-ext-testing Git Repo]<br />
** [http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/ Understanding the OpenStack CI]<br />
** [http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/ Setting up CI Part 1]<br />
** [http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2 Setting up a CI Part 2]<br />
<br />
=== Current Reporting Cinder CIs ===<br />
See the [https://wiki.openstack.org/wiki/Cinder/third-party-ci-status list].<br />
<br />
=== Questions ===<br />
* Join [https://wiki.openstack.org/wiki/Meetings/ThirdParty Third Party Meeting]<br />
* Reach out to IRC nicks DuncanT or asselin in #openstack-cinder on Freenode.<br />
<br />
=== FAQ ===<br />
<br />
==== What tests do I use? ====<br />
Use the OpenStack integration test suite, [http://git.openstack.org/cgit/openstack/tempest Tempest]. The volume-related tests can be started with the following command from a Tempest repo:<br />
<br />
<pre><br />
tox -e all -- volume | tee -a console.log.out<br />
</pre><br />
<br />
For those using devstack-gate, export this variable before running the job:<br />
<pre><br />
export DEVSTACK_GATE_TEMPEST_REGEX="volume"<br />
</pre><br />
<br />
==== How do I configure DevStack so my driver passes Tempest? ====<br />
A sample local.conf for a devstack setup:<br />
<br />
<pre><br />
[[local|localrc]]<br />
ADMIN_PASSWORD=password<br />
MYSQL_PASSWORD=password<br />
RABBIT_PASSWORD=password<br />
SERVICE_PASSWORD=password<br />
SERVICE_TOKEN=password<br />
<br />
# These options define expected driver capabilities<br />
TEMPEST_VOLUME_DRIVER=foo<br />
TEMPEST_VOLUME_VENDOR="Foo Inc"<br />
TEMPEST_STORAGE_PROTOCOL=iSCSI<br />
<br />
# These options allow you to specify a branch other than "master" be used<br />
CINDER_REPO=https://review.openstack.org/openstack/cinder<br />
CINDER_BRANCH=refs/changes/83/72183/4<br />
<br />
# Disable security groups entirely<br />
Q_USE_SECGROUP=False<br />
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver<br />
CINDER_SECURE_DELETE=False<br />
<br />
[[post-config|$CINDER_CONF]]<br />
volume_driver=cinder.volume.drivers.foo.FooDriver<br />
</pre><br />
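<br />
As a rough sketch of how this file is consumed (the /opt/stack paths below are assumptions; adjust for your environment), drop it into your devstack checkout, stack, and then run the volume tests as shown above:<br />
<pre><br />
# Paths below are illustrative<br />
cd /opt/stack/devstack<br />
cp /path/to/local.conf .<br />
./stack.sh<br />
cd /opt/stack/tempest<br />
tox -e all -- volume | tee -a console.log.out<br />
</pre><br />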
<br />
==== How do I run my CI to test all cinder patches with my driver not yet merged? ====<br />
If using devstack-gate, use the pre-test-hook to cherry-pick your driver on top of the cinder patch under review:<br />
<br />
<pre><br />
function pre_test_hook {<br />
cd $BASE/new/cinder<br />
git fetch https://review.openstack.org/openstack/cinder $PATCH && git cherry-pick FETCH_HEAD<br />
}<br />
</pre><br />
<br />
Note: you may wish to substitute the review.openstack.org repo with your own GitHub location before submitting to Gerrit.<br />
<br />
Otherwise, you can make the changes prior to calling stack.sh (a sketch follows), or via a custom [http://docs.openstack.org/developer/devstack/plugins.html devstack plugin].<br />
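<br />
A minimal sketch of that manual approach, reusing the fetch/cherry-pick command from the hook above ($PATCH is the Gerrit ref of your unmerged driver change, and the /opt/stack paths are assumptions):<br />
<pre><br />
# Apply the unmerged driver change to the cinder tree, then stack<br />
cd /opt/stack/cinder<br />
git fetch https://review.openstack.org/openstack/cinder $PATCH && git cherry-pick FETCH_HEAD<br />
cd /opt/stack/devstack<br />
./stack.sh<br />
</pre><br />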
<br />
==== When will third-party CI voting be required? ====<br />
Once third-party CIs become more common and stable, we'll revisit the subject. For now you can review the [http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html discussion] on the decision.<br />
<br />
<br />
[[Category: Cinder]]</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Getting_The_Code&diff=63198 | Getting The Code | 2014-09-21T15:57:52Z
<p>John-griffith: </p>
<hr />
<div><br />
= Getting the source code =<br />
<br />
OpenStack manages source code in git using a code review tool called Gerrit. The workflow for working with Gerrit is described at [[GerritWorkflow]]. Git repositories are mirrored to [http://git.openstack.org/ git.openstack.org] and [https://github.com/openstack Github].<br />
<br />
To get a copy of an OpenStack project, you can clone a repo from [http://git.openstack.org/ git.openstack.org] and browse the source code at [http://git.openstack.org/cgit git.openstack.org/cgit]. For instance, to clone the Swift repo:<br />
<br />
<pre><nowiki><br />
git clone git://git.openstack.org/openstack/swift<br />
</nowiki></pre><br />
<br />
Alternatively, you can use the [https://github.com/openstack Github mirror] to clone repos and browse code. The git.openstack.org and GitHub mirrors are maintained the same way and contain the same code, so you can use either one, with the difference being that git.openstack.org is hosted by the OpenStack organization.<br />
<br />
You can also get stable releases of the code from the OpenStack projects on Launchpad, for example (a git-based alternative is sketched after this list):<br />
<br />
* [https://launchpad.net/nova/icehouse/2014.1.1 Compute (Nova)]<br />
* [https://launchpad.net/swift/icehouse/1.13.1 Object Storage (Swift)] <br />
* [https://launchpad.net/glance/icehouse/2014.1.1 Image Service (Glance)] <br />
* [https://launchpad.net/neutron/icehouse/2014.1.1 Networking (Neutron)] <br />
* [https://launchpad.net/cinder/icehouse/2014.1.1 Block Storage (Cinder)]<br />
* [https://launchpad.net/keystone/icehouse/2014.1.1 Identity (Keystone)]<br />
* [https://launchpad.net/horizon/icehouse/2014.1.1 Dashboard (Horizon)]<br />
* [https://launchpad.net/ceilometer/icehouse/2014.1.1 Telemetry (Ceilometer)]<br />
* [https://launchpad.net/heat/icehouse/2014.1.1 Orchestration (Heat)]<br />
* [https://launchpad.net/trove/icehouse/2014.1 Database Service (Trove)]<br />
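<br />
If you prefer git, stable releases are also maintained as stable branches in each repo; a minimal sketch (the branch name is an example):<br />
<pre><nowiki><br />
git clone git://git.openstack.org/openstack/cinder<br />
cd cinder<br />
git checkout stable/icehouse<br />
</nowiki></pre><br />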
<br />
= Getting dependencies =<br />
<br />
See [[DevStack]].<br />
<br />
----<br />
[[Category:Nova]]<br />
[[Category:Swift]] [[Category:HowTo]]<br />
<br />
= Hacking on your laptop and running unit tests =<br />
<br />
Questions about running unit tests locally are fairly common. While all of the projects are pretty similar in how this works, it's best to consult each project's documentation for things like setting up a dev environment and running unit tests.<br />
<br />
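As a rough, generic sketch (most OpenStack projects at this point use tox, but check your project's docs before relying on this):<br />
<pre><nowiki><br />
git clone git://git.openstack.org/openstack/cinder<br />
cd cinder<br />
tox -e py27   # run the unit tests<br />
tox -e pep8   # run the style checks<br />
</nowiki></pre><br />
<br />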
Each project should publish this info to docs.openstack.org (http://docs.openstack.org/developer/<PROJECT_NAME>/devref/development.environment.html).<br />
<br />
For example:<br />
<br />
http://docs.openstack.org/developer/cinder/devref/development.environment.html</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=62913 | Cinder/third-party-ci-status | 2014-09-17T16:39:47Z
<p>John-griffith: </p>
<hr />
<div>Cinder third-party-ci-status<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Driver Name !! Contact !! CI Status !! Issues !! ETA !! Implementation<br />
|-<br />
| SolidFireDriver || jgriffith <john.griffith8@gmail.com> || Running In-House, not yet reporting || Random Attach Failures with Encrypted volume types || Reporting || combination of Duncan's KISS_CI and ansible-playbook<br />
|-<br />
| PureISCSIDriver || openstack-dev@purestorage.com, patrickeast (irc) || Watching openstack/cinder but not commenting/voting yet, tests appear stable but are very slow with current configuration || Parallel testing has intermittent test failures seemingly related to ISCSI connection problems, without parallel testing builds are very slow || Once we've verified we can handle the test load on openstack/cinder we will enable commenting and request permission to vote || Jenkins following jaypipes tutorial<br />
|-<br />
| DellEQLSanISCSIDriver || smcginnis <openstack-cinder@dell.com> || Watching openstack-dev/sandbox, noop test is stable, tempest test runs failing || test_volume_boot_pattern fails, then all others || Hoping to clear up tempest issues ASAP, then switch to use EQL driver, then watching openstack/cinder, then voting || Jenkins following jaypipes tutorial<br />
|-<br />
| 3PAR - ISCSI|| Ramy Asselin <ramy.asselin@hp.com> irc:asselin|| Watching Cinder || Ignoring *test_volume_create_get_update_delete_as_clone || Running on silent mode. Will re-enable Sept 2. || Fork of jaypipes using nodepool: https://github.com/rasselin/os-ext-testing<br />
|-<br />
| 3PAR - FC|| Ramy Asselin <ramy.asselin@hp.com> irc:asselin|| WIP || FC Passthrough working, but attaches still fail. Need to debug. || No ETA || Fork of jaypipes using nodepool: https://github.com/rasselin/os-ext-testing<br />
|-<br />
| Lefthand || Ramy Asselin <ramy.asselin@hp.com> irc:asselin|| Watching Cinder || Ignoring *test_volume_create_get_update_delete_as_clone || Test results posted|| Fork of jaypipes using nodepool: https://github.com/rasselin/os-ext-testing<br />
|-<br />
| X-IO FC & ISCSI || Richard Hedlind <openstack-ci@x-io.com> || Watching openstack-dev/sandbox, only noop stable || Fails in test_snapshot_pattern || Hope to be stable by end of this week || Jenkins following jaypipes tutorial<br />
|-<br />
| IBM Storwize Driver || Yixuan Zhang <yixuanzh@cn.ibm.com> || Ever watching cinder; Gerrit port 29418 was disabled for China users. 1 test case not stable || test_volume_create_get_update_delete_from_image sometimes failed || try to establish a new Jenkins master in Israel lab || Jenkins following jaypipes tutorial<br />
|-<br />
| Scality Driver || David Pineau <david.pineau@scality.com> || Watching cinder, currently disabled as testing has not yet reached success: http://r.ci.devsca.com:8080/ || 59 tests currently failing (some probably due to misconfiguration): test_delete_server, test_server_rescue_negative, test_volume_quotas_negative, test_volume_types, test_snapshot_metadata, test_snapshot_actions, test_volumes_get, test_volumes_actions, test_volumes_backup, test_volumes_metadata, test_volume_transfer, test_volume_boot_pattern, test_encrypted_cinder_volumes, boto.test_ec2_instance_run... || Silent until success on a test run is achieved, will report when considered stable (WIP) || Jenkins following jaypipes tutorial<br />
|-<br />
| IBM XIV || Alon Marx <alonma@il.ibm.com> || Running In-House, not yet reporting || Random Failures, logs filling up causing additional failures || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| EMC VMAX || Xing Yang <xing.yang@emc.com> || Reporting, but not for every commit || Reproducible failures with test_volume_boot_pattern. Random Failures. Need to clean up manually. || Test results posted || Jenkins following jaypipes tutorial <br />
|-<br />
| EMC VNX || Xing Yang <xing.yang@emc.com> || Reporting, but not for every commit || Can't handle more than 20 commits a day with 1 slave node. Need to clean up manually. || Test results posted || Jenkins following jaypipes tutorial <br />
|-<br />
| EMC XIO || Xing Yang <xing.yang@emc.com> || Running In-House, not yet reporting || Random Failures. || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| IBM DS8000 || Eddie Lin <edlin@us.ibm.com> || Watching openstack-dev/sandbox, noop test is stable, dsvm-tempest-full failing || Was ok a week or two ago but now test_volume_boot_pattern fails consistently and other random failures (1-2 additional per run) || Silent, not voting until stable || Jenkins following jaypipes tutorial<br />
|-<br />
| HDS CI - HNAS || Marcus/Erlon irc:marcusvrn,erlon <OpenstackDevelopment@hds.com> || Watching openstack-dev/sandbox, tests relatively stable (api.volume). || Ignoring test_volume_create_get_update_delete_as_clone using iSCSI driver. NFS driver is working well. || Plan on watching cinder project at the end of this week || Jenkins following rasselin tutorial + VMWare tools<br />
|-<br />
| HDS CI - HBSD || Marcus/Erlon irc:marcusvrn,erlon <OpenstackDevelopment@hds.com> || Start to create a job for the drivers (the job template are created already) || || Plan on having jobs watching/working on sandbox project at the beginning of the next week || Jenkins following rasselin tutorial + VMWare tools<br />
|-<br />
| IBM NAS || Sasikanth Eda <sasikanth.eda@in.ibm.com> || Running In-House, not yet reporting || Need to clean up manually. || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| IBM GPFS || Sasikanth Eda <sasikanth.eda@in.ibm.com> || Running In-House, not yet reporting || Need to clean up manually. || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| Datera || Mike Perez <thingee@gmail.com> || Running In-House, not yet reporting || Waiting for a stable Datera storage image to bring vms up to test against. || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| Nimble Storage - iSCSI || Jay Wang <jwang@nimblestorage.com> || Running In-House, not yet reporting || Failed on the dsvm-tempest-full, but selected tests would work, such as API test. Still looking into all tempest errors. Sometimes there's communication issue between Zuul, Gearman and Jenkins jobs. Still looking into the communication issue. May need to re-setup the CI environment. Need to fix the CI email distribution name as well. || Silent, not voting until stable || Jenkins following jaypipes tutorial <br />
|-<br />
| NetApp || Andrew Kerr <xdl-openstack-jenkins@netapp.com> || WIP - Part of a larger project || Working to bring up larger system || A couple weeks || Customized fork of OpenStack Infra Gate <br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|}<br />
<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Failing Tempest Test !! Frequency (intermittent/consistent) !! LP-Bug !! Header text !! Header text<br />
|-<br />
| test_volume_boot_pattern || consistent || https://bugs.launchpad.net/tempest/+bug/1351441 || Example || Example<br />
|-<br />
| tempest.api.volume.admin.test_snapshots_actions.SnapshotsActionsTestXML|| intermittent|| Example || Example || Example<br />
|-<br />
| test_volume_create_get_update_delete_as_clone || consistent|| https://bugs.launchpad.net/cinder/+bug/1349639|| Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=61482 | Cinder/third-party-ci-status | 2014-08-27T17:22:49Z
<p>John-griffith: </p>
<hr />
<div>Cinder third-party-ci-status<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Driver Name !! Contact !! CI Status !! Issues !! ETA !! Implementation<br />
|-<br />
| SolidFireDriver || jgriffith <john.griffith8@gmail.com> || Running In-House, not yet reporting || Random Attach Failures with Encrypted volume types || Plan on making results public this week, not voting until stable || combination of Duncan's KISS_CI and ansible-playbook<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example || Example<br />
|}<br />
<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Failing Tempest Test !! Frequency (intermittent/consistent) !! LP-Bug !! Header text !! Header text<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=61476 | Cinder/third-party-ci-status | 2014-08-27T17:03:51Z
<p>John-griffith: </p>
<hr />
<div>Cinder third-party-ci-status<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Driver Name !! CI Status !! Issues !! ETA !! Implementation<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}<br />
<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Failing Tempest Test !! Frequency (intermittent/consistent) !! LP-Bug !! Header text !! Header text<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=61475 | Cinder/third-party-ci-status | 2014-08-27T17:02:52Z
<p>John-griffith: </p>
<hr />
<div>Cinder third-party-ci-status<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Driver Name !! CI Status !! Issues !! ETA !! Implementation<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}<br />
<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Header text !! Header text !! Header text !! Header text !! Header text<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=61474 | Cinder/third-party-ci-status | 2014-08-27T17:01:59Z
<p>John-griffith: </p>
<hr />
<div>Cinder third-party-ci-status<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! Driver Name !! CI Status !! Issues !! ETA !! Implementation<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example || Example<br />
|}</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder/third-party-ci-status&diff=61473 | Cinder/third-party-ci-status | 2014-08-27T16:58:58Z
<p>John-griffith: Created page with "Cinder third-party-ci-status"</p>
<hr />
<div>Cinder third-party-ci-status</div>John-griffith
https://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=60969 | CinderMeetings | 2014-08-20T14:02:08Z
<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code>, on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if placing items in the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''Aug 20th, 2014 16:00 UTC'''<br />
<br />
* FPF Tomorrow Aug 21'st (jgriffith)<br />
* Time to start thinking about the Summit and how to be more effective with our time there (jgriffith)<br />
* The idea of a maintenance/ish release (jgriffith)<br />
<br />
== Previous meetings ==<br />
'''Aug 13th, 2014 16:00 UTC'''<br />
NO MEETING TODAY, MID CYCLE MEETUP<br />
<br />
'''Aug 6th, 2014 16:00 UTC'''<br />
* Cinder mid cycle meetup next week August 12-14 (scottda)<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014 <br />
** HP site should be set. Ping scottda with any issues/problems/concerns<br />
** Virtual meetup will need to be taken care of<br />
* Volume replication (ronenkat)<br />
** Alternative approach based on jgriffith driver based replication: https://etherpad.openstack.org/p/juno-cinder-volume-replication-apparochs<br />
<br />
'''July 30'th, 2014 16:00 UTC'''<br />
* Planning cinderclient tag for Thursday morning July 31'st, let's catch up on client changes and testing prior to that (jgriffith)<br />
* Breaking the inheritance between data and control path in Volume drivers https://review.openstack.org/#/c/105923/ (jgriffith)<br />
* Consistency groups https://review.openstack.org/#/c/104732/ (xyang)<br />
* Hitachi Block Storage cinder driver https://review.openstack.org/#/c/90379/ (saguchi)<br />
* Volume replication https://review.openstack.org/#/c/106718/ (ronenkat)<br />
** 17:00 UTC - Volume replication driver owner overview and Q & A<br />
** Call-in information: passcode: 6406941, call-in numbers: https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2<br />
* NFS secure option -- default to 666 vs 660 vs force admin choice (bswartz)<br />
* It is code cleanup tag merge week (DuncanT)<br />
** https://review.openstack.org/#/q/project:openstack/cinder+comment:code_cleanup_batching+-status:merged,n,z<br />
<br />
'''July 23th, 2014 16:00 UTC'''<br />
<br />
* J2 Milestone (DuncanT)<br />
** JGriffith favours a freeze exception for all drivers that currently have code / BP up, but bouncing all new ones <br />
** Review priorities<br />
*** Driver specs<br />
*** CG groups - a big change that requires driver changes, so needs lots of eyes and time for driver maintainers to do their thing too: https://review.openstack.org/#/c/104743/<br />
*** Pool scheduling - https://review.openstack.org/#/c/98715/<br />
*** Others?<br />
* Plug for weekly 3rd Party CI meeting (Mondays at 18:00 UTC [1 pm Central]) (jungleboyj)<br />
** I attended this week's meeting and gave a high level status. They are looking for more participation.<br />
* ProphetStor Cinder drivers (stevetan)<br />
** Get feedback on progress of our DPL driver and documentation required https://review.openstack.org/#/c/95829/<br />
** Get direction from community for our Federator SDS driver https://review.openstack.org/#/c/99616/<br />
* Volume replication - work in progress (ronenkat)<br />
** https://review.openstack.org/#/c/106718/2<br />
* NFS Security, if there's time.<br />
** https://blueprints.launchpad.net/cinder/+spec/secure-nfs<br />
<br />
'''July 16th, 2014 16:00 UTC'''<br />
* Putting the fun back into cinder development.<br />
** There's been a mailing list thread recently about how nit-picky reviews are getting about typos, white space and the like, and how it is a motivation killer. I'm inclined to agree - the formatting of the doc strings, full stops at the end of comments etc. doesn't actually improve the code much at all, and getting a -1 for it is a buzz kill of the highest order. Should we leave that sort of thing to the gate, and say that if there is no hacking check for it then it isn't important in general? (DuncanT)<br />
<br />
* How to proceed with cinder/openstack requirements? python-dbus for https://review.openstack.org/99013, see mailing list conclusion http://lists.openstack.org/pipermail/openstack-dev/2014-July/040182.html (flip214)<br />
* code churn, not sure where/when to start, fear of merge conflicts (flip214)<br />
<br />
* Hitachi Block Storage cinder driver (tsekiyama)<br />
** We want to get some feedback about how we can make this forward<br />
** Review: https://review.openstack.org/#/c/90379/<br />
<br />
* Log translations https://review.openstack.org/#/c/105665/ is still stuck - any thoughts? Options I can see: (DuncanT)<br />
** A better technical solution - should be possible where the message format is not expanded outside the logging call i.e.:<br />
Ok:<br />
<nowiki><br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
</nowiki><br />
Not ok:<br />
<nowiki><br />
msg = _("The flubigar id %d exploded messily") % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />
<br />
** We don't break up our message categories<br />
*** This makes life harder for the translation team, makes us inconsistent with Openstack in general but keeps the code from descending into ugliness<br />
** Related discussion on enabling translation (jungleboyj):<br />
*** Have two patches awaiting approval: Explicit import of _() https://review.openstack.org/105315 and enable lazy translation: https://review.openstack.org/105561<br />
*** Need to get these merged so we are running with the changes.<br />
* 3rd Party CI (jungleboyj):<br />
** Clarification on when drivers are going to be removed.<br />
<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
* flip214 to jgriffith: Status of Separation of Connectors from Driver/Device Interface?<br />
* Quick check: Is everybody happy in principle with the text of https://wiki.openstack.org/wiki/CinderCodeCleanupPatches ? (DuncanT)<br />
<br />
<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will suck at said tracking, inevitably)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]l<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3'rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Define Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]l<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we are trying check-in for quite a while, we want to get some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cyce Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14,15,17,18, 21-25, 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** which way to go? There are two WIPs.<br />
** comparisons between the two approaches? Any wiki/etherpad present or to be prepared for documenting opinions?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Back manager design improvement/rewriting for better rpc message handling.<br />
** Backup service for multi pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns on the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss about it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs came in<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
_Meeting cancelled and summary discussion held on #openstack-cinder_<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Junos opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issue pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26'th, if you want to run send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffith
https://wiki.openstack.org/w/index.php?title=Cinder&diff=60336 | Cinder | 2014-08-12T21:51:12Z
<p>John-griffith: /* OpenStack Block Storage ("Cinder") */</p>
<hr />
<div><br />
<br />
= OpenStack Block Storage ("Cinder") =<br />
<br />
{| border="1" cellpadding="2" cellspacing="0"<br />
| [https://launchpad.net/cinder/ Cinder on Launchpad (including bug tracker and blueprints)]<br />
|-<br />
| [https://github.com/openstack/cinder Source code]<br />
|-<br />
| [http://docs.openstack.org/developer/cinder/ Developer docs]<br />
|}<br />
<br />
== Mission Statement ==<br />
To implement services and libraries to provide on-demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.<br />
<br />
== Description ==<br />
Cinder is a Block Storage service for OpenStack. It's designed to present storage resources to end users that can be consumed by the OpenStack Compute Project (Nova), using either a reference implementation (LVM) or plugin drivers for other storage devices. The short description of Cinder is that it virtualizes pools of block storage devices and provides end users with a self-service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device.<br />
<br />
== Related projects ==<br />
* Python Cinder client<br />
* Block Storage API documentation<br />
<br />
== What is Cinder ? ==<br />
<br />
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but has become an independent project since the Folsom release.<br />
<br />
== Reasoning: ==<br />
# Nova is currently a very large project; managing all of the dependencies and linkages of services within Nova can make advancing new features and functionality very difficult.<br />
# As a result of the many components and dependencies in Nova, it's difficult for anybody to really have a complete view of Nova and to be a true expert. This makes the job of a core team member on Nova very difficult, and inhibits good thorough reviews of bug and blueprint submissions. <br />
# Block storage is a critical component of [[OpenStack]], as such it warrants focused and dedicated attention.<br />
# Having Block Storage as a dedicated core project in [[OpenStack]] enables the ability to greatly improve functionality and reliability of the block storage component of [[OpenStack]]<br />
<br />
== Documents: ==<br />
* Cinder deep dive (updated for Grizzly): [[File:cinder-grizzly-deep-dive-pub.pdf]]<br />
<br />
== Minimum Driver Features ==<br />
See [https://github.com/openstack/cinder/blob/master/doc/source/devref/drivers.rst driver dev docs]<br />
<br />
=== Keeping consistent with multi-backend ===<br />
In order to maintain consistency with multi-backend, do not use FLAGS.my_flag directly; instead, use the self.configuration object that is provided to the volume drivers. If this does not exist, look at lvm.py and add it to your driver. Using FLAGS.my_flag instead of self.configuration.my_flag will cause multi-backend to not work properly: multi-backend relies on the configuration options living in a backend-specific config group in the config file, and self.configuration abstracts that away from the drivers.<br />
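<br />
To see why this matters, here is a hedged sketch of a multi-backend configuration (the backend names and the foos_var option are made up for illustration): each backend's options live in its own config group rather than in [DEFAULT], so a global FLAGS lookup would not find them, while self.configuration resolves them within the driver's own group.<br />
<pre><br />
# Hypothetical multi-backend fragment appended to cinder.conf<br />
cat >> /etc/cinder/cinder.conf <<'EOF'<br />
[DEFAULT]<br />
enabled_backends = foo-1,foo-2<br />
<br />
[foo-1]<br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br />
volume_backend_name = foo-1<br />
foos_var = something<br />
<br />
[foo-2]<br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br />
volume_backend_name = foo-2<br />
foos_var = something-else<br />
EOF<br />
</pre><br />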
<br />
== Keeping informed and providing '''CONSTRUCTIVE INPUT''' ==<br />
The Cinder team currently meets on a weekly basis in #openstack-meeting at 16:00 UTC on Wednesdays. I try to keep the meeting agenda wiki page http://wiki.openstack.org/CinderMeetings up to date and follow it. Also keep in mind that '''anybody''' is able to add/suggest agenda items via the meeting wiki page.<br />
<br />
Of course, there's also IRC... a number of us monitor #openstack-cinder or you can always send a PM to jgriffith (that's me)<br />
<br />
== Concerns from the community: ==<br />
=== Compatibility and Migration: ===<br />
There has been a significant amount of concern raised regarding "compatibility"; unfortunately this seems to mean different things to different people. For those that haven't looked at the Cinder code or tried a demo in devstack, here are some questions and answers:<br />
<br />
* Do the same nova client commands I use for volumes today still work the same? '''YES'''<br />
* Do the same euca2ools that I use for volumes today still work the same? '''YES'''<br />
* Does block storage still work the same as it does today in terms of LVM, iSCSI and the drivers that are currently in place? '''YES'''<br />
* Are the associated database tables the same as they are in the current nova volume code? '''For the most part YES; all volume-related tables and columns are migrated, but non-volume-related tables are not present'''<br />
* Does it use the same nova database as we use today? '''No, it requires a new independent database'''<br />
* Are you going to implement cinder with complete disregard for my current install and completely change everything out from under me? '''ABSOLUTELY NOT'''<br />
* Are you going to test migrating from nova-vol to Cinder? '''YES'''<br />
* Are those migration tests going to be done just using fakes/unit tests? '''NO, we would require running setups, most likely devstack'''<br />
* Are you planning to provide migration scripts/tools to move from nova to cinder? '''YES'''<br />
<br />
=== Additional thoughts to keep in mind: ===<br />
* The Cinder core team is fortunate enough to have a number of members who currently work for companies that are using [[OpenStack]] in production environments. There is strong representation, and the concerns of providers are in fact a major consideration<br />
* The goal is '''NOT''' to throw away nova-volume as it is today, but to separate it, focus on it and improve it.<br />
* Migration is one of the top priorities for introduction of Cinder into Folsom (regardless of whether nova-volume is still in place or not). This is something that is just considered a part of the requirements for the project.<br />
<br />
== Cinder Core Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://wiki.openstack.org/wiki/CinderSupportMatrix<br />
<br />
== Notes About Submitting Patches ==<br />
Everyone is welcome to sign the CLA and submit code. Please be sure you familiarize yourself with the "how to contribute guide" (https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer).<br />
<br />
Keep in mind, there is a disproportionate number of submitters to reviewers. YOU can help with this!! Anybody is welcome to review patches; jump in and give a review. It's a great way to learn more about the code and to help you make better submissions in the future. It also helps your karma: when you submit a patch, if you're an active reviewer, core team members are more likely to notice your patch and give it some attention before some others.<br />
<br />
== Cinder Plugins ==<br />
How to submit a plugin/driver: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
Cinder Plugin/Driver certification page: https://wiki.openstack.org/wiki/Cinder/certified-drivers<br />
<br />
The following plugins (from other sources) are available for this project:<br />
* [https://wiki.openstack.org/wiki/Mellanox-Cinder Mellanox Cinder Plugin] Mellanox Cinder Plugin<br />
<br />
== Configuring devstack to use your driver and backend ==<br />
One of the things you'll be required to do when submitting a new driver is to run your backend and driver in a devstack environment and execute the tempest volume tests against it. Currently we provide a driver_cert wrapper (mentioned in the how-to-contribute-a-driver section). One thing that causes some confusion is how to configure devstack to use your backend device. It used to be that your driver info would have to be added to lib/cinder in devstack to set your options, and we then created a cinder/plugin module in devstack. Fortunately, though, it's MUCH easier than that. For *most* drivers, the only changes needed are cinder.conf changes, which can easily be accomplished using devstack's local.conf file (more info here: http://devstack.org/configuration.html). For more complex actions (like the need to install packages, etc.) the plugin directory in devstack can be used. An example of what this file would look like to add driver FOO is shown below; the default localrc section is included for completeness, but the section of interest is the post-config cinder.conf section:<br />
<br />
<pre><br />
[[local|localrc]]<br />
# Passwords<br />
ADMIN_PASSWORD=password<br />
MYSQL_PASSWORD=password<br />
RABBIT_PASSWORD=password<br />
SERVICE_PASSWORD=password<br />
SERVICE_TOKEN=password<br />
SCREEN_LOGDIR=/opt/stack/logs<br />
HOST_IP=172.16.140.246<br />
disable_service n-net<br />
enable_service q-svc<br />
enable_service q-agt<br />
enable_service q-dhcp<br />
enable_service q-l3<br />
enable_service q-meta<br />
enable_service neutron<br />
<br />
# These options define expected driver capabilities<br />
TEMPEST_VOLUME_DRIVER=foo<br />
TEMPEST_VOLUME_VENDOR="Foo Inc"<br />
TEMPEST_STORAGE_PROTOCOL=iSCSI<br />
<br />
# These options allow you to specify a branch other than "master" be used<br />
CINDER_REPO=https://review.openstack.org/openstack/cinder<br />
CINDER_BRANCH=refs/changes/83/72183/4<br />
<br />
# Disable security groups entirely<br />
Q_USE_SECGROUP=False<br />
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver<br />
CINDER_SECURE_DELETE=False<br />
<br />
[[post-config|$CINDER_CONF]]<br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br />
foos_var = something<br />
another_foo_var = something-else<br />
</pre><br />
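After stack.sh completes, a quick sanity check (a hedged sketch; the log path comes from SCREEN_LOGDIR above, and "FooDriver" is the hypothetical driver name from the example) is to confirm your driver actually initialized in the c-vol log:<br />
<pre><br />
# Verify the configured driver was picked up by the volume service<br />
grep "volume_driver" /etc/cinder/cinder.conf<br />
grep -i "foodriver" /opt/stack/logs/screen-c-vol.log<br />
</pre><br />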
<br />
== Cinder Brick Proposal ==<br />
https://wiki.openstack.org/wiki/CinderBrick</div>John-griffith
https://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=59421 | CinderMeetings | 2014-07-30T15:15:54Z
<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code>, on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if placing items in the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''July 30'th, 2014 16:00 UTC'''<br />
* Planning cinderclient tag for Thursday morning July 31'st, let's catch up on client changes and testing prior to that (jgriffith)<br />
* Breaking the inheritance between data and control path in Volume drivers https://review.openstack.org/#/c/105923/ (jgriffith)<br />
* Consistency groups https://review.openstack.org/#/c/104732/ (xyang)<br />
* Hitachi Block Storage cinder driver https://review.openstack.org/#/c/90379/ (saguchi)<br />
* Volume replication https://review.openstack.org/#/c/106718/ (ronenkat)<br />
** 17:00 UTC - Volume replication driver owner overview and Q & A<br />
** Call-in information: passcode: 6406941, call-in numbers: https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2<br />
<br />
== Previous meetings ==<br />
'''July 23th, 2014 16:00 UTC'''<br />
<br />
* J2 Milestone (DuncanT)<br />
** JGriffith favours a freeze exception for all drivers that currently have code / BP up, but bouncing all new ones <br />
** Review priorities<br />
*** Driver specs<br />
*** CG groups - a big change that requires driver changes, so needs lots of eyes and time for driver maintainers to do their thing too: https://review.openstack.org/#/c/104743/<br />
*** Pool scheduling - https://review.openstack.org/#/c/98715/<br />
*** Others?<br />
* Plug for weekly 3rd Party CI meeting (Mondays at 18:00 UTC [1 pm Central]) (jungleboyj)<br />
** I attended this week's meeting and gave a high level status. They are looking for more participation.<br />
* ProphetStor Cinder drivers (stevetan)<br />
** Get feedback on progress of our DPL driver and documentation required https://review.openstack.org/#/c/95829/<br />
** Get direction from community for our Federator SDS driver https://review.openstack.org/#/c/99616/<br />
* Volume replication - work in progress (ronenkat)<br />
** https://review.openstack.org/#/c/106718/2<br />
* NFS Security, if there's time.<br />
** https://blueprints.launchpad.net/cinder/+spec/secure-nfs<br />
<br />
'''July 16th, 2014 16:00 UTC'''<br />
* Putting the fun back into cinder development.<br />
** There's been a mailing list thread recently about how nit-picky reviews are getting about typos, white space and the like, and how it is a motivation killer. I'm inclined to agree - the formatting of the doc strings, full stops at the end of comments etc. doesn't actually improve the code much at all, and getting a -1 for it is a buzz kill of the highest order. Should we leave that sort of thing to the gate, and say that if there is no hacking check for it then it isn't important in general? (DuncanT)<br />
<br />
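A minimal sketch of the kind of gate check being suggested here, assuming the pycodestyle-style physical-line plugin interface that hacking checks use; the H999 code and the check itself are hypothetical, purely for illustration:<br />
<nowiki><br />
import re<br />
<br />
_TRAILING_WS = re.compile(r"[ \t]+$")<br />
<br />
def check_trailing_whitespace(physical_line):<br />
    """H999: trailing whitespace (hypothetical code number)."""<br />
    match = _TRAILING_WS.search(physical_line)<br />
    if match:<br />
        # pycodestyle-style plugins report an (offset, message) tuple<br />
        return match.start(), "H999: trailing whitespace"<br />
</nowiki><br />
Once such a check runs in the gate it fails the patch mechanically, so no reviewer has to spend a -1 on it.<br />
<br />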
* How to proceed with cinder/openstack requirements? python-dbus for https://review.openstack.org/99013, see mailing list conclusion http://lists.openstack.org/pipermail/openstack-dev/2014-July/040182.html (flip214)<br />
* code churn, not sure where/when to start, fear of merge conflicts (flip214)<br />
<br />
* Hitachi Block Storage cinder driver (tsekiyama)<br />
** We want to get some feedback about how we can move this forward<br />
** Review: https://review.openstack.org/#/c/90379/<br />
<br />
* Log translations https://review.openstack.org/#/c/105665/ is still stuck - any thoughts? Options I can see: (DuncanT)<br />
** A better technical solution - should be possible where the message format is not expanded outside the logging call i.e.:<br />
Ok:<br />
<nowiki><br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
</nowiki><br />
Not ok:<br />
<nowiki><br />
msg = _("The flubigar id %d exploded messily") % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />
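The reason the first form works: with deferred formatting the logging machinery still receives the unexpanded format string, so a translation layer can look it up in a message catalogue before interpolating; a pre-expanded message can no longer be matched. A minimal standalone sketch with plain stdlib logging and a hypothetical two-entry catalogue (not the oslo implementation):<br />
<nowiki><br />
import logging<br />
<br />
class TranslatingHandler(logging.Handler):<br />
    # hypothetical catalogue, keyed by the unexpanded format string<br />
    catalogue = {"The flubigar id %d exploded messily":<br />
                 "Le flubigar id %d a explose salement"}<br />
<br />
    def emit(self, record):<br />
        # record.msg is the raw format string only if the arguments<br />
        # were passed to the logging call itself<br />
        template = self.catalogue.get(record.msg, record.msg)<br />
        print(template % record.args if record.args else template)<br />
<br />
log = logging.getLogger("demo")<br />
log.addHandler(TranslatingHandler())<br />
log.warning("The flubigar id %d exploded messily", 42)   # translated<br />
log.warning("The flubigar id %d exploded messily" % 42)  # lookup misses<br />
</nowiki><br />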
<br />
** We don't break up our message categories<br />
*** This makes life harder for the translation team and makes us inconsistent with OpenStack in general, but it keeps the code from descending into ugliness<br />
** Related discussion on enabling translation (jungleboyj):<br />
*** Have two patches awaiting approval: Explicit import of _() https://review.openstack.org/105315 and enable lazy translation: https://review.openstack.org/105561<br />
*** Need to get these merged so we are running with the changes.<br />
* 3rd Party CI (jungleboyj):<br />
** Clarification on when drivers are going to be removed.<br />
<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
* flip214 to jgriffith: Status of Separation of Connectors from Driver/Device Interface?<br />
* Quick check: Is everybody happy in principle with the text of https://wiki.openstack.org/wiki/CinderCodeCleanupPatches ? (DuncanT)<br />
<br />
<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the one week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will, inevitably, suck at said tracking)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Defined Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW (see the sketch after this agenda)<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get checked in for quite a while, we want to get some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14,15,17,18, 21-25, 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** Which way to go? There are two WIPs.<br />
** Comparisons between the two approaches? Is there a wiki/etherpad for documenting opinions, or should one be prepared?<br />
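For the _LE/_LI/_LW item above, a sketch of the marker-function pattern under discussion, following the oslo.i18n TranslatorFactory convention (the module layout and domain name are illustrative, not necessarily Cinder's actual i18n module):<br />
<nowiki><br />
import oslo_i18n<br />
<br />
_translators = oslo_i18n.TranslatorFactory(domain='cinder')<br />
<br />
# primary translation function, for user-facing messages<br />
_ = _translators.primary<br />
<br />
# one marker per log level, so log messages can go to their own catalogues<br />
_LI = _translators.log_info<br />
_LW = _translators.log_warning<br />
_LE = _translators.log_error<br />
<br />
# usage: LOG.warning(_LW("The flubigar id %d exploded messily"), flu_id)<br />
</nowiki><br />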
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- plus a whole bunch of stable branch stuff<br />
<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=59294CinderMeetings2014-07-28T22:51:38Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wednesdays at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we hold a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if you have placed items on the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''July 30th, 2014 16:00 UTC'''<br />
* Planning cinderclient tag for Thursday morning, July 31st; let's catch up on client changes and testing prior to that (jgriffith)<br />
* Breaking the inheritance between data and control path in Volume drivers https://review.openstack.org/#/c/107205/ (jgriffith)<br />
<br />
== Previous meetings ==<br />
'''July 23rd, 2014 16:00 UTC'''<br />
<br />
* J2 Milestone (DuncanT)<br />
** JGriffith favours a freeze exception for all drivers that currently have code / BP up, but bouncing all new ones<br />
** Review priorities<br />
*** Driver specs<br />
*** CG groups - a big change that requires driver changes, so needs lots of eyes and time for driver maintainers to do their thing too: https://review.openstack.org/#/c/104743/<br />
*** Pool scheduling - https://review.openstack.org/#/c/98715/<br />
*** Others?<br />
* Plug for weekly 3rd Party CI meeting (Mondays at 18:00 UTC [1 pm Central]) (jungleboyj)<br />
** I attended this week's meeting and gave a high level status. They are looking for more participation.<br />
* ProphetStor Cinder drivers (stevetan)<br />
** Get feedback on progress of our DPL driver and documentation required https://review.openstack.org/#/c/95829/<br />
** Get direction from community for our Federator SDS driver https://review.openstack.org/#/c/99616/<br />
* Volume replication - work in progress (ronenkat)<br />
** https://review.openstack.org/#/c/106718/2<br />
* NFS Security, if there's time.<br />
** https://blueprints.launchpad.net/cinder/+spec/secure-nfs<br />
<br />
'''July 16th, 2014 16:00 UTC'''<br />
* Putting the fun back into cinder development.<br />
** There's been a mailing list thread recently about how nit-picky reviews are getting about typos, whitespace and the like, and how it is a motivation killer. I'm inclined to agree - the formatting of docstrings, full stops at the end of comments, etc. doesn't actually improve the code much at all, and getting a -1 for it is a buzz kill of the highest order. Should we leave that sort of thing to the gate, and say that if there is no hacking check for it then it isn't important in general? (DuncanT)<br />
<br />
* How to proceed with cinder/openstack requirements? python-dbus for https://review.openstack.org/99013, see mailing list conclusion http://lists.openstack.org/pipermail/openstack-dev/2014-July/040182.html (flip214)<br />
* code churn, not sure where/when to start, fear of merge conflicts (flip214)<br />
<br />
* Hitachi Block Storage cinder driver (tsekiyama)<br />
** We want to get some feedback about how we can move this forward<br />
** Review: https://review.openstack.org/#/c/90379/<br />
<br />
* Log translations https://review.openstack.org/#/c/105665/ is still stuck - any thoughts? Options I can see: (DuncanT)<br />
** A better technical solution - should be possible where the message format is not expanded outside the logging call i.e.:<br />
Ok:<br />
<nowiki><br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
</nowiki><br />
Not ok:<br />
<nowiki><br />
msg = _("The flubigar id %d exploded messily") % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />
<br />
** We don't break up our message categories<br />
*** This makes life harder for the translation team and makes us inconsistent with OpenStack in general, but it keeps the code from descending into ugliness<br />
** Related discussion on enabling translation (jungleboyj):<br />
*** Have two patches awaiting approval: Explicit import of _() https://review.openstack.org/105315 and enable lazy translation: https://review.openstack.org/105561<br />
*** Need to get these merged so we are running with the changes.<br />
* 3rd Party CI (jungleboyj):<br />
** Clarification on when drivers are going to be removed.<br />
<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
* flip214 to jgriffith: Status of Separation of Connectors from Driver/Device Interface?<br />
* Quick check: Is everybody happy in principle with the text of https://wiki.openstack.org/wiki/CinderCodeCleanupPatches ? (DuncanT)<br />
<br />
<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the one week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will, inevitably, suck at said tracking)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Defined Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get checked in for quite a while, we want to get some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14,15,17,18, 21-25, 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** Which way to go? There are two WIPs.<br />
** Comparisons between the two approaches? Is there a wiki/etherpad for documenting opinions, or should one be prepared?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- plus a whole bunch of stable branch stuff<br />
<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=59293CinderMeetings2014-07-28T22:50:42Z<p>John-griffith: /* Weekly Cinder team meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wednesdays at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we hold a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if you have placed items on the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''July 30th, 2014 16:00 UTC'''<br />
<br />
* Planning cinderclient tag for Thursday morning, July 31st; let's catch up on client changes and testing prior to that (jgriffith)<br />
* Breaking the inheritance between data and control path in Volume drivers https://review.openstack.org/#/c/107205/ (jgriffith)<br />
<br />
<br />
<br />
== Previous meetings ==<br />
'''July 23rd, 2014 16:00 UTC'''<br />
<br />
* J2 Milestone (DuncanT)<br />
** JGriffith favours a freeze exception for all drivers that currently have code / BP up, but bouncing all new ones<br />
** Review priorities<br />
*** Driver specs<br />
*** CG groups - a big change that requires driver changes, so needs lots of eyes and time for driver maintainers to do their thing too: https://review.openstack.org/#/c/104743/<br />
*** Pool scheduling - https://review.openstack.org/#/c/98715/<br />
*** Others?<br />
* Plug for weekly 3rd Party CI meeting (Mondays at 18:00 UTC [1 pm Central]) (jungleboyj)<br />
** I attended this week's meeting and gave a high level status. They are looking for more participation.<br />
* ProphetStor Cinder drivers (stevetan)<br />
** Get feedback on progress of our DPL driver and documentation required https://review.openstack.org/#/c/95829/<br />
** Get direction from community for our Federator SDS driver https://review.openstack.org/#/c/99616/<br />
* Volume replication - work in progress (ronenkat)<br />
** https://review.openstack.org/#/c/106718/2<br />
* NFS Security, if there's time.<br />
** https://blueprints.launchpad.net/cinder/+spec/secure-nfs<br />
<br />
'''July 16th, 2014 16:00 UTC'''<br />
* Putting the fun back into cinder development.<br />
** There's been a mailing list thread recently about how nit-picky reviews are getting about typos, whitespace and the like, and how it is a motivation killer. I'm inclined to agree - the formatting of docstrings, full stops at the end of comments, etc. doesn't actually improve the code much at all, and getting a -1 for it is a buzz kill of the highest order. Should we leave that sort of thing to the gate, and say that if there is no hacking check for it then it isn't important in general? (DuncanT)<br />
<br />
* How to proceed with cinder/openstack requirements? python-dbus for https://review.openstack.org/99013, see mailing list conclusion http://lists.openstack.org/pipermail/openstack-dev/2014-July/040182.html (flip214)<br />
* code churn, not sure where/when to start, fear of merge conflicts (flip214)<br />
<br />
* Hitachi Block Storage cinder driver (tsekiyama)<br />
** We want to get some feedback about how we can move this forward<br />
** Review: https://review.openstack.org/#/c/90379/<br />
<br />
* Log translations https://review.openstack.org/#/c/105665/ is still stuck - any thoughts? Options I can see: (DuncanT)<br />
** A better technical solution - should be possible where the message format is not expanded outside the logging call i.e.:<br />
Ok:<br />
<nowiki><br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
</nowiki><br />
Not ok:<br />
<nowiki><br />
msg = _("The flubigar id %d exploded messily") % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />
<br />
** We don't break up our message categories<br />
*** This makes life harder for the translation team and makes us inconsistent with OpenStack in general, but it keeps the code from descending into ugliness<br />
** Related discussion on enabling translation (jungleboyj):<br />
*** Have two patches awaiting approval: Explicit import of _() https://review.openstack.org/105315 and enable lazy translation: https://review.openstack.org/105561<br />
*** Need to get these merged so we are running with the changes.<br />
* 3rd Party CI (jungleboyj):<br />
** Clarification on when drivers are going to be removed.<br />
<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
* flip214 to jgriffith: Status of Separation of Connectors from Driver/Device Interface?<br />
* Quick check: Is everybody happy in principle with the text of https://wiki.openstack.org/wiki/CinderCodeCleanupPatches ? (DuncanT)<br />
<br />
<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the one week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will, inevitably, suck at said tracking)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Defined Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get checked in for quite a while, we want to get some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14,15,17,18, 21-25, 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** Which way to go? There are two WIPs.<br />
** Comparisons between the two approaches? Is there a wiki/etherpad for documenting opinions, or should one be prepared?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- plus a whole bunch of stable branch stuff<br />
<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=59292CinderMeetings2014-07-28T22:50:01Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wednesdays at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we hold a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if you have placed items on the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''July 30th, 2014 16:00 UTC'''<br />
<br />
** Planning a cinderclient tag for Thursday morning, July 31st; let's catch up on client changes and testing prior to that (jgriffith)<br />
** Breaking the inheritance between data and control path in Volume drivers https://review.openstack.org/#/c/107205/ (jgriffith)<br />
<br />
'''July 23rd, 2014 16:00 UTC'''<br />
<br />
* J2 Milestone (DuncanT)<br />
** JGriffith favours a freeze exception for all drivers that currently have code / BP up, but bouncing all new ones<br />
** Review priorities<br />
*** Driver specs<br />
*** CG groups - a big change that requires driver changes, so needs lots of eyes and time for driver maintainers to do their thing too: https://review.openstack.org/#/c/104743/<br />
*** Pool scheduling - https://review.openstack.org/#/c/98715/<br />
*** Others?<br />
* Plug for weekly 3rd Party CI meeting (Mondays at 18:00 UTC [1 pm Central]) (jungelboyj)<br />
** I attended this week's meeting and gave a high level status. They are looking for more participation.<br />
* ProphetStor Cinder drivers (stevetan)<br />
** Get feedback on progress of our DPL driver and documentation required https://review.openstack.org/#/c/95829/<br />
** Get direction from community for our Federator SDS driver https://review.openstack.org/#/c/99616/<br />
* Volume replication - work in progress (ronenkat)<br />
** https://review.openstack.org/#/c/106718/2<br />
* NFS Security, if there's time.<br />
** https://blueprints.launchpad.net/cinder/+spec/secure-nfs<br />
<br />
== Previous meetings ==<br />
'''July 16th, 2014 16:00 UTC'''<br />
* Putting the fun back into cinder development.<br />
** There's been a mailing list thread recently about how nit-picky reviews are getting about typos, white space and the like, and how it is a motivation killer. I'm inclined to agree - the formatting of doc strings, full stops at the end of comments etc. don't actually improve the code much at all, and getting a -1 for them is a buzz kill of the highest order. Should we leave that sort of thing to the gate, and say that if there is no hacking check for it then it isn't important in general? (DuncanT)<br />
<br />
* How to proceed with cinder/openstack requirements? python-dbus for https://review.openstack.org/99013, see mailing list conclusion http://lists.openstack.org/pipermail/openstack-dev/2014-July/040182.html (flip214)<br />
* code churn, not sure where/when to start, fear of merge conflicts (flip214)<br />
<br />
* Hitachi Block Storage cinder driver (tsekiyama)<br />
** We want to get some feedback about how we can move this forward<br />
** Review: https://review.openstack.org/#/c/90379/<br />
<br />
* Log translations https://review.openstack.org/#/c/105665/ is still stuck - any thoughts? Options I can see: (DuncanT)<br />
** A better technical solution - should be possible where the message format is not expanded outside the logging call, i.e. (a runnable sketch follows this agenda item):<br />
Ok:<br />
<nowiki><br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
</nowiki><br />
Not ok:<br />
<nowiki><br />
msg = _("The flubigar id %d exploded messily") % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />
<br />
** We don't break up our message categories<br />
*** This makes life harder for the translation team and makes us inconsistent with OpenStack in general, but keeps the code from descending into ugliness<br />
** Related discussion on enabling translation (jungleboyj):<br />
*** Have two patches awaiting approval: Explicit import of _() https://review.openstack.org/105315 and enable lazy translation: https://review.openstack.org/105561<br />
*** Need to get these merged so we are running with the changes.<br />
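For reference, here is a minimal runnable sketch of the lazy-formatting point above; the logger setup and the <code>flubigar</code> message are illustrative, not real Cinder code:<br />
<nowiki><br />
import logging<br />
<br />
logging.basicConfig(level=logging.WARNING)<br />
LOG = logging.getLogger(__name__)<br />
flu_id = 42<br />
<br />
# OK: the unexpanded format string reaches the logging call, so a<br />
# translation-aware logger can look up the message before interpolating.<br />
LOG.warning("The flubigar id %d exploded messily", flu_id)<br />
<br />
# Not OK: the message is expanded eagerly, so by the time the logger<br />
# sees it, the original format string (the translation key) is gone.<br />
msg = "The flubigar id %d exploded messily" % flu_id<br />
LOG.warning(msg)<br />
</nowiki><br />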
* 3rd Party CI (jungleboyj):<br />
** Clarification on when drivers are going to be removed.<br />
<br />
== Previous meetings ==<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
* flip214 to jgriffith: Status of Separation of Connectors from Driver/Device Interface?<br />
* Quick check: Is everybody happy in principle with the text of https://wiki.openstack.org/wiki/CinderCodeCleanupPatches ? (DuncanT)<br />
<br />
<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the one week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will suck at said tracking, inevitably)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3'rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Define Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW (a short usage sketch follows this item)<br />
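As a reminder of the pattern under discussion, a hedged sketch of the marker-function usage is below. In real Cinder code the markers would come from the project's i18n module and hook into oslo translation; here they are stand-in identity functions so the snippet runs on its own, and the messages are made up:<br />
<nowiki><br />
import logging<br />
<br />
logging.basicConfig(level=logging.DEBUG)<br />
LOG = logging.getLogger(__name__)<br />
<br />
# Illustrative stand-ins for the per-level translation marker functions;<br />
# they let translation tooling build separate catalogs per log level.<br />
_LE = _LI = _LW = lambda msg: msg<br />
<br />
volume_id = 'vol-0001'<br />
LOG.error(_LE("Failed to attach volume %s"), volume_id)<br />
LOG.info(_LI("Volume %s attached"), volume_id)<br />
LOG.warning(_LW("Volume %s attach was slow"), volume_id)<br />
LOG.debug("Raw connector info: %s", {'host': 'node1'})  # debug: never translated<br />
</nowiki><br />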
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get this checked in for quite a while, we want some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14, 15, 17, 18, 21-25, and 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** which way to go? There are two WIPs.<br />
** comparisons between the two approaches? Any wiki/etherpad present or to be prepared for documenting opinions?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs came in<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
== Previous meetings ==<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issue pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=59285Cinder/tested-3rdParty-drivers2014-07-28T19:42:04Z<p>John-griffith: /* Driver 3rd party testing */</p>
<hr />
<div>= Driver Testing =<br />
<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests at https://github.com/openstack-dev/devstack/tree/master/driver_certs. The process currently is for each vendor to run this test against their backend driver in their own environment. The wrapper is very simple: it does a fresh clone of the cinder and tempest repos, restarts services, then runs the tempest volume.api tagged tests from the tempest suites and collects the output to a temporary log file (a rough sketch of these steps follows below).<br />
<br />
This is far from extensive or ideal; however, it has already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run and updated at each milestone and at each RC for Icehouse. The most recent run of the tests, with links to the resultant log files, is listed in the table at the bottom of this wiki page.<br />
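Purely as an illustration of the steps just described, a sketch of what such a wrapper might do is below. This is not the actual driver_certs script; the paths, clone URLs, and test invocation are assumptions:<br />
<nowiki><br />
# Illustrative sketch only -- not the real devstack driver_certs script.<br />
# Assumes a devstack-style host; all paths and commands are examples.<br />
import subprocess<br />
import tempfile<br />
<br />
def sh(cmd):<br />
    subprocess.check_call(cmd, shell=True)<br />
<br />
log_file = tempfile.mktemp(prefix='cert-results.')<br />
# 1. Fresh clones of the cinder and tempest repos<br />
sh('git clone https://git.openstack.org/openstack/cinder /opt/stack/cinder')<br />
sh('git clone https://git.openstack.org/openstack/tempest /opt/stack/tempest')<br />
# 2. Restart services so the freshly cloned code is what gets exercised<br />
#    (the mechanism varies by deployment, so it is elided here).<br />
# 3. Run the volume.api tests and collect the output to a temp log file.<br />
sh('cd /opt/stack/tempest && testr run tempest.api.volume 2>&1 | tee %s'<br />
   % log_file)<br />
</nowiki><br />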
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver must implement external third-party testing: Tempest executed against a Devstack build that includes the proposed code changes. The environment, managed by the vendor, should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects the third party testing to provide a +/-1 verify vote for all changes to any cinder code. In addition, the Cinder team expects the third party test to also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes, and the Cinder team hopes to catch any possible regressions as early as possible. More information on drivers, the Cinder CI policy, and additional links for setting up a CI system can be found at:<br />
<br />
https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| SolidFire || Pass || https://s3.amazonaws.com/solidfire-cert-results/tmp.wsfgEXbccC || Feb 13, 2014<br />
|-<br />
| IBM XIV || Pass || https://bugs.launchpad.net/cinder/+bug/1281119 || Mar 20, 2014<br />
|-<br />
| IBM GPFS || Pass || https://bugs.launchpad.net/cinder/+bug/1280482 || Feb 14, 2014<br />
|-<br />
| NetApp 7-Mode (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1298023 || Mar 26, 2014<br />
|-<br />
| NetApp 7-Mode (NFS) || Pass || https://bugs.launchpad.net/cinder/+bug/1298035 || Mar 26, 2014<br />
|-<br />
| NetApp C-Mode (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1293728 || Mar 17, 2014<br />
|-<br />
| NetApp C-Mode (NFS) || Pass || https://bugs.launchpad.net/cinder/+bug/1294733 || Mar 19, 2014<br />
|-<br />
| NetApp E-Series (iSCSI) || Pass || https://launchpadlibrarian.net/166301064/eseries_cinder_certification.log || Feb 14, 2014<br />
|-<br />
| IBM NAS (SONAS & Storwize V7000 Unified) || Pass || https://launchpadlibrarian.net/166535360/tmp.Q9QyLRIqGx || Feb 15, 2014<br />
|-<br />
| HP 3PAR StoreServ(FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1278575 || Feb 5, 2014<br />
|-<br />
| HP 3PAR StoreServ(ISCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1278577 || Feb 5, 2014<br />
|-<br />
| HP LeftHand StoreVirtual || Pass || https://bugs.launchpad.net/cinder/+bug/1276809 || Feb 3, 2014<br />
|-<br />
| HP MSA (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1282628 || Feb 20, 2014<br />
|-<br />
| IBM Storwize/SVC (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1280736 || Feb 16, 2014<br />
|-<br />
| IBM Storwize/SVC (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1279252 || Feb 12, 2014<br />
|-<br />
| EMC SMI-S (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1286529 || Mar 1, 2014<br />
|-<br />
| EMC SMI-S (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1286529 || Mar 1, 2014<br />
|-<br />
| Dell EqualLogic || Pass || https://bugs.launchpad.net/cinder/+bug/1289061 || Mar 6, 2014<br />
|-<br />
| EMC VNX Direct (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1283507 || Mar 4, 2014<br />
|-<br />
| VMware Vmdk Driver || Pass || https://bugs.launchpad.net/cinder/+bug/1295544 || Mar 21, 2014<br />
|-<br />
| Nimble Storage (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1308624 || Jun 25, 2014<br />
|-<br />
| Zadara Storage || Pass || https://bugs.launchpad.net/cinder/+bug/1346692 || Jul 22, 2014<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=59283Cinder/tested-3rdParty-drivers2014-07-28T19:40:50Z<p>John-griffith: John-griffith moved page Cinder/certified-drivers to Cinder/tested-3rdParty-drivers: Certain community members have strong objections to the work "Certified" so changing it to something less controversial.</p>
<hr />
<div>= Driver Certification =<br />
<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests and certify is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests at https://github.com/openstack-dev/devstack/tree/master/driver_certs. The process currently is for each vendor to run this certification test against their backend driver in their own environment. The wrapper is very simple: it does a fresh clone of the cinder and tempest repos, restarts services, then runs the tempest volume.api tagged tests from the tempest suites and collects the output to a temporary log file.<br />
<br />
This is far from extensive or ideal; however, it has already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run and updated at each milestone and at each RC for Icehouse. The most recent run of the certification tests, with links to the resultant log files, is listed in the table at the bottom of this wiki page.<br />
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver must implement external third-party testing: Tempest executed against a Devstack build that includes the proposed code changes. The environment, managed by the vendor, should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects the third party testing to provide a +/-1 verify vote for all changes to any cinder code. In addition, the Cinder team expects the third party test to also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes, and the Cinder team hopes to catch any possible regressions as early as possible. More information on drivers, the Cinder CI policy, and additional links for setting up a CI system can be found at:<br />
<br />
https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| SolidFire || Pass || https://s3.amazonaws.com/solidfire-cert-results/tmp.wsfgEXbccC || Feb 13, 2014<br />
|-<br />
| IBM XIV || Pass || https://bugs.launchpad.net/cinder/+bug/1281119 || Mar 20, 2014<br />
|-<br />
| IBM GPFS || Pass || https://bugs.launchpad.net/cinder/+bug/1280482 || Feb 14, 2014<br />
|-<br />
| NetApp 7-Mode (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1298023 || Mar 26, 2014<br />
|-<br />
| NetApp 7-Mode (NFS) || Pass || https://bugs.launchpad.net/cinder/+bug/1298035 || Mar 26, 2014<br />
|-<br />
| NetApp C-Mode (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1293728 || Mar 17, 2014<br />
|-<br />
| NetApp C-Mode (NFS) || Pass || https://bugs.launchpad.net/cinder/+bug/1294733 || Mar 19, 2014<br />
|-<br />
| NetApp E-Series (iSCSI) || Pass || https://launchpadlibrarian.net/166301064/eseries_cinder_certification.log || Feb 14, 2014<br />
|-<br />
| IBM NAS (SONAS & Storwize V7000 Unified) || Pass || https://launchpadlibrarian.net/166535360/tmp.Q9QyLRIqGx || Feb 15, 2014<br />
|-<br />
| HP 3PAR StoreServ(FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1278575 || Feb 5, 2014<br />
|-<br />
| HP 3PAR StoreServ(ISCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1278577 || Feb 5, 2014<br />
|-<br />
| HP LeftHand StoreVirtual || Pass || https://bugs.launchpad.net/cinder/+bug/1276809 || Feb 3, 2014<br />
|-<br />
| HP MSA (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1282628 || Feb 20, 2014<br />
|-<br />
| IBM Storwize/SVC (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1280736 || Feb 16, 2014<br />
|-<br />
| IBM Storwize/SVC (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1279252 || Feb 12, 2014<br />
|-<br />
| EMC SMI-S (FC) || Pass || https://bugs.launchpad.net/cinder/+bug/1286529 || Mar 1, 2014<br />
|-<br />
| EMC SMI-S (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1286529 || Mar 1, 2014<br />
|-<br />
| Dell EqualLogic || Pass || https://bugs.launchpad.net/cinder/+bug/1289061 || Mar 6, 2014<br />
|-<br />
| EMC VNX Direct (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1283507 || Mar 4, 2014<br />
|-<br />
| VMware Vmdk Driver || Pass || https://bugs.launchpad.net/cinder/+bug/1295544 || Mar 21, 2014<br />
|-<br />
| Nimble Storage (iSCSI) || Pass || https://bugs.launchpad.net/cinder/+bug/1308624 || Jun 25, 2014<br />
|-<br />
| Zadara Storage || Pass || https://bugs.launchpad.net/cinder/+bug/1346692 || Jul 22, 2014<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/certified-drivers&diff=59284Cinder/certified-drivers2014-07-28T19:40:50Z<p>John-griffith: John-griffith moved page Cinder/certified-drivers to Cinder/tested-3rdParty-drivers: Certain community members have strong objections to the work "Certified" so changing it to something less controversial.</p>
<hr />
<div>#REDIRECT [[Cinder/tested-3rdParty-drivers]]</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=57658CinderMeetings2014-07-08T15:36:05Z<p>John-griffith: /* Weekly Cinder team meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we hold a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if you have placed items on the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''July 9, 2014 16:00 UTC'''<br />
<br />
<br />
<br />
== Previous meetings ==<br />
'''July 2nd, 2014 16:00 UTC'''<br />
* Batching up mechanical code cleanup until the one week after each milestone (DuncanT)<br />
** See https://review.openstack.org/#/c/102872/ for example and https://review.openstack.org/#/c/101847<br />
** Log translations and hacking fixes fall into this class<br />
** Means you only take one big hit per milestone for rebases<br />
** Does require some tracking so they don't get missed (and I will suck at said tracking, inevitably)<br />
* LVM: Support a volume-group on shared storage (mtanino)<br />
** Want to quickly discuss the driver benefit, driver comparison, performance(P8-P14): https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf<br />
** Review comments? https://review.openstack.org/#/c/92479/<br />
* Cinder Third Party CI Names (asselin)<br />
** Online discussion of this thread: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html<br />
<br />
'''June 25th, 2014 16:00 UTC'''<br />
* Consistency groups [xyang]<br />
** Cinder spec review: https://review.openstack.org/#/c/96665/<br />
* CI status [xyang]<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue (asselin): direct access to review.openstack.org port 29418 required]<br />
* Pools implementation [navneet]<br />
** Comparison etherpad https://etherpad.openstack.org/p/cinder-pool-impl-comparison<br />
** Decision to select implementation<br />
* keystoneclient integration with cinderclient [hrybacki / ayoung]<br />
** Discuss integration and collaboration possibilities<br />
<br />
<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
** https://etherpad.openstack.org/p/CinderMidCycleMeetupAug2014<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3'rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Define Storage layers/drivers<br />
** Pools implementation<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get this checked in for quite a while, we want some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14, 15, 17, 18, 21-25, and 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** which way to go? There are two WIPs.<br />
** comparisons between the two approaches? Any wiki/etherpad present or to be prepared for documenting opinions?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs came in<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
== Previous meetings ==<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issue pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=56167CinderMeetings2014-06-18T00:06:33Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we hold a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if you have placed items on the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''June 18th, 2014 16:00 UTC'''<br />
* It's review day !?! [jdg]<br />
* Mid cycle meetup plans/updates [jdg]<br />
* Separation of Connectors from Driver/Device Interface (status update) [jdg]<br />
* Updates on 3'rd party CI [jdg]<br />
* Things we need to decide upon (not today, but do your homework for next week)<br />
** Software Define Storage layers/drivers<br />
** Pools implementation<br />
<br />
== Previous meetings ==<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get this checked in for quite a while, we want some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14, 15, 17, 18, 21-25, and 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** which way to go? There are two WIPs.<br />
** comparisons between the two approaches? Any wiki/etherpad present or to be prepared for documenting opinions?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in so if we can cover before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about for FC only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs came in<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
== Previous meetings ==<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* The PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=56166CinderMeetings2014-06-18T00:02:51Z<p>John-griffith: /* Previous meetings */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting and arrive at the meeting promptly if placing items in agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
== Previous meetings ==<br />
<br />
'''June 11th, 2014 16:00 UTC'''<br />
* Volume replication (ronenkat)<br />
** Blueprint and spec review/comments? https://review.openstack.org/#/c/98308<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
* oslo logging discussion (jungleboyj)<br />
** Removing translation of debug messages<br />
** Adding _LE, _LI, _LW<br />
* 3rd party cinder ci (asselin)<br />
** Looking for volunteers to test out my fork of jaypipe's 3rd party ci setup which has support for nodepool & http proxies.<br />
** https://github.com/rasselin/os-ext-testing<br />
** https://github.com/rasselin/os-ext-testing-data<br />
** [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.html Third-Party CI Issue: direct access to review.openstack.org port 29418 required]<br />
* HDS HNAS Cinder drivers (sombrafam)<br />
** As we have been trying to get this checked in for quite a while, we want to get some feedback on the missing steps<br />
** First thread: https://review.openstack.org/#/c/74371/<br />
** Continuation: https://review.openstack.org/#/c/82505/<br />
** Current thread in discussion: https://review.openstack.org/#/c/84244/<br />
* Mid-cycle Sprint (scottda)<br />
** HP in Fort Collins, CO can host on site<br />
** The thought was 10-20 developers<br />
** Large room is available July 14, 15, 17, 18, 21-25, 27-Aug 1 ... Other options exist<br />
* Backend Pools (navneet)<br />
** which way to go? There are two WIPs.<br />
** comparisons between the two approaches? Any wiki/etherpad present or to be prepared for documenting opinions?<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in, so if we can cover this before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about FC-only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file-based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* The PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=56165CinderMeetings2014-06-18T00:02:33Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting and arrive at the meeting promptly if placing items in agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
== Previous meetings ==<br />
<br />
'''June 4th, 2014 16:00 UTC'''<br />
* Volume backup modification (navneet)<br />
** Blueprint and spec review/comments? https://blueprints.launchpad.net/cinder/+spec/vol-backup-service-per-backend<br />
* Dynamic multi pool (navneet)<br />
** Review comments? https://review.openstack.org/#/c/85760/<br />
** Implementation approach comparison.<br />
* 3rd party ci (asselin)<br />
** I have a conflict with another meeting, but my WIP to add nodepool into jaypipe's 3rd party ci solution is available here: https://github.com/rasselin/os-ext-testing/tree/nodepool<br />
* oslo.db (jungleboyj)<br />
** Want to quickly discuss the review out there for this: https://review.openstack.org/#/c/77125/<br />
** Move to current oslo.db? Wait for library work?<br />
** Need to drop off the meeting about 40 minutes in, so if we can cover this before then it would be appreciated. :-)<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about FC-only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file-based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* The PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/how-to-contribute-a-driver&diff=55888Cinder/how-to-contribute-a-driver2014-06-14T15:13:41Z<p>John-griffith: /* How To Contribute a driver to Cinder */</p>
<hr />
<div>= How To Contribute a driver to Cinder =<br />
<br />
== Third Party CI Requirement Policy ==<br />
One thing that has been lacking for plugins in the past is any sort of formal CI testing. The OpenStack project as a whole has had an extremely comprehensive CI implementation for some time now that runs against reference implementations and some of the more common optional configurations out there. As a community, Cinder (and other projects as well) has agreed that if a vendor wishes to submit a driver for their particular storage device, said vendor should also be required to set up a third-party CI system in their lab which runs tempest-dsvm-full against their device for every Cinder commit and provides feedback into Gerrit.<br />
<br />
This is very much an evolving process and is subject to change, but currently here's where we stand as of June 13, 2014:<br />
* All vendors with a driver in the Cinder code-base are required to have third-party CI testing prior to Juno 3 opening<br />
* Every commit made to Cinder should be run against the vendor's third-party CI environment<br />
* Currently the tests that should be run are 'tempest-dsvm-full'<br />
* Your version should follow a naming template like: "tempest-dsvm-full-<DriverName>"<br />
* Results/logs should be reported just as they are with the OpenStack Infra CI systems<br />
* If a vendor has more than one driver, they need more than one CI system. In other words, if you have 3 drivers, you'll be expected to test all 3 of those drivers.<br />
<br />
There are a number of resources out there to help deploy your own CI environment. One of the best sources currently is a series of blog postings from Jay Pipes that can be found here: http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/<br />
<br />
As I mentioned, this is an evolving process; there's a good deal more information needed here to help folks, and we'll get that fleshed out as we go along and start having some successes with this.<br />
<br />
== Before you write any code ==<br />
* The most important place to start is the How To Contribute Page:<br />
https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer<br />
<br />
* Hopefully you have already familiarized yourself with the Cinder wiki page:<br />
https://wiki.openstack.org/wiki/Cinder<br />
<br />
== Helpful Hints ==<br />
Cinder offers a reference implementation that should be used as a model. The reference implementation driver file is cinder/volume/drivers/lvm.py, not to be mistaken for cinder/volume/driver.py, which is the base class that all of the drivers inherit from. You must implement all of the methods that exist as core features (check out the driver compatibility matrix on the Cinder wiki).<br />
<br />
Note that there are a lot of options that show up there regarding iSCSI targets etc., but this gives you an idea of the expectations in terms of features that are implemented and some of the behaviors. I strongly recommend loading up devstack (you're going to need it to test your driver anyway) and playing around with the default LVM driver. It's really important that you get a feel for how Cinder works and interacts with the other OpenStack projects before you get too far along.<br />
<br />
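To make that concrete, below is a minimal, hypothetical sketch of a driver skeleton; the class name and the stubbed-out bodies are illustrative only, and you should check cinder/volume/driver.py for the exact base classes and signatures in your release:<br />
<br />
<pre>
from cinder.volume import driver


class ExampleISCSIDriver(driver.ISCSIDriver):
    """Illustrative skeleton only -- not a real Cinder driver."""

    VERSION = '1.0.0'

    def create_volume(self, volume):
        # Provision volume['size'] GB on the backend here.
        pass

    def delete_volume(self, volume):
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def create_volume_from_snapshot(self, volume, snapshot):
        pass

    def initialize_connection(self, volume, connector):
        # Return the connection info Nova needs to attach the volume.
        return {'driver_volume_type': 'iscsi', 'data': {}}

    def terminate_connection(self, volume, connector, **kwargs):
        pass

    def get_volume_stats(self, refresh=False):
        # The scheduler uses these stats to decide where to place volumes.
        return {'volume_backend_name': 'Example',
                'vendor_name': 'Example',
                'driver_version': self.VERSION,
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 100,
                'free_capacity_gb': 100}
</pre>
<br />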
We have a development channel on freenode: #openstack-cinder<br />
There are developers there around the clock, and it's a great resource for getting started. Log in, ask questions, and don't stare at code in isolation for a week... if you're stuck on something, just ask. There's also no need to start off with "Can I ask a question?"... you likely won't get a response. Just type in your question; that way anybody monitoring the channel who might know the answer can step in and answer.<br />
<br />
== Before You Submit Code ==<br />
There are a number of things that you should get from the "How To Contribute" guide, but to reiterate, as they're often missed:<br />
* You need to submit a detailed blueprint in Launchpad introducing your driver and submitting it for approval<br />
* Have a general idea of how Cinder works, what it's used for, why the other projects in OpenStack may or may not use it<br />
* Fully understand the difference between ephemeral storage on the Nova side versus the persistent storage offered by Cinder<br />
<br />
== Oh, and don't forget ==<br />
Unit tests for new code are required. We're in the process of converting everything to use mock (rather than mox) for our unit tests. Be sure to use mock when writing unit tests and setting up fakes; examples of its usage can be found in the existing tests, like cinder/tests/test_volume.py.<br />
<br />
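As an illustration, here is a minimal, hypothetical mock-based test; the FakeBackendClient class is invented for the example and is not part of Cinder:<br />
<br />
<pre>
import unittest

import mock


class FakeBackendClient(object):
    """Stand-in for a vendor's array client; invented for this example."""

    def create_lun(self, name, size_gb):
        raise RuntimeError('would talk to real hardware')


class CreateVolumeTestCase(unittest.TestCase):

    @mock.patch.object(FakeBackendClient, 'create_lun')
    def test_create_volume_calls_backend(self, mock_create_lun):
        # The patch keeps the test from ever touching real hardware.
        client = FakeBackendClient()
        client.create_lun('volume-1', 10)
        mock_create_lun.assert_called_once_with('volume-1', 10)
</pre>
<br />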
There's an expectation that unit tests leave the system as they found it. That means using things like the tempfile module if you have to write out some persistent data somewhere for your test.<br />
<br />
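For example, a test that needs to write persistent data can keep everything under a throwaway directory; this is a sketch using only the standard library:<br />
<br />
<pre>
import os
import shutil
import tempfile
import unittest


class PersistentDataTestCase(unittest.TestCase):

    def setUp(self):
        # Scratch files go under a private temporary directory...
        self.tmpdir = tempfile.mkdtemp()

    def tearDown(self):
        # ...which is removed so the system is left as the test found it.
        shutil.rmtree(self.tmpdir, ignore_errors=True)

    def test_writes_state_file(self):
        path = os.path.join(self.tmpdir, 'volumes.state')
        with open(path, 'w') as f:
            f.write('fake persistent data')
        self.assertTrue(os.path.exists(path))
</pre>
<br />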
There's an expectation that any backend device and every driver that is submitted can successfully run and pass the existing OpenStack Tempest tests. Every commit in OpenStack goes through an automated gate test; all we ask here is that, since we won't have your backend device, you run this yourself and make sure you've covered all of the required features and that everything works as expected. We have a script to help you with that in the devstack tree: https://github.com/openstack-dev/devstack/tree/master/driver_certs. This is relatively new and needs some more fleshing out as well as some documentation, but it's a start and it should progress and grow as time goes by.</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=53884CinderMeetings2014-05-28T03:51:39Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting and arrive at the meeting promptly if placing items in agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''May 28th, 2014 16:00 UTC'''<br />
* 3rd Party CI (jungleboyj)<br />
** What tempest test cases to run?<br />
** iSCSI only? What about FC-only drivers then?<br />
** Progress on where to record results?<br />
* SSH host keys (jungleboyj)<br />
** https://launchpad.net/bugs/1320050 and https://bugs.launchpad.net/cinder/+bug/1320056<br />
** Need plan to get this addressed by all drivers using SSH. (New config options?)<br />
** Way to get this backported to Havana?<br />
* Dynamic multi-pools (navneet)<br />
** Status and WIP review (https://review.openstack.org/#/c/85760/)<br />
** Backend manager design improvement/rewriting for better RPC message handling.<br />
** Backup service for multi-pools.<br />
* cinder-specs (jgriffith)<br />
** Specs repo is live<br />
** Process<br />
** Reviews<br />
<br />
== Previous meetings ==<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
* Moving GlusterFS snapshot code into the NFS RemoteFs driver (mberlin)<br />
** The GlusterFS snapshot code using qcow2 snapshots is useful for all file-based storage systems. I would volunteer to move the GlusterFS snapshot code into the general RemoteFs driver - making it easier to get [https://review.openstack.org/#/c/94186/ our driver] accepted ;-)<br />
** Eric Harney is fine with this and planned to do this for Juno anyway ([https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps see his blueprint]). I've put it on the agenda to make sure others also agree with this approach.<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* The PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=52872CinderMeetings2014-05-21T03:36:07Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code> on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting and arrive at the meeting promptly if placing items in agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''May 21st, 2014 16:00 UTC'''<br />
* Consistency Groups (xyang)<br />
** A few people have concerns about the restriction of one volume type per CG. Should we allow one CG to have multiple volume types on the same backend? Let's discuss it.<br />
* Third-Party CI (jgriffith)<br />
** Who's started, who's planning to and how can we help support each other to get this going smoothly<br />
<br />
== Previous meetings ==<br />
<br />
'''May 7th, 2014 16:00 UTC'''<br />
* Limit == 0 in API [https://review.openstack.org/#/c/86207/ patch review] - thingee<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
* Cinder resource status - thingee<br />
<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
- https://review.openstack.org/#/c/73446/ (JGriffith)<br />
- https://review.openstack.org/#/c/80550/ (JBryant)<br />
- https://review.openstack.org/#/c/82100/ (Avishay)<br />
- https://review.openstack.org/#/c/74158/ (Avishay)<br />
- + a whole bunch of stable branch stuff<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* A summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* The PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Juno/Etherpads&diff=51039Design Summit/Juno/Etherpads2014-05-03T14:58:35Z<p>John-griffith: /* Cinder */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Juno]]<br />
[[Category:Etherpad]]<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Ceilometer ==<br />
== Cinder ==<br />
<br />
* Thurs 16:10-16:50 [https://etherpad.openstack.org/p/juno-cinder-volume-replication Volume Replication]<br />
* Thurs 17:00-17:40 [https://etherpad.openstack.org/p/juno-cinder-DRBD DRBD For Cinder-Volumes]<br />
* Friday 09:00-09:40 [https://etherpad.openstack.org/p/juno-cinder-nfs-in-cinder NFS and its role within Cinder]<br />
* Friday 10:00-10:40 [https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups Adding Consistency Groups to Cinder]<br />
* Friday 10:50-11:30 [https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification 3rd-party certification and CI systems]<br />
* Friday 11:40-12:20 [https://etherpad.openstack.org/p/juno-cinder-changed-block-list Changed Block List for Cinder Volumes]<br />
* Friday 13:20-14:00 [https://etherpad.openstack.org/p/juno-cinder-state-and-workflow-management Cinder State and Workflow Management]<br />
* Friday 14:10-14:50 [https://etherpad.openstack.org/p/juno-cinder-framework-for-state-reporting Framework for detailed Volume Stats reporting]<br />
* Friday 15:00-15:40 [https://etherpad.openstack.org/p/juno-cinder-multiple-pools-per-backend Mulitple Pools per Cinder Backend]<br />
* Friday 16:00-16:40 [https://etherpad.openstack.org/p/juno-cinder-whats-a-cinder-driver What is a Cinder Driver]<br />
<br />
==Cross-Project==<br />
* Tues 11:15-11:55 [https://etherpad.openstack.org/p/juno-cross-project-future-of-python The Future of Python Support]<br />
* Tues 11:15-12:45 [https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis Consistency Across OpenStack REST APIs]<br />
* Tues 14:00-14:40 [https://etherpad.openstack.org/p/juno-cross-oslo-library-releases New Oslo Library Releases and Your Project]<br />
* Tues 14:50-16:20 [https://etherpad.openstack.org/p/juno-summit-cross-project-user-experience User Experience Designers Gathering]<br />
* Tues 15:40-16:20 [https://etherpad.openstack.org/p/juno-cross-project-quota-management-endpoint Cross-project Quota Management Service Endpoint]<br />
* Tues 16:40 [https://etherpad.openstack.org/p/juno-summit-gate How do we make it easier to fix the Gate?]<br />
<br />
== Devstack ==<br />
* Fri 16:00 [https://etherpad.openstack.org/p/juno-summit-devstack-update DevStack Update]<br />
* Fri 16:50 [https://etherpad.openstack.org/p/juno-summit-devstack-project-support DevStack Project Support]<br />
<br />
== Documentation ==<br />
== Glance ==<br />
== Heat ==<br />
<br />
* Wed 9.00-9:40 [https://etherpad.openstack.org/p/juno-summit-heat-dev-ops Dev/Ops Session]<br />
* Wed 9.50-10:30 [https://etherpad.openstack.org/p/juno-summit-heat-sw-orch Next Steps for Software Orchestration]<br />
* Wed 11.00-11:40 [https://etherpad.openstack.org/p/heat-workflow-vs-convergence Scaling, Robustness and Convergence]<br />
* Wed 11.50-12:30 [https://etherpad.openstack.org/p/juno-summit-heat-notifications Augmenting Polling with Notifications]<br />
* Wed 13.50-14:30 [https://etherpad.openstack.org/p/juno-summit-heat-event Event notifications]<br />
* Wed 14.40-15:20 [https://etherpad.openstack.org/p/juno-summit-heat-callbacks Stack and Resource lifecycle callbacks]<br />
* Wed 15.30-16:10 [https://etherpad.openstack.org/p/juno-summit-heat-api-v2 API v2]<br />
* Wed 16.30-17:10 [https://etherpad.openstack.org/p/juno-summit-heat-plugin-versioning Resource Plugin Versioning]<br />
<br />
== Horizon ==<br />
* Wed 16:30-17:10 [https://etherpad.openstack.org/p/juno-summit-horizon-devops Horizon Dev/Ops Session]<br />
* Wed 17:20-18:00 [https://etherpad.openstack.org/p/juno-summit-horizon-usability-test-results Review Horizon Usability Test feedback, proposals]<br />
* Fri 9:00-9:40 [https://etherpad.openstack.org/p/juno-summit-horizon-static-files Handling of static files]<br />
* Fri 9:50-10:30 [https://etherpad.openstack.org/p/juno-summit-horizon-widgets Modular, widget-based views and more pluggability]<br />
* Fri 10:50-11:30 [https://etherpad.openstack.org/p/juno-summit-horizon-client-side Client side development]<br />
<br />
== Infrastructure ==<br />
* Wed 9:50 - [https://etherpad.openstack.org/p/juno-summit-elastic-recheck Elastic Recheck next steps]<br />
* Wed 11:00 - Jenkins moving forward<br />
* Wed 11:50 - Improving Third Party Testing<br />
* Thu 11:00 - Discussion/design talk of Vinz code review system<br />
* Thu 11:50 - StoryBoard: current status & Juno plans<br />
* Fri 9:50 - Replace Launchpad OpenID authentication<br />
* Fri 10:50 - Translation platform discussion<br />
<br />
== Ironic ==<br />
<br />
* Tues 11:15 [https://etherpad.openstack.org/p/juno-summit-ironic-python-agent Ironic Python Agent]<br />
* Tues 12:05 [https://etherpad.openstack.org/p/juno-summit-ironic-multitenancy Hardware Multitenancy Risk Mitigation]<br />
* Tues 14:50 [https://etherpad.openstack.org/p/juno-summit-ironic-performance Performance and Scalability]<br />
* Tues 15:40 [https://etherpad.openstack.org/p/juno-summit-ironic-arch Planning changes for Juno]<br />
<br />
== Keystone ==<br />
== Marconi ==<br />
== Neutron ==<br />
* Wed 9:00-9:40: [https://etherpad.openstack.org/p/juno-neutron-policies New Policies for Neutron in Juno]<br />
* Wed 9:50-10:30: Code Review Process Improvements<br />
* Wed 11:00-11:40: IPv6 status in Neutron<br />
* Wed 11:50-12:30: ML2 Juno Roadmap<br />
**[https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support Extensions Support In ML2 Mechanism Drivers]<br />
* Wed 13:50-14:30: Refactoring the Neutron Server Core<br />
* Wed 14:40-15:20: [https://etherpad.openstack.org/p/novanet-neutron-migration Nova-Net to Neutron migration]<br />
* Wed 15:30-16:10: Integrating Tasks into Neutron<br />
* Wed 16:30-17:10: Neutron Advanced Services and Flavor Framework<br />
** Advanced Services: https://etherpad.openstack.org/p/juno-advanced-services<br />
*** Flavor framework for advanced services<br />
** https://etherpad.openstack.org/p/juno-virtual-resource-for-service-chaining<br />
* Wed 17:20-18:00: Neutron Distributed Virtual Router Progress Update<br />
* Thu 9:00-9:40: Neutron QA and Testing<br />
** https://etherpad.openstack.org/p/TempestAndNeutronJuno<br />
* Thu 9:50-10:30: Sharing the load of operational responsibility<br />
* Thu 11:00-11:40: Neutron LBaaS Update<br />
* Thu 11:50-12:30: Modular Layer2 Agents<br />
** https://etherpad.openstack.org/p/JunoSummit-ovs-firewall-driver<br />
** https://etherpad.openstack.org/p/JunoSummit_-_Chaining_basic_services_with_OVS<br />
* Fri 10:50-11:30: [https://etherpad.openstack.org/p/group-based-policy Neutron Group Based Policy]<br />
* Fri 11:40-12:30: Combined FWaaS and VPNaaS Session<br />
** FWaaS: https://etherpad.openstack.org/p/juno-fwaas<br />
* Fri 13:20-14:00: LBaaS SSL L7 and automated scenarios<br />
* Fri 14:10-14:50: [https://etherpad.openstack.org/p/hierarchical_network_topology Hierarchical Network Topologies]<br />
* Fri 15:00-15:40: [https://etherpad.openstack.org/p/L3-vendor-plugins L3 Vendor Plugins]<br />
* Fri 16:00-16:40: Dynamic routing and pluggable external networks<br />
** https://etherpad.openstack.org/p/juno-neutron-pluggable-external-network<br />
* Fri 16:50-17:30: [https://etherpad.openstack.org/p/servicevm Service VM Discussion]<br />
<br />
== Nova ==<br />
<br />
'''Wednesday, May 14'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-third-party-ci Continuation of third party CI]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support Clustered hypervisor support in Nova]<br />
* 11:00am [https://etherpad.openstack.org/p/juno-nova-deprecating-baremetal The road to deprecating nova.virt.baremetal]<br />
* 11:50am [https://etherpad.openstack.org/p/juno-nova-data-transfer-service Data transfer service plug-in]<br />
* 1:50pm [https://etherpad.openstack.org/p/juno-nova-live-upgrade Next steps in live upgrade]<br />
* 2:40pm [https://etherpad.openstack.org/p/juno-nova-image-precaching Image precaching service]<br />
* 3:30pm [https://etherpad.openstack.org/p/juno-nova-flavor-storage-revamp Flavor storage re-vamp]<br />
* 4:30pm [https://etherpad.openstack.org/p/juno-nova-cross-project-interactions Rethinking cross project interactions]<br />
* 5:20pm [https://etherpad.openstack.org/p/juno-nova-v2-on-v3-api-poc Nova V2 on V3 API implementation POC]<br />
<br />
'''Thursday, May 15'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-hypev-new-features Hyper-V Driver new features]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-libvirt-driver-roadmap Libvirt driver roadmap for Juno]<br />
* 11:00am [https://etherpad.openstack.org/p/juno-nova-kvm-live-migration Improve performance of live migration on KVM]<br />
* 11:50am [https://etherpad.openstack.org/p/juno-nova-conductor-api limited conductor API]<br />
* 1:30pm [https://etherpad.openstack.org/p/juno-nova-quota-state-management Implementing state management for quotas]<br />
* 2:20pm [https://etherpad.openstack.org/p/juno-nova-multi-volume-snapshots Multi-Volume Snapshots]<br />
* 3:10pm [https://etherpad.openstack.org/p/juno-nova-hypervisor-power-mgmt Hypervisor power management]<br />
* 4:10pm [https://etherpad.openstack.org/p/juno-nova-sriov-support SR-IOV support]<br />
* 5:00pm [https://etherpad.openstack.org/p/juno-nova-v3-api Nova V3 API]<br />
<br />
'''Friday, May 16'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-vmware-driver-roadmap Vmwareapi driver roadmap for Juno]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-docker-driver-features Docker driver - features & testing]<br />
* 10:50am [https://etherpad.openstack.org/p/juno-nova-gantt-apis Future of Gantt APIs and interfaces]<br />
* 11:40am [https://etherpad.openstack.org/p/juno-nova-no-db-scheduler Common no DB Scheduler]<br />
* 1:20pm [https://etherpad.openstack.org/p/juno-nova-scheduling-server-groups Simultaneous Scheduling for Server Groups]<br />
* 2:10pm [https://etherpad.openstack.org/p/juno-nova-scheduler-hints-vm-lifecycle Scheduler hints for VM life cycle]<br />
* 3:00pm [https://etherpad.openstack.org/p/juno-nova-devops Nova Dev/Ops Session]<br />
* 4:00pm [https://etherpad.openstack.org/p/juno-nova-unsession Unsession]<br />
<br />
== Ops ==<br />
* Mon 1115 – 1155 [https://etherpad.openstack.org/p/juno-summit-ops-askthedevs Ask the devs: Meet the PTLs and TC, How to get the best out of the design summit]<br />
* Mon 1205 – 1245 [https://etherpad.openstack.org/p/juno-summit-ops-reasonabledefaults Reasonable Defaults]<br />
* Mon 1400 – 1440 [https://etherpad.openstack.org/p/juno-summit-ops-upgradesdeployment Upgrades and Deployment Approaches]<br />
* Mon 1450 – 1620 [https://etherpad.openstack.org/p/juno-summit-ops-architecture Architecture Show and Tell, Tales and Fails]<br />
* Mon 1730 – 1810 [https://etherpad.openstack.org/p/juno-summit-ops-security Security]<br />
<br />
* Fri 9:00 – 9:40 [https://etherpad.openstack.org/p/juno-summit-ops-enterprise Enterprise Gaps]<br />
* Fri 9:50 – 10:30 [https://etherpad.openstack.org/p/juno-summit-ops-database Database]<br />
* Fri 10:50 – 11:30 [https://etherpad.openstack.org/p/juno-summit-ops-issuesatscale Issues at Scale]<br />
* Fri 11:40 – 12:20 [https://etherpad.openstack.org/p/juno-summit-ops-meta Meta Discussion – ops communication and governance]<br />
* Fri 1:20 – 2:00 [https://etherpad.openstack.org/p/juno-summit-ops-ansible Ansible]<br />
* Fri 2:10 – 2:50 [https://etherpad.openstack.org/p/juno-summit-ops-chef Chef]<br />
* Fri 3:00 – 3:40 [https://etherpad.openstack.org/p/juno-summit-ops-puppet Puppet]<br />
* Fri 4:00 – 4:40 [https://etherpad.openstack.org/p/juno-summit-ops-networking Networking]<br />
* Fri 4:50 – 5:30 [https://etherpad.openstack.org/p/juno-summit-ops-monitoringlogging Monitoring and Logging]<br />
<br />
== Oslo ==<br />
* Wed 9:00 - 9:40 [https://etherpad.openstack.org/p/juno-oslo-release-plan Release Plan for Low-level Libraries]<br />
* Wed 9:50 - 10:30 oslo.messaging status and plans for Juno<br />
* Wed 11:00 - 11:40 AMQP 1.0 protocol driver<br />
* Thu 9:00 - 9:40 Oslo Library Teams Breakout Session<br />
* Thu 9:50 - 10:30 [https://etherpad.openstack.org/p/juno-infra-library-testing Testing pre-releases of Oslo libs with apps]<br />
* Thu 11:00 - 11:40 OpenStack cross service/project OpenStack profiler<br />
* Thu 15:10 - 16:00 [https://etherpad.openstack.org/p/juno-oslo-bayer Upstream chat with Mike Bayer]<br />
* Thu 16:10 - 17:00 [https://etherpad.openstack.org/p/juno-summit-oslo-messaging-rpc-proxy rpc proxy(oslo.messaging)]<br />
* Fri 14:10 - 14:50 oslo.rootwrap: performance and other improvements<br />
* Fri 15:00 - 15:40 Semantic versioning and oslo<br />
* Fri 16:00 - 16:40 PKI for messaging<br />
<br />
== QA ==<br />
<br />
===Wednesday===<br />
* 2:40 – 3:20 [https://etherpad.openstack.org/p/juno-summit-branchless-tempest Branchless Tempest]<br />
* 3:30 – 4:10 [https://etherpad.openstack.org/p/juno-summit-tempest-documentation Tempest Documentation Gaps]<br />
* 4:30 – 5:10 Functional API Testing - post dev QA vs TDD<br />
* 5:20 – 6:00 Rally and Tempest Integration<br />
<br />
===Thursday===<br />
* 1:30 – 2:10 [https://etherpad.openstack.org/p/juno-summit-api-tests-with-jsonschema API tests with JSONSchema]<br />
* 2:20 – 3:00 Negative Testing: Fuzzy Test Framework<br />
* 3:10 – 3:50 How to improve the UX of our Testing Tools<br />
* 4:10 – 4:50 Tempest, GUI, Client, Server<br />
<br />
===Friday===<br />
* 1:20 – 2:00 [https://etherpad.openstack.org/p/juno-summit-grenade Grenade Current Status and Next Steps]<br />
* 2:10 – 2:50 [https://etherpad.openstack.org/p/juno-summit-qa-policy QA Program Policy and Changes in Juno]<br />
<br />
== Release Management ==<br />
== Sahara (ex. Savanna) ==<br />
<br />
* [http://junodesignsummit.sched.org/event/b4f52627efa42f285978d5af3643e189 Thu 13:30] [https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward Releasing and backward compatibility]<br />
* [http://junodesignsummit.sched.org/event/c8774beefd9e9188a3e0729d2bd7131e Thu 14:20] [https://etherpad.openstack.org/p/juno-summit-sahara-testing-plugins CI/gating and plugin requirements]<br />
* [http://junodesignsummit.sched.org/event/10bc9a23eb43eb9df885586035fb2491 Thu 15:10] [https://etherpad.openstack.org/p/juno-summit-sahara-scale-integration Scalable Sahara and further OpenStack integration]<br />
* [http://junodesignsummit.sched.org/event/be842178a085fe95b7665a653f8ab541 Thu 16:10] [https://etherpad.openstack.org/p/juno-summit-sahara-ux UX improvements]<br />
* [http://junodesignsummit.sched.org/event/dfa603324c0bbf29c2f09a77efb82d1d Thu 17:00] [https://etherpad.openstack.org/p/juno-summit-sahara-edp Future of EDP: plugins, SPI, Oozie]<br />
* [http://junodesignsummit.sched.org/event/a64f771cf28ed3ad637730db828668ff Fri 09:00] [https://etherpad.openstack.org/p/juno-summit-sahara-v2-api Next major REST API - v2]<br />
* [http://junodesignsummit.sched.org/event/49089a1d9c8203c6a4c1f0001fa417af Fri 09:50] [https://etherpad.openstack.org/p/juno-summit-sahara-roadmap-retro Sahara in Icehouse and Juno]<br />
<br />
== Swift ==<br />
== TripleO (Deployment) ==<br />
* Fri 11:40 - 12:20 [https://etherpad.openstack.org/p/juno-summit-tripleo-tuskar-planning TripleO Tuskar Planning]<br />
* Fri 13:20 - 14:00 [https://etherpad.openstack.org/p/juno-summit-tripleo-environment TripleO Development and Testing Environment]<br />
* Fri 14:10 - 14:50 [https://etherpad.openstack.org/p/juno-summit-tripleo-and-docker TripleO and Docker]<br />
* Fri 15:00 - 15:40 [https://etherpad.openstack.org/p/juno-summit-tripleo-ci TripleO CI]<br />
* Fri 16:00 - 16:40 [https://etherpad.openstack.org/p/juno-summit-tripleo-neutron TripleO and Neutron]<br />
* Fri 16:50 - 17:30 [https://etherpad.openstack.org/p/juno-summit-tripleo-devops TripleO Dev/Ops Session]<br />
<br />
== Trove ==<br />
== User Committee ==</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Design_Summit/Juno/Etherpads&diff=51013Design Summit/Juno/Etherpads2014-05-02T23:04:32Z<p>John-griffith: /* Cinder */</p>
<hr />
<div>[[Category:Summit]]<br />
[[Category:Juno]]<br />
[[Category:Etherpad]]<br />
<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== Ceilometer ==<br />
== Cinder ==<br />
<br />
* Thurs 16:10-16:50 [https://etherpad.openstack.org/p/juno-cinder-volume-replication Volume Replication]<br />
* Thurs 17:00-17:40 [https://etherpad.openstack.org/p/juno-cinder-DRBD DRBD For Cinder-Volumes]<br />
* Friday 09:00-09:40 [https://etherpad.openstack.org/p/juno-cinder-nfs-in-cinder NFS and its role within Cinder]<br />
* Friday 10:00-10:40 [https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups Adding Consistency Groups to Cinder]<br />
* Friday 10:50-11:30 [https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification 3rd-party certification and CI systems]<br />
* Friday 11:40-12:20 [https://etherpad.openstack.org/p/juno-cinder-changed-block-list-between-volume-and-snapshots Changed Block List for Cinder Volumes]<br />
* Friday 13:20-14:00 [https://etherpad.openstack.org/p/juno-cinder-state-and-workflow-management Cinder State and Workflow Management]<br />
* Friday 14:10-14:50 [https://etherpad.openstack.org/p/juno-cinder-framework-for-state-reporting Framework for detailed Volume Stats reporting]<br />
* Friday 15:00-15:40 [https://etherpad.openstack.org/p/juno-cinder-multiple-pools-per-backend Multiple Pools per Cinder Backend]<br />
* Friday 16:00-16:40 []<br />
<br />
==Cross-Project==<br />
* Tues 11:15-11:55 [https://etherpad.openstack.org/p/juno-cross-project-future-of-python The Future of Python Support]<br />
* Tues 11:15-12:45 [https://etherpad.openstack.org/p/juno-cross-project-consistency-across-rest-apis Consistency Across OpenStack REST APIs]<br />
* Tues 14:00-14:40 [https://etherpad.openstack.org/p/juno-cross-oslo-library-releases New Oslo Library Releases and Your Project]<br />
* Tues 14:50-16:20 [https://etherpad.openstack.org/p/juno-summit-cross-project-user-experience User Experience Designers Gathering]<br />
* Tues 15:40-16:20 [https://etherpad.openstack.org/p/juno-cross-project-quota-management-endpoint Cross-project Quota Management Service Endpoint]<br />
* Tues 16:40 [https://etherpad.openstack.org/p/juno-summit-gate How do we make it easier to fix the Gate?]<br />
<br />
== Devstack ==<br />
* Fri 16:00 [https://etherpad.openstack.org/p/juno-summit-devstack-update DevStack Update]<br />
* Fri 16:50 [https://etherpad.openstack.org/p/juno-summit-devstack-project-support DevStack Project Support]<br />
<br />
== Documentation ==<br />
== Glance ==<br />
== Heat ==<br />
<br />
* Wed 9.00-9:40 [https://etherpad.openstack.org/p/juno-summit-heat-dev-ops Dev/Ops Session]<br />
* Wed 9.50-10:30 [https://etherpad.openstack.org/p/juno-summit-heat-sw-orch Next Steps for Software Orchestration]<br />
* Wed 11.00-11:40 [https://etherpad.openstack.org/p/heat-workflow-vs-convergence Scaling, Robustness and Convergence]<br />
* Wed 11.50-12:30 [https://etherpad.openstack.org/p/juno-summit-heat-notifications Augmenting Polling with Notifications]<br />
* Wed 13.50-14:30 [https://etherpad.openstack.org/p/juno-summit-heat-event Event notifications]<br />
* Wed 14.40-15:20 [https://etherpad.openstack.org/p/juno-summit-heat-callbacks Stack and Resource lifecycle callbacks]<br />
* Wed 15.30-16:10 [https://etherpad.openstack.org/p/juno-summit-heat-api-v2 API v2]<br />
* Wed 16.30-17:10 [https://etherpad.openstack.org/p/juno-summit-heat-plugin-versioning Resource Plugin Versioning]<br />
<br />
== Horizon ==<br />
* Wed 16:30-17:10 [https://etherpad.openstack.org/p/juno-summit-horizon-devops Horizon Dev/Ops Session]<br />
* Wed 17:20-18:00 [https://etherpad.openstack.org/p/juno-summit-horizon-usability-test-results Review Horizon Usability Test feedback, proposals]<br />
* Fri 9:00-9:40 [https://etherpad.openstack.org/p/juno-summit-horizon-static-files Handling of static files]<br />
* Fri 9:50-10:30 [https://etherpad.openstack.org/p/juno-summit-horizon-widgets Modular, widget-based views and more pluggability]<br />
* Fri 10:50-11:30 [https://etherpad.openstack.org/p/juno-summit-horizon-client-side Client side development]<br />
<br />
== Infrastructure ==<br />
* Wed 9:50 - [https://etherpad.openstack.org/p/juno-summit-elastic-recheck Elastic Recheck next steps]<br />
* Wed 11:00 - Jenkins moving forward<br />
* Wed 11:50 - Improving Third Party Testing<br />
* Thu 11:00 - Discussion/design talk of Vinz code review system<br />
* Thu 11:50 - StoryBoard: current status & Juno plans<br />
* Fri 9:50 - Replace Launchpad OpenID authentication<br />
* Fri 10:50 - Translation platform discussion<br />
<br />
== Ironic ==<br />
<br />
* Tues 11:15 [https://etherpad.openstack.org/p/juno-summit-ironic-python-agent Ironic Python Agent]<br />
* Tues 12:05 [https://etherpad.openstack.org/p/juno-summit-ironic-multitenancy Hardware Multitenancy Risk Mitigation]<br />
* Tues 14:50 [https://etherpad.openstack.org/p/juno-summit-ironic-performance Performance and Scalability]<br />
* Tues 15:40 [https://etherpad.openstack.org/p/juno-summit-ironic-arch Planning changes for Juno]<br />
<br />
== Keystone ==<br />
== Marconi ==<br />
== Neutron ==<br />
* Wed 9:00-9:40: [https://etherpad.openstack.org/p/juno-neutron-policies New Policies for Neutron in Juno]<br />
* Wed 9:50-10:30: Code Review Process Improvements<br />
* Wed 11:00-11:40: IPv6 status in Neutron<br />
* Wed 11:50-12:30: ML2 Juno Roadmap<br />
**[https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support Extensions Support In ML2 Mechanism Drivers]<br />
* Wed 13:50-14:30: Refactoring the Neutron Server Core<br />
* Wed 14:40-15:20: [https://etherpad.openstack.org/p/novanet-neutron-migration Nova-Net to Neutron migration]<br />
* Wed 15:30-16:10: Integrating Tasks into Neutron<br />
* Wed 16:30-17:10: Neutron Advanced Services and Flavor Framework<br />
** Advanced Services: https://etherpad.openstack.org/p/juno-advanced-services<br />
*** Flavor framework for advanced services<br />
** https://etherpad.openstack.org/p/juno-virtual-resource-for-service-chaining<br />
* Wed 17:20-18:00: Neutron Distributed Virtual Router Progress Update<br />
* Thu 9:00-9:40: Neutron QA and Testing<br />
** https://etherpad.openstack.org/p/TempestAndNeutronJuno<br />
* Thu 9:50-10:30: Sharing the load of operational responsibility<br />
* Thu 11:00-11:40: Neutron LBaaS Update<br />
* Thu 11:50-12:30: Modular Layer2 Agents<br />
** https://etherpad.openstack.org/p/JunoSummit-ovs-firewall-driver<br />
** https://etherpad.openstack.org/p/JunoSummit_-_Chaining_basic_services_with_OVS<br />
* Fri 10:50-11:30: [https://etherpad.openstack.org/p/group-based-policy Neutron Group Based Policy]<br />
* Fri 11:40-12:30: Combined FWaaS and VPNaaS Session<br />
** FWaaS: https://etherpad.openstack.org/p/juno-fwaas<br />
* Fri 13:20-14:00: LBaaS SSL L7 and automated scenarios<br />
* Fri 14:10-14:50: [https://etherpad.openstack.org/p/hierarchical_network_topology Hierarchical Network Topologies]<br />
* Fri 15:00-15:40: [https://etherpad.openstack.org/p/L3-vendor-plugins L3 Vendor Plugins]<br />
* Fri 16:00-16:40: Dynamic routing and pluggable external networks<br />
** https://etherpad.openstack.org/p/juno-neutron-pluggable-external-network<br />
* Fri 16:50-17:30: [https://etherpad.openstack.org/p/servicevm Service VM Discussion]<br />
<br />
== Nova ==<br />
<br />
'''Wednesday, May 14'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-third-party-ci Continuation of third party CI]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support Clustered hypervisor support in Nova]<br />
* 11:00am [https://etherpad.openstack.org/p/juno-nova-deprecating-baremetal The road to deprecating nova.virt.baremetal]<br />
* 11:50am [https://etherpad.openstack.org/p/juno-nova-data-transfer-service Data transfer service plug-in]<br />
* 1:50pm [https://etherpad.openstack.org/p/juno-nova-live-upgrade Next steps in live upgrade]<br />
* 2:40pm [https://etherpad.openstack.org/p/juno-nova-image-precaching Image precaching service]<br />
* 3:30pm [https://etherpad.openstack.org/p/juno-nova-flavor-storage-revamp Flavor storage re-vamp]<br />
* 4:30pm [https://etherpad.openstack.org/p/juno-nova-cross-project-interactions Rethinking cross project interactions]<br />
* 5:20pm [https://etherpad.openstack.org/p/juno-nova-v2-on-v3-api-poc Nova V2 on V3 API implementation POC]<br />
<br />
'''Thursday, May 15'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-hypev-new-features Hyper-V Driver new features]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-libvirt-driver-roadmap Libvirt driver roadmap for Juno]<br />
* 11:00am [https://etherpad.openstack.org/p/juno-nova-kvm-live-migration Improve performance of live migration on KVM]<br />
* 11:50am [https://etherpad.openstack.org/p/juno-nova-conductor-api limited conductor API]<br />
* 1:30pm [https://etherpad.openstack.org/p/juno-nova-quota-state-management Implementing state management for quotas]<br />
* 2:20pm [https://etherpad.openstack.org/p/juno-nova-multi-volume-snapshots Multi-Volume Snapshots]<br />
* 3:10pm [https://etherpad.openstack.org/p/juno-nova-hypervisor-power-mgmt Hypervisor power management]<br />
* 4:10pm [https://etherpad.openstack.org/p/juno-nova-sriov-support SR-IOV support]<br />
* 5:00pm [https://etherpad.openstack.org/p/juno-nova-v3-api Nova V3 API]<br />
<br />
'''Friday, May 16'''<br />
<br />
* 9:00am [https://etherpad.openstack.org/p/juno-nova-vmware-driver-roadmap Vmwareapi driver roadmap for Juno]<br />
* 9:50am [https://etherpad.openstack.org/p/juno-nova-docker-driver-features Docker driver - features & testing]<br />
* 10:50am [https://etherpad.openstack.org/p/juno-nova-gantt-apis Future of Gantt APIs and interfaces]<br />
* 11:40am [https://etherpad.openstack.org/p/juno-nova-no-db-scheduler Common no DB Scheduler]<br />
* 1:20pm [https://etherpad.openstack.org/p/juno-nova-scheduling-server-groups Simultaneous Scheduling for Server Groups]<br />
* 2:10pm [https://etherpad.openstack.org/p/juno-nova-scheduler-hints-vm-lifecycle Scheduler hints for VM life cycle]<br />
* 3:00pm [https://etherpad.openstack.org/p/juno-nova-devops Nova Dev/Ops Session]<br />
* 4:00pm [https://etherpad.openstack.org/p/juno-nova-unsession Unsession]<br />
<br />
== Ops ==<br />
* Mon 1115 – 1155 [https://etherpad.openstack.org/p/juno-summit-ops-askthedevs Ask the devs: Meet the PTLs and TC, How to get the best out of the design summit]<br />
* Mon 1205 – 1245 [https://etherpad.openstack.org/p/juno-summit-ops-reasonabledefaults Reasonable Defaults]<br />
* Mon 1400 – 1440 [https://etherpad.openstack.org/p/juno-summit-ops-upgradesdeployment Upgrades and Deployment Approaches]<br />
* Mon 1450 – 1620 [https://etherpad.openstack.org/p/juno-summit-ops-architecture Architecture Show and Tell, Tales and Fails]<br />
* Mon 1730 – 1810 [https://etherpad.openstack.org/p/juno-summit-ops-security Security]<br />
<br />
* Fri 9:00 – 9:40 [https://etherpad.openstack.org/p/juno-summit-ops-enterprise Enterprise Gaps]<br />
* Fri 9:50 – 10:30 [https://etherpad.openstack.org/p/juno-summit-ops-database Database]<br />
* Fri 10:50 – 11:30 [https://etherpad.openstack.org/p/juno-summit-ops-issuesatscale Issues at Scale]<br />
* Fri 11:40 – 12:20 [https://etherpad.openstack.org/p/juno-summit-ops-meta Meta Discussion – ops communication and governance]<br />
* Fri 1:20 – 2:00 [https://etherpad.openstack.org/p/juno-summit-ops-ansible Ansible]<br />
* Fri 2:10 – 2:50 [https://etherpad.openstack.org/p/juno-summit-ops-chef Chef]<br />
* Fri 3:00 – 3:40 [https://etherpad.openstack.org/p/juno-summit-ops-puppet Puppet]<br />
* Fri 4:00 – 4:40 [https://etherpad.openstack.org/p/juno-summit-ops-networking Networking]<br />
* Fri 4:50 – 5:30 [https://etherpad.openstack.org/p/juno-summit-ops-monitoringlogging Monitoring and Logging]<br />
<br />
== Oslo ==<br />
* Wed 9:00 - 9:40 [https://etherpad.openstack.org/p/juno-oslo-release-plan Release Plan for Low-level Libraries]<br />
* Wed 9:50 - 10:30 oslo.messaging status and plans for Juno<br />
* Wed 11:00 - 11:40 AMQP 1.0 protocol driver<br />
* Thu 9:00 - 9:40 Oslo Library Teams Breakout Session<br />
* Thu 9:50 - 10:30 [https://etherpad.openstack.org/p/juno-infra-library-testing Testing pre-releases of Oslo libs with apps]<br />
* Thu 11:00 - 11:40 Cross-service/project OpenStack profiler<br />
* Thu 15:10 - 16:00 [https://etherpad.openstack.org/p/juno-oslo-bayer Upstream chat with Mike Bayer]<br />
* Thu 16:10 - 17:00 [https://etherpad.openstack.org/p/juno-summit-oslo-messaging-rpc-proxy RPC proxy (oslo.messaging)]<br />
* Fri 14:10 - 14:50 oslo.rootwrap: performance and other improvements<br />
* Fri 15:00 - 15:40 Semantic versioning and oslo<br />
* Fri 16:00 - 16:40 PKI for messaging<br />
<br />
== QA ==<br />
<br />
===Wednesday===<br />
* 2:40 – 3:20 [https://etherpad.openstack.org/p/juno-summit-branchless-tempest Branchless Tempest]<br />
* 3:30 – 4:10 [https://etherpad.openstack.org/p/juno-summit-tempest-documentation Tempest Documentation Gaps]<br />
* 4:30 – 5:10 Functional API Testing - post dev QA vs TDD<br />
* 5:20 – 6:00 Rally and Tempest Integration<br />
<br />
===Thursday===<br />
* 1:30 – 2:10 [https://etherpad.openstack.org/p/juno-summit-api-tests-with-jsonschema API tests with JSONSchema]<br />
* 2:20 – 3:00 Negative Testing: Fuzzy Test Framework<br />
* 3:10 – 3:50 How to improve the UX of our Testing Tools<br />
* 4:10 – 4:50 Tempest, GUI, Client, Server<br />
<br />
===Friday===<br />
* 1:20 – 2:00 [https://etherpad.openstack.org/p/juno-summit-grenade Grenade Current Status and Next Steps]<br />
* 2:10 – 2:50 [https://etherpad.openstack.org/p/juno-summit-qa-policy QA Program Policy and Changes in Juno]<br />
<br />
== Release Management ==<br />
== Sahara (ex. Savanna) ==<br />
<br />
* [http://junodesignsummit.sched.org/event/b4f52627efa42f285978d5af3643e189 Thu 13:30] [https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward Releasing and backward compatibility]<br />
* [http://junodesignsummit.sched.org/event/c8774beefd9e9188a3e0729d2bd7131e Thu 14:20] [https://etherpad.openstack.org/p/juno-summit-sahara-testing-plugins CI/gating and plugin requirements]<br />
* [http://junodesignsummit.sched.org/event/10bc9a23eb43eb9df885586035fb2491 Thu 15:10] [https://etherpad.openstack.org/p/juno-summit-sahara-scale-integration Scalable Sahara and further OpenStack integration]<br />
* [http://junodesignsummit.sched.org/event/be842178a085fe95b7665a653f8ab541 Thu 16:10] [https://etherpad.openstack.org/p/juno-summit-sahara-ux UX improvements]<br />
* [http://junodesignsummit.sched.org/event/dfa603324c0bbf29c2f09a77efb82d1d Thu 17:00] [https://etherpad.openstack.org/p/juno-summit-sahara-edp Future of EDP: plugins, SPI, Oozie]<br />
* [http://junodesignsummit.sched.org/event/a64f771cf28ed3ad637730db828668ff Fri 09:00] [https://etherpad.openstack.org/p/juno-summit-sahara-v2-api Next major REST API - v2]<br />
* [http://junodesignsummit.sched.org/event/49089a1d9c8203c6a4c1f0001fa417af Fri 09:50] [https://etherpad.openstack.org/p/juno-summit-sahara-roadmap-retro Sahara in Icehouse and Juno]<br />
<br />
== Swift ==<br />
== TripleO (Deployment) ==<br />
* Fri 11:40 - 12:20 [https://etherpad.openstack.org/p/juno-summit-tripleo-tuskar-planning TripleO Tuskar Planning]<br />
* Fri 13:20 - 14:00 [https://etherpad.openstack.org/p/juno-summit-tripleo-environment TripleO Development and Testing Environment]<br />
* Fri 14:10 - 14:50 [https://etherpad.openstack.org/p/juno-summit-tripleo-and-docker TripleO and Docker]<br />
* Fri 15:00 - 15:40 [https://etherpad.openstack.org/p/juno-summit-tripleo-ci TripleO CI]<br />
* Fri 16:00 - 16:40 [https://etherpad.openstack.org/p/juno-summit-tripleo-neutron TripleO and Neutron]<br />
* Fri 16:50 - 17:30 [https://etherpad.openstack.org/p/juno-summit-tripleo-devops TripleO Dev/Ops Session]<br />
<br />
== Trove ==<br />
== User Committee ==</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=48857CinderMeetings2014-04-16T00:04:33Z<p>John-griffith: /* Next meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code>, on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics, please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon during the meeting, and arrive promptly if you have placed items on the agenda. You might want to put the meeting on your calendar if you are adding items.''<br />
<br />
'''April 16th, 2014 16:00 UTC'''<br />
* Release Status<br />
* Summit Session Updates<br />
* Next Stop ATL!!!<br />
<br />
== Previous meetings ==<br />
'''April 9th, 2014 16:00 UTC'''<br />
(Agenda entered retrospectively)<br />
* Cinder Spec (jgriffith) <br />
** Just a heads up that cinder blueprints will move to a gerrit based process shortly, a la nova. Details and wiki entry to follow.<br />
* RC2 status (jgriffith) <br />
** Just after cutting RC2, a bunch of bugs turned up<br />
* Testing RC code (jgriffith)<br />
** Get on it, folks!<br />
** Looks like there are some serious, intermittent performance issues in the API somewhere...<br />
<br />
<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
''Meeting cancelled and summary discussion held on #openstack-cinder''<br />
<br />
<br />
* Release status and bugs<br />
* -2s left on reviews from before Juno opened - please check if they are still valid<br />
** https://review.openstack.org/#/c/73446/ (JGriffith)<br />
** https://review.openstack.org/#/c/80550/ (JBryant)<br />
** https://review.openstack.org/#/c/82100/ (Avishay)<br />
** https://review.openstack.org/#/c/74158/ (Avishay)<br />
** ...plus a whole bunch of stable branch stuff<br />
<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow copy-volume-to-image task flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issue pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=48783ReleaseNotes/Icehouse2014-04-15T15:38:20Z<p>John-griffith: /* OpenStack Block Storage (Cinder) */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
* The log-config option has been renamed log-config-append, and will now append any configuration specified, rather than completely overriding any other settings as currently occurs (see the example after this list). (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)<br />
* To minimize downtime, OpenStack Networking must be upgraded and neutron-metadata-agent restarted before OpenStack Compute is upgraded. Compute must be able to verify the X-Tenant-ID which is now passed by the neutron-metadata-agent service. (https://bugs.launchpad.net/neutron/+bug/1235450)<br />
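<br />
For illustration, the renamed option can be passed to any oslo-based service on its command line; a minimal sketch (the service name and logging config path here are hypothetical):<br />
 nova-api --log-config-append /etc/nova/logging.conf<br />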
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''Discoverable capabilities''': A Swift proxy server now by default (although it can be turned off) will respond to requests to /info. The response to these requests includes information about the cluster and can be used by clients to determine which features are supported in the cluster. This means that one client will be able to communicate with multiple Swift clusters and take advantage of the features available in each cluster.<br />
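<br />
As a quick illustration (the proxy host and port are hypothetical), the capabilities document can be fetched with a plain unauthenticated GET:<br />
 curl -s http://swift-proxy.example.com:8080/info<br />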
<br />
* '''Generic way to persist system metadata''': Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.<br />
<br />
* '''Account-level ACLs and ACL format v2''': Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html<br />
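<br />
A minimal sketch of setting a v2 account ACL under the TempAuth reference implementation (the token, endpoint, account, and user names below are hypothetical; the JSON keys follow the TempAuth docs linked above):<br />
 curl -X POST http://swift-proxy.example.com:8080/v1/AUTH_project \<br />
      -H 'X-Auth-Token: AUTH_tk_example' \<br />
      -H 'X-Account-Access-Control: {"read-only": ["bob"]}'<br />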
<br />
* '''Object replication ssync (an rsync alternative)''': A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.<br />
<br />
* '''Automatic retry on read failures''': If a source times out on an object server read, try another one of them with a modified range. This means that drive failures during a client request will not be visible to the end-user client.<br />
<br />
* '''Work on upcoming storage policies'''<br />
<br />
=== Known Issues ===<br />
<br />
None known at this time<br />
<br />
=== Upgrade Notes ===<br />
<br />
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.<br />
<br />
As always, Swift can be upgraded with no downtime. <br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Added RDP console support.<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used (see the CLI sketch after this list).<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows the configuration of instances to use a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>. <br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt> (see the CLI sketch after this list).<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
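<br />
As a rough sketch of how these image- and flavor-driven features are typically enabled (the image ID, flavor name, and property values below are hypothetical; the property keys are those named above):<br />
 # set a custom kernel command line on an image<br />
 glance image-update IMAGE_ID --property os_command_line='console=ttyS0'<br />
 # enable the i6300esb watchdog via a flavor extra spec<br />
 nova flavor-key m1.small set hw_watchdog_action=reset<br />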
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
* All XenServer specific configuration items have changed name, and moved to a [xenserver] section in nova.conf. While the old names will still work in this release, the old names are now deprecated, and support for them could well be removed in a future release of Nova.<br />
* Added initial support for [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi PCI passthrough]<br />
* Maintained group B status through the introduction of the [[XenServer/XenServer_CI|XenServer CI]]<br />
* Improved support for ephemeral disks (including [https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks migration] and [https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks resize up] of multiple ephemeral disks)<br />
* Support for [https://blueprints.launchpad.net/nova/+spec/xenapi-vcpu-pin-set vcpu_pin_set], essential when you pin CPU resources to Dom0<br />
* Numerous performance and stability enhancements<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter (see the sample configuration after this list).<br />
* Weight normalization in OpenStack Compute (see https://review.openstack.org/#/c/27160/): weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
* The scheduler now supports server groups with anti-affinity and affinity policies; servers in a group are scheduled according to the group's predefined policy.<br />
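<br />
A minimal nova.conf sketch enabling the new isolation filter (the filter list and namespace value here are illustrative, not defaults):<br />
 [DEFAULT]<br />
 scheduler_default_filters = AggregateImagePropertiesIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter<br />
 aggregate_image_properties_isolation_namespace = isolation<br />
 aggregate_image_properties_isolation_separator = .<br />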
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when an Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services (see the sketch after this list). The file injection mechanism is likely to be removed entirely in a future release.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
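<br />
A hedged sketch of the injection keys (in the Icehouse sample config these live in the <tt>[libvirt]</tt> group; verify placement against your generated nova.conf.sample):<br />
 [libvirt]<br />
 # -2 disables file injection entirely<br />
 inject_partition = -2<br />
 inject_key = false<br />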
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
* The libvirt driver backed by Xen or LXC is an untested configuration (group C on [[HypervisorSupportMatrix]]). Since it's untested, a change was merged that broke both of these configurations. [https://bugs.launchpad.net/nova/+bug/1301453]<br />
<br />
=== Upgrade Notes ===<br />
<br />
* Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells scheduler used raw weights (i.e. the weighers returned any value, and that was the value used by the weighing process).<br />
** If you were using several weighers for Compute:<br />
*** If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were inflated artificially in order to make an important weigher prevail against any other weigher that returned large raw values. You need to check your weighers and take into account that now the maximum and minimum weights for a host will always be <tt>1.0</tt> and <tt>0.0</tt>.<br />
** If you are using cells:<br />
*** <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child that didn't update its capabilities in a while. It can still be used, but will have no effect on the final weight that will be computed by the weighing process, that will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.<br />
*** <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option <tt>offset_weight_multiplier</tt>. This new option has to be adjusted. In previous releases, the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will get a weight of <tt>1.0</tt>. If you were using this weigher and you were relying on its value to make it prevail against any other weighers, you need to adjust its multiplier accordingly.<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse brings libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout (see the sample snippet at the end of this section).<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to retain the default and restart the controller services (sample snippet at the end of this section).<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
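<br />
A sketch of the two rolling-upgrade aids described above, combined in nova.conf (the values are exactly those from the notes; remove them once all nodes run Icehouse):<br />
 [DEFAULT]<br />
 # only while Neutron is still Havana and cannot send plug events<br />
 vif_plugging_is_fatal = False<br />
 vif_plugging_timeout = 0<br />
 <br />
 [upgrade_levels]<br />
 # only while Havana compute nodes remain<br />
 compute = icehouse-compat<br />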
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.<br />
* Glance has moved to using 0-based indices for location entries, to be in line with JSON-pointer RFC6901 (https://bugs.launchpad.net/glance/+bug/1282437)<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Thanks to the [[I18nTeam]], Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New v3 API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data with LDAP and your authorization data with SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code> (see the sketch after this list).<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, allowing responses to be translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
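<br />
A hypothetical rule in this style, modeled on the shipped <code>policy.v3cloudsample.json</code> (the operation name is illustrative; the substitution syntax is the one described above):<br />
 "identity:update_user": "rule:admin_required and domain_id:%(target.user.domain_id)s"<br />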
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to <code>keystone-paste.ini</code>:<br />
[filter:ec2_extension_v3]<br />
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory<br />
... and <code>ec2_extension_v3</code> needs to be added to the pipeline variable in the <code>[pipeline:api_v3]</code> section of <code>keystone-paste.ini</code> (a sample pipeline line is sketched at the end of this section).<br />
* <code>etc/policy.json</code> updated to provide rules for the new v3 EC2 Credential CRUD as show in the updated sample <code>policy.json</code> and <code>policy.v3cloudsample.json</code><br />
* Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* A new configuration option, <code>mutable_domain_id</code> is <code>false</code> by default to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana. If so, set this value to <code>true</code> and ''please'' voice your use case to the Keystone community.<br />
* TODO: any non-ideal default values that will be changed in the future<br />
* TODO: the move to oslo.messaging<br />
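<br />
A hedged sketch of the resulting v3 pipeline (the surrounding filter names are illustrative and will differ per deployment; the point is simply that <code>ec2_extension_v3</code> is appended before <code>service_v3</code>):<br />
 [pipeline:api_v3]<br />
 pipeline = sizelimit url_normalize token_auth admin_token_auth json_body ec2_extension_v3 s3_extension service_v3<br />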
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* Ability to change the type of an existing volume (retype; see the CLI sketch after this list)<br />
* Add volume metadata support to the Cinder Backup Object<br />
* Implement Multiple API workers<br />
* Add ability to delete Quota<br />
* Add ability to import/export backups in to Cinder<br />
* Added Fibre Channel Zone manager for automated FC zoning during volume attach/detach<br />
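<br />
A brief, hedged CLI illustration of two of the new operations (the volume ID, type name, and tenant ID below are hypothetical):<br />
 # change an existing volume to a different volume type<br />
 cinder retype VOLUME_ID new-type<br />
 # delete a tenant's quota overrides, reverting to the defaults<br />
 cinder quota-delete TENANT_ID<br />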
<br />
=== New Backend Drivers/Plugins ===<br />
* EMC VNX Direct Driver<br />
* HP MSA 2040<br />
* IBM SONAS and Storwize V7000 Unified Storage Systems<br />
<br />
=== Known Issues ===<br />
* Reconnect on failure for multiple servers always connects to first server (Bug: #1261631)<br />
* Storwize/SVC driver crashes when check volume copy status (Bug: #1304115)<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* API additions<br />
** arbitrarily complex combinations of query constraints for meters, samples and alarms<br />
** capabilities API for discovery of storage driver specific features<br />
** selectable aggregates for statistics, including new cardinality and standard deviation functions <br />
** direct access to samples decoupled from a specific meter<br />
** events API, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Alarming improvements<br />
** time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week<br />
** exclusion of weak data points with anomalously low sample counts <br />
** derived rate-based meters for disk & network, more suited to threshold-oriented alarming <br />
<br />
* Integration touch-points<br />
** split collector into notification agent solely responsible for consuming external notifications<br />
** redesign of pipeline configuration for pluggable resource discovery<br />
** configurable persistence of raw notification payloads, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Storage drivers<br />
** approaching feature parity in HBase & SQLAlchemy & DB2 drivers<br />
** optimization of resource queries<br />
** HBase: add Alarm support<br />
<br />
* New sources of metrics<br />
** Neutron north-bound API on SDN controller<br />
** VMware vCenter Server API<br />
** SNMP daemons on baremetal hosts<br />
** OpenDaylight REST APIs<br />
<br />
=== Known Issues ===<br />
* SQLAlchemy storage driver is problematic with a scaled out collector service when run against PostgreSQL https://bugs.launchpad.net/ceilometer/+bug/1305332<br />
* HBase storage driver reports truncated list of meters: https://bugs.launchpad.net/ceilometer/+bug/1288284<br />
* HBase storage driver doesn't work with HappyBase version 0.7 <br />
* excessive load on nova-api service induced by compute agent: https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
* The pre-existing collector service has been augmented with a new notification agent that must also be started up post-upgrade.<br />
* MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see [http://docs.mongodb.org/manual/release-notes/2.4-upgrade upgrade instructions].<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
* '''HOT templates''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html HOT template format] is now supported as the recommended format for authoring heat templates.<br />
* '''OpenStack resources''': There is now sufficient coverage of resource types to port any template to [http://docs.openstack.org/developer/heat/template_guide/openstack.html native OpenStack resources]<br />
* '''Software configuration''': New API and resources to allow software configuration to be performed using a variety of techniques and tools<br />
* '''Non-admin users''': It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.<br />
* '''Operator API''': Cloud operators now have a dedicated admin API to perform operations on all stacks<br />
* '''Autoscaling resources''': [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy] now allow the autoscaling of any arbitrary collection of resources<br />
* '''Notifications''': Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers<br />
* '''Heat engine scaling''': It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with a SQL based default lock plugin.<br />
* '''File inclusion with get_file''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#intrinsic-functions intrinsic function] get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files (see the template sketch after this list).<br />
* '''Cloud-init resources''': The [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime] resources allow cloud-init configuration to be composed within templates and attached to servers as user data.<br />
* '''Stack abandon and adopt''': It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.<br />
* '''Stack preview''': The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template<br />
* '''New resources''': The following new resources are implemented in this release:<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareConfig OS::Heat::SoftwareConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment OS::Heat::SoftwareDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredConfig OS::Heat::StructuredConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredDeployment OS::Heat::StructuredDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::RandomString OS::Heat::RandomString]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup OS::Heat::ResourceGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::SecurityGroup OS::Neutron::SecurityGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringLabel OS::Neutron::MeteringLabel]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringRule OS::Neutron::MeteringRule]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::ProviderNet OS::Neutron::ProviderNet]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::NetworkGateway OS::Neutron::NetworkGateway]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember OS::Neutron::PoolMember]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::KeyPair OS::Nova::KeyPair]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIP OS::Nova::FloatingIP]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIPAssociation OS::Nova::FloatingIPAssociation]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Trove::Instance OS::Trove::Instance]<br />
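<br />
A minimal HOT sketch using get_file (the image and flavor names and the script path are hypothetical; python-heatclient attaches the referenced file on stack create/update):<br />
 heat_template_version: 2013-05-23<br />
 resources:<br />
   server:<br />
     type: OS::Nova::Server<br />
     properties:<br />
       image: fedora-20<br />
       flavor: m1.small<br />
       user_data:<br />
         get_file: boot_config.sh<br />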
<br />
=== Known Issues ===<br />
* Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.<br />
* The new stack-adopt operation should be considered an experimental feature<br />
* CFN API returns HTTP status code 500 on all errors ([https://bugs.launchpad.net/heat/+bug/1291079 bug 1291079])<br />
* Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race ([https://bugs.launchpad.net/heat/+bug/1298350 bug 1298350])<br />
<br />
=== Upgrade Notes ===<br />
Please read the general notes on [https://wiki.openstack.org/wiki/Security/Icehouse/Heat Heat's security model].<br />
<br />
==== Deferred authentication method ====<br />
The default <code>deferred_auth_method</code> of <code>password</code> is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using <code>deferred_auth_method=trusts</code>, which is planned to become the default for Juno. This model has the following benefits:<br />
* It avoids storing user credentials in the heat database<br />
* It removes the need to provide a password as well as a token on stack create<br />
* It limits the actions the heat service user can perform on a users behalf.<br />
<br />
To enable trusts for deferred operations:<br />
* Ensure the keystone service heat is configured to use has enabled the [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md OS-TRUST extension]<br />
* Set <code>deferred_auth_method = trusts</code> in <code>/etc/heat/heat.conf</code><br />
* Optionally specify the roles to be delegated to the heat service user (<code>trusts_delegated_roles</code> in <code>heat.conf</code>, defaults to <code>heat_stack_owner</code> which will be referred to in the following instructions. You may wish to modify this list of roles to suit your local RBAC policies)<br />
* Ensure the role(s) to be delegated exist, e.g <code>heat_stack_owner</code> exists when running <code>keystone role-list</code><br />
* All users creating heat stacks should possess this role in the project where they are creating the stack. A trust will be created by heat on stack creation between the stack owner (user creating the stack) and the heat service user, delegating the <code>heat_stack_user</code> role to the heat service user, for the lifetime of the stack.<br />
<br />
==== Stack domain users ====<br />
(shardy TODO)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* New manual: Command-Line Interface Reference<br />
* API reference has been updated and includes now PDF files as well<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=ReleaseNotes/Icehouse&diff=48778ReleaseNotes/Icehouse2014-04-15T15:31:51Z<p>John-griffith: /* Known Issues */</p>
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
* The log-config option has been renamed log-config-append, and will now append any configuration specified, rather than completely overriding any other settings as the old option did. (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)<br />
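: A minimal sketch of the renamed option in a service configuration file (the logging configuration path is illustrative):<br />
 [DEFAULT]<br />
 # appends the referenced logging configuration instead of replacing existing settings<br />
 log_config_append = /etc/nova/logging.conf<br />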
* To minimize downtime, OpenStack Networking must be upgraded and neutron-metadata-agent restarted before OpenStack Compute is upgraded. Compute must be able to verify the X-Tenant-ID which is now passed by the neutron-metadata-agent service. (https://bugs.launchpad.net/neutron/+bug/1235450)<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''Discoverable capabilities''': A Swift proxy server now responds to requests to /info by default (although this can be turned off). The response to these requests includes information about the cluster and can be used by clients to determine which features are supported in the cluster. This means that a single client can communicate with multiple Swift clusters and take advantage of the features available in each cluster.<br />
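: For example, a client can discover a cluster's capabilities with a single unauthenticated request (the proxy address is illustrative):<br />
 # returns a JSON document describing the cluster's limits and enabled middleware<br />
 curl http://proxy.example.com:8080/info<br />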
<br />
* '''Generic way to persist system metadata''': Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.<br />
<br />
* '''Account-level ACLs and ACL format v2''': Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html<br />
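: A sketch of granting read-only access using the new v2 format with TempAuth-style names (the token, account and user are illustrative):<br />
 curl -i -X POST -H "X-Auth-Token: $TOKEN" \<br />
   -H 'X-Account-Access-Control: {"read-only": ["otheraccount:reader"]}' \<br />
   http://proxy.example.com:8080/v1/AUTH_test<br />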
<br />
* '''Object replication ssync (an rsync alternative)''': A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.<br />
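: A sketch of opting in on a storage node, assuming the replication option exposed in the object server's sample configuration:<br />
 [object-replicator]<br />
 # use Swift primitives (ssync) instead of rsync for the replication transport<br />
 sync_method = ssync<br />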
<br />
* '''Automatic retry on read failures''': If a source object server times out during a read, the proxy now retries the request against another replica using a range modified to cover only the data not yet returned. This means that drive failures during a client request will not be visible to the end-user client.<br />
<br />
* '''Work on upcoming storage policies'''<br />
<br />
=== Known Issues ===<br />
<br />
None known at this time<br />
<br />
=== Upgrade Notes ===<br />
<br />
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.<br />
<br />
As always, Swift can be upgraded with no downtime. <br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Added RDP console support.<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
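: For example, with the Icehouse-era glance client (the image ID and kernel arguments are illustrative):<br />
 glance image-update --property os_command_line='console=tty0 console=ttyS0,115200n8' IMAGE_ID<br />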
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows instances to be configured with a video driver other than the default (<tt>cirrus</tt>). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>. <br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
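: A sketch of enabling the watchdog through image metadata (the image ID is illustrative):<br />
 # reset the instance if the guest's i6300esb watchdog fires<br />
 glance image-update --property hw_watchdog_action=reset IMAGE_ID<br />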
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now support booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
* All XenServer specific configuration items have been renamed and moved to a [xenserver] section in nova.conf. While the old names will still work in this release, they are now deprecated and support for them may be removed in a future release of Nova.<br />
* Added initial support for [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi PCI passthrough]<br />
* Maintained group B status through the introduction of the [[XenServer/XenServer_CI|XenServer CI]]<br />
* Improved support for ephemeral disks (including [https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks migration] and [https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks resize up] of multiple ephemeral disks)<br />
* Support for [https://blueprints.launchpad.net/nova/+spec/xenapi-vcpu-pin-set vcpu_pin_set], essential when you pin CPU resources to Dom0<br />
* Numerous performance and stability enhancements<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute (https://review.openstack.org/#/c/27160/): weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.<br />
* The scheduler now supports server groups with affinity and anti-affinity policies. That is, servers in a group are scheduled onto the same host, or onto different hosts, according to the group's predefined policy.<br />
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully by disabling processing of new requests when a service shutdown is requested, while allowing requests already in progress to complete before terminating.<br />
* When instances that were previously marked deleted are found running, the Compute service determines what action to take based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.<br />
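: A sketch of the relevant settings (the option group these keys live in may vary with your packaging):<br />
 [DEFAULT]<br />
 # keep file injection disabled; use the config drive or metadata service instead<br />
 inject_key = false<br />
 inject_partition = -2<br />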
* A number of changes have been made to the expected format of the <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
* The libvirt driver backed by Xen or LXC is an untested configuration (group C on [[HypervisorSupportMatrix]]). Because it is untested, a change was merged that broke both of these configurations. [https://bugs.launchpad.net/nova/+bug/1301453]<br />
<br />
=== Upgrade Notes ===<br />
<br />
* Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells scheduler used raw weights (i.e. the weighers returned any value, and that was the value used by the weighing process).<br />
** If you were using several weighers for Compute:<br />
*** If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were inflated artificially in order to make an important weigher prevail against any other weigher that returned large raw values. You need to check your weighers and take into account that now the maximum and minimum weights for a host will always be <tt>1.0</tt> and <tt>0.0</tt>.<br />
** If you are using cells:<br />
*** <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child that had not updated its capabilities in a while. It can still be used, but will have no effect on the final weight computed by the weighing process, which will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.<br />
*** <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option, <tt>offset_weight_multiplier</tt>, which has to be adjusted. In previous releases, the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will get a weight of <tt>1.0</tt>. If you were using this weigher and relying on its value to make it prevail over any other weighers, you need to adjust its multiplier accordingly.<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non-default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)<br />
* Icehouse brings in libguestfs as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes prior to starting an upgrade of packages on the system if the Nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
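: A sketch of temporarily disabling the behaviour in <tt>nova.conf</tt> until Neutron is upgraded:<br />
 [DEFAULT]<br />
 # tolerate missing Neutron vif-plugged events during a mixed-version upgrade<br />
 vif_plugging_is_fatal = false<br />
 vif_plugging_timeout = 0<br />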
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to retain the default and restart the controller services.<br />
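: A sketch of pinning the compute RPC version during the rolling upgrade:<br />
 [upgrade_levels]<br />
 # allow Icehouse controller services to talk to Havana compute services<br />
 compute = icehouse-compat<br />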
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements; they are listed below as <tt>[GROUP]/option</tt>.<br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.<br />
* Glance has moved to using 0-based indices for location entries, to be in line with JSON-pointer RFC6901 (https://bugs.launchpad.net/glance/+bug/1282437)<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Thanks to the [[I18nTeam]] Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New v3 API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and to map federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
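:: A sketch of the call (the endpoint, identifiers and passwords are illustrative):<br />
 curl -X POST http://keystone.example.com:5000/v3/users/USER_ID/password \<br />
   -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \<br />
   -d '{"user": {"original_password": "old-secret", "password": "new-secret"}}'<br />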
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data with LDAP and your authorization data with SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
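: An illustrative rule in that style (the rule name and composition are examples, not the shipped defaults):<br />
 "identity:update_user": "rule:admin_required and domain_id:%(target.user.domain_id)s"<br />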
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled; responses are now translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to <code>keystone-paste.ini</code>:<br />
[filter:ec2_extension_v3]<br />
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory<br />
... and <code>ec2_extension_v3</code> needs to be added to the pipeline variable in the <code>[pipeline:api_v3]</code> section of <code>keystone-paste.ini</code>.<br />
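: An abbreviated sketch of the resulting pipeline (your deployment's filter list will differ):<br />
 [pipeline:api_v3]<br />
 pipeline = sizelimit url_normalize token_auth admin_token_auth json_body ec2_extension_v3 service_v3<br />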
* <code>etc/policy.json</code> has been updated to provide rules for the new v3 EC2 Credential CRUD, as shown in the updated sample <code>policy.json</code> and <code>policy.v3cloudsample.json</code>.<br />
* Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* A new configuration option, <code>mutable_domain_id</code>, is <code>false</code> by default to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana. If so, set this value to <code>true</code> and ''please'' voice your use case to the Keystone community.<br />
* TODO: any non-ideal default values that will be changed in the future<br />
* TODO: the move to oslo.messaging<br />
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* Ability to change the type of an existing volume (retype)<br />
* Add volume metadata support to the Cinder Backup Object<br />
* Implement Multiple API workers<br />
* Add ability to delete Quota<br />
* Add ability to import/export backups into Cinder<br />
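: Sketches of the corresponding Icehouse-era cinder client calls (identifiers are illustrative; exact client syntax may vary by version):<br />
 # change the type of an existing volume<br />
 cinder retype VOLUME_ID NEW_VOLUME_TYPE<br />
 # delete a tenant's quota, reverting it to the defaults<br />
 cinder quota-delete TENANT_ID<br />
 # export a backup record so it can be imported into another deployment<br />
 cinder backup-export BACKUP_ID<br />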
<br />
=== Known Issues ===<br />
* Reconnect on failure for multiple servers always connects to the first server (Bug: #1261631)<br />
* Storwize/SVC driver crashes when checking volume copy status (Bug: #1304115)<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* API additions<br />
** arbitrarily complex combinations of query constraints for meters, samples and alarms<br />
** capabilities API for discovery of storage driver specific features<br />
** selectable aggregates for statistics, including new cardinality and standard deviation functions <br />
** direct access to samples decoupled from a specific meter<br />
** events API, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Alarming improvements<br />
** time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week<br />
** exclusion of weak data points with anomalously low sample counts <br />
** derived rate-based meters for disk & network, more suited to threshold-oriented alarming <br />
<br />
* Integration touch-points<br />
** the collector has been split, with a new notification agent solely responsible for consuming external notifications<br />
** redesign of pipeline configuration for pluggable resource discovery<br />
** configurable persistence of raw notification payloads, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Storage drivers<br />
** approaching feature parity across the HBase, SQLAlchemy and DB2 drivers<br />
** optimization of resource queries<br />
** HBase: add Alarm support<br />
<br />
* New sources of metrics<br />
** Neutron north-bound API on SDN controller<br />
** VMware vCenter Server API<br />
** SNMP daemons on baremetal hosts<br />
** OpenDaylight REST APIs<br />
<br />
=== Known Issues ===<br />
* SQLAlchemy storage driver is problematic with a scaled out collector service when run against PostgreSQL https://bugs.launchpad.net/ceilometer/+bug/1305332<br />
* HBase storage driver reports truncated list of meters: https://bugs.launchpad.net/ceilometer/+bug/1288284<br />
* HBase storage driver doesn't work with HappyBase version 0.7 <br />
* excessive load on nova-api service induced by compute agent: https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
* the pre-existing collector service has been augmented with a new notification agent that must also be started up post-upgrade<br />
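: For example, deployments typically need to start the new agent alongside the collector (the binary name is assumed from the Icehouse packaging):<br />
 ceilometer-agent-notification<br />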
* MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see [http://docs.mongodb.org/manual/release-notes/2.4-upgrade upgrade instructions].<br />
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
* '''HOT templates''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html HOT template format] is now supported as the recommended format for authoring heat templates.<br />
* '''OpenStack resources''': There is now sufficient coverage of resource types to port any template to [http://docs.openstack.org/developer/heat/template_guide/openstack.html native OpenStack resources]<br />
* '''Software configuration''': New API and resources to allow software configuration to be performed using a variety of techniques and tools<br />
* '''Non-admin users''': It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.<br />
* '''Operator API''': Cloud operators now have a dedicated admin API to perform operations on all stacks<br />
* '''Autoscaling resources''': [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy] now allow the autoscaling of any arbitrary collection of resources<br />
* '''Notifications''': Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers<br />
* '''Heat engine scaling''': It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with a SQL based default lock plugin.<br />
* '''File inclusion with get_file''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#intrinsic-functions intrinsic function] get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files.<br />
* '''Cloud-init resources''': The [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime] resources allow cloud-init user-data to be described and composed within templates.<br />
* '''Stack abandon and adopt''': It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.<br />
* '''Stack preview''': The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template<br />
* '''New resources''': The following new resources are implemented in this release:<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareConfig OS::Heat::SoftwareConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment OS::Heat::SoftwareDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredConfig OS::Heat::StructuredConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredDeployment OS::Heat::StructuredDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::RandomString OS::Heat::RandomString]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup OS::Heat::ResourceGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::SecurityGroup OS::Neutron::SecurityGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringLabel OS::Neutron::MeteringLabel]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringRule OS::Neutron::MeteringRule]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::ProviderNet OS::Neutron::ProviderNet]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::NetworkGateway OS::Neutron::NetworkGateway]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember OS::Neutron::PoolMember]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::KeyPair OS::Nova::KeyPair]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIP OS::Nova::FloatingIP]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIPAssociation OS::Nova::FloatingIPAssociation]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Trove::Instance OS::Trove::Instance]<br />
<br />
=== Known Issues ===<br />
* Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.<br />
* The new stack-adopt operation should be considered an experimental feature<br />
* CFN API returns HTTP status code 500 on all errors ([https://bugs.launchpad.net/heat/+bug/1291079 bug 1291079])<br />
* Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race ([https://bugs.launchpad.net/heat/+bug/1298350 bug 1298350])<br />
<br />
=== Upgrade Notes ===<br />
Please read the general notes on [https://wiki.openstack.org/wiki/Security/Icehouse/Heat Heat's security model].<br />
<br />
==== Deferred authentication method ====<br />
The default <code>deferred_auth_method</code> of <code>password</code> is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using <code>deferred_auth_method=trusts</code>, which is planned to become the default for Juno. This model has the following benefits:<br />
* It avoids storing user credentials in the heat database<br />
* It removes the need to provide a password as well as a token on stack create<br />
* It limits the actions the heat service user can perform on a user's behalf.<br />
<br />
To enable trusts for deferred operations:<br />
* Ensure the Keystone service that heat is configured to use has the [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md OS-TRUST extension] enabled<br />
* Set <code>deferred_auth_method = trusts</code> in <code>/etc/heat/heat.conf</code><br />
* Optionally specify the roles to be delegated to the heat service user (<code>trusts_delegated_roles</code> in <code>heat.conf</code>; this defaults to <code>heat_stack_owner</code>, which is the role referred to in the following instructions. You may wish to modify this list of roles to suit your local RBAC policies)<br />
* Ensure the role(s) to be delegated exist, e.g. <code>heat_stack_owner</code> appears when running <code>keystone role-list</code><br />
* All users creating heat stacks should possess this role in the project where they are creating the stack. A trust will be created by heat on stack creation between the stack owner (the user creating the stack) and the heat service user, delegating the <code>heat_stack_owner</code> role to the heat service user for the lifetime of the stack.<br />
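: A minimal sketch of the resulting <code>heat.conf</code> settings:<br />
 [DEFAULT]<br />
 # defer operations using Keystone trusts rather than stored user credentials<br />
 deferred_auth_method = trusts<br />
 trusts_delegated_roles = heat_stack_owner<br />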
<br />
==== Stack domain users ====<br />
(shardy TODO)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* New manual: Command-Line Interface Reference<br />
* The API reference has been updated and now includes PDF files as well<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>John-griffith
<hr />
<div>= OpenStack 2014.1 (Icehouse) Release Notes =<br />
<br />
<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3"><br />
__TOC__<br />
</div><br />
<br />
== General Upgrade Notes ==<br />
<br />
* Windows packagers should use pbr 0.8 to avoid [https://bugs.launchpad.net/pbr/+bug/1294246 bug 1294246]<br />
* The log-config option has been renamed log-config-append, and will now append any configuration specified, rather than completely overriding any other settings as currently occurs. (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)<br />
* To minimize downtime, OpenStack Networking must be upgraded and neutron-metadata-agent restarted before OpenStack Compute is upgraded. Compute must be able to verify the X-Tenant-ID which is now passed by the neutron-metadata-agent service. (https://bugs.launchpad.net/neutron/+bug/1235450)<br />
<br />
== OpenStack Object Storage (Swift) ==<br />
<br />
=== Key New Features ===<br />
<br />
* '''Discoverable capabilities''': A Swift proxy server now by default (although it can be turned off) will respond to requests to /info. The response to these requests include information about the cluster and can be used by clients to determine which features are supported in the cluster. This means that one client will be able to communicate with multiple Swift clusters and take advantage of the features available in each cluster.<br />
<br />
* '''Generic way to persist system metadata''': Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.<br />
<br />
* '''Account-level ACLs and ACL format v2''': Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html<br />
<br />
* '''Object replication ssync (an rsync alternative)''': A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.<br />
<br />
* '''Automatic retry on read failures''': If a source times out on an object server read, try another one of them with a modified range. This means that drive failures during a client request will not be visible to the end-user client.<br />
<br />
* '''Work on upcoming storage policies'''<br />
<br />
=== Known Issues ===<br />
<br />
None known at this time<br />
<br />
=== Upgrade Notes ===<br />
<br />
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.<br />
<br />
As always, Swift can be upgraded with no downtime. <br />
<br />
== OpenStack Compute (Nova) ==<br />
<br />
=== Key New Features ===<br />
<br />
==== Upgrade Support ====<br />
<br />
* Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.<br />
<br />
==== Compute Drivers ====<br />
<br />
===== Hyper-V =====<br />
<br />
* Added RDP console support.<br />
<br />
===== Libvirt (KVM) =====<br />
<br />
* The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the <tt>os_command_line</tt> key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.<br />
* The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.<br />
* The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.<br />
* The Libvirt driver now allows the configuration of instances to use video driver other than the default (cirros). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>. <br />
* Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.<br />
* The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.<br />
* The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.<br />
<br />
===== VMware =====<br />
<br />
* The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.<br />
* The VMware Compute drivers now booting an instance from an ISO image.<br />
* The VMware Compute drivers now support the aging of cached images.<br />
<br />
===== XenServer =====<br />
<br />
* All XenServer specific configuration items have changed name, and moved to a [xenserver] section in nova.conf. While the old names will still work in this release, the old names are now deprecated, and support for them could well be removed in a future release of Nova.<br />
* Added initial support for [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi PCI passthrough]<br />
* Maintained group B status through the introduction of the [[XenServer/XenServer_CI|XenServer CI]]<br />
* Improved support for ephemeral disks (including [https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks migration] and [https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks resize up] of multiple ephemeral disks)<br />
* Support for [https://blueprints.launchpad.net/nova/+spec/xenapi-vcpu-pin-set vcpu_pin_set], essential when you pin CPU resources to Dom0<br />
* Numerous performance and stability enhancements<br />
<br />
==== API ====<br />
<br />
* In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.<br />
* The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.<br />
* The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.<br />
* Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.<br />
* The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.<br />
* The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.<br />
<br />
==== Scheduler ====<br />
<br />
* The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.<br />
* A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> are used to determine which image properties are examined by the filter.<br />
* Weight normalization in OpenStack Compute: See: <br />
** https://review.openstack.org/#/c/27160/ Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will put for a node is 1.0 and the minimum is 0.0. <br />
* The scheduler now supports server groups with two policy types: affinity and anti-affinity. A server deployed as part of a group is scheduled according to the group's predefined policy (a configuration sketch covering these scheduler changes follows this list).<br />
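<br />
As a hedged illustration of the new scheduler options (the filter list shown is abbreviated and illustrative, not a recommended default; consult <tt>nova.conf.sample</tt>):<br />
<br />
[DEFAULT]<br />
<nowiki># Opt in to the new caching scheduler driver.</nowiki><br />
scheduler_driver = nova.scheduler.caching_scheduler.CachingScheduler<br />
<nowiki># Append the new filter to your existing filter list.</nowiki><br />
scheduler_default_filters = RetryFilter,ComputeFilter,AggregateImagePropertiesIsolation<br />
<nowiki># Only image properties prefixed with "example_ns." are matched by the filter.</nowiki><br />
aggregate_image_properties_isolation_namespace = example_ns<br />
aggregate_image_properties_isolation_separator = .<br />
<br />
With weight normalization, each weigher's raw values are scaled into the range 0.0 to 1.0 (roughly <tt>(w - w_min) / (w_max - w_min)</tt> across the hosts being weighed) before multipliers are applied, so a multiplier now expresses only the weigher's relative importance.<br />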
<br />
==== Other Features ====<br />
<br />
* Notifications are now generated upon the creation and deletion of keypairs.<br />
* Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode or taken out of maintenance mode.<br />
* Compute services are now able to shut down gracefully: when a service shutdown is requested, processing of new requests is disabled while requests already in progress are allowed to complete before the service terminates.<br />
* The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.<br />
* File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection, modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled entirely in a future release; a configuration sketch follows this list.<br />
* A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.<br />
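<br />
A hedged <tt>nova.conf</tt> sketch combining the two configuration items above (group placement follows the new descriptive option groups just mentioned; verify against <tt>nova.conf.sample</tt>):<br />
<br />
[DEFAULT]<br />
<nowiki># Keep instances found running-but-marked-deleted powered off for diagnostics.</nowiki><br />
running_deleted_instance_action = shutdown<br />
<br />
[libvirt]<br />
<nowiki># File injection off (the new default); -2 disables partition injection.</nowiki><br />
inject_key = false<br />
inject_partition = -2<br />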
<br />
=== Known Issues ===<br />
* OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:<br />
** Keystone v2<br />
** Cinder v1<br />
** Glance v1<br />
* The libvirt driver backed by Xen or LXC is an untested configuration (group C on [[HypervisorSupportMatrix]]). Because it is untested, a change was merged that broke both of these configurations. [https://bugs.launchpad.net/nova/+bug/1301453]<br />
<br />
=== Upgrade Notes ===<br />
<br />
* Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells schedulers used raw weights (i.e. the weighers could return any value, and that value was used directly by the weighing process).<br />
** If you were using several weighers for Compute:<br />
*** If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were inflated artificially in order to make an important weigher prevail against any other weigher that returned large raw values. You need to check your weighers and take into account that now the maximum and minimum weights for a host will always be <tt>1.0</tt> and <tt>0.0</tt>.<br />
** If you are using cells:<br />
*** <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child that had not updated its capabilities in a while. It can still be used, but will have no effect on the final weight computed by the weighing process, which will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.<br />
*** <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option, <tt>offset_weight_multiplier</tt>, which has to be adjusted. In previous releases the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will now get a weight of <tt>1.0</tt>. If you were using this weigher and relying on its value to make it prevail over other weighers, you need to adjust its multiplier accordingly.<br />
* An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker<br />
* https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.<br />
* https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:<br />
** <tt>service_quantum_metadata_proxy</tt><br />
** <tt>quantum_metadata_proxy_shared_secret</tt><br />
** <tt>use_quantum_default_nets</tt><br />
** <tt>quantum_default_tenant_id</tt><br />
** <tt>vpn_instance_type</tt><br />
** <tt>default_instance_type</tt><br />
** <tt>quantum_url</tt><br />
** <tt>quantum_url_timeout</tt><br />
** <tt>quantum_admin_username</tt><br />
** <tt>quantum_admin_password</tt><br />
** <tt>quantum_admin_tenant_name</tt><br />
** <tt>quantum_region_name</tt><br />
** <tt>quantum_admin_auth_url</tt><br />
** <tt>quantum_api_insecure</tt><br />
** <tt>quantum_auth_strategy</tt><br />
** <tt>quantum_ovs_bridge</tt><br />
** <tt>quantum_extension_sync_interval</tt><br />
** <tt>vmwareapi_host_ip</tt><br />
** <tt>vmwareapi_host_username</tt><br />
** <tt>vmwareapi_host_password</tt><br />
** <tt>vmwareapi_cluster_name</tt><br />
** <tt>vmwareapi_task_poll_interval</tt><br />
** <tt>vmwareapi_api_retry_count</tt><br />
** <tt>vnc_port</tt><br />
** <tt>vnc_port_total</tt><br />
** <tt>use_linked_clone</tt><br />
** <tt>vmwareapi_vlan_interface</tt><br />
** <tt>vmwareapi_wsdl_loc</tt><br />
* The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/<br />
* The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/<br />
* libvirt LVM names changed from using instance_name_template to the instance UUID (https://review.openstack.org/#/c/76968). Manual cleanup may be required if using a non-default instance_name_template.<br />
* rbd disk names changed from using instance_name_template to the instance UUID. Manual cleanup of old virtual disks is required after the transition. (TBD find review)<br />
* Icehouse brings libguestfs in as a requirement. Installing Icehouse dependencies on a system currently running Havana may cause the Havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on Havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.<br />
* Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.<br />
* Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.<br />
* Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting '''vif_plugging_is_fatal=False''' and '''vif_plugging_timeout=0'''. The recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then re-enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.<br />
* Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the '''[upgrade_levels]/compute=icehouse-compat''' option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to return to the default and restart the controller services. A configuration sketch for both of these items follows the deprecation list below.<br />
* The following configuration options are marked as deprecated in this release. See <tt>nova.conf.sample</tt> for their replacements. <tt>[GROUP]/option</tt><br />
** <tt>[DEFAULT]/rabbit_durable_queues</tt><br />
** <tt>[rpc_notifier2]/topics</tt><br />
** <tt>[DEFAULT]/log_config</tt><br />
** <tt>[DEFAULT]/logfile</tt><br />
** <tt>[DEFAULT]/logdir</tt><br />
** <tt>[DEFAULT]/base_dir_name</tt><br />
** <tt>[DEFAULT]/instance_type_extra_specs</tt><br />
** <tt>[DEFAULT]/db_backend</tt><br />
** <tt>[DEFAULT]/sql_connection</tt><br />
** <tt>[DATABASE]/sql_connection</tt><br />
** <tt>[sql]/connection</tt><br />
** <tt>[DEFAULT]/sql_idle_timeout</tt><br />
** <tt>[DATABASE]/sql_idle_timeout</tt><br />
** <tt>[sql]/idle_timeout</tt><br />
** <tt>[DEFAULT]/sql_min_pool_size</tt><br />
** <tt>[DATABASE]/sql_min_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_pool_size</tt><br />
** <tt>[DATABASE]/sql_max_pool_size</tt><br />
** <tt>[DEFAULT]/sql_max_retries</tt><br />
** <tt>[DATABASE]/sql_max_retries</tt><br />
** <tt>[DEFAULT]/sql_retry_interval</tt><br />
** <tt>[DATABASE]/reconnect_interval</tt><br />
** <tt>[DEFAULT]/sql_max_overflow</tt><br />
** <tt>[DATABASE]/sqlalchemy_max_overflow</tt><br />
** <tt>[DEFAULT]/sql_connection_debug</tt><br />
** <tt>[DEFAULT]/sql_connection_trace</tt><br />
** <tt>[DATABASE]/sqlalchemy_pool_timeout</tt><br />
** <tt>[DEFAULT]/memcache_servers</tt><br />
** <tt>[DEFAULT]/libvirt_type</tt><br />
** <tt>[DEFAULT]/libvirt_uri</tt><br />
** <tt>[DEFAULT]/libvirt_inject_password</tt><br />
** <tt>[DEFAULT]/libvirt_inject_key</tt><br />
** <tt>[DEFAULT]/libvirt_inject_partition</tt><br />
** <tt>[DEFAULT]/libvirt_vif_driver</tt><br />
** <tt>[DEFAULT]/libvirt_volume_drivers</tt><br />
** <tt>[DEFAULT]/libvirt_disk_prefix</tt><br />
** <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_mode</tt><br />
** <tt>[DEFAULT]/libvirt_cpu_model</tt><br />
** <tt>[DEFAULT]/libvirt_snapshots_directory</tt><br />
** <tt>[DEFAULT]/libvirt_images_type</tt><br />
** <tt>[DEFAULT]/libvirt_images_volume_group</tt><br />
** <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_pool</tt><br />
** <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt><br />
** <tt>[DEFAULT]/libvirt_snapshot_compression</tt><br />
** <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt><br />
** <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt><br />
** <tt>[DEFAULT]/libvirt_iser_use_multipath</tt><br />
** <tt>[DEFAULT]/matchmaker_ringfile</tt><br />
** <tt>[DEFAULT]/agent_timeout</tt><br />
** <tt>[DEFAULT]/agent_version_timeout</tt><br />
** <tt>[DEFAULT]/agent_resetnetwork_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_agent_path</tt><br />
** <tt>[DEFAULT]/xenapi_disable_agent</tt><br />
** <tt>[DEFAULT]/xenapi_use_agent_default</tt><br />
** <tt>[DEFAULT]/xenapi_login_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_connection_concurrent</tt><br />
** <tt>[DEFAULT]/xenapi_connection_url</tt><br />
** <tt>[DEFAULT]/xenapi_connection_username</tt><br />
** <tt>[DEFAULT]/xenapi_connection_password</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt><br />
** <tt>[DEFAULT]/xenapi_check_host</tt><br />
** <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt><br />
** <tt>[DEFAULT]/xenapi_sr_base_path</tt><br />
** <tt>[DEFAULT]/target_host</tt><br />
** <tt>[DEFAULT]/target_port</tt><br />
** <tt>[DEFAULT]/iqn_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt><br />
** <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_base_url</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt><br />
** <tt>[DEFAULT]/use_join_force</tt><br />
** <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt><br />
** <tt>[DEFAULT]/cache_images</tt><br />
** <tt>[DEFAULT]/xenapi_image_compression_level</tt><br />
** <tt>[DEFAULT]/default_os_type</tt><br />
** <tt>[DEFAULT]/block_device_creation_timeout</tt><br />
** <tt>[DEFAULT]/max_kernel_ramdisk_size</tt><br />
** <tt>[DEFAULT]/sr_matching_filter</tt><br />
** <tt>[DEFAULT]/xenapi_sparse_copy</tt><br />
** <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt><br />
** <tt>[DEFAULT]/xenapi_torrent_images</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_network_name</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt><br />
** <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt><br />
** <tt>[DEFAULT]/xenapi_running_timeout</tt><br />
** <tt>[DEFAULT]/xenapi_vif_driver</tt><br />
** <tt>[DEFAULT]/xenapi_image_upload_handler</tt><br />
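<br />
Tying together the Neutron-event and live-upgrade notes above, a hedged <tt>nova.conf</tt> sketch for an Icehouse controller while Havana compute nodes are still running (only the options named in those notes are shown):<br />
<br />
[upgrade_levels]<br />
<nowiki># Let Icehouse controller services talk to Havana nova-compute services.</nowiki><br />
compute = icehouse-compat<br />
<br />
[DEFAULT]<br />
<nowiki># Disable fatal VIF plugging until Neutron emits the required events.</nowiki><br />
vif_plugging_is_fatal = False<br />
vif_plugging_timeout = 0<br />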
<br />
== OpenStack Image Service (Glance) ==<br />
=== Key New Features ===<br />
* The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.<br />
* Glance has moved to using 0-based indices for location entries, to be in line with JSON-pointer RFC6901 (https://bugs.launchpad.net/glance/+bug/1282437)<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Dashboard (Horizon) ==<br />
<br />
=== Key New Features ===<br />
* Thanks to the [[I18nTeam]], Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Identity (Keystone) ==<br />
<br />
=== Key New Features ===<br />
<br />
* New v3 API features<br />
** <code>/v3/OS-FEDERATION/</code> allows Keystone to consume federated authentication via [https://shibboleth.net/ Shibboleth] for multiple Identity Providers, and to map federated attributes into OpenStack group-based role assignments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-federation-ext.md documentation]).<br />
** <code>POST /v3/users/{user_id}/password</code> allows API users to update their own passwords (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#change-user-password-post-usersuser_idpassword documentation]).<br />
** <code>GET v3/auth/token?nocatalog</code> allows API users to opt-out of receiving the service catalog when performing online token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#validate-token-get-authtokensnocatalog documentation]).<br />
** <code>/v3/regions</code> provides a public interface for describing multi-region deployments (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#regions-v3regions documentation]).<br />
** <code>/v3/OS-SIMPLECERT/</code> now publishes the certificates used for PKI token validation (see [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-simple-certs-ext.md documentation]).<br />
** <code>/v3/OS-TRUST/trusts</code> is now capable of providing limited-use delegation via the <code>remaining_uses</code> attribute of [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md trusts].<br />
* The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now store your deployment's identity data in LDAP and its authorization data in SQL, for example.<br />
* KVS drivers are now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.<br />
* Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.<br />
* Keystone's default <code>etc/policy.json</code> has been rewritten in an easier to read format.<br />
* [http://docs.openstack.org/developer/keystone/event_notifications.html Notifications] are now emitted in response to create, update and delete events on roles, groups, and trusts.<br />
* Custom extensions and driver implementations may now subscribe to internal-only event notifications, including ''disable'' events (which are only exposed externally as part of ''update'' events).<br />
* Keystone now emits [http://www.dmtf.org/standards/cadf Cloud Audit Data Federation (CADF)] event notifications in response to authentication events.<br />
* [https://review.openstack.org/#/c/50362/ Additional plugins] are provided to handle external authentication via <code>REMOTE_USER</code> with respect to single-domain versus multi-domain deployments.<br />
* <code>policy.json</code> can now perform enforcement on the target domain in a domain-aware operation using, for example, <code>%(target.{entity}.domain_id)s</code>.<br />
* The LDAP driver for the assignment backend now supports group-based role assignment operations.<br />
* Keystone now publishes token revocation ''events'' in addition to providing continued support for token revocation ''lists''. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.<br />
* Deployers can now define arbitrary limits on the size of collections in API responses (for example, <code>GET /v3/users</code> might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.<br />
* Lazy translation has been enabled, allowing responses to be translated according to the requested Accept-Language header.<br />
* Keystone now emits i18n-ready log messages.<br />
* Collection filtering is now performed in the driver layer, where possible, for improved performance.<br />
<br />
=== Known Issues ===<br />
<br />
* [https://bugs.launchpad.net/keystone/+bug/1291157 Bug 1291157]: If using the <code>OS-FEDERATION</code> extension, deleting an Identity Provider or Protocol does ''not'' result in previously-issued tokens being revoked. This will not be fixed in the ''stable/icehouse'' branch.<br />
<br />
=== Upgrade Notes ===<br />
<br />
* The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.<br />
* Backwards compatibility for <code>keystone.middleware.auth_token</code> has been removed. The <code>auth_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.auth_token</code> instead.<br />
* The <code>s3_token</code> middleware module is no longer provided by Keystone itself, and must be imported from <code>keystoneclient.middleware.s3_token</code> instead. Backwards compatibility for <code>keystone.middleware.s3_token</code> will be removed in Juno. A paste configuration sketch for the new import path appears after this list.<br />
* The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.<br />
* <code>keystone.contrib.access.core.AccessLogMiddleware</code> has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.<br />
* <code>keystone.contrib.stats.core.StatsMiddleware</code> has been deprecated in favor of external tooling and may be removed in the K release.<br />
* <code>keystone.middleware.XmlBodyMiddleware</code> has been deprecated in favor of support for "application/json" only and may be removed in the K release.<br />
* A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to <code>keystone-paste.ini</code>:<br />
[filter:ec2_extension_v3]<br />
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory<br />
... and <code>ec2_extension_v3</code> needs to be added to the pipeline variable in the <code>[pipeline:api_v3]</code> section of <code>keystone-paste.ini</code>.<br />
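<nowiki># Illustrative only -- append ec2_extension_v3 to whatever your v3 pipeline already contains:</nowiki><br />
[pipeline:api_v3]<br />
pipeline = ... ec2_extension_v3 service_v3<br />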
* <code>etc/policy.json</code> has been updated to provide rules for the new v3 EC2 Credential CRUD, as shown in the updated sample <code>policy.json</code> and <code>policy.v3cloudsample.json</code>.<br />
* Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.<br />
* TODO: deprecations for the move to oslo-incubator db<br />
* A new configuration option, <code>mutable_domain_id</code>, defaults to <code>false</code> to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana. If so, set this value to <code>true</code> and ''please'' voice your use case to the Keystone community.<br />
* TODO: any non-ideal default values that will be changed in the future<br />
* TODO: the move to oslo.messaging<br />
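<br />
For the <code>auth_token</code> and <code>s3_token</code> notes above, a hedged <code>api-paste.ini</code> sketch of the new import path (the filter name is illustrative; use whatever your pipelines already reference):<br />
<br />
[filter:authtoken]<br />
<nowiki># The middleware now lives in python-keystoneclient.</nowiki><br />
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory<br />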
<br />
== OpenStack Network Service (Neutron) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet.<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Block Storage (Cinder) ==<br />
=== Key New Features ===<br />
* Ability to change the type of an existing volume (retype); see the CLI sketch after this list<br />
* Add volume metadata support to the Cinder Backup Object<br />
* Implement Multiple API workers<br />
* Add ability to delete Quota<br />
* Add ability to import/export backups into Cinder<br />
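<br />
A hedged look at the new operations from the python-cinderclient CLI (all names and IDs below are placeholders):<br />
<br />
<nowiki># Change the type of an existing volume.</nowiki><br />
cinder retype VOLUME NEW_TYPE<br />
<nowiki># Export a backup's metadata record, then import it into another deployment.</nowiki><br />
cinder backup-export BACKUP_ID<br />
cinder backup-import BACKUP_SERVICE BACKUP_URL<br />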
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
<br />
== OpenStack Telemetry (Ceilometer) ==<br />
=== Key New Features ===<br />
* API additions<br />
** arbitrarily complex combinations of query constraints for meters, samples and alarms<br />
** capabilities API for discovery of storage driver specific features<br />
** selectable aggregates for statistics, including new cardinality and standard deviation functions <br />
** direct access to samples decoupled from a specific meter<br />
** events API, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Alarming improvements<br />
** time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week<br />
** exclusion of weak data points with anomalously low sample counts <br />
** derived rate-based meters for disk & network, more suited to threshold-oriented alarming <br />
<br />
* Integration touch-points<br />
** split the collector into a separate notification agent solely responsible for consuming external notifications<br />
** redesign of pipeline configuration for pluggable resource discovery<br />
** configurable persistence of raw notification payloads, in the style of [https://github.com/rackerlabs/stacktach StackTach]<br />
<br />
* Storage drivers<br />
** approaching feature parity across the HBase, SQLAlchemy and DB2 drivers<br />
** optimization of resource queries<br />
** HBase: add Alarm support<br />
<br />
* New sources of metrics<br />
** Neutron north-bound API on SDN controller<br />
** VMware vCenter Server API<br />
** SNMP daemons on baremetal hosts<br />
** OpenDaylight REST APIs<br />
<br />
=== Known Issues ===<br />
* SQLAlchemy storage driver is problematic with a scaled out collector service when run against PostgreSQL https://bugs.launchpad.net/ceilometer/+bug/1305332<br />
* HBase storage driver reports truncated list of meters: https://bugs.launchpad.net/ceilometer/+bug/1288284<br />
* HBase storage driver doesn't work with HappyBase version 0.7 <br />
* Excessive load on the nova-api service induced by the compute agent: https://bugs.launchpad.net/ceilometer/+bug/1297528<br />
<br />
=== Upgrade Notes ===<br />
* The pre-existing collector service has been augmented with a new notification agent that must also be started post-upgrade (see the example below)<br />
* MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see [http://docs.mongodb.org/manual/release-notes/2.4-upgrade upgrade instructions].<br />
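<br />
As a sketch of that extra step (service and script names vary by distro packaging; the name below is the upstream console script):<br />
<br />
<nowiki># Start the new notification agent alongside the existing collector.</nowiki><br />
ceilometer-agent-notification<br />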
<br />
== OpenStack Orchestration (Heat) ==<br />
=== Key New Features ===<br />
* '''HOT templates''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html HOT template format] is now supported as the recommended format for authoring heat templates.<br />
* '''OpenStack resources''': There is now sufficient coverage of resource types to port any template to [http://docs.openstack.org/developer/heat/template_guide/openstack.html native OpenStack resources]<br />
* '''Software configuration''': New API and resources to allow software configuration to be performed using a variety of techniques and tools<br />
* '''Non-admin users''': It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.<br />
* '''Operator API''': Cloud operators now have a dedicated admin API to perform operations on all stacks<br />
* '''Autoscaling resources''': [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy] now allow the autoscaling of any arbitrary collection of resources<br />
* '''Notifications''': Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers<br />
* '''Heat engine scaling''': It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with an SQL-based default lock plugin.<br />
* '''File inclusion with get_file''': The [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#intrinsic-functions intrinsic function] get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files.<br />
* '''Cloud-init resources''': The [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime] resources allow cloud-init configuration to be composed and attached to servers from within a template.<br />
* '''Stack abandon and adopt''': It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.<br />
* '''Stack preview''': The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template<br />
* '''New resources''': The following new resources are implemented in this release:<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::CloudConfig OS::Heat::CloudConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::MultipartMime OS::Heat::MultipartMime]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareConfig OS::Heat::SoftwareConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment OS::Heat::SoftwareDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredConfig OS::Heat::StructuredConfig]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::StructuredDeployment OS::Heat::StructuredDeployment]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::RandomString OS::Heat::RandomString]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup OS::Heat::ResourceGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::AutoScalingGroup OS::Heat::AutoScalingGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ScalingPolicy OS::Heat::ScalingPolicy]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::SecurityGroup OS::Neutron::SecurityGroup]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringLabel OS::Neutron::MeteringLabel]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::MeteringRule OS::Neutron::MeteringRule]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::ProviderNet OS::Neutron::ProviderNet]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::NetworkGateway OS::Neutron::NetworkGateway]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember OS::Neutron::PoolMember]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::KeyPair OS::Nova::KeyPair]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIP OS::Nova::FloatingIP]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::FloatingIPAssociation OS::Nova::FloatingIPAssociation]<br />
:* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Trove::Instance OS::Trove::Instance]<br />
<br />
=== Known Issues ===<br />
* Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.<br />
* The new stack-adopt operation should be considered an experimental feature<br />
* CFN API returns HTTP status code 500 on all errors ([https://bugs.launchpad.net/heat/+bug/1291079 bug 1291079])<br />
* Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race ([https://bugs.launchpad.net/heat/+bug/1298350 bug 1298350])<br />
<br />
=== Upgrade Notes ===<br />
Please read the general notes on [https://wiki.openstack.org/wiki/Security/Icehouse/Heat Heat's security model].<br />
<br />
==== Deferred authentication method ====<br />
The default <code>deferred_auth_method</code> of <code>password</code> is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using <code>deferred_auth_method=trusts</code>, which is planned to become the default for Juno. This model has the following benefits:<br />
* It avoids storing user credentials in the heat database<br />
* It removes the need to provide a password as well as a token on stack create<br />
* It limits the actions the heat service user can perform on a user's behalf.<br />
<br />
To enable trusts for deferred operations:<br />
* Ensure the Keystone service that heat is configured to use has the [https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md OS-TRUST extension] enabled<br />
* Set <code>deferred_auth_method = trusts</code> in <code>/etc/heat/heat.conf</code><br />
* Optionally specify the roles to be delegated to the heat service user via <code>trusts_delegated_roles</code> in <code>heat.conf</code>. This defaults to <code>heat_stack_owner</code>, which is the role referred to in the following instructions; you may wish to modify the list of roles to suit your local RBAC policies<br />
* Ensure the role(s) to be delegated exist, e.g. <code>heat_stack_owner</code> appears when running <code>keystone role-list</code><br />
* All users creating heat stacks should possess this role in the project where they are creating the stack. On stack creation, heat creates a trust between the stack owner (the user creating the stack) and the heat service user, delegating the <code>heat_stack_owner</code> role to the service user for the lifetime of the stack. A <code>heat.conf</code> sketch follows this list.<br />
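<br />
A minimal <code>heat.conf</code> sketch for the configuration above (the role list shown is the default; adjust it for your local RBAC policies):<br />
<br />
[DEFAULT]<br />
<nowiki># Delegate via Keystone trusts instead of storing user credentials.</nowiki><br />
deferred_auth_method = trusts<br />
<nowiki># Role(s) delegated to the heat service user for the stack's lifetime.</nowiki><br />
trusts_delegated_roles = heat_stack_owner<br />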
<br />
==== Stack domain users ====<br />
(shardy TODO)<br />
<br />
== OpenStack Database service (Trove) ==<br />
=== Key New Features ===<br />
<br />
=== Known Issues ===<br />
None yet<br />
<br />
=== Upgrade Notes ===<br />
None yet<br />
<br />
== OpenStack Documentation ==<br />
<br />
=== Key New Features ===<br />
<br />
* New manual: Command-Line Interface Reference<br />
* The API reference has been updated and now includes PDF files as well<br />
<br />
=== Known Issues ===<br />
<br />
=== Upgrade Notes ===<br />
None yet</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder&diff=47358Cinder2014-04-01T19:20:21Z<p>John-griffith: /* OpenStack Block Storage ("Cinder") */</p>
<hr />
<div><br />
<br />
= OpenStack Block Storage ("Cinder") =<br />
<br />
{| border="1" cellpadding="2" cellspacing="0"<br />
| [https://launchpad.net/cinder/ Cinder on launchpad (including bug tracker and blueprints)]<br />
|-<br />
| [https://github.com/openstack/cinder Source code]<br />
|-<br />
| [http://docs.openstack.org/developer/cinder/ Developer docs]<br />
|}<br />
<br />
== Mission Statement ==<br />
To implement services and libraries to provide on demand, self-service access to Block Storage resources. Provide Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.<br />
<br />
== Description ==<br />
Cinder is a Block Storage service for OpenStack. It is designed to present storage resources to end users that can be consumed by the OpenStack Compute project (Nova), whether via the reference implementation (LVM) or via drivers for other storage devices. In short, Cinder virtualizes pools of block storage devices and provides end users with a self-service API to request and consume those resources without requiring any knowledge of where their storage is actually deployed or on what type of device.<br />
<br />
== Related projects ==<br />
* Python Cinder client<br />
* Block Storage API documentation<br />
<br />
== What is Cinder ? ==<br />
<br />
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume and has been an independent project since the Folsom release.<br />
<br />
== Reasoning: ==<br />
# Nova is currently a very large project; managing all of the dependencies and linkages between services within Nova makes advancing new features and functionality very difficult.<br />
# As a result of the many components and dependencies in Nova, it is difficult for anybody to have a complete view of Nova and to be a true expert. This makes the job of a Nova core team member very difficult, and inhibits good, thorough reviews of bug and blueprint submissions. <br />
# Block storage is a critical component of [[OpenStack]], as such it warrants focused and dedicated attention.<br />
# Having Block Storage as a dedicated core project in [[OpenStack]] enables the ability to greatly improve functionality and reliability of the block storage component of [[OpenStack]]<br />
<br />
== Documents: ==<br />
* Cinder deep dive (updated for Grizzly): [[File:cinder-grizzly-deep-dive-pub.pdf]]<br />
<br />
== Minimum Driver Features ==<br />
See [https://github.com/openstack/cinder/blob/master/doc/source/devref/drivers.rst driver dev docs]<br />
<br />
=== Keeping consistent with multi-backend ===<br />
In order to maintain consistency with multi-backend, do not use FLAGS.my_flag directly; instead use the self.configuration object that is provided to the volume drivers. If it does not exist in your driver, look at lvm.py and add it. Using FLAGS.my_flag instead of self.configuration.my_flag will break multi-backend, which relies on configuration options living within a specific config group in the config file; self.configuration abstracts that away from the drivers. A minimal sketch follows.<br />
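<br />
A minimal sketch of the pattern (the driver name and the <code>foo_api_port</code> option are hypothetical; real drivers follow the same <code>append_config_values</code> pattern, e.g. lvm.py):<br />
<br />
from oslo.config import cfg<br />
<br />
from cinder.volume import driver<br />
<br />
<nowiki># Hypothetical backend option, declared the normal oslo.config way.</nowiki><br />
foo_opts = [<br />
    cfg.IntOpt('foo_api_port', default=8080,<br />
               help='Port of the hypothetical Foo backend API.'),<br />
]<br />
<br />
<br />
class FooDriver(driver.VolumeDriver):<br />
    def __init__(self, *args, **kwargs):<br />
        super(FooDriver, self).__init__(*args, **kwargs)<br />
        # Make the options readable via this backend's config group.<br />
        self.configuration.append_config_values(foo_opts)<br />
<br />
    def do_setup(self, context):<br />
        # Right: honors [my-backend]/foo_api_port under multi-backend.<br />
        port = self.configuration.foo_api_port<br />
        # Wrong: FLAGS.foo_api_port (or cfg.CONF.foo_api_port) reads<br />
        # [DEFAULT] only and silently breaks multi-backend setups.<br />
        self._port = port<br />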
<br />
== Keeping informed and providing '''CONSTRUCTIVE INPUT''' ==<br />
The Cinder team currently meets on a weekly basis in #openstack-meeting at 16:00 UTC on Wednesdays. I try to keep the meeting's wiki agenda page http://wiki.openstack.org/CinderMeetings up to date and follow it. Also keep in mind that '''anybody''' is able to add/suggest agenda items via the meeting wiki page.<br />
<br />
Of course, there's also IRC... a number of us monitor #openstack-cinder or you can always send a PM to jgriffith (that's me)<br />
<br />
== Concerns from the community: ==<br />
=== Compatibility and Migration: ===<br />
There has been a significant amount of concern raised regarding "compatibility"; unfortunately this seems to mean different things to different people. For those that haven't looked at the Cinder code or tried a demo in devstack, here are some questions and answers:<br />
<br />
* Do the same nova client commands I use for volumes today still work the same? '''YES'''<br />
* Do the same euca2ools that I use for volumes today still work the same? '''YES'''<br />
* Does block storage still work the same as it does today in terms of LVM, iSCSI and the drivers that are currently in place? '''YES'''<br />
* Are the associated database tables the same as they are in the current nova volume code? '''For the most part YES, all volume related tables and columns are migrated, non-volume related tables however are not present'''<br />
* Does it use the same nova database as we use today? '''No, it requires a new, independent database'''<br />
* Are you going to implement cinder with complete disregard for my current install and completely change everything out from under me? '''ABSOLUTELY NOT'''<br />
* Are you going to test migrating from nova-vol to Cinder? '''YES'''<br />
* Are those migration tests going to be done just using fakes/unit tests? '''NO, we would require running setups, most likely devstack'''<br />
* Are you planning to provide migration scripts/tools to move from nova to cinder? '''YES'''<br />
<br />
=== Additional thoughts to keep in mind: ===<br />
* The Cinder core team is fortunate enough to have a number of members who currently work for companies that are using [[OpenStack]] in production environments. There is strong representation, and the concerns of providers are in fact a major consideration<br />
* The goal is '''NOT''' to throw away nova-volume as it is today, but to separate it, focus on it and improve it.<br />
* Migration is one of the top priorities for introduction of Cinder into Folsom (regardless of whether nova-volume is still in place or not). This is something that is just considered a part of the requirements for the project.<br />
<br />
== Cinder Core Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://wiki.openstack.org/wiki/CinderSupportMatrix<br />
<br />
== Notes About Submitting Patches ==<br />
Everyone is welcome to sign the CLA and submit code. Please be sure you familiarize yourself with the "how to contribute guide" (https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer).<br />
<br />
Keep in mind, there is a disproportionate number of submitters to reviewers. YOU can help with this!! Anybody is welcome to review patches; jump in and give a review. It's a great way to learn more about the code and to help you make better submissions in the future. It also helps your karma: when you submit a patch, if you're an active reviewer, core team members are more likely to notice your patch and give it some attention before others.<br />
<br />
== Cinder Plugins ==<br />
How to submit a plugin/driver: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
Cinder Plugin/Driver certification page: https://wiki.openstack.org/wiki/Cinder/certified-drivers<br />
<br />
The following plugins (from other sources) are available for this project:<br />
* [https://wiki.openstack.org/wiki/Mellanox-Cinder Mellanox Cinder Plugin] Mellanox Cinder Plugin<br />
<br />
== Configuring devstack to use your driver and backend ==<br />
One of the things you'll be required to do when submitting a new driver is run your backend and driver in a devstack environment and execute the tempest volume tests against it. Currently we provide a driver_cert wrapper (mentioned in the how-to-contribute-a-driver section). One thing that causes some confusion is: how do I configure devstack to use my backend device? It used to be that your driver info had to be added to lib/cinder in devstack to set your options, and a cinder plugin module was later added to devstack. Fortunately, it's now MUCH easier than that. For *most* drivers, the only changes needed are cinder.conf changes, which can easily be accomplished using devstack's local.conf file (more info here: http://devstack.org/configuration.html). For more complex actions (like the need to install packages, etc.), the plugin directory in devstack can be used. An example of what this file would look like to add driver FOO is shown below; the default localrc section is included for completeness, but the section of interest is the post-config cinder.conf section:<br />
<br />
<nowiki>[[local|localrc]]</nowiki><br />
<nowiki># Passwords</nowiki><br />
ADMIN_PASSWORD=password<br />
MYSQL_PASSWORD=password<br />
RABBIT_PASSWORD=password<br />
SERVICE_PASSWORD=password<br />
SERVICE_TOKEN=password<br />
SCREEN_LOGDIR=/opt/stack/logs<br />
HOST_IP=172.16.140.246<br />
disable_service n-net<br />
enable_service q-svc<br />
enable_service q-agt<br />
enable_service q-dhcp<br />
enable_service q-l3<br />
enable_service q-meta<br />
enable_service neutron<br />
<br />
<nowiki># These options define expected driver capabilities</nowiki><br />
TEMPEST_VOLUME_DRIVER=foo<br />
TEMPEST_VOLUME_VENDOR="Foo Inc"<br />
TEMPEST_STORAGE_PROTOCOL=iSCSI<br />
<br />
<nowiki># These options allow you to specify a branch other than "master" be used</nowiki><br />
CINDER_REPO=https://review.openstack.org/openstack/cinder<br />
CINDER_BRANCH=refs/changes/83/72183/4<br />
<br />
<nowiki># Disable security groups entirely</nowiki><br />
Q_USE_SECGROUP=False<br />
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver<br />
CINDER_SECURE_DELETE=False<br />
<br />
<nowiki>[[post-config|$CINDER_CONF]]</nowiki><br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br />
foos_var = something<br />
another_foo_var = something-else<br />
<br /></div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=46685CinderMeetings2014-03-26T16:50:47Z<p>John-griffith: /* Weekly Cinder team meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code>, on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if placing items in the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''April 2, 2014 16:00 UTC'''<br />
<br />
== Previous meetings ==<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-03-26-16.00.log.html<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issue pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26'th, if you want to run send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=CinderMeetings&diff=46676CinderMeetings2014-03-26T15:46:31Z<p>John-griffith: /* Weekly Cinder team meeting */</p>
<hr />
<div><br />
= Weekly Cinder team meeting =<br />
'''NOTE MEETING TIME: Wed's at 16:00 UTC'''<br />
<br />
If you're interested in Cinder or Block Storage in general for OpenStack, we have a weekly meeting in <code><nowiki>#openstack-meeting</nowiki></code>, on Wednesdays at 16:00 UTC. Please feel free to add items to the agenda below. NOTE: When adding topics please include your IRC name so we know whose topic it is and how to get more info.<br />
<br />
== Next meeting ==<br />
'''NOTE:''' ''Include your IRC nickname next to agenda items so that you can be called upon in the meeting, and arrive at the meeting promptly if placing items in the agenda. You might want to put this on your calendar if you are adding items.''<br />
<br />
'''Mar 26, 2014 16:00 UTC'''<br />
* RC1 updates (jgriffith)<br />
* Design Summit Sessions (jgriffith)<br />
<br />
== Previous meetings ==<br />
<br />
'''Mar 19, 2014 16:00 UTC'''<br />
* ProphetStor Driver Exception request for Icehouse (jgriffith)<br />
* Bug status/updates (jgriffith)<br />
* What we should be punting to Juno (aka immediate -2 in Gerrit) (jgriffith)<br />
* Continuous Integration for Cinder Certification (jungleboyj)<br />
<br />
'''Mar 12, 2014 16:00 UTC'''<br />
* Cancelled due to nothing on the agenda. Ad-hoc discussion on #openstack-cinder instead<br />
<br />
'''Mar 5, 2014 16:00 UTC'''<br />
* Volume replication - avishay<br />
* [https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage New LVM-based driver for shared storage] - mtanino<br />
* DRBD/drbdmanage driver for cinder - philr<br />
<br />
'''Feb 19, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* [https://review.openstack.org/#/c/73745 Milestone Consideration for Drivers] -thingee<br />
* [https://etherpad.openstack.org/p/cinder-hack-201402 Hack-a-thon details] -thingee<br />
* [https://review.openstack.org/#/c/66737/ scheduling for local storage] -DuncanT<br />
<br />
'''Feb 5, 2014 16:00 UTC'''<br />
* I3 Status check/updates<br />
* Cert test<br />
* Multiple pools per backend (bswartz)<br />
'''Jan 8, 2014 16:00 UTC'''<br />
* I2 is just around the corner, blueprint updates<br />
* Alternating meeting time proposal, results on feedback<br />
* Driver cert test, it's there... use it<br />
* Prioritizing patches and reviews<br />
'''December 18, 2013 16:00 UTC'''<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api cinder backup recovery api - import/export backups] - avishay<br />
* Blueprint discussion [https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow] - Griffith<br />
* [https://blueprints.launchpad.net/cinder/+spec/admin-defined-capabilities Admin-defined capabilities] - Ollie<br />
* Why is type manage an extension? -Thingee<br />
<br />
'''December 11, 2013 16:00 UTC'''<br />
* Proposal of [https://etherpad.openstack.org/p/cinder-extensions extension packages] -Thingee<br />
<br />
'''December 4, 2013 16:00 UTC'''<br />
* Progressing with [https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume multi-attach / shared-volume] - sgordon<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-acls-for-volumes Access Control List design discussion] - alatynskaya<br />
<br />
'''November 27, 2013 16:00 UTC'''<br />
* [https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2 Updated volume mirroring design] - avishay<br />
* Start using only Mock for new tests... [http://lists.openstack.org/pipermail/openstack-dev/2013-November/018501.html Related Nova Discussion] - Thingee<br />
* Rate limiting came up in the summit, and [http://lists.openstack.org/pipermail/openstack-dev/2013-November/020291.html on openstack-dev] - avishay<br />
* Metadata backup (https://review.openstack.org/#/c/51900/) progress RFC - dosaboy<br />
<br />
<br />
'''November 20, 2013 16:00 UTC'''<br />
* I-1 scheduling - JGriffith<br />
<br />
<br />
'''November 13, 2013 16:00 UTC'''<br />
* patches should update doc files where necessary to ease writing of release notes (Avishay?)<br />
* fencing host from storage (Ehud Trainin)<br />
* Summarize priority of tasks from summit discussions (https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Cinder and https://etherpad.openstack.org/p/cinder-icehouse-summary) Griff<br />
<br />
<br />
'''October 30, 2013 16:00 UTC'''<br />
* cinder backup metadata support - http://goo.gl/Jkg2FV (dosaboy)<br />
* fencing and unfencing host from storage - https://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing (Ehud Trainin)<br />
<br />
'''October 23, 2013 16:00 UTC'''<br />
* Nexenta backup driver https://review.openstack.org/#/c/47005/ - DuncanT<br />
<br />
<br />
'''October 2, 2013 16:00 UTC'''<br />
* What's still broken in Havana<br />
:* Backups and multibackend (https://code.launchpad.net/bugs/1228223): Fix committed<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066): Fix committed<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here): '''???'''<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896): '''Still open'''<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469): Probable fix committed<br />
:* Summary of gate issues pertaining to Cinder can be viewed here: http://paste.openstack.org/show/47798/<br />
:* Moving to taskflow - avishay<br />
<br />
<br />
'''Sept 25, 2013, 16:00 UTC'''<br />
* PTL nomination process is open until the 26th; if you want to run, send your nomination proposal out to the dev ML<br />
* What's broken in Havana<br />
:* Backups (specifically when configured with multi-backend volumes)<br />
:* Configuration - Global CONF settings in brick don't belong, and a number of them break multi-backend (Bug #1230066)<br />
:* TaskFlow retry mechanism - The majority felt this should be left as a white list, but no work has been done to fix it so we still have ugly failures/roll-backs (3 bugs logged here)<br />
:* Quotas - Don't know that anybody has gotten to the bottom of the quota syncing issue (Bug #1202896)<br />
:* iSCSI Target creation failures - This was thought to have been fixed but showed up last night (Bug #1223469)<br />
:* ????<br />
* Cinderclient release plans/status? (Eharney)<br />
* OSLO imports (DuncanT)<br />
* bp/cinder-backup-improvements (dosaboy)<br />
* bp/multi-attach (zhiyan)<br />
<br />
<br />
<br />
'''Aug 21, 2013, 16:00 UTC'''<br />
# No agenda, no meeting.<br />
<br />
'''Aug 14, 2013, 16:00 UTC'''<br />
# Volume migration status - avishay<br />
# API extensions using metadata. This comes from the [https://review.openstack.org/#/c/38322/ readonly volume attach support]. - thingee<br />
[http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-08-14-16.00.log.html IRC Log]<br />
<br />
'''Aug 7, 2013, 16:00 UTC'''<br />
# [https://bugs.launchpad.net/cinder/+bug/1209199 RFC - make all rbd clones copy-on-write] -- Dosaboy<br />
# V1 API removal issues, plans and timescales - DuncanT<br />
<br />
== Meeting Minutes ==<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2014/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2013/<br />
<br />
http://eavesdrop.openstack.org/meetings/cinder/2012/</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder&diff=42553Cinder2014-02-17T18:12:20Z<p>John-griffith: /* Cinder Plugins */</p>
<hr />
<div><br />
= OpenStack Block Storage ("Cinder") =<br />
<br />
{| border="1" cellpadding="2" cellspacing="0"<br />
| [[https://launchpad.net/cinder/ Cinder on launchpad (including bug tracker and blueprints)]]<br />
|-<br />
| [[https://github.com/openstack/cinder Source code]]<br />
|-<br />
| [[http://docs.openstack.org/developer/cinder/ Developer docs]]<br />
|-<br />
| [[http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/ OpenStack Block Storage Service Administration Guide]]<br />
|}<br />
<br />
== Related projects ==<br />
* Python Cinder client<br />
* Block Storage API documentation<br />
<br />
== What is Cinder ? ==<br />
<br />
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, and has been an independent project since the Folsom release.<br />
<br />
== Reasoning: ==<br />
# Nova is currently a very large project; managing all of the dependencies and linkages of services within Nova makes advancing new features and functionality very difficult.<br />
# As a result of the many components and dependencies in Nova, it's difficult for anybody to really have a complete view of Nova and to be a true expert. This makes the job of a core team member on Nova very difficult, and inhibits good, thorough reviews of bug and blueprint submissions. <br />
# Block storage is a critical component of [[OpenStack]]; as such it warrants focused and dedicated attention.<br />
# Having Block Storage as a dedicated core project in [[OpenStack]] makes it possible to greatly improve the functionality and reliability of the block storage component of [[OpenStack]].<br />
<br />
== Documents: ==<br />
* Cinder deep dive (updated for Grizzly): [[File:cinder-grizzly-deep-dive-pub.pdf]]<br />
<br />
== Minimum Driver Features ==<br />
See [https://github.com/openstack/cinder/blob/master/doc/source/devref/drivers.rst driver dev docs]<br />
<br />
=== Keeping consistent with multi-backend ===<br />
In order to maintain consistency with multi-backend, do not use FLAGS.my_flag directly; instead, use the self.configuration object that is provided to the volume drivers. If it does not exist in your driver, look at lvm.py and add it. Using FLAGS.my_flag instead of self.configuration.my_flag will cause multi-backend to not work properly: multi-backend relies on the configuration options living within a specific config group in the config file, and self.configuration abstracts that away from the drivers.<br />
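For illustration, here is a minimal sketch of the pattern (the FOO driver and its option are hypothetical, not an actual Cinder driver):<br />
<pre>
from oslo.config import cfg

from cinder.volume import driver

foo_opts = [
    cfg.StrOpt('foo_api_endpoint',
               default='https://localhost',
               help='Hypothetical API endpoint for the FOO backend.'),
]


class FooDriver(driver.VolumeDriver):
    """Illustrative skeleton only; FOO and its option are made up."""

    def __init__(self, *args, **kwargs):
        super(FooDriver, self).__init__(*args, **kwargs)
        # self.configuration is scoped to this backend's config group
        # (e.g. [foo-backend]) when multi-backend is in use.
        self.configuration.append_config_values(foo_opts)

    def do_setup(self, context):
        # Wrong: FLAGS.foo_api_endpoint reads the global [DEFAULT]
        # section and breaks multi-backend.
        # Right: read the option through self.configuration.
        self.endpoint = self.configuration.foo_api_endpoint
</pre>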
<br />
== Keeping informed and providing '''CONSTRUCTIVE INPUT''' ==<br />
The Cinder team currently meets on a weekly basis in #openstack-meeting at 16:00 UTC on Wednesdays. I try to keep the meetings wiki agenda page http://wiki.openstack.org/CinderMeetings up to date and follow it. Also keep in mind that '''anybody''' is able to add/suggest agenda items via the meeting wiki page.<br />
<br />
Of course, there's also IRC... a number of us monitor #openstack-cinder or you can always send a PM to jgriffith (that's me)<br />
<br />
== Concerns from the community: ==<br />
=== Compatibility and Migration: ===<br />
There has been a significant amount of concern raised regarding "compatibility"; unfortunately this seems to mean different things to different people. For those that haven't looked at the Cinder code or tried a demo in devstack, here are some questions and answers:<br />
<br />
* Do the same nova client commands I use for volumes today still work the same? '''YES'''<br />
* Do the same euca2ools that I use for volumes today still work the same? '''YES'''<br />
* Does block storage still work the same as it does today in terms of LVM, iSCSI and the drivers that are currently in place? '''YES'''<br />
* Are the associated database tables the same as they are in the current nova volume code? '''For the most part YES, all volume related tables and columns are migrated, non-volume related tables however are not present'''<br />
* Does it use the same nova database as we use today? '''No, it does require a new independent database'''<br />
* Are you going to implement cinder with complete disregard for my current install and completely change everything out from under me? '''ABSOLUTELY NOT'''<br />
* Are you going to test migrating from nova-vol to Cinder? '''YES'''<br />
* Are those migration tests going to be done just using fakes/unit tests? '''NO, we would require running setups, most likely devstack'''<br />
* Are you planning to provide migration scripts/tools to move from nova to cinder? '''YES'''<br />
<br />
=== Additional thoughts to keep in mind: ===<br />
* The Cinder core team is fortunate enough to have a number of members who currently work for companies that are using [[OpenStack]] in production environments. There is strong representation, and the concerns of providers are in fact a major consideration.<br />
* The goal is '''NOT''' to throw away nova-volume as it is today, but to separate it, focus on it and improve it.<br />
* Migration is one of the top priorities for introduction of Cinder into Folsom (regardless of whether nova-volume is still in place or not). This is something that is just considered a part of the requirements for the project.<br />
<br />
== Cinder Core Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://wiki.openstack.org/wiki/CinderSupportMatrix<br />
<br />
== Notes About Submitting Patches ==<br />
Everyone is welcome to sign the CLA and submit code. Please be sure you familiarize yourself with the "how to contribute guide" (https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer).<br />
<br />
Keep in mind, there is a disproportionate number of submitters to reviewers. YOU can help with this!! Anybody is welcome to review patches: jump in and give a review. It's a great way to learn more about the code and to help you make better submissions in the future. It also helps your karma: if you're an active reviewer, core team members are more likely to notice your patch when you submit one and give it some attention before others.<br />
<br />
== Cinder Plugins ==<br />
How to submit a plugin/driver: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
Cinder Plugin/Driver certification page: https://wiki.openstack.org/wiki/Cinder/certified-drivers<br />
<br />
The following plugins (from other sources) are available for this project:<br />
* [https://wiki.openstack.org/wiki/Mellanox-Cinder Mellanox Cinder Plugin]<br />
<br />
== Configuring devstack to use your driver and backend ==<br />
One of the things you'll be required to do when submitting a new driver is to run your backend and driver in a devstack environment and execute the tempest volume tests against it. Currently we provide a driver_cert wrapper (mentioned in the how-to-contribute-a-driver section). One thing that causes some confusion is how to configure devstack to use your backend device. It used to be that your driver info had to be added to lib/cinder in devstack to set your options, and we then created a cinder/plugin module in devstack. Fortunately, it's now MUCH easier than that. For *most* drivers, the only changes needed are cinder.conf changes, which can easily be accomplished using devstack's local.conf file (more info here: http://devstack.org/configuration.html). For more complex actions (like the need to install packages, etc.), the plugin directory in devstack can be used. An example of what this file would look like to add driver FOO is shown below; the default localrc section is included for completeness, but the section of interest is the post-config cinder.conf section:<br />
<br />
<pre>
[[local|localrc]]
# Passwords
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SCREEN_LOGDIR=/opt/stack/logs
HOST_IP=172.16.140.246
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron

# Disable security groups entirely
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
CINDER_SECURE_DELETE=False

[[post-config|$CINDER_CONF]]
volume_driver = cinder.volume.drivers.foo.FooDriver
foos_var = something
another_foo_var = something-else
</pre>
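With a local.conf like the one above in place, running stack.sh brings up devstack against the FOO backend, and the tempest volume tests (or the driver_cert wrapper mentioned earlier) can then be run to exercise the driver end to end.<br />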
<br /></div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=42549Cinder/tested-3rdParty-drivers2014-02-17T17:42:21Z<p>John-griffith: /* Testing requirements for upcoming Juno release */</p>
<hr />
<div>= Driver Certification =<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests and certify is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests. Currently, each vendor runs this certification test against their backend driver in their own environment. The wrapper is very simple: it does a fresh clone of the cinder and tempest repos, restarts services, then runs the volume.api tagged tests in the tempest suites and collects the output to a temporary log file.<br />
<br />
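As a rough illustration of that workflow, a sketch is below (this is not the actual cert script; the clone targets and testr invocation are assumptions for illustration):<br />
<pre>
#!/usr/bin/env python
# Illustrative sketch of the cert wrapper's workflow; names and
# commands here are assumptions, not the in-tree script.
import subprocess
import tempfile


def run(cmd, **kwargs):
    subprocess.check_call(cmd, **kwargs)

log_file = tempfile.NamedTemporaryFile(prefix='cert-', delete=False).name

# Fresh clones of the cinder and tempest repos.
run(['git', 'clone', 'https://github.com/openstack/cinder'])
run(['git', 'clone', 'https://github.com/openstack/tempest'])

# ... restart the cinder services so they run the fresh code ...

# Run the volume API tagged tests and collect output to the log file.
run(['testr', 'init'], cwd='tempest')
with open(log_file, 'w') as out:
    run(['testr', 'run', 'tempest.api.volume'],
        cwd='tempest', stdout=out, stderr=subprocess.STDOUT)
print('Results collected in %s' % log_file)
</pre>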
This is far from extensive or ideal; however, it has already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run/updated at each milestone and at each RC for Icehouse. The most recent runs of the certification tests, with links to the resultant log files, are listed in the table at the bottom of this wiki page.<br />
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver must be covered by external third-party testing. The testing should be Tempest executed against a Devstack build that contains the proposed code changes, in a vendor-managed environment configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects that the third party testing will provide a +/-1 verify vote for all changes to a plugin or driver’s code. In addition, the Cinder team expects that the third party test will also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes and the Cinder team hopes to catch any possible regressions as early as possible.<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| SolidFire || Pass || https://s3.amazonaws.com/solidfire-cert-results/tmp.wsfgEXbccC || Feb 13, 2014<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=42546Cinder/tested-3rdParty-drivers2014-02-17T17:40:49Z<p>John-griffith: /* Most Recent Results for Icehouse */</p>
<hr />
<div>= Driver Certification =<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests and certify is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests. The process currently is for each vendor to run this certification test against their backend driver in their own environment. The wrapper is very simple, it just does a fresh clone of the cinder and tempest repos and restarts services, then runs the tempest volume.api tagged tests in the tempest suites and collects the output to a temporary log file.<br />
<br />
This is far from extensive or ideal, however it's already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run/updated at each milestone and at each RC for Icehouse. The current list of the most recent run of the certification tests and a link to the resultant log files is included in the table found at the bottom of this wiki page.<br />
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver code must implement external third party testing. The testing should be Tempest executed against a Devstack build with the proposed code changes. The environment managed by the vendor should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects that the third party testing will provide a +/-1 verify vote for all changes to a plugin or driver’s code. In addition, the Neutron team expects that the third party test will also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes and the Neutron team hopes to catch any possible regressions as early as possible.<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| SolidFire || Pass || https://s3.amazonaws.com/solidfire-cert-results/tmp.wsfgEXbccC || Feb 13, 2014<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=42544Cinder/tested-3rdParty-drivers2014-02-17T17:39:59Z<p>John-griffith: /* Most Recent Results for Icehouse */</p>
<hr />
<div>= Driver Certification =<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests and certify is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests. The process currently is for each vendor to run this certification test against their backend driver in their own environment. The wrapper is very simple, it just does a fresh clone of the cinder and tempest repos and restarts services, then runs the tempest volume.api tagged tests in the tempest suites and collects the output to a temporary log file.<br />
<br />
This is far from extensive or ideal, however it's already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run/updated at each milestone and at each RC for Icehouse. The current list of the most recent run of the certification tests and a link to the resultant log files is included in the table found at the bottom of this wiki page.<br />
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver code must implement external third party testing. The testing should be Tempest executed against a Devstack build with the proposed code changes. The environment managed by the vendor should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects that the third party testing will provide a +/-1 verify vote for all changes to a plugin or driver’s code. In addition, the Neutron team expects that the third party test will also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes and the Neutron team hopes to catch any possible regressions as early as possible.<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| SolidFire || Pass || solidfire-cert-results.s3-website-us-east-1.amazonaws.com || Feb 13, 2014<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=42539Cinder/tested-3rdParty-drivers2014-02-17T16:50:46Z<p>John-griffith: /* Driver Certification */</p>
<hr />
<div>= Driver Certification =<br />
=== Testing for current Icehouse release ===<br />
The idea of requiring drivers to run functional tests and certify is new to the Icehouse release. To get started with this process we've implemented a simple wrapper around the tempest volume.api tests. The process currently is for each vendor to run this certification test against their backend driver in their own environment. The wrapper is very simple, it just does a fresh clone of the cinder and tempest repos and restarts services, then runs the tempest volume.api tagged tests in the tempest suites and collects the output to a temporary log file.<br />
<br />
This is far from extensive or ideal, however it's already uncovered a number of issues with existing drivers and has proven to be a beneficial process. Currently this is a manual process that we'd like to see run/updated at each milestone and at each RC for Icehouse. The current list of the most recent run of the certification tests and a link to the resultant log files is included in the table found at the bottom of this wiki page.<br />
<br />
=== Testing requirements for upcoming Juno release ===<br />
To be designated as compatible, a third-party plugin and/or driver code must implement external third party testing. The testing should be Tempest executed against a Devstack build with the proposed code changes. The environment managed by the vendor should be configured to incorporate the plugin and/or driver solution. The OpenStack Infrastructure team has provided details on how to integrate 3rd party testing at:<br />
<br />
http://ci.openstack.org/third_party.html<br />
<br />
and Tempest can be found at:<br />
<br />
https://github.com/openstack/tempest<br />
<br />
The Cinder team expects that the third party testing will provide a +/-1 verify vote for all changes to a plugin or driver’s code. In addition, the Neutron team expects that the third party test will also vote on all code submissions by the jenkins user. The jenkins user regularly submits requirements changes and the Neutron team hopes to catch any possible regressions as early as possible.<br />
<br />
=== Most Recent Results for Icehouse ===<br />
{| class="wikitable"<br />
|-<br />
! Driver Name !! Pass/Fail !! Link to Log Files !! Date of Test Run<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|-<br />
| Example || Example || Example || Example<br />
|}</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder/tested-3rdParty-drivers&diff=42536Cinder/tested-3rdParty-drivers2014-02-17T16:39:44Z<p>John-griffith: Created page with "= Driver Certification ="</p>
<hr />
<div>= Driver Certification =</div>John-griffithhttps://wiki.openstack.org/w/index.php?title=Cinder&diff=42381Cinder2014-02-14T23:59:41Z<p>John-griffith: /* Configuring devstack to use your driver and backend */</p>
<hr />
<div><br />
= OpenStack Block Storage ("Cinder") =<br />
<br />
{| border="1" cellpadding="2" cellspacing="0"<br />
| [[https://launchpad.net/cinder/ Cinder on launchpad (including bug tracker and blueprints)]]<br />
|-<br />
| [[https://github.com/openstack/cinder Source code]]<br />
|-<br />
| [[http://docs.openstack.org/developer/cinder/ Developer docs]]<br />
|-<br />
| [[http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/ OpenStack Block Storage Service Administration Guide]]<br />
|}<br />
<br />
== Related projects ==<br />
* Python Cinder client<br />
* Block Storage API documentation<br />
<br />
== What is Cinder ? ==<br />
<br />
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but has become an independent project since the Folsom release.<br />
<br />
== Reasoning: ==<br />
# Nova is currently a very large project; managing all of the dependencies in linkages of services within Nova can make the ability to advance new features and functionality very difficult.<br />
# As a result of the many components and dependencies in Nova, it's difficult for anybody to really have a complete view of Nova and to be a true expert. This makes the job of core team member on Nova very difficult, and inhibits good thorough reviews of bug and blueprint submissions. <br />
# Block storage is a critical component of [[OpenStack]], as such it warrants focused and dedicated attention.<br />
# Having Block Storage as a dedicated core project in [[OpenStack]] enables the ability to greatly improve functionality and reliability of the block storage component of [[OpenStack]]<br />
<br />
== Documents: ==<br />
* Cinder deep dive (updated for Grizzly): [[File:cinder-grizzly-deep-dive-pub.pdf]]<br />
<br />
== Minimum Driver Features ==<br />
See [https://github.com/openstack/cinder/blob/master/doc/source/devref/drivers.rst driver dev docs]<br />
<br />
=== Keeping consistant with multi backend ===<br />
In order to maintain consistency with multi backend, do not directly use FLAGS.my_flag, instead use the self.configuration that is provided to the volume drivers. If this does not exist, look @ lvm.py and add it to your driver. using FLAGS.my_flag instead of self.configuration.my_flag will cause multi backend to not work properly. Multi backend relies on the configurations to be within a specific config group in the config file, and the self.configuration abstracts that away from the drivers.<br />
<br />
== Keeping informed and providing '''CONSTRUCTIVE INPUT''' ==<br />
The Cinder team currently meets on a weekly basis in #openstack-meeting at 16:00 UTC on Wednesdays. I try to keep the meetings wiki agenda page http://wiki.openstack.org/CinderMeetings up to date and follow it. Also keep in mind that '''anybody''' is able to add/suggest agenda items via the meeting wiki page.<br />
<br />
Of course, there's also IRC... a number of us monitor #openstack-cinder or you can always send a PM to jgriffith (that's me)<br />
<br />
== Concerns from the community: ==<br />
=== Compatibility and Migration: ===<br />
There has been a significant amount of concern raised regarding "compatibility"; unfortunately this seems to mean different things to different people. For those that haven't looked at the Cinder code or tried a demo in devstack, here are some question/answers:<br />
<br />
* Do the same nova client commands I use for volumes today still work the same? '''YES'''<br />
* Do the same euca2ools that I use for volumes today still work the same? '''YES'''<br />
* Does block storage still work the same as it does today in terms of LVM, iSCSI and the drivers that are curently in place? '''YES'''<br />
* Are the associated database tables the same as they are in the current nova volume code? '''For the most part YES, all volume related tables and columns are migrated, non-volume related tables however are not present'''<br />
* Does it use the same nova database as we use today? '''No, it does require a new independent database'''<br />
* Are you going to implement cinder with complete disregard for my current install and completely change everything out from under me? '''ABSOLUTELY NOT'''<br />
* Are you going to test migrating from nova-vol to Cinder? '''YES'''<br />
* Are those migration tests going to be done just using fakes/unit tests? '''NO, we would require running setups, most likely devstack'''<br />
* Are you planning to provide migration scripts/tools to move from nova to cinder? '''YES'''<br />
<br />
=== Additional thoughts to keep in mind: ===<br />
* The Cinder core team is fortunate enough to have a number of members who currently work for companies that are using [[OpenStack]] in production environments. There is a strong representation and the concerns of Providers is in fact a major consideration<br />
* The goal is '''NOT''' to throw away nova-volume as it is today, but to separate it, focus on it and improve it.<br />
* Migration is one of the top priorities for introduction of Cinder into Folsom (regardless of whether nova-volume is still in place or not). This is something that is just considered a part of the requirements for the project.<br />
<br />
== Cinder Core Drivers ==<br />
For a list of the core drivers in each OpenStack release and the volume operations they support, see https://wiki.openstack.org/wiki/CinderSupportMatrix<br />
<br />
== Notes About Submitting Patches ==<br />
Everyone is welcome to sign the CLA and submit code. Please be sure you familiarize yourself with the "how to contribute guide" (https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer).<br />
<br />
Keep in mind, there is a disproportionate number of submitters to reviewers. YOU can help with this!! Anybody is welcome to review patches, jump in, give a review. It's a great way to learn more about the code and to help you make better submissions in the future. It also helps your karma, when you submit a patch if you're an active reviewer core team members are more likely to notice your patch and give it some attention before some others.<br />
<br />
== Cinder Plugins ==<br />
How to submit a plugin/driver: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver<br />
<br />
The following plugins (from other sources) are avaialble for this project<br />
* [https://wiki.openstack.org/wiki/Mellanox-Cinder Mellanox Cinder Plugin] Mellanox Cinder Plugin<br />
<br />
== Configuring devstack to use your driver and backend ==<br />
One of the things you'll be required to do when submitting a new driver is running your backend and driver in a devstack environment and executing the tempest volume tests against it. Currently we provide a driver_cert wrapper (mentioned in the how-to-contribute-a-driver section). One thing that causes some confusion is how do I configure devstack to use my backend device. It used to be that your driver info would have to be added to lib/cinder in devstack to set your options. We then created a cinder/plugin module in devstack. Fortunately though it's MUCH easier than that. For *most* drivers, the only changes that are made consist of cinder.conf file changes. That can easily be accomplished by using devstacks local.conf file (more info here: http://devstack.org/configuration.html). For more complex actions (like the need to install packages etc, the plugin directory in devstack can be used). An example of what this file would look like to add driver FOO is shown below, the default localrc section is included for completeness, but the section of interest is the post-config cinder.conf section:<br />
<br />
<nowiki>[[local|localrc]]</nowiki><br /><br />
<nowiki># Passwords<br /></nowiki><br />
ADMIN_PASSWORD=password<br /><br />
MYSQL_PASSWORD=password<br /><br />
RABBIT_PASSWORD=password<br /><br />
SERVICE_PASSWORD=password<br /><br />
SERVICE_TOKEN=password<br /><br />
SCREEN_LOGDIR=/opt/stack/logs<br /><br />
HOST_IP=172.16.140.246<br /><br />
disable_service n-net<br /><br />
enable_service q-svc<br /><br />
enable_service q-agt<br /><br />
enable_service q-dhcp<br /><br />
enable_service q-l3<br /><br />
enable_service q-meta<br /><br />
enable_service neutron<br /><br />
<br /><br />
<nowiki># Disable security groups entirely<br /></nowiki><br />
Q_USE_SECGROUP=False<br /><br />
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver<br /><br />
CINDER_SECURE_DELETE=False<br /><br />
<br /><br />
<nowiki>[[post-config|$CINDER_CONF]]</nowiki><br /><br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br /><br />
foos_var = something<br /><br />
another_foo_var = something-else<br /><br />
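<br />
For reference, here is a rough sketch of what devstack would merge into cinder.conf from the post-config section above. This is an illustration only: FooDriver, foos_var and another_foo_var are the hypothetical names carried over from the example, and it assumes the options land in the <nowiki>[DEFAULT]</nowiki> group (recent devstack versions may require you to spell out that group header yourself under the post-config section):<br />
<br />
<nowiki>[DEFAULT]</nowiki><br /><br />
<nowiki># Hypothetical driver options from the FOO example above<br /></nowiki><br />
volume_driver = cinder.volume.drivers.foo.FooDriver<br /><br />
foos_var = something<br /><br />
another_foo_var = something-else<br /><br />
<br />
Once stack.sh completes with this configuration, you can point the Tempest volume tests at the backend. A minimal sketch, assuming Tempest is checked out under /opt/stack/tempest as in a default devstack install (the exact test-runner invocation may vary between releases):<br />
<br />
<nowiki># Run only the volume API tests against the new backend<br /></nowiki><br />
cd /opt/stack/tempest<br /><br />
tox -e all -- tempest.api.volume<br /><br />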
<br /></div>John-griffith