Latest revision as of 00:30, 23 September 2014
- 1 OpenStack 2013.1 (Grizzly) Release Notes
- 1.1 General Upgrade Notes
- 1.2 OpenStack Object Storage (Swift)
- 1.3 OpenStack Compute (Nova)
- 1.4 OpenStack Image Service (Glance)
- 1.5 OpenStack Dashboard (Horizon)
- 1.5.1 Key New Features
- 1.5.1.1 Networking
- 1.5.1.2 Direct Image Upload To Glance
- 1.5.1.3 Flavor Extra Specs Support
- 1.5.1.4 Migrate Instance
- 1.5.1.5 User Experience Improvements
- 1.5.1.6 Community
- 1.5.1.7 Under The Hood
- 1.5.1.8 Other Improvements and Fixes
- 1.5.2 Known Issues
- 1.5.3 Upgrade Notes
- 1.6 OpenStack Identity (Keystone)
- 1.7 OpenStack Network Service (Quantum)
- 1.8 OpenStack Block Storage (Cinder)
- 2 Known packaged distributions
OpenStack 2013.1 (Grizzly) Release Notes
General Upgrade Notes
- Many projects have had their service launching scripts (e.g. nova-api) or their admin CLIs (e.g. keystone-manage) ported from optparse to argparse. This has led to some minor incompatibilities in the arguments accepted by commands. For example, in glance you can no longer do glance-manage db_sync --config-file=... but must instead do glance-manage --config-file=... db_sync because the option is for the top-level command rather than the sub-command.
- Many projects have had their default loglevel changed to WARNING. Use verbose=True to change the loglevel to INFO (the previous default) and debug=True to change the loglevel to DEBUG. See bug #989269
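The optparse-to-argparse incompatibility described above comes from how argparse scopes options: an option defined on the top-level parser is not recognized by the sub-command's parser. A minimal sketch (the parser layout here is illustrative, not Glance's actual code):

```python
import argparse

# Sketch of the argparse layout that glance-manage and similar admin
# CLIs moved to: --config-file is defined on the top-level parser, so
# it must appear before the sub-command name.
parser = argparse.ArgumentParser(prog="glance-manage")
parser.add_argument("--config-file", default=None)
sub = parser.add_subparsers(dest="command")
sub.add_parser("db_sync")

# New-style invocation: top-level option first, then the sub-command.
# The old style ("db_sync --config-file=...") is rejected because the
# db_sync subparser does not define that option.
args = parser.parse_args(["--config-file", "/etc/glance/glance-api.conf", "db_sync"])
print(args.command)       # db_sync
print(args.config_file)   # /etc/glance/glance-api.conf
```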
OpenStack Object Storage (Swift)
During the OpenStack Grizzly release cycle, Swift has released version 1.7.5, 1.7.6, and 1.8.0. The full changelog for these releases is available at https://github.com/openstack/swift/blob/master/CHANGELOG. Highlights from this changelog are below.
Key New Features
- Added support for large objects with static manifests: http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.slo
- Global clusters building blocks
- Allow the rings to have an adjustable replica count: Deployers can now adjust the replica count on existing clusters
- Allow rings to have different replica counts: Deployers can choose different replica counts for account, container, and object rings
- Added support for a region tier above zones: Deployers can group zones into regions.
- Added timing-based sorting of object servers on read requests: This allows the fastest responding server to serve the most requests instead of a random choice of the replicas. This can be especially useful when replicas are in different regions separated by a WAN.
- Support for the OPTIONS verb: Clients can use the OPTIONS verb as defined in http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
- Added support for CORS requests: http://www.w3.org/TR/cors/
- Bulk requests
- Added support for auto-extracting archive uploads: A client can upload an archive file (i.e. a .tar file) and the contents will be stored individually in the cluster
- Added support for bulk deletes: A client can delete many objects with one delete request
- Support for custom log handlers: With this, deployers can send logs to one or more custom handlers as described in the admin guide: http://docs.openstack.org/developer/swift/admin_guide.html?highlight=custom%20log%20handlers#custom-log-handlers
- Support for multi-range requests: Clients can request multiple ranges from an object with just one request
- Added an optional, temporary healthcheck failure: The healthcheck middleware will now report failure if a local file exists. This allows for better flexibility when upgrading individual servers
- StatsD updates
- Now reports timings instead of counts for errors
- Track unlinks for async pendings
- Fixed sample_rate
- Changed the default sample rate for a few high-traffic requests
- Added first-byte latency timings for GET requests
- Added per disk PUT timing monitoring support
- Added user-managed container quotas
- Added support for account-level quotas (managed by an auth reseller)
- Replication can now run against specific devices or partitions
- list_endpoints middleware: This middleware provides an API for determining where the ring places data. It is especially useful for integration with applications that can move compute jobs "close" to where the data is stored.
- Removed a dependency on webob
- Swift now returns 406 if it cannot satisfy the Accept header
- Swift will now reject names with NULL characters
- Added --top option to swift-recon -d
- Added options to swift-dispersion report to limit reports
- Added option to turn eventlet debug on/off
- proxy-logging middleware updates: proxy-logging can now handle logging for other middleware
- Added swift_hash_path_prefix option to swift.conf: New deployers are encouraged to set this to a random secret value
- Added fallocate_reserve option to protect against full drives
- Allow ring rebalance to take a seed
- Ring serialization will now produce the same gzip file (Py2.7+)
- Added support to swift-drive-audit for handling rotated logs
- Added speed limit options for DB auditor
- Force log entries to be one line
- Ensure that fsync is used and not just fdatasync
- Improved handoff node selection: More nodes in the cluster will be selected as handoffs and the choice will be made in a more consistent manner when the ring changes
- Updated Swift's MemcacheRing to provide API compatibility with common Python memcache libraries
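The swift_hash_path_prefix option above deserves a concrete note: Swift derives ring placement from an MD5 hash of the object path, salted with secret prefix/suffix values, so keeping those values secret prevents clients from predicting (and deliberately targeting) specific disks. A sketch of generating a suitable random secret and of the salted-hash idea (hash_path here is an illustration of the concept, not Swift's exact implementation):

```python
import hashlib
import secrets

# A random secret suitable for swift_hash_path_prefix in swift.conf.
prefix = secrets.token_hex(16)

def hash_path(account, container, obj, prefix=prefix, suffix=""):
    # Simplified sketch: the placement hash mixes the secret prefix and
    # suffix into the MD5 of the /account/container/object path.
    path = "/".join([account, container, obj])
    return hashlib.md5((prefix + "/" + path + suffix).encode()).hexdigest()

h = hash_path("AUTH_test", "photos", "cat.jpg")
print(len(h))  # 32
```

Because the hash depends on the secret, the same object name maps to different ring positions on clusters with different prefixes, which is exactly why the value must be set once at deployment time and never changed afterwards.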
As always, deployers can upgrade to the latest version of Swift with no downtime on their existing clusters. In the Grizzly timeframe, the following changes have been made that may affect existing deployments.
- proxy-logging middleware updates: proxy-logging should be used twice in the proxy pipeline. The first handles middleware logs for requests that never made it all the way to the server. The last handles requests that do make it to the server. This change may require an update to your proxy server config file or to any custom middleware you may be using. See the full docs at http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.proxy_logging.
- StatsD default sample rate changed for some metrics: Added log_statsd_sample_rate_factor to globally tune the StatsD sample rate. This tunable can be used to reduce StatsD traffic proportionally for all metrics and is intended to replace log_statsd_default_sample_rate, which is left alone for backward compatibility, should anyone be using it.
- webob is no longer a dependency
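The proxy-logging change above is easiest to see in the pipeline itself. An illustrative proxy-server.conf fragment (the exact set of middleware names is an assumption here; your pipeline will differ):

```ini
[pipeline:main]
# proxy-logging appears twice: the first copy logs requests rejected by
# later middleware before they reach the server; the last copy logs
# requests that make it all the way to the proxy server.
pipeline = catch_errors proxy-logging cache bulk slo authtoken keystoneauth proxy-logging proxy-server

[filter:proxy-logging]
use = egg:swift#proxy_logging
```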
OpenStack Compute (Nova)
Key New Features
- Cells: Grizzly will include a preview (experimental) release of cells functionality. Cells provides a new way to scale nova deployments, including the ability to have compute clusters (cells) in different geographic locations all under the same nova API. See Cells docs for more details.
- Availability Zones: Availability Zone support has been enhanced. Previously, the only way to set the availability zone for a given compute node was via its configuration file. You can now set a node's availability zone via the API.
- Admin APIs: There have been multiple additions to the API for administrative actions. This has been done to continue to move away from needing the nova-manage utility for most administrative tasks.
- API support for instance passwords: This enhancement to nova improves support for instances that require passwords to work, such as those running Windows. Instances can now generate and post an encrypted password to the metadata API (write once). This password can be retrieved via the public nova API. This functionality can be integrated with a guest initialization tool such as cloud-init.
- Bare metal provisioning: Grizzly includes a new hypervisor driver that can deploy machine images to bare metal, allowing tasks to run with no virtualization overhead. This is supported but not fully featured - see the hypervisor feature matrix for details. Also, in the event one wishes (or needs) to customize the machine image being deployed, the diskimage-builder project on stackforge is recommended. See bare metal docs for more details.
- Improved MySQL connector performance: Some enhancements have been made to allow better interaction with MySQL and the threading model used by nova (eventlet).
- Database archiving: Support for pruning deleted items and placing them in separate tables to keep the most frequently written tables from growing without bounds.
- Instance Action Tracking: Nova has been updated to keep track of all actions performed on an instance. There is an API extension for accessing this information. Viewing the list of instance actions provides deeper insight into the history of an instance. It also provides much better error reporting for users and administrators.
- No-DB-Compute: The nova-compute service can optionally run in a mode where it has no direct access to the database. This improves Nova's security, though some concerns have been raised about the performance of this new mode.
- Quantum Security Groups Proxy: When managing security groups through Nova's API, all actions will be proxied to Quantum when Quantum is the network provider.
- File injection without mounting guest filesystem: Nova has the ability to use libguestfs to support file injection into a guest filesystem. Previously this was done by mounting the guest filesystem on the host. This has been refactored to use libguestfs APIs that do not require mounting the guest filesystem, which is much more secure.
- Default Security Group Rules: Nova can now be made to add rules to the default security group when it is created for a tenant.
- libvirt Custom Hardware: The libvirt driver in Nova will now check for properties on an image that specify specific hardware types that should be used. An example of when this is useful is for an image that does not support virtio, and should use a fully virtualized hardware type instead.
- libvirt Spice Console: The libvirt driver in Nova now supports Spice virtual consoles.
- powervm Resize, Migrate, and Snapshot: The powervm driver in Nova now supports the resize, migrate and snapshot operations.
- VMware Driver Improvements: Several improvements were made to the VMware driver, including support for VNC consoles, iSCSI volumes, live migration, rescue mode, Quantum, and improved Glance integration (OVF support, better download performance).
- Unique Instance Naming: When issuing an API command to create multiple servers, Nova will now give each instance a unique name based on a configured template. Previously all instances would have the same name.
- Availability Zones in OpenStack API: Support for availability zones has been enhanced in the OpenStack API. You can now list availability zones through the API. The availability zone for an instance is also included in instance details.
- Glance Direct Image File Copy: If Glance provides Nova a URL to the image location on a shared filesystem, Nova will now get the image content from there instead of through the Glance API. This will result in faster instance boot times under some circumstances.
- Boot without image: It is now possible to boot a volume-backed instance without specifying an image, if block-device-mapping is passed to the nova boot command.
- Quota-instance-resource: It is now possible to set accurate quotas for the CPU, disk I/O, and network interface bandwidth of an instance (due to a bug, bandwidth QoS does not currently work). Using this feature, you can provide a consistent amount of CPU capacity regardless of the underlying hardware.
- Network adapter hot-plug: It is now possible to hot-plug a pre-created port into a running instance.
- Quotas for fixed IP addresses: It is now possible to set quotas for the allocation of fixed IP addresses (set quota_fixed_ips in nova.conf; the default is unlimited)
- API extension for determining fixed from floating IPs: The new OS-EXT-IPS:type parameter in the compute API indicates whether an IP address associated with a virtual machine is fixed or floating.
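The OS-EXT-IPS:type attribute appears per-address in a server's addresses payload. A hedged sketch of splitting fixed from floating addresses on the client side (the payload below is an illustrative shape, not captured API output):

```python
# Illustrative "addresses" payload as returned by the compute API with
# the OS-EXT-IPS extension enabled (shape is a sketch).
addresses = {
    "private": [
        {"addr": "10.0.0.3", "version": 4, "OS-EXT-IPS:type": "fixed"},
        {"addr": "172.24.4.227", "version": 4, "OS-EXT-IPS:type": "floating"},
    ]
}

def split_ips(addresses):
    """Partition a server's IPs into fixed and floating lists."""
    fixed, floating = [], []
    for ips in addresses.values():
        for ip in ips:
            target = fixed if ip.get("OS-EXT-IPS:type") == "fixed" else floating
            target.append(ip["addr"])
    return fixed, floating

print(split_ips(addresses))  # (['10.0.0.3'], ['172.24.4.227'])
```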
Known Issues
- Multiple security groups with the same name can be created
- Setting memcached_servers doesn't work with nova
- Upgrading when there is an in-progress migration can cause nova-compute to fail to start
- Deleting instance during migration can cause nova-compute to fail to start
- nova net-list (os-networks extension) returns id instead of uuid
- nova-manage fixed list is broken
- nova can continually hit cinder api forever if a volume fails to create
- Co-authors don't show up in Authors file
- Quotas can be updated to less than currently running values
- Snapshots can get stuck in the saving state
- SQL password can appear in logs
- ec2 api does not work for aws-java-sdk
Upgrade Notes
- Grizzly should now be able to detect when a shared filesystem is being used for the instances path, eliminating a bug in previous versions where the image cache manager erroneously deleted images that were in use on shared filesystems. However, this bugfix has not been extensively tested in production environments. If you wish to be conservative, you may set image_cache_manager_interval=0 in your nova.conf file on your compute nodes to prevent the image cache manager from deleting any images.
OpenStack Image Service (Glance)
Key New Features
- API v2 Image Sharing: see http://docs.openstack.org/developer/glance/glanceapi.html#image-membership-changes-in-version-2-0 for more details.
- API v2 API JSON PATCH draft 10 support
- glance-control status: programmatically expose the status of the glance services
- glance-manage downgrade: explicitly migrates a database down to the requested version
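The JSON PATCH (draft 10) support in the v2 API means image updates are expressed as a list of operations rather than a full document. A toy illustration of the semantics (this apply_patch only handles single-level paths and is not the Glance implementation):

```python
def apply_patch(doc, patch):
    """Apply a minimal subset of JSON PATCH ops to a flat dict."""
    doc = dict(doc)  # leave the input untouched
    for op in patch:
        key = op["path"].lstrip("/")
        if op["op"] in ("add", "replace"):
            doc[key] = op["value"]
        elif op["op"] == "remove":
            del doc[key]
    return doc

# Hypothetical image record and patch, for illustration only.
image = {"name": "cirros", "tags": []}
patch = [{"op": "replace", "path": "/name", "value": "cirros-0.3"},
         {"op": "add", "path": "/visibility", "value": "public"}]
print(apply_patch(image, patch))
```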
Known Issues
- LP 1155389: Images with data stored in a multi-tenant swift store cannot be downloaded by users in the tenants they are shared with, although such images are visible in the v2 api image lists.
- Image API v1/v2 codepaths: the implementations of the Image APIs use different internal code paths. The v2 API cannot utilize the glance-registry service and must have direct access to the data store. Set 'enable_v2_api=False' to disable the v2 API if necessary
- LP 1152716: administrative privileges are restricted to a single role defined in the config file
Upgrade Notes
The upgrade process from Folsom to Grizzly is simple: deploy the latest code and run 'glance-manage db_sync'.
OpenStack Dashboard (Horizon)
The Grizzly release cycle saw sweeping improvements to overall user experience, huge stability improvements, lots of new networking, instance management and image management features, a long-needed architectural clarification, and big increases in community engagement! Read on to get the specifics.
Key New Features
Networking
Quantum added a huge number of new features in Grizzly, including L3 support (routers), load balancers, network topology infographics, better compatibility with Nova networking APIs (VNIC ordering when launching an instance; security groups and floating IP integration) and vastly improved informational displays.
Direct Image Upload To Glance
It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user’s hard disk to Glance through Horizon. For multi-GB images it is still strongly recommended that the upload be done using the Glance CLI. Further improvements to this feature will come in future releases.
Flavor Extra Specs Support
In Folsom, Nova added support for “extra specs” on flavors–additional metadata which custom schedulers could use for appropriately scheduling instances. As of the Grizzly release, Horizon now supports reading and writing that data on any flavor.
Migrate Instance
Administrators now have the ability to migrate an instance off of its current host via the Admin dashboard's Instances panel.
User Experience Improvements
“Not Authorized” & Being Logged Out
A shocking number of the problems first-time deployers of OpenStack have can be summarized as “I thought I set everything up, then I tried to log into the dashboard and I was immediately logged back out.” The root cause of this was that in an effort to be as secure as possible any 401 or 403 response from any service API was being treated the same as if it was an attempt to access an unauthorized portion of Horizon, and the user was summarily logged out with little to no information as to why.
In Grizzly we have instead chosen to improve this by treating service API 401 and 403 errors as slightly less severe than unauthorized access attempts to restricted areas of Horizon. The reasons for this are threefold:
- For a non-malicious user these errors are almost always the result of misconfiguration, and surfacing them makes debugging possible.
- A malicious user can make the exact same "unauthorized" requests via the CLI as they can via the dashboard; no special privileges are granted.
- API errors are generated by external systems not under the purview of our project, and while we should attempt to respect and take appropriate action on those errors, we should not do anything drastic or even potentially destructive because of them.
Going forward the user will not be logged out, but no information will be populated on the page, and they will be presented with error messages informing them that they are not authorized for the data they attempted to access.
A couple of long-standing user confusions were fixed in Grizzly.
First off, the API Access panel (containing a user’s API endpoints, rc files, and EC2 credentials) was moved from Settings to the Access & Security section of the Project dashboard.
Second, the Default Quotas and Services panels (which were both strictly informational) were combined into tabs in a single System Info panel to make it clear that these panels are thematically related, and to create a home for informational-only displays like these.
One-click Floating IP Management
A common complaint from users was that associating a floating IP to an instance involved numerous clicks and form selections for something that the majority of users had no knowledge of and didn’t care about. As such, a one-click “simple” floating IP association option has been created. For deployments which only have a single floating IP pool, this allows users to ignore explicit floating IP management and just click a button to associate or disassociate a floating IP with an instance.
The Images table now has a new feature: predefined filters for seeing your own images, images that have been shared with you, or public images. This makes finding the image you’re looking for a great deal easier and more pleasant.
Security Group Rule Editing Improvements
The security group rule editing experience has always been inherently very complicated simply given the number of options and the very technical terms involved. Moreover, the combined table-plus-form approach the OpenStack Dashboard had taken only made the UX more frustrating for an already difficult area.
In Grizzly this has all been reworked to be significantly simpler, and to provide as much contextual help and streamlining as possible.
In an effort to make the dashboard more at-a-glance usable, we’ve added icons to most of the common action buttons throughout the dashboard.
“More Actions”, More Better
Lots of feedback came in that the “more actions” dropdown menu (for tables with numerous actions available on each row) was confusing to new users and/or difficult to click.
We’ve now improved it so that the button to open the menu is clearly labeled and the hitbox for clicking it is significantly larger.
Docs, docs, and more docs!
Large amounts of new documentation were added during the Grizzly cycle, most notably sections documenting: all of the available settings for Horizon and the OpenStack Dashboard; security and deployment considerations; and deeper guides on customizing the OpenStack Dashboard.
Community
During the Grizzly cycle we started holding a weekly project meeting on IRC. This has been extremely beneficial for the growth and progress of the project. Check out the OpenStack Meetings wiki page for specifics.
Under The Hood
Legacy Dashboard Names & Code Separation
Very early in the Grizzly cycle we took the opportunity to do some longstanding cleanup and refactoring work. The “nova” dashboard was renamed to “project” and the “syspanel” dashboard was renamed to “admin” to better reflect their respective purposes.
Moreover, a better separation was created between code related to the core Horizon framework code (which is not related to OpenStack specifically) and the OpenStack Dashboard code. At this point all code related to OpenStack lives in the OpenStack Dashboard directory, while the Horizon framework is completely agnostic and is a reusable Django app.
Object Storage Delimiters and Pseudo-folder Objects
When Horizon’s object storage interface was first added, Swift’s documentation recommended adding 0-byte objects with a special content type to denote pseudo-folders within a container. They have since decided that this is not the recommended practice, and that pseudo-folders should only be demarcated by a delimiting character (usually “/”) in the object name.
Horizon has been updated under the hood to use this method, which should bring it better into line with how most deployments are using their object storage.
Other Improvements and Fixes
- Support for Keystone’s PKI tokens.
- Flavor editing was made significantly more stable.
- Security groups can be added to a running instance.
- Volume quotas are handled by the appropriate service depending on whether or not Cinder is enabled.
- Password confirmation boxes are now validated for matching passwords on the client side for more immediate feedback.
- Numerous fixes to display more and better information for instances and volumes in their overview pages.
- Improved unicode support for the Object Storage panels.
- Logout now attempts to delete the token(s) associated with the current session to avoid replay attacks, etc.
- Various fixes for browser compatibility and rendering.
- Many, many other bugfixes and improvements. Check out Launchpad for the full list of what went on in Grizzly.
Known Issues
Editing a Flavor Which Results In An API Error Will Delete The Flavor
Due to the way that Nova handles flavor editing/replacement it is necessary to delete the old flavor before creating the replacement flavor. As such, if an API error occurs while creating the replacement it is possible to lose the old flavor without the new one being created.
Creating Rich Network Topologies
Due to several Quantum features landing very late in the Grizzly cycle, it is not possible to create particularly complex networking configurations through the OpenStack Dashboard. These features will continue to grow throughout future releases.
Loadbalancer Feature
The Loadbalancer feature landed in the 11th hour for both Quantum and Horizon and, though we did our best to test it, may still contain undiscovered bugs. It is best considered a "beta" or "experimental" feature for the Grizzly release.
Quantum Brocade Plugin Not Compatible
The Brocade plugin for Quantum does not support key features of the floating IP addresses API which are considered central to Horizon’s functionality. As such, it is not compatible with the Grizzly release’s Quantum integration. Please refer to the Brocade Quantum Plugin page for possible workarounds and up-to-date information on how to use it with Horizon.
Deleting large numbers of resources simultaneously
Using the “select all” checkbox to delete large numbers of resources via the API can cause network timeouts (depending on configuration). This is due to the APIs not supporting bulk-deletion natively, and consequently Horizon has to send requests to delete each resource individually behind the scenes.
Upgrade Notes
The Grizzly Horizon release should be fully compatible with both Grizzly and Folsom versions of the rest of the OpenStack core projects (Nova, Swift, etc.). While some features work significantly better with an all-Grizzly stack due to bugfixes, etc. in underlying services, there should not be limitations on what will or will not function.
Overall, great effort has been made to maintain compatibility for third-party developers who may have built on Horizon so far.
OpenStack Identity (Keystone)
The term "tenant" is now known as "project." Both terms may be used interchangeably during this release, but "tenants" are specifically exposed by the legacy Identity API v2.0, and "projects" are exposed by the Identity API v3.
Key New Features
- PKI Tokens: PKI-based signed tokens (capable of being validated offline) are the default token format instead of traditional UUID-based tokens
- New API: Support for Identity API v3, which is deployed identically on both ports 5000 and 35357 by default.
- User groups: manage role assignments for groups of users (managed on Identity API v3, affects both APIs).
- Domains: a high-level container for projects, users and groups providing namespace isolation and an additional level of role management (managed on Identity API v3, affects both APIs).
- Trusts: Project-specific role delegation between users, with optional impersonation (Identity API v3 only).
- Credentials: generic credential storage per user (e.g. EC2, PKI, SSH, etc.) (Identity API v3 only)
- Policies: a centralized repository for arbitrary policy engine rule sets (Identity API v3 only).
- Token values no longer appear in URLs (Identity API v3 only).
- RBAC: policy.json controls are enforced for all Identity API v3 calls.
- Pluggable authentication: The default 'password' and 'token' authentication modules are now pluggable (Identity API v3 only) and can be easily replaced with custom code, for example to authenticate with an existing system. Plugins can also make calls to the existing identity driver. Authentication at the HTTP API layer is also pluggable in Identity API v3; however, see Known Issues below.
- External authentication: Keystone trusts externally provided CGI-style REMOTE_USER claims to identify end users.
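The external authentication feature above rests on a standard mechanism: the web server authenticates the user and passes a CGI-style REMOTE_USER variable down to the application. A hedged WSGI sketch of that mechanism (an illustration of the idea, not Keystone's implementation):

```python
def app(environ, start_response):
    # Trust REMOTE_USER as set by the fronting web server (e.g. Apache
    # with one of the mod_auth_* modules); reject requests without it.
    user = environ.get("REMOTE_USER")
    if user is None:
        start_response("401 Unauthorized", [("Content-Type", "text/plain")])
        return [b"authentication required"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("hello %s" % user).encode()]

# Simulated environ as the web server would pass it:
responses = []
body = app({"REMOTE_USER": "alice"}, lambda status, headers: responses.append(status))
print(responses[0], body[0].decode())  # 200 OK hello alice
```

Note the trust boundary: this only works when the application is unreachable except through the authenticating web server, since any client that can set REMOTE_USER directly would be believed.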
Known Issues
- Read-only LDAP deployments using the bundled identity driver still require a default domain to be created in LDAP. Fixed in 2013.1.2.
- Non-default authentication plugins will not be loaded properly due to bug 1157515 (this does not affect custom password and token authentication plugins). Fixed in 2013.1.1.
- Disabled users and projects are re-enabled by keystone-manage db_sync after migrating from folsom due to bug 1167421. Fixed in 2013.1.1.
- A default domain is created automatically by keystone-manage db_sync; any existing users and tenants are migrated into this domain and Identity API v2.0 calls assume this domain as context. Entities created outside of the default domain are not accessible via the v2.0 API. The default domain may not be deleted through the API, although which domain is considered the "default" is configurable via default_domain_id in keystone.conf.
- Member role: Users with a default tenant_id attribute or any relationship to an existing tenant are granted a new role that is automatically created by keystone-manage db_sync. After this migration, the default tenant_id attribute (v2.0) and default_project_id user attributes no longer grant any actual authorization, and are instead treated as a user-specified preference. The name and ID of the member role are configurable via member_role_id and member_role_name in keystone.conf. The default values (9fe2ff9ee4384b1894a90878d3e92bab and _member_, respectively) are intended to avoid conflicts with existing deployments.
- keystone.conf [token] default_format = <UUID | PKI> defines whether UUID or PKI tokens are generated. PKI is the default, and requires keystone-manage pki_setup to be run, or existing certificates to be installed.
- keystone.middleware.auth_token is deprecated and has been moved to keystoneclient.middleware.auth_token. This means you no longer need to install keystone on a node where you only need authentication middleware. Pipeline configuration of services which consume auth_token must be revised to refer to keystoneclient instead of keystone.
- keystone.middleware.swift_auth has been removed; it has moved to Swift in Havana. You will need to update the paste stanza for keystoneauth from the filter_factory to egg:swift#keystoneauth.
- keystone.conf [DEFAULT] admin_endpoint and public_endpoint configuration options are now available to configure the full URLs returned by self-referential links to keystone, even if it's deployed behind a proxy (neither option affects how keystone listens for connections). Both default to localhost and dynamically use the admin_port and public_port options.
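Several of the upgrade notes above map to keystone.conf settings. An illustrative fragment collecting them (the member role values are the defaults quoted above; section placement and endpoint URL formats are assumptions, so check against your own keystone.conf):

```ini
[DEFAULT]
# Full URLs for self-referential links, e.g. when deployed behind a proxy.
admin_endpoint = http://localhost:%(admin_port)s/
public_endpoint = http://localhost:%(public_port)s/
member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
member_role_name = _member_

[identity]
default_domain_id = default

[token]
# UUID or PKI; PKI is the Grizzly default and requires
# `keystone-manage pki_setup` or existing certificates.
default_format = PKI
```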
OpenStack Network Service (Quantum)
Key New Features
- Metadata improvements:
  - simplified physical network configuration requirements, eliminating a key deployment hurdle
  - support for overlapping IP address ranges
- Support for multiple network nodes running L3 agents and DHCP agents, providing better scale and higher availability for Quantum deployments.
- Security groups: allows L3-L4 packet filtering for security policies to protect virtual machines.
  - Backward compatible with the Nova API
  - Additional features not found in Nova:
    - IPv6 and IPv4 support
    - inbound and outbound filtering
    - overlapping IP address range support
    - can be offloaded by plugins to enhanced filtering engines rather than iptables
- Load-balancing-as-a-Service (LBaaS):
  - full load balancing API model plus a pluggable framework
  - basic implementation based on HAProxy
  - already working with leading vendors on additional plugins; expect support for more vendor load-balancer technologies in Havana
- New plugins supported:
  - Big Switch Plugin
  - Brocade Plugin
  - Hyper-V Plugin
  - PLUMgrid Plugin
  - MidoNet Plugin
- Additional improvements to existing plugins:
  - Nicira NVP Plugin: Quality-of-Service, L2 gateways, port security
  - Ryu: support for OVS tunneling
- Seamless upgrade from Folsom to Grizzly
- Support for XML API
- Horizon GUI support for routers and load balancers, meaning Horizon now implements nearly all of the main Quantum features.
OpenStack Block Storage (Cinder)
Key New Features
- Attach over Fibre Channel
- Support for multiple backends on the same manager
- Support for LIO as an iSCSI backend
- Block storage backup to Swift
- HP 3PAR array
- CORAID storage using AoE
- HUAWEI storage
- Scality SOFS
- LVM thin provisioning support
- Mirrored LVM
- EMC VNX/VMAX arrays
Feature | Supported in v1 | Supported in v2
--- | --- | ---
List Bootable Volumes | yes | yes
Filter volumes by attributes | yes | yes
Filter volumes by metadata | yes | yes
Filter snapshot by attributes | yes | yes
Filter snapshot by metadata | yes | yes
Update volume metadata | yes | yes
Update snapshot metadata | yes | yes
- Copy the new api-paste.ini to /etc/cinder/api-paste.ini. The dotted path for the v1 router changed; there is an open bug to deprecate the old path properly.
- The root_helper config setting has been deprecated and should be updated/replaced with rootwrap_config in cinder.conf:
rootwrap_config = /etc/cinder/rootwrap.conf
- The original osapi_volume_extension setting value (cinder.api.openstack.volume.contrib.standard_extensions) has been deprecated in Grizzly and will need to be updated:
osapi_volume_extension = cinder.api.contrib.standard_extensions
- The API extension folder moved from cinder/api/openstack/volume/contrib to cinder/api/contrib
Cinder API v2
- The list volumes/snapshots summary is now an actual summary view; in v1 it was the same as the detail view.
- In list volumes/snapshots detail and summary views, the display_name key has changed to name.
- In list volumes/snapshots detail and summary views, the display_description key has changed to description.
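The v1-to-v2 key renames above are mechanical, so client code that still produces v1-style volume dicts can translate them with a small helper (a sketch, not part of Cinder):

```python
# v1 -> v2 attribute renames from the notes above.
RENAMES = {"display_name": "name", "display_description": "description"}

def to_v2(volume):
    """Return a copy of a v1-style volume dict with v2 key names."""
    return {RENAMES.get(key, key): value for key, value in volume.items()}

# Hypothetical v1-style volume record, for illustration only.
v1 = {"id": "vol-1", "display_name": "data",
      "display_description": "scratch", "size": 10}
print(to_v2(v1))
```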