OpenStack Liberty Release Notes

OpenStack Object Storage (Swift)

Please see full release notes at https://github.com/openstack/swift/blob/master/CHANGELOG

New Features

  • Allow 1+ object-servers-per-disk deployment, enabled by a new > 0 integer config value, "servers_per_port", in the [DEFAULT] config section for object-server and/or replication server configurations. The setting's integer value determines how many different object-server workers handle requests for any single unique local port in the ring. In this mode, the parent swift-object-server process continues to run as the original user (i.e. root if low-port binding is required). It binds to all ports as defined in the ring and then forks off the specified number of workers per listen socket. The child per-port servers drop privileges and behave much as object-server workers always have, with one exception: because the ring has unique ports per disk, each object-server only handles requests for a single disk. The parent process detects dead servers and restarts them (with the correct listen socket), starts missing servers when an updated ring file is found with a device on the server with a new port, and kills extraneous servers when their port is no longer found in the ring. The ring files are checked at most every "ring_check_interval" seconds, as configured in the object-server configuration (same default of 15s). In testing, this deployment configuration (with a value of 3) lowers request latency, improves requests per second, and isolates slow disk IO as compared to the existing "workers" setting. To use this, each device must be added to the ring using a different port. (A configuration sketch follows this list.)
  • The object server includes a "container_update_timeout" setting (with a default of 1 second). This value is the number of seconds that the object server will wait for the container server to update the listing before returning the status of the object PUT operation. Previously, the object server would wait up to 3 seconds for the container server response. The new behavior dramatically lowers object PUT latency when container servers in the cluster are busy (e.g. when the container is very large). Setting the value too low may result in a client PUT'ing an object and not being able to immediately find it in listings. Setting it too high will increase latency for clients when container servers are busy.
  • Added the ability to specify ranges for Static Large Object (SLO) segments.
  • Allow SLO PUTs to forgo per-segment integrity checks. Previously, each segment referenced in the manifest also needed the correct etag and bytes setting. These fields now allow the "null" value to skip those particular checks on the given segment.
  • Replicator configurations now support an "rsync_module" value to allow for per-device rsync modules. This setting gives operators the ability to fine-tune replication traffic in a Swift cluster and isolate replication disk IO to a particular device. Please see the docs and sample config files for more information and examples.
  • Ring changes
    • Partition placement no longer uses the port number to place partitions. This improves dispersion in small clusters running one object server per drive, and it does not affect dispersion in clusters running one object server per server.
    • Added ring-builder-analyzer tool to more easily test and analyze a series of ring management operations.
    • Ring validation now warns if a placement partition gets assigned to the same device multiple times. This happens when devices in the ring are unbalanced (e.g. two servers where one server has significantly more available capacity).
  • TempURL fixes (closes CVE-2015-5223)

    Do not allow PUT tempurls to create pointers to other data. Specifically, disallow the creation of DLO object manifests via a PUT tempurl. This prevents discoverability attacks which can use any PUT tempurl to probe for private data by creating a DLO object manifest and then using the PUT tempurl to head the object.

  • Swift now emits StatsD metrics on a per-policy basis.
  • Fixed an issue with Keystone integration where a COPY request to a service account may have succeeded even if a service token was not included in the request.
  • Bulk upload now treats user xattrs on files in the given archive as object metadata on the resulting created objects.
  • Emit warning log in object replicator if "handoffs_first" or "handoff_delete" is set.
  • Enable object replicator's failure count in swift-recon.
  • Added storage policy support to dispersion tools.
  • Support keystone v3 domains in swift-dispersion.
  • Added domain_remap information to the /info endpoint.
  • Added support for a "default_reseller_prefix" in domain_remap middleware config.
  • Allow rsync to use compression via a "rsync_compress" config. If set to true, compression is only enabled for an rsync to a device in a different region. In some cases, this can speed up cross-region replication data transfer.
  • Added time synchronization check in swift-recon (the --time option).
  • The account reaper now runs faster on large accounts.
  • Various other minor bug fixes and improvements.
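
As a rough illustration of the servers_per_port and container_update_timeout settings described above, an object-server configuration fragment might look like the following. This is a minimal sketch, not a recommendation; the values are examples, and section placement should be checked against the shipped object-server.conf-sample.

    [DEFAULT]
    # Run 3 object-server workers per unique local port in the ring;
    # each device must be added to the ring with its own port.
    servers_per_port = 3

    [app:object-server]
    # Seconds to wait for the container server to update the listing
    # before returning the status of an object PUT (default is 1).
    container_update_timeout = 1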

Upgrade Notes

  • Dependency changes
    • Added six requirement. This is part of an ongoing effort to add support for Python 3.
    • Dropped support for Python 2.6.
  • Config changes
    • Recent versions of Python restrict the number of headers allowed in a request to 100. This number may be too low for custom middleware. The new "extra_header_count" config value in swift.conf can be used to increase the number of headers allowed.
    • Renamed "run_pause" setting to "interval" (current configs with run_pause still work). Future versions of Swift may remove the run_pause setting.
  • The versioned writes feature has been refactored and reimplemented as middleware. You should explicitly add the versioned_writes middleware to your proxy pipeline, but do not remove or disable the existing container server config setting ("allow_versions"), if it is currently enabled. The existing container server config setting enables existing containers to continue being versioned. Please see http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster for further upgrade notes.
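
As a sketch only, enabling the new middleware amounts to adding versioned_writes to the proxy pipeline and declaring its filter section in proxy-server.conf. The pipeline below is deliberately shortened and illustrative; existing deployments should simply slot the filter into their current pipeline near the end, before proxy-server, while keeping the container server's allow_versions setting enabled.

    [pipeline:main]
    pipeline = catch_errors healthcheck cache tempauth versioned_writes proxy-server

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true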

OpenStack Networking (Neutron)

New Features

  • Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the OpenStack Networking Guide.
  • Neutron now exposes a QoS API, initially offering bandwidth limitation at the port level. The API, CLI, configuration and additional information may be found here [1]. A CLI sketch follows this list.
  • Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [2].
  • VPNaaS reference drivers now work with HA routers.
  • Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [3].
  • The OVS agent may now be restarted without affecting data plane connectivity.
  • Neutron now offers role-based access control (RBAC) for networks [4].
  • The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable and reliable load balancer platform.
  • LBaaS V2 API is no longer experimental. It is now stable.
  • Neutron now provides a way for admins to manually schedule agents, allowing host resources to be tested before they are enabled for tenant use [5].
  • Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.
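
A rough CLI sketch of the port-level bandwidth limiting mentioned in the QoS item above; the policy name, rule values and port ID are illustrative, and the referenced QoS documentation remains the authoritative workflow:

    # Create a QoS policy and attach a bandwidth-limit rule to it
    neutron qos-policy-create bw-limiter
    neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
    # Apply the policy to an existing port
    neutron port-update <port-id> --qos-policy bw-limiter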

Deprecated and Removed Plugins and Drivers

  • The metaplugin is removed in the Liberty release.
  • The IBM SDN-VE monolithic plugin is removed in the Liberty release.
  • The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).
  • The Embrane plugin is deprecated and will be removed in the Mitaka release.

Deprecated Features

  • The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API, which the team is in the process of developing.
  • The LBaaS V1 API is marked as deprecated and is planned to be removed in a future release. Going forward, the LBaaS V2 API should be used.

Performance Considerations

  • The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later version kernel (e.g. 3.19) should be used. [WHICH VERSION OF 3.13 EXHIBITED THIS. MOST VERSIONS WILL HAVE THIS FIX ALREADY.]
  • Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator instead of the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes, or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html.

OpenStack Compute (Nova)

New Features

API

Scheduler

Architectural evolution on the scheduler has continued, along with key bug fixes:

Cells v2

Cells v2 is not currently in a usable state, but we have added some more supporting infrastructure:

Compute Driver Features

Libvirt
VMware
Hyper-V
Ironic

Other Features

Upgrade Notes

  • If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074, versions of Kilo from before the fix will be problematic when talking to Liberty nodes.
  • Allocation ratios for RAM and CPU are now defined within the nova-compute service (so per compute node). Ratios also need to be provided for the scheduler service. Depending on whether a compute node is running Kilo or Liberty, the allocation ratios behave differently: if the compute node is running Kilo, the CPU and RAM allocation ratios for that compute node will be the ones defaulted in the controller's nova.conf file; if the compute node is running Liberty, you can set a per-compute allocation ratio for both CPU and RAM. To leave it to the operator to provide the allocation ratios for all compute nodes, the default allocation ratio is set in nova.conf to 0.0 (even for the controller). That does not mean the allocation ratios will actually be 0.0, just that the operator needs to provide them before the next release (i.e. Mitaka). To be clear, the default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio (see the sketch after this list).
  • nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/
  • Rootwrap filters must be updated after release to add the 'touch' command.
    • There is a race condition between imagebackend and imagecache, described in Launchpad bug 1256838.
    • In this case, if the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.
    • To resolve this issue, the 'touch' command needs to be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.
    • In the case of a race condition where libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, a permission denied error is raised when updating the file access time using os.utime. To resolve this error, the base file access time is updated with root privileges using the 'touch' command.
  • The DiskFilter is now part of the scheduler_default_filters in Liberty per https://review.openstack.org/#/c/207942/ .
  • Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.
  • The Libvirt driver 'parallels' has been renamed to 'virtuozzo'.
  • Orphaned tables - iscsi_targets, volumes - have been removed.
  • The default paste.ini has been updated to use the new v2.1 API for all endpoints, and the v3 endpoint has been removed. A compatibility mode middleware is used to relax the v2.1 validation for the /v2 and /v1.1 endpoints.
  • The code for DB schema downgrades has now been removed: https://blueprints.launchpad.net/nova/+spec/nova-no-downward-sql-migration
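
For the allocation-ratio note above, a hedged nova.conf sketch for a Liberty compute node that sets its own ratios explicitly (the values shown are simply the historical defaults, not recommendations):

    [DEFAULT]
    # Per-compute overcommit ratios; 0.0 means "not yet set by the operator"
    # and explicit values are expected before Mitaka.
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5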

Deprecations

  • The ability to disable in-tree API extensions has been deprecated (https://blueprints.launchpad.net/nova/+spec/nova-api-deprecate-extensions)
  • The novaclient.v1_1 module has been deprecated [6][7] since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.
  • Method novaclient.client.get_client_class is deprecated [8] since 2.29.0. The method will be removed in Mitaka.
  • The mute_weight_value option on weighers has been deprecated, including for use with Cells.
  • The remove_unused_kernels configuration option for the Libvirt driver is now deprecated.
  • The minimum recommended version of vCenter for use with the vcenter driver is now 5.1.0. In Liberty this is logged as a warning, in Mitaka support for versions lower than 5.1.0 will be removed.
  • API v3 specific components have all been deprecated and removed from the default paste.ini

OpenStack Telemetry (Ceilometer)

Key New Features

  • Creation of Aodh to handle the alarming service.
  • Metadata caching - reduces the load from Nova API polling.
  • Declarative meters
    • Ability to generate meters by defining meter definition template.
    • Ability to define specific SNMP meters to poll.
  • Support for data publishing from Ceilometer to Gnocchi.
  • Mandatory limit - limit-restricted querying is enforced. The limit must be explicitly provided on queries; otherwise the result set is restricted to a default limit.
  • Distributed, coordinated notification agents - support for workload partitioning across multiple notification agents.
  • Events RBAC support.
  • PowerVM hypervisor support.
  • Improved MongoDB query support - performance improvement to statistic calculations.

Gnocchi Features

  • Initial influxdb driver implemented.

Aodh Features

  • Event alarms - ability to trigger an action when an event is received.
  • Trust support in alarms.

Upgrade Notes

  • The name of some middleware used by ceilometer changed in a backward incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change oslo.middleware to oslo_middleware. For example, using sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini
  • The notification agent is a core service to collecting data in Ceilometer. It now handles all transformations and publishing. Polling agents now defer all processing to notification agents, and must be deployed in tandem.
  • A mandatory limit is applied to each request. If no limit is given, it will be restricted to a default limit.

Deprecation

  • Ceilometer Alarms is deprecated in favour of Aodh.
  • The RPC publisher and collector are deprecated in favour of a topic-based notifier publisher.
  • Non-metric meters are still deprecated, and are to be removed in a future release.

OpenStack Identity (Keystone)

Key New Features

  • Experimental: Domain specific configuration options can be stored in SQL instead of configuration files, using the new REST APIs.
  • Experimental: Keystone now supports tokenless authorization with X.509 SSL client certificate.
  • Configuring per-Identity Provider WebSSO is now supported.
  • openstack_user_domain and openstack_project_domain attributes were added to SAML assertion in order to map user and project domains, respectively.
  • The credentials list call can now have its results filtered by credential type.
  • Support was improved for out-of-tree drivers by defining stable Driver Interfaces.
  • Several features were hardened, including Fernet tokens, Federation, domain specific configurations from database and role assignments.
  • Certain variables in keystone.conf now have options, which determine if the user's setting is valid.

Upgrade Notes

  • The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It has been moved to the keystonemiddleware package.
  • The compute_port configuration option, deprecated in Juno, is no longer available.
  • The XML middleware stub has been removed, so references to it must be removed from the keystone-paste.ini configuration file.
  • stats_monitoring and stats_reporting paste filters have been removed, so references to them must be removed from the keystone-paste.ini configuration file.
  • The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are no longer available.
  • keystone.conf now references entrypoint names for drivers. For example, the drivers are now specified as "sql", "ldap", "uuid", rather than the full module path. See the sample configuration file for other examples, and the sketch after this list.
  • We now expose entrypoints for the keystone-manage command instead of a file.
  • Schema downgrades via keystone-manage db_sync are no longer supported. Only upgrades are supported.
  • Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.
  • A new secure_proxy_ssl_header configuration option is available when running keystone behind a proxy.
  • Several configuration options have been deprecated, renamed, or moved to new sections in the keystone.conf file.
  • Domain name information can now be used in policy rules with the attribute domain_name.
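
To illustrate the entrypoint-style driver names mentioned above, a keystone.conf fragment might change roughly as follows; the sections shown are only examples, and the shipped sample configuration file is the reference:

    [identity]
    # old style (full module path): keystone.identity.backends.sql.Identity
    # new style (entrypoint name):
    driver = sql

    [token]
    provider = uuid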

Deprecations

  • Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release.
  • Using LDAP as the resource backend, i.e. for projects and domains, is now deprecated and will be removed in the Mitaka release.
  • Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.
  • In the [resource] and [role] sections of the keystone.conf file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the SQL driver.
  • In keystone-paste.ini, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.
  • Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.
  • Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.

OpenStack Block Storage (Cinder)

Key New Features

Upgrade Notes

Deprecations

OpenStack Orchestration (Heat)

New Features

Convergence

Convergence is a new orchestration engine maturing in the heat tree. In Liberty, the benefits of using the convergence engine are:

  • Greater parallelization of resource actions (for better scaling of large templates)
  • The ability to do a stack-update while there is already an update in-progress
  • Better handling of heat-engine failures (still WIP)

The convergence engine can be enabled by setting /etc/heat/heat.conf [DEFAULT] convergence_engine=true, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.
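
Concretely, that amounts to a change like the following in /etc/heat/heat.conf before restarting heat-engine:

    [DEFAULT]
    # Use the new convergence engine for newly created stacks
    convergence_engine = true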

Convergence has not been production tested and thus should be considered beta quality - use with caution. For the Liberty release, we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the convergence-bugs tag.

Conditional resource exposure

Only resources actually installed in the cloud services are made available to users. Operators can further control the resources available to users with standard policy rules in policy.json on a per-resource-type basis.

heat_template_version: 2015-10-15

2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release.

  • Removes the Fn::Select function (path-based get_attr/get_param references should be used instead).
  • If no <attribute name> is specified for calls to get_attr, a dict of all attributes is returned, e.g. { get_attr: [<resource name>] }.
  • Adds a new str_split intrinsic function.
  • Adds support for passing multiple lists to the existing list_join function.
  • Adds support for passing map/list data to str_replace and list_join (they will be JSON-serialized automatically); a template sketch follows this list.
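
A minimal HOT sketch exercising a few of the 2015-10-15 additions listed above; the parameter name and values are purely illustrative:

    heat_template_version: 2015-10-15

    parameters:
      zones:
        type: string
        default: az1,az2,az3

    outputs:
      zone_list:
        # str_split turns the comma-separated string into a list
        value: { str_split: [',', { get_param: zones }] }
      joined:
        # list_join now accepts more than one list argument
        value: { list_join: [', ', ['a', 'b'], ['c', 'd']] }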

REST API/heatclient additions

  • Stacks can now be assigned with a set of tags, and stack-list can filter and sort through those tags
  • "heat stack-preview ..." will return a preview of changes for a proposed stack-update
  • "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces
  • "heat resource-type-template --template-type hot ..." generates a template in HOT format
  • "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status
  • "heat template-version-list" lists available template versions
  • "heat template-function-list ..." lists available functions for a template version

Enhancements to existing resources

New resources

The following new resources are now distributed with the Heat release:

[1] These existed in Kilo as contrib resources as they were for non-integrated projects. These resources are now distributed with Heat as Big Tent projects.

[2] These existed in Kilo as contrib resources as they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).

[3] These existed in Kilo as contrib resources as they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.

[4] These resources are for projects which are not yet OpenStack Big Tent projects, so are documented as UNSUPPORTED.

With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.

Deprecated Resource Properties

Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented, but existing stacks and templates will continue to work after a heat upgrade. The Resource Type Reference (http://docs.openstack.org/developer/heat/template_guide/openstack.html) should be consulted to determine available resource properties and attributes.

Upgrade notes

Configuration Changes

Notable changes to the /etc/heat/heat.conf [DEFAULT] section:

  • hidden_stack_tags has been added, and stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides sahara-created stacks)
  • instance_user was deprecated, and is now removed entirely. Nova servers created with OS::Nova::Server resource will now boot configured with the default user set up with the cloud image. AWS::EC2::Instance still creates "ec2-user"
  • max_resources_per_stack can now be set to -1 to disable enforcement
  • enable_cloud_watch_lite is now false by default as this REST API is deprecated
  • default_software_config_transport has gained the option ZAQAR_MESSAGE
  • default_deployment_signal_transport has gained the option ZAQAR_SIGNAL
  • auth_encryption_key is now documented as requiring exactly 32 characters
  • list_notifier_drivers was deprecated and is now removed
  • policy options have moved to the [oslo_policy] section
  • use_syslog_rfc_format is deprecated and now defaults to true

Notable changes to other sections of heat.conf:

  • [clients_keystone] auth_uri has been added to specify the unversioned keystone url
  • [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)

The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:

   "resource_types:OS::Nova::Flavor": "rule:context_is_admin"

Upgrading from Kilo to Liberty

Progress has been made on supporting live sql migrations, however it is still recommended to bring down the heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported. A rollback to Kilo will require restoring a snapshot of the pre-upgrade database.

OpenStack Data Processing (Sahara)

Key New Features

  • New plugins and versions:
    • Ambari plugin with support for HDP 2.2 / 2.3
    • Apache Hadoop 2.7.1 was added, Apache Hadoop 2.6.0 was deprecated
    • CDH 5.4.0 was added with HA support for NameNode and ResourceManager
    • MapR 5.0.0 was added
    • Spark 1.3.1 was added, Spark 1.0.0 was deprecated
    • HDP 1.3.2 and Apache Hadoop 1.2.1 were removed
  • Added support for using Swift with Spark EDP jobs
  • Added support for Spark EDP jobs in CDH and Ambari plugins
  • Added support for public and protected resources
  • Started integration with OpenStack client
  • Added support for editing all Sahara resources
  • Added automatic Hadoop configuration for clusters
  • Direct engine is deprecated and will be removed in the Mitaka release
  • Added OpenStack manila NFS shares as a storage backend option for job binaries and data sources
  • Added support for definition and use of configuration interfaces for EDP job templates

Deprecations

  • Direct provisioning engine
  • Apache Hadoop 2.6.0
  • Spark 1.0.0
  • All Hadoop 1.X removed

OpenStack Search (Searchlight)

This is the first release of Searchlight. Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC-based searches across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable, full-text search engine with a RESTful web interface.

Key New Features

New Resource Types Indexed

Upgrade Notes

N/A

Deprecations

N/A

OpenStack DNS (Designate)

Key New Features

  • Experimental: Hook Point API
  • Horizon Plugin moved out of tree
  • Purging deleted domains
  • Ceilometer "exists" periodic event per domain
  • Async actions
    • Import
    • Export
  • Active/passive failover for designate-pool-manager periodic tasks
  • OpenStack client integration

Additional DNS Server Backends

  • InfoBlox
  • Designate

Upgrade Notes

  • New service designate-zone-manager
    • It is recommended to use a supported tooz backend.
    • ZooKeeper is recommended, or anything supported by tooz.
    • If a tooz backend is not used, all zone-managers will assume ownership of all zones, and there will be 'n' "exists" messages per hour, where 'n' is the number of zone-manager processes.
  • designate-pool-manager can do active/passive failover for periodic tasks.
    • It is recommended to use a supported tooz backend.
    • If a tooz backend is not used, all pool-managers will assume ownership of the pool, and multiple periodic tasks will run. This can result in unforeseen consequences.

Deprecations

  • V1 API
    • An initial notice of intent, as there are operations that still require the Designate CLI interface which talks to V1, and Horizon panels that only talk to V1.

OpenStack Messaging Service (Zaqar)

Key New Features

  • Pre-signed URL - A new REST API endpoint to support pre-signed URL, which provides enough control over the resource being shared, without compromising security.
  • Email Notification - A new task driver for notification service, which can take a Zaqar subscriber's email address. When there is a new message posted to the queue, the subscriber will receive the message by email.
  • Policy Support - Support fine-grained permission control with the policy.json file like most of the other OpenStack components.
  • Persistent Transport - Added support for websocket as a persistent transport alternative for Zaqar. Now users will be able to establish long-lived connections between their applications and Zaqar to interchange large amounts of data without the connection setup adding overhead.

OpenStack Dashboard (Horizon)

Key New Features

  • A new network topology – The network topology diagram has been replaced with an interactive graph containing collapsible networks, and scales far better in large deployments (https://blueprints.launchpad.net/horizon/+spec/curvature-network-topology).
  • Plugin improvements – Horizon auto-discovers JavaScript files for inclusion, and now has mechanisms for pluggable SCSS and Django template overrides.
  • Compute (Nova)
    • Support for shelving and unshelving of instances (https://blueprints.launchpad.net/horizon/+spec/horizon-shelving-command).
    • Support for v2 block device mapping, falling back to v1 when unavailable (https://blueprints.launchpad.net/horizon/+spec/horizon-block-device-mapping-v2).
  • Networking (Neutron)
    • Added support for subnet allocation via subnet pools (https://blueprints.launchpad.net/horizon/+spec/neutron-subnet-allocation).
    • Added actions to easily associate an LBaaS VIP with a floating IP (https://blueprints.launchpad.net/horizon/+spec/lbaas-vip-fip-associate).
  • Images (Glance)
    • The metadata editor has been updated with AngularJS (https://blueprints.launchpad.net/horizon/+spec/angularize-metadata-update-modals).
    • Compute images metadata can now be edited from the Project dashboard, using the new metadata editor (https://blueprints.launchpad.net/horizon/+spec/project-images-metadata).
  • Block Storage (Cinder)
    • Enabled support for migrating volumes (https://blueprints.launchpad.net/horizon/+spec/volume-migration).
    • Volume types can now be edited, and include description fields (https://blueprints.launchpad.net/horizon/+spec/volume-type-description).
  • Orchestration (Heat)
    • Improvements to the Heat topology, making more resources identifiable where previously they had no icons and were displayed as unknown resources (https://blueprints.launchpad.net/horizon/+spec/heat-topology-display-improvement).
  • Data Processing (Sahara)
    • Unified job interface map. This is a human-readable method for passing in configuration data that a job may require or accept (https://blueprints.launchpad.net/horizon/+spec/unified-job-interface-map-ui).
    • Added editing capabilities for job binaries (https://blueprints.launchpad.net/horizon/+spec/allow-editing-of-job-binaries).
    • Added editing capabilities for data sources (https://blueprints.launchpad.net/horizon/+spec/allow-editing-of-data-sources).
    • Added editing capabilities for job templates (https://blueprints.launchpad.net/horizon/+spec/data-processing-edit-templates).
    • Exposed the event log for clusters (https://blueprints.launchpad.net/horizon/+spec/sahara-event-log).
    • Added support for shell job types (https://blueprints.launchpad.net/horizon/+spec/sahara-shell-action-form).
  • Databases (Trove)
    • Added initial support for database cluster creation and management. Vertica and MongoDB are currently supported (https://blueprints.launchpad.net/horizon/+spec/database-clustering-support).
  • Identity (Keystone)
    • Added mapping for Identity Provider and Protocol specific WebSSO (https://github.com/openstack/horizon/commit/3b4021c0ad0e8d7b10aa8c2dcd8c13a5717c450c).
    • Configurable token hashing (https://github.com/openstack/django_openstack_auth/commit/ece924a79d27ede1a8475d7f98e6d66bc3cffd6c and https://github.com/openstack/horizon/commit/48e651d05cbe9366884868c5331d49a501945adc).
  • Horizon (internal improvements)
    • Full support for translation in AngularJS, along with simpler tooling (https://blueprints.launchpad.net/horizon/+spec/angular-translate-makemessages).
    • Added Karma for JavaScript testing (https://blueprints.launchpad.net/horizon/+spec/karma).
    • Added ESLint for JavaScript linting, using the eslint-config-openstack rules (https://blueprints.launchpad.net/horizon/+spec/jscs-cleanup).
    • Horizon now supports overriding of existing Django templates (https://blueprints.launchpad.net/horizon/+spec/horizon-theme-templates).
    • JavaScript files are now automatically included (https://blueprints.launchpad.net/horizon/+spec/auto-js-file-finding).

Upgrade Notes

  • Django 1.8 is now supported, and Django 1.7 is our minimum supported version (https://blueprints.launchpad.net/horizon/+spec/drop-django14-support).
  • Database-backed sessions will likely not persist across upgrades due to a change in their structure (https://github.com/openstack/django_openstack_auth/commit/8c64de92f4148d85704b10ea1f7bc441db2ddfee and https://github.com/openstack/horizon/commit/ee2771ab1a855342089abe5206fc6a5071a6d99e).

  • Horizon no longer uses QUnit in testing, and it has been removed from our requirements (https://blueprints.launchpad.net/horizon/+spec/replace-qunit-tests-with-jasmine).
  • Horizon now has multiple configuration options for the default web URL (WEBROOT), static file location (STATIC_ROOT) and static file URL (STATIC_URL) in its settings files.
  • Themes have moved location from openstack_dashboard/static/themes, to openstack_dashboard/themes. Paths may need to be updated accordingly. Furthermore, Horizon is aligning closer with Bootstrap markup, and themes should be built around this ideology; see the top bar and side navigation for details.
  • The deprecated OPENSTACK_QUANTUM_NETWORK configuration option has been removed. If you still use it, replace it with OPENSTACK_NEUTRON_NETWORK (see the settings sketch after this list).
  • There is now an OPENSTACK_NOVA_EXTENSIONS_BLACKLIST option in the settings, to disable selected extensions for performance reasons (https://github.com/openstack/horizon/commit/18f4b752b8653c9389f8b0471eccaa0659707ebe).
  • Trove and Sahara panels now reside in openstack_dashboard/contrib.
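
A hedged local_settings.py sketch touching the options called out in this list; all values are placeholders rather than recommendations:

    # Serve the dashboard from a sub-path and point at the collected static files
    WEBROOT = '/dashboard/'
    STATIC_ROOT = '/var/www/horizon/static'
    STATIC_URL = '/dashboard/static/'

    # Replacement for the removed OPENSTACK_QUANTUM_NETWORK option
    OPENSTACK_NEUTRON_NETWORK = {
        'enable_router': True,
    }

    # Disable selected Nova API extensions for performance reasons
    OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = ['SimpleTenantUsage']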

OpenStack Trove (DBaaS)

Key New Features

  • Redis
    • Configuration Groups for Redis
    • Cluster support
  • MongoDB
    • Backup and restore for a single instance
    • User and database management
    • Configuration Groups
  • Percona XtraDB Cluster Server
    • Cluster support
  • Allow deployer to associate instance flavors with specific datastores
  • Horizon support for database clusters
  • Management API for datastore and versions
  • Ability to deploy Trove instances in a single admin tenant, so that the nova instances are hidden from the user

OpenStack Bare metal (Ironic)

Ironic has switched to an intermediate release model and released version 4.0 during Liberty, followed by two minor updates. Version 4.2 forms the basis for the OpenStack Integrated Liberty release and will receive stable updates.

Please see full release notes here: http://docs.openstack.org/developer/ironic/releasenotes/index.html

New Features

  • Added "ENROLL" hardware state, which is the default state for newly created nodes.
  • Added "abort" verb, which allows a user to interrupt certain operations while they are in progress.
  • Improved query and filtering support in the REST API.
  • Added support for CORS middleware.

Hardware Drivers

  • Added a new BootInterface for hardware drivers, which splits functionality out of the DeployInterface.
  • iLO virtual media drivers can work without Swift.
  • Added Cisco IMC driver.
  • Added OCS Driver.
  • Added UCS Driver.
  • Added Wake-On-Lan Power Driver.
  • ipmitool driver supports IPMI v1.5.
  • Added support to the SNMP driver for “APC MasterSwitchPlus” series PDUs.
  • The pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO driver only supported this for agent_ilo and iscsi_ilo).
  • Added Virtual Media support to iRMC Driver.
  • Added BIOS configuration to DRAC Driver.
  • PXE drivers now support GRUB2.

Deprecations

  • The "vendor_passthru" and "driver_vendor_passthru" methods of the DriverInterface have been removed. These were deprecated in Kilo and replaced with the @passthru decorator.
  • The migration tools to import data from a Nova "baremetal" deployment have been removed.
  • Deprecated the "parallel" option to periodic task decorator.
  • Removed deprecated ‘admin_api’ policy rule.
  • Support for the original "bash" deploy ramdisk is deprecated and will be removed in two cycles. The ironic-python-agent project should be used for all deploy drivers.

Upgrade Notes

  • Newly created nodes default to the new ENROLL state. Previously, nodes defaulted to AVAILABLE, which could lead to hardware being exposed prematurely to Nova.
  • The addition of API version headers in Kilo means that any client wishing to interact with the Liberty API must pass the appropriate version string in each HTTP request. The current API version is 1.14.
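
For example, a raw request against the Bare Metal API carries the version in a header (the endpoint URL is illustrative):

    curl -H "X-OpenStack-Ironic-API-Version: 1.14" \
         http://ironic.example.com:6385/v1/nodes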

OpenStack Key Manager (Barbican)

New Features

  • Added the capability for project administrators to define and manage a set of preferred Certificate Authorities (CAs) per project. This allows projects to achieve project-specific security domains.
  • Barbican now has per-project quota support for limiting the number of Barbican resources that can be created under a project. By default the quota is set to unlimited and can be overridden in the Barbican configuration.
  • Support for rotating the master key, which is used for wrapping project-level keys. In this lightweight approach, only the project-level key (KEK) is re-wrapped with the new master key (MKEK). This is currently applicable only for the PKCS11 plug-in. (http://specs.openstack.org/openstack/barbican-specs/specs/liberty/add-crypto-mkek-rotation-support-lightweight.html)
  • Updated Barbican's root resource to return version information matching the Keystone, Nova and Manila format. This is used by the keystoneclient versioned endpoint discovery feature.
  • Removed the administrator endpoint, as all operations are available on the regular endpoint. No separate endpoint is needed as access restrictions are enforced via oslo policy.
  • Added configuration for enabling the SQLAlchemy pool for the management of SQL connections.
  • Added the ability to list secrets which are accessible via ACL using a GET /v1/secrets?acl-only=true request.
  • Improved functional test coverage around Barbican APIs related to ACL operations, RBAC policy and secrets.
  • Fixed issues around creation of the SnakeOil CA plug-in instance.
  • The Barbican client CLI can now take a Keystone token for authentication. Previously, only username- and password-based authentication was supported.
  • The Barbican client now has the ability to create and list certificate orders.

Upgrade Notes

OpenStack Image Service (Glance)

Updated project guide that includes details on operating, installing, configuring, developing for, and using the service: http://docs.openstack.org/developer/glance/

Key New Features

Upgrade Notes

  • python-glanceclient now defaults to using Glance API v2 and, if v2 is unavailable, it will fall back to v1.
  • Dependencies for backend stores are now optionally installed, corresponding to each store specified.
  • Some stores, like swift, s3 and vmware, now have Python 3 support.
  • Some new as well as updated default metadata definitions ship with the source code.
  • More Python 3 support has been added to the Glance API, and compatibility is now continuously verified by means of tests.
  • UTF-8 is now the default charset for the backend MySQL DB.
  • Migration scripts have been updated to perform a sanity check for the table charset.
  • 'ram_disk' and 'kernel' properties can now be null in the schema, and 'id' is now a read-only attribute for the v2 API.
  • A configuration option, client_socket_timeout, has been added to take advantage of the recent eventlet socket timeout behaviour.
  • A configuration option, scrub_pool_size, has been added to set the number of parallel threads that a scrubber should run; it defaults to 1 (see the sketch after this list).
  • An important bug that allowed the image status to be changed via the Glance v1 API has now been fixed.
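
A hedged sketch of the two new configuration options mentioned above; client_socket_timeout belongs to the API server and scrub_pool_size to the scrubber, and the values shown are illustrative rather than defaults to rely on:

    # glance-api.conf
    [DEFAULT]
    client_socket_timeout = 900

    # glance-scrubber.conf
    [DEFAULT]
    scrub_pool_size = 1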

Deprecation

  • The experimental Catalog Index Service has been removed and now is a separate project called Searchlight.
  • The configuration options scrubber_datadir, cleanup_scrubber and cleanup_scrubber_time have been removed following the removal of the file backed queuing for scrubber.

OpenStack Shared File System (Manila)

New Features

  • Enabled support for availability zones.
  • Added administrator API components to share instances.
  • Added pool weigher which allows Manila scheduler to place new shares on pools with existing share servers.
  • Support for share migration from one host pool to another.
  • Added share extend capability in the generic driver.
  • Support for adding consistency groups, which allow snapshots for multiple filesystem shares to be created at the same point in time.
  • Support for consistency groups in the NetApp cDOT driver and generic driver.
  • Support for oversubscription in thin provisioning.
  • Support for handling Windows service instances and exporting SMB shares.
  • Added new osapi_share_workers configuration option to improve the total throughput of Manila API service.
  • Added the share hooks feature, which allows actions to be performed before and after share driver method calls, an additional periodic hook to be called every 'N' ticks, and the results of a driver's action to be updated.
  • Improvements to the NetApp cDOT driver:
    • Added the variables netapp:dedup and netapp:compression when creating the flexvol that backs a new Manila share.
    • Added manage/unmanage support and shrink_share support
    • Support for extended_share API component
    • Support for netapp-lib PyPI project to communicate with storage arrays.
  • Improvements to the HP 3PAR driver:
    • Added reporting of dedupe, thin provisioning and hp3par_flash_cache capabilities. This allows share types and the CapabilitiesFilter to place shares on hosts with the requested capabilities.
    • Added share server support.
  • Added access-level support to the VNX Manila driver.
  • Added support for the Manila HDS HNAS driver.
  • The Huawei Manila driver now supports storage pools and extend_share, manage_existing, shrink_share, read-only share, smartcache and smartpartition.
  • GlusterFS drivers can now specify the list of compatible share layouts.

Deprecation

  • The share_reset_status API component is deprecated and replaced by share_instance_reset_status.