[[Category:Liberty|Release Note]]
[[Category:Release Note|Liberty]]
[[Category:Releases]]
[[Category:Liberty]]

= OpenStack Liberty Release Notes =

<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3">
__TOC__
</div>

== OpenStack Object Storage (Swift) ==

Please see full release notes at https://github.com/openstack/swift/blob/master/CHANGELOG
  
=== New Features ===

* Allow 1+ object-servers-per-disk deployments, enabled by a new integer (> 0) config value, "servers_per_port", in the [DEFAULT] config section of the object-server and/or replication server configurations. The setting's value determines how many object-server workers handle requests for any single unique local port in the ring. In this mode, the parent swift-object-server process continues to run as the original user (i.e. root, if low-port binding is required), binds to all ports defined in the ring, and then forks off the specified number of workers per listen socket. The child per-port servers drop privileges and behave much as object-server workers always have, with one exception: because the ring has unique ports per disk, each object-server only handles requests for a single disk. The parent process detects dead servers and restarts them (with the correct listen socket); it starts missing servers when an updated ring file contains a device on the server with a new port, and kills extraneous servers whose port is no longer found in the ring. The ring files are re-checked on the schedule set by the "ring_check_interval" parameter in the object-server configuration (same default of 15s). In testing, this deployment configuration (with a value of 3) lowers request latency, improves requests per second, and isolates slow disk IO compared to the existing "workers" setting. To use this, each device must be added to the ring using a different port (see the configuration sketch after this feature list).
* The object server includes a "container_update_timeout" setting (with a default of 1 second). This value is the number of seconds that the object server will wait for the container server to update the listing before returning the status of the object PUT operation. Previously, the object server would wait up to 3 seconds for the container server response. The new behavior dramatically lowers object PUT latency when container servers in the cluster are busy (e.g. when the container is very large). Setting the value too low may result in a client PUT'ing an object and not being able to immediately find it in listings. Setting it too high will increase latency for clients when container servers are busy.
* Added the ability to specify ranges for Static Large Object (SLO) segments.
* Allow SLO PUTs to forgo per-segment integrity checks. Previously, each segment referenced in the manifest also needed the correct etag and bytes setting. These fields now allow the "null" value to skip those particular checks on the given segment.
* Replicator configurations now support an "rsync_module" value to allow for per-device rsync modules. This setting gives operators the ability to fine-tune replication traffic in a Swift cluster and isolate replication disk IO to a particular device. Please see the docs and sample config files for more information and examples.
* Ring changes
** Partition placement no longer uses the port number to place partitions. This improves dispersion in small clusters running one object server per drive, and it does not affect dispersion in clusters running one object server per server.
** Ring validation now warns if a placement partition gets assigned to the same device multiple times. This happens when devices in the ring are unbalanced (e.g. two servers where one server has significantly more available capacity).
* TempURL fixes (closes CVE-2015-5223)<p>Do not allow PUT tempurls to create pointers to other data. Specifically, disallow the creation of DLO object manifests via a PUT tempurl. This prevents discoverability attacks which can use any PUT tempurl to probe for private data by creating a DLO object manifest and then using the PUT tempurl to head the object.</p>
* Swift now emits StatsD metrics on a per-policy basis.
* Various other minor bug fixes and improvements.
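
The following object-server.conf fragment is a minimal, illustrative sketch of the servers_per_port feature described above; the device names and port numbers are placeholders, and the ring must actually be built with a distinct port per disk for the setting to take effect.

<pre>
# /etc/swift/object-server.conf -- illustrative sketch only
[DEFAULT]
# Run 3 object-server workers per unique local port found in the ring.
# Each disk must be added to the ring with its own port, for example:
#   swift-ring-builder object.builder add r1z1-192.168.1.10:6200/sdb 100
#   swift-ring-builder object.builder add r1z1-192.168.1.10:6201/sdc 100
servers_per_port = 3
</pre>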
  
=== Upgrade Notes ===

* Dependency changes
** Added six requirement. This is part of an ongoing effort to add support for Python 3.
* The versioned writes feature has been refactored and reimplemented as middleware. You should explicitly add the versioned_writes middleware to your proxy pipeline (see the pipeline sketch below), but do not remove or disable the existing container server config setting ("allow_versions") if it is currently enabled. The existing container server config setting enables existing containers to continue being versioned. Please see http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster for further upgrade notes.
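
A minimal proxy-server.conf sketch for enabling the middleware is shown below; the surrounding pipeline entries are illustrative only and your actual pipeline will differ.

<pre>
# /etc/swift/proxy-server.conf -- illustrative sketch only
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache authtoken keystoneauth versioned_writes proxy-logging proxy-server

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
</pre>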
  
== OpenStack Networking (Neutron) ==

=== New Features ===
* Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the [http://docs.openstack.org/mitaka/networking-guide/adv-config-ipv6.html#prefix-delegation OpenStack Networking Guide]. A configuration sketch follows this feature list.
* Neutron now exposes a QoS API, initially offering bandwidth limitation at the port level. The API, CLI, configuration and additional information may be found here [http://docs.openstack.org/developer/neutron/devref/quality_of_service.html].
* Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [https://bugs.launchpad.net/neutron/+bug/1365476].
* Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [https://bugs.launchpad.net/neutron/+bug/1481443].
* The OVS agent may now be restarted without affecting data plane connectivity.
* Neutron now offers role-based access control (RBAC) for networks [http://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html].
** https://bugs.launchpad.net/neutron/+bug/1498790
* The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable and reliable load balancer platform.
* The LBaaS V2 API is no longer experimental. It is now stable.
* Neutron now has a pluggable IP address management (IPAM) framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.
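
A minimal configuration sketch for the prefix delegation feature mentioned above; treat the option name as an assumption and defer to the linked Networking Guide for the authoritative configuration and subnet-creation steps.

<pre>
# /etc/neutron/neutron.conf -- illustrative sketch only
[DEFAULT]
# Enable IPv6 Prefix Delegation; CIDRs for PD-enabled subnets are then
# obtained automatically instead of being specified at subnet creation.
ipv6_pd_enabled = True
</pre>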
  
=== Deprecated and Removed Plugins and Drivers ===
* The metaplugin is removed in the Liberty release.
* The IBM SDN-VE monolithic plugin is removed in the Liberty release.
* The Embrane plugin is deprecated and will be removed in the Mitaka release.

=== Deprecated Features ===
* The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API, which the team is in the process of developing.
* The LBaaS V1 API is marked as deprecated and is planned to be removed in a future release. Going forward, the LBaaS V2 API should be used.
* The 'external_network_bridge' option for the L3 agent has been deprecated in favor of a bridge_mapping with a physnet. For more information, see the "Network Node" section of this scenario in the networking guide: http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
  
=== Performance Considerations ===
* The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later kernel (e.g. 3.19) should be used. '''Note:''' this regression is fixed in Trusty Tahr kernels 3.13.0-36.63 and later. For further reference see: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1328088
* Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator instead of the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes, or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html
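
A quick, generic Linux check (not an Octavia-specific tool) for whether hardware virtualization is available on a compute node is shown below; a count of 0 means QEMU will fall back to the TCG accelerator.

<pre>
# Count CPU flags indicating VT-x/AMD-V support; 0 means no hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
# Verify that the KVM device is present and usable
ls -l /dev/kvm
</pre>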
  
== OpenStack Compute (Nova) ==

=== New Features ===

==== API ====
* Turned on v2.1 by default for all endpoints, with v2.0 and v1.1 served using a new compatibility mode, to make the transition transparent to existing API users (https://blueprints.launchpad.net/nova/+spec/api-relax-validation)
* Evacuate made more robust (partial) (https://blueprints.launchpad.net/nova/+spec/robustify-evacuate)
* Metadata: API: Proxy neutron configuration to guest instance (partial) (https://blueprints.launchpad.net/nova/+spec/metadata-service-network-info)
  
==== Scheduler ====
Architectural evolution of the scheduler has continued, along with key bug fixes:
* Adds an object model for a launch request spec (partially complete) (https://blueprints.launchpad.net/nova/+spec/request-spec-object)
* Improved user feedback when returning ''NoValidHost'' from the scheduler (http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/add_exceeded_max_retries_exception.html).
  
==== Cells v2 ====
Cells v2 is not currently in a usable state, but we have added some more supporting infrastructure:
* Cells host mapping (https://blueprints.launchpad.net/nova/+spec/cells-host-mapping)
* Cells instance migration (https://blueprints.launchpad.net/nova/+spec/cells-instance-migration)
  
==== Compute Driver Features ====

===== Libvirt =====
* Moved to using the ''os-brick'' library for Libvirt volume drivers, allowing logic for volume discovery and removal to be shared between Nova and Cinder (http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/use-os-brick-library.html).
* Added ''live_migration_completion_timeout'' and ''live_migration_progress_timeout'' configuration keys to assist with capping the maximum time a live migration should be allowed to run, particularly when progress has halted (https://launchpad.net/bugs/1429220). A configuration sketch follows this list.
* virtio-net multiqueue (partial) (https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue)
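
A hedged nova.conf sketch for the live migration timeouts mentioned above; the [libvirt] section placement and the values shown are believed correct but should be treated as assumptions and checked against the configuration reference.

<pre>
# /etc/nova/nova.conf -- illustrative sketch only
[libvirt]
# Abort the migration if data transfer has not completed within this many
# seconds per GiB of guest memory/disk to be transferred.
live_migration_completion_timeout = 800
# Abort the migration if memory-copy progress has stalled for this many seconds.
live_migration_progress_timeout = 150
</pre>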
  
===== VMware =====
* VMware driver domain metadata (https://blueprints.launchpad.net/nova/+spec/vmware-driver-domain-metadata)
* Enable setting memory, disk and vnic limits (partial) (https://blueprints.launchpad.net/nova/+spec/vmware-limits)
* VMware: Support for console log in the VMware driver (partial) (https://blueprints.launchpad.net/nova/+spec/vmware-console-log)

===== Hyper-V =====
* Hyper-V unit tests refactoring (continued + partial) (https://blueprints.launchpad.net/nova/+spec/hyper-v-test-refactoring-liberty)

===== Ironic =====
* Pass down the instance name to the Ironic driver (https://blueprints.launchpad.net/nova/+spec/pass-down-instance-name-to-ironic-driver)
  
==== Other Features ====
* Added support for specifying multiple ''instance_type'' names to the ''AggregateTypeAffinityFilter'' (https://blueprints.launchpad.net/nova/+spec/aggregatetypeaffinityfilter-multi-value-support).
* Added an experimental online DB schema change option (https://blueprints.launchpad.net/nova/+spec/online-schema-changes)
* Running Nova with rootwrap as a daemon (https://blueprints.launchpad.net/nova/+spec/nova-rootwrap-daemon-mode)
* Removed the 'scheduled_at' column from the nova instances table (https://blueprints.launchpad.net/nova/+spec/cleanup-scheduled-at)
* A new config option, "handle_virt_lifecycle_events" in the DEFAULT group, was added to allow disabling the event callback handling for instance lifecycle events from the virt driver (which is only implemented by the libvirt and hyper-v drivers in Liberty). This mostly serves as a workaround in case the callbacks are racing under heavy load and causing problems like shutting down running instances. See https://review.openstack.org/#/c/159275/ for details. A configuration sketch is shown below.
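
A minimal sketch of the workaround described above; leaving the option at its default (True) keeps the existing behavior.

<pre>
# /etc/nova/nova.conf -- illustrative sketch only
[DEFAULT]
# Disable handling of lifecycle event callbacks from the virt driver
# (workaround for callbacks racing under heavy load; the default is True).
handle_virt_lifecycle_events = False
</pre>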
  
=== Upgrade Notes ===
* If you are coming from stable Kilo, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074, versions of Kilo from before the fix will be problematic when talking to Liberty nodes.
* Allocation ratios for RAM and CPU are now defined within the nova-compute service (so per compute node). Ratios also need to be provided for the scheduler service. Depending on whether a compute node is running Kilo or Liberty, the allocation ratios behave differently: ''if the compute node is running Kilo'', the CPU and RAM allocation ratios for that compute node will be the ones defaulted in the controller's nova.conf file; ''if the compute node is running Liberty'', you will be able to set a per-compute allocation ratio for both CPU and RAM (see the sketch after this list). In order to let the operator provide the allocation ratios to all the compute nodes, the default allocation ratio is set in nova.conf to 0.0 (even for the controller). That does not mean that allocation ratios will actually be 0.0, just that the operator needs to provide them '''before the next release (i.e. Mitaka)'''. To be clear, the effective default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.
* The default paste.ini has been updated to use the new v2.1 API for all endpoints, and the v3 endpoint has been removed. A compatibility mode middleware is used to relax the v2.1 validation for the /v2 and /v1.1 endpoints.
* The code for DB schema downgrades has now been removed: https://blueprints.launchpad.net/nova/+spec/nova-no-downward-sql-migration
* The default DB driver we test against is now PyMySQL rather than Python-MySQL.
* The "powervm" hv_type shim has been removed. This only affects users of the [https://github.com/stackforge/powervc-driver PowerVC driver on stackforge] who are using older images with hv_type=powervm in the image metadata.
* The minimum required version of libvirt in the Mitaka release will be 0.10.2. Support for libvirt < 0.10.2 is deprecated in Liberty: https://review.openstack.org/#/c/183220/
* The libvirt.remove_unused_kernels config option is deprecated for removal and now defaults to True: https://review.openstack.org/#/c/182315/
* Setting force_config_drive=always in nova.conf is deprecated; use True/False boolean values instead: https://review.openstack.org/#/c/156153/
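
A hedged sketch of the per-compute allocation ratio settings described above: set these explicitly on each Liberty compute node (the values shown are simply the previous effective defaults), and leave the new 0.0 sentinel in place elsewhere so the compute-node values take effect.

<pre>
# /etc/nova/nova.conf on a Liberty compute node -- illustrative sketch only
[DEFAULT]
# Per-compute overcommit ratios; 0.0 is the new sentinel meaning
# "not explicitly set" (the effective defaults remain 16.0 and 1.5).
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
</pre>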
  
=== Deprecated Features ===
* The ability to disable in-tree API extensions has been deprecated (https://blueprints.launchpad.net/nova/+spec/nova-api-deprecate-extensions)
* The novaclient.v1_1 module has been deprecated [[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=61ef35fe79e2a3a76987a92f9ee2db0bf1f6e651]][[https://git.openstack.org/cgit/openstack/python-novaclient/commit/?id=0a60aae852d2688861d0b4ba097a1a00529f0611]] since 2.21.0 and will be removed in the first python-novaclient release in Mitaka.
* API v3 specific components have all been deprecated and removed from the default paste.ini
  
== OpenStack Telemetry (Ceilometer) ==

=== Key New Features ===
* Creation of Aodh to handle the alarming service.
* Metadata caching - reduced load from nova API polling.
* PowerVM hypervisor support.
* Improved MongoDB query support - performance improvement for statistics calculations.
* Additional meter support:
** Magnum meters
** DBaaS meters
** DNSaaS meters
  
==== Gnocchi Features ====
* Initial InfluxDB driver implemented.

==== Aodh Features ====
* Event alarms - the ability to trigger an action when an event is received.
* Trust support in alarms [https://blueprints.launchpad.net/ceilometer/+spec/trust-alarm-notifier link].
  
=== Upgrade Notes ===
* The name of some middleware used by ceilometer changed in a backward-incompatible way. Before upgrading, edit the <code>paste.ini</code> file for ceilometer to change <code>oslo.middleware</code> to <code>oslo_middleware</code>. For example, using <nowiki>sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini</nowiki>
* The notification agent is a core service for collecting data in Ceilometer. It now handles all transformations and publishing. Polling agents now defer all processing to notification agents, and must be deployed in tandem with them.
* A mandatory limit is applied to each request. If no limit is given, it will be restricted to a default limit.

=== Deprecated Features ===
* Ceilometer Alarms is deprecated in favour of Aodh.
* The RPC publisher and collector are deprecated in favour of a topic-based notifier publisher.
* Non-metric meters are still deprecated, and are to be removed in a future release.
  
== OpenStack Identity (Keystone) ==

=== Key New Features ===
* '''''Experimental''''': Domain-specific configuration options can be stored in SQL instead of configuration files, using the new REST APIs.
* '''''Experimental''''': Keystone now supports tokenless authorization with X.509 SSL client certificates.
* Certain variables in keystone.conf now have a defined set of allowed values, which determines whether the user's setting is valid.
  
=== Upgrade Notes ===
* The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It has been moved to the keystonemiddleware package.
* The <code>compute_port</code> configuration option, deprecated in Juno, is no longer available.
* The XML middleware stub has been removed, so references to it must be removed from the <code>keystone-paste.ini</code> configuration file.
* Domain name information can now be used in policy rules with the attribute <code>domain_name</code>.
  
=== Deprecated Features ===
* Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release.
* Using LDAP as the resource backend, i.e. for projects and domains, is now deprecated and will be removed in the Mitaka release.
* Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used (see the sketch after this list).
* In the [resource] and [role] sections of the <code>keystone.conf</code> file, not specifying the driver and falling back to the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the SQL driver.
* In <code>keystone-paste.ini</code>, using <code>paste.filter_factory</code> is deprecated in favor of the "use" directive specifying an entrypoint.
* Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.
* Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] managers.
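
A hedged keystone.conf sketch of the entrypoint-style driver setting referred to above; the full-class-path form shown in the comment is one example of the deprecated style, and the exact entrypoint names should be checked against the configuration reference.

<pre>
# /etc/keystone/keystone.conf -- illustrative sketch only
[resource]
# Deprecated style: full path to the driver class, e.g.
#   driver = keystone.resource.backends.sql.Resource
# Preferred style: short entrypoint name
driver = sql

[role]
driver = sql
</pre>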
  
== OpenStack Block Storage (Cinder) ==

=== Key New Features ===
* A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume (see the sketch after this list). [http://docs.openstack.org/admin-guide-cloud/blockstorage_image_volume_cache.html Read docs for more info]
* Non-disruptive backups [http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html Read docs for more info].
* Ability to clone consistency groups of volumes [http://docs.openstack.org/admin-guide-cloud/blockstorage-consistency-groups.html Read docs for more info].
* List capabilities of a volume backend (fetch extra-specs).
* Nested quotas.
* LVM backends now default to thin provisioning if available.
* Corrected cinder service-list to show a service as Down when its driver fails to initialize.
* Improved volume migration management:
** Able to see whether a previous migration attempt was successful
** Admins are able to monitor migrations via cinder list
** New volume status of 'maintenance' to prevent operations being attempted while a migration is in progress
** Improved backend volume name/id consistency after migration completes
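
A hedged cinder.conf sketch of the image caching feature above; the backend section name is a placeholder for whatever your enabled_backends entry is called, and the option names and limits should be checked against the linked admin guide.

<pre>
# /etc/cinder/cinder.conf -- illustrative sketch only
[lvm-backend-1]
# Enable the image-volume cache for this backend and bound its size.
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
</pre>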
  
=== Upgrade Notes ===
* A change in parameters to RPC APIs and work on object conversion prevent running Liberty c-vol or c-api services with Kilo or earlier versions of either service.

=== Deprecated Features ===
* Removed the Simple and Chance schedulers.
* Removed the deprecated HDS HUS iSCSI driver.
* Removed the Coraid driver.
* Removed the Solaris iSCSI driver.
* Removed the --force option for allowing upload of an image to an attached volume.
* Marked the v1 API as deprecated.
  
== OpenStack Orchestration (Heat) ==

=== New Features ===

==== Convergence ====
Convergence is a new orchestration engine maturing in the heat tree. In Liberty, the benefits of using the convergence engine are:
* Greater parallelization of resource actions (for better scaling of large templates)
* Better handling of heat-engine failures (still WIP)
  
The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.
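
The corresponding heat.conf fragment, as a minimal sketch of the setting described above:

<pre>
# /etc/heat/heat.conf -- illustrative sketch only
[DEFAULT]
# Newly created stacks will use the convergence engine; existing stacks
# continue to use the traditional engine.
convergence_engine = true
</pre>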
  
Convergence has '''not''' been production tested and thus should be considered '''beta''' quality - use with caution. For the Liberty release, we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence-specific bugs are tracked in Launchpad with the [https://bugs.launchpad.net/heat/+bugs?field.tag=convergence-bugs convergence-bugs tag].

==== Conditional resource exposure ====
Only resources for services actually installed in the cloud are made available to users. Operators can further control the resources available to users with standard policy rules in [https://github.com/openstack/heat/blob/master/etc/heat/policy.json#L80 policy.json on a per-resource-type basis].
  
==== heat_template_version: 2015-10-15 ====
2015-10-15 indicates that the YAML document is a HOT template and that it may contain features added and/or removed up until the Liberty release.
* Removes the Fn::Select function (path-based [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-attr get_attr]/[http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#get-param get_param] references should be used instead).
* Adds support for passing map/list data to [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#str-replace str_replace] and [http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#list-join list_join] (they will be JSON-serialized automatically). See the template sketch below.
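
A small, self-contained HOT sketch of the 2015-10-15 behaviour described above; the parameter and output names are arbitrary examples.

<pre>
heat_template_version: 2015-10-15

parameters:
  metadata:
    type: json
    default: {"role": "web", "tier": 1}

outputs:
  rendered:
    description: The map parameter is JSON-serialized automatically when substituted.
    value:
      str_replace:
        template: "metadata=$meta"
        params:
          $meta: {get_param: metadata}
</pre>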
  
==== REST API/heatclient additions ====
* Stacks can now be assigned a set of tags, and stack-list can filter and sort by those tags
* "heat stack-preview ..." will return a preview of changes for a proposed stack-update
* "heat template-function-list ..." lists available functions for a template version

==== Enhancements to existing resources ====
* Software deployments can now use Zaqar for [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport deploying software data] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport signalling back to Heat]
* Stack actions are now performed on remote [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::Stack OS::Heat::Stack] resources
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-updpolicy OS::Heat::ResourceGroup update_policy] now supports specifying [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-batch_create batch_create] and [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup-prop-rolling_update rolling_update] options
  
==== New resources ====
The following new resources are now distributed with the Heat release:
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Barbican::Order OS::Barbican::Order] [1]
* [http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Sahara::DataSource OS::Sahara::DataSource]

[1] These existed in Kilo as contrib resources, as they were for non-integrated projects. These resources are now distributed with Heat, as the projects are Big Tent projects.

[2] These existed in Kilo as contrib resources, as they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).

[3] These existed in Kilo as contrib resources, as they used an approach not endorsed by the Heat project. They are now distributed with heat and documented as UNSUPPORTED.

[4] These resources are for projects which are not yet OpenStack Big Tent projects, so are documented as UNSUPPORTED.

With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.
  
==== Deprecated Resource Properties ====
Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented, but existing stacks and templates will continue to work after a heat upgrade. The [http://docs.openstack.org/developer/heat/template_guide/openstack.html Resource Type Reference] should be consulted to determine available resource properties and attributes.

=== Upgrade notes ===
  
==== Configuration Changes ====
Notable changes to the /etc/heat/heat.conf [DEFAULT] section:
* hidden_stack_tags has been added, and stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides sahara-created stacks)
* use_syslog_rfc_format is deprecated and now defaults to true

Notable changes to other sections of heat.conf:
* [clients_keystone] auth_uri has been added to specify the unversioned keystone URL
* [heat_api] workers now defaults to 4 (it was previously 0, which created a worker per host CPU)
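
Put together, a hedged heat.conf fragment reflecting the changes listed above; the values shown are the new defaults and the keystone endpoint host is a placeholder, so only set them explicitly if you need different behaviour.

<pre>
# /etc/heat/heat.conf -- illustrative sketch only
[DEFAULT]
# Stacks carrying these tags are hidden from stack-list results.
hidden_stack_tags = data-processing-cluster

[clients_keystone]
# Unversioned keystone endpoint.
auth_uri = http://controller:5000

[heat_api]
workers = 4
</pre>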
  
The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:
     "resource_types:OS::Nova::Flavor": "rule:context_is_admin"

==== Upgrading from Kilo to Liberty ====
Progress has been made on supporting live SQL migrations; however, it is still recommended to bring down the heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported. A rollback to Kilo will require restoring a snapshot of the pre-upgrade database.
  
== OpenStack Data Processing (Sahara) ==

=== Key New Features ===
* New plugins and versions:
** Ambari plugin with support for HDP 2.2 / 2.3
* Added support for the definition and use of configuration interfaces for EDP job templates
  
=== Deprecated Features ===
* Direct provisioning engine
* Apache Hadoop 2.6.0
* All Hadoop 1.X removed
  
== OpenStack Search (Searchlight) == <!--T:90-->

<!--T:91-->
This is the first release for Searchlight. Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC based searches across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable, and full-text search engine with a RESTful web interface.

<!--T:92-->
* [https://wiki.openstack.org/wiki/Searchlight Project Wiki]

=== Key New Features === <!--T:93-->
* [http://docs.openstack.org/developer/searchlight/searchlightapi.html Searchlight Search API] OpenStack Resource Type based API providing native ElasticSearch query support (a request sketch follows the list of indexed resource types below)
* [http://docs.openstack.org/developer/searchlight/indexingservice.html#bulk-indexing Bulk Indexing CLI] searchlight-manage indexing command line interface
* [https://github.com/openstack/searchlight/tree/master/devstack Devstack deployment]

==== New Resource Types Indexed ==== <!--T:94-->
* [http://docs.openstack.org/developer/searchlight/plugins/nova.html OS::Nova::Server] Nova server instances
* [http://docs.openstack.org/developer/searchlight/plugins/glance.html OS::Glance::Image & OS::Glance::Metadef] Glance Images and Metadata Definitions
* [http://docs.openstack.org/developer/searchlight/plugins/designate.html OS::Designate::Zone & OS::Designate::RecordSet] Designate Domain and Record Sets
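The following is a minimal, hypothetical sketch of querying the Search API with a native ElasticSearch query from Python. The endpoint URL, port, token and request fields are assumptions; consult the Search API documentation linked above for the authoritative request format.

<pre>
# Hypothetical Searchlight query sketch. The endpoint, token and request
# fields are assumptions, not authoritative values.
import json
import requests

SEARCHLIGHT_URL = "http://controller:9393/v1/search"  # assumed endpoint
TOKEN = "<keystone-token>"

body = {
    "type": "OS::Nova::Server",           # one of the indexed resource types above
    "query": {"match": {"name": "web"}},  # native ElasticSearch query DSL
    "limit": 10,
}

resp = requests.post(
    SEARCHLIGHT_URL,
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(body),
)
print(resp.status_code, resp.json())
</pre>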
  
=== Upgrade Notes === <!--T:95-->

<!--T:96-->
N/A

=== Deprecated Features === <!--T:97-->

<!--T:98-->
N/A
  
== OpenStack DNS (Designate) == <!--T:99-->

=== Key New Features === <!--T:100-->

<!--T:101-->
* '''''Experimental''''': Hook Point API
* Horizon Plugin moved out of tree
** Import
** Export
* Active / passive failover for designate-pool-manager periodic tasks
* OpenStack client integration

==== Additional DNS Server Backends ==== <!--T:102-->

<!--T:103-->
* InfoBlox
* Designate

=== Upgrade Notes === <!--T:104-->

<!--T:105-->
* New service <code>designate-zone-manager</code>
** It is recommended to use a supported tooz backend (a minimal tooz sketch follows this list).
** If a tooz backend is not used, all zone-managers will assume ownership of all zones, and there will be ''''n'''' "exists" messages per hour, where ''''n'''' is the number of zone-manager processes.

<!--T:106-->
* <code>designate-pool-manager</code> can do active/passive failover for periodic tasks.
** It is recommended to use a supported tooz backend.
** If a tooz backend is not used, all pool-managers will assume ownership of the pool, and multiple periodic tasks will run. This can result in unforeseen consequences.
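Both notes above recommend a supported tooz backend for coordination. The sketch below shows, from Python, what a tooz coordination backend boils down to; it assumes a memcached backend at 127.0.0.1:11211 and a made-up member id. Operators only configure the equivalent backend URL for the Designate services; they do not need to write this code.

<pre>
# Minimal tooz sketch, assuming a memcached backend at 127.0.0.1:11211
# and a made-up member id. Designate's zone-manager and pool-manager use
# tooz in a similar way internally for group membership and leadership.
from tooz import coordination

coordinator = coordination.get_coordinator(
    "memcached://127.0.0.1:11211",      # backend URL of the form an operator configures
    b"designate-zone-manager-host1",    # unique member id for this process
)
coordinator.start()
# ... group membership / leader election APIs are now available ...
coordinator.stop()
</pre>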
  
=== Deprecated Features === <!--T:107-->

<!--T:108-->
* V1 API
** An initial notice of intent, as there are operations that still require the Designate CLI interface which talks to V1, and Horizon panels that only talk to V1.
  
== OpenStack Messaging Service (Zaqar) == <!--T:109-->

=== Key New Features === <!--T:110-->
* Pre-signed URL - A new REST API endpoint to support pre-signed URLs, which provides enough control over the resource being shared, without compromising security (see the sketch after this list).
* Email Notification - A new task driver for notification service, which can take a Zaqar subscriber's email address. When there is a new message posted to the queue, the subscriber will receive the message by email.
* Policy Support - Support fine-grained permission control with the <code>policy.json</code> file like most of the other OpenStack components.
* Persistent Transport - Added support for websocket as a persistent transport alternative for Zaqar. Now users will be able to establish long-lived connections between their applications and Zaqar to interchange large amounts of data without the connection setup adding overhead.
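For context, the sketch below posts a message to a Zaqar v2 queue over plain HTTP with the <code>requests</code> library. The endpoint, queue name and token are assumptions; a pre-signed URL lets a similar request be authorized without handing the consumer a full Keystone token, and the websocket transport keeps one connection open instead of repeating this HTTP setup.

<pre>
# Hypothetical sketch of posting a message to a Zaqar v2 queue. The endpoint,
# queue name and token are assumptions, not authoritative values.
import json
import uuid
import requests

ZAQAR_URL = "http://controller:8888/v2"   # assumed Zaqar endpoint
QUEUE = "demo"
TOKEN = "<keystone-token>"

headers = {
    "Client-ID": str(uuid.uuid4()),        # client identifier required by the API
    "X-Auth-Token": TOKEN,
    "Content-Type": "application/json",
}
body = {"messages": [{"ttl": 300, "body": {"event": "backup.start"}}]}

resp = requests.post(
    "{}/queues/{}/messages".format(ZAQAR_URL, QUEUE),
    headers=headers,
    data=json.dumps(body),
)
print(resp.status_code)
</pre>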
  
== OpenStack Dashboard (Horizon) == <!--T:111-->

=== Key New Features === <!--T:112-->

<!--T:113-->
* A new network topology – The network topology diagram has been replaced with an interactive graph containing collapsible networks, and scales far better in large deployments (https://blueprints.launchpad.net/horizon/+spec/curvature-network-topology).

<!--T:114-->
* Plugin improvements – Horizon auto-discovers JavaScript files for inclusion, and now has mechanisms for pluggable SCSS and Django template overrides.

<!--T:115-->
* Compute (Nova)
** Support for shelving and unshelving of instances (https://blueprints.launchpad.net/horizon/+spec/horizon-shelving-command).
** Support for v2 block device mapping, falling back to v1 when unavailable (https://blueprints.launchpad.net/horizon/+spec/horizon-block-device-mapping-v2).

<!--T:116-->
* Networking (Neutron)
** Added support for subnet allocation via subnet pools (https://blueprints.launchpad.net/horizon/+spec/neutron-subnet-allocation).
** Added actions to easily associate LBaaS VIP with a floating IP (https://blueprints.launchpad.net/horizon/+spec/lbaas-vip-fip-associate).

<!--T:117-->
* Images (Glance)
** The metadata editor has been updated with AngularJS (https://blueprints.launchpad.net/horizon/+spec/angularize-metadata-update-modals).
** Compute images metadata can now be edited from the Project dashboard, using the new metadata editor (https://blueprints.launchpad.net/horizon/+spec/project-images-metadata).

<!--T:118-->
* Block Storage (Cinder)
** Enabled support for migrating volumes (https://blueprints.launchpad.net/horizon/+spec/volume-migration).
** Volume types can now be edited, and include description fields (https://blueprints.launchpad.net/horizon/+spec/volume-type-description).

<!--T:119-->
* Orchestration (Heat)
** Improvements to the heat topology, making more resources identifiable where previously they had no icons and were displayed as unknown resources (https://blueprints.launchpad.net/horizon/+spec/heat-topology-display-improvement).

<!--T:120-->
* Data Processing (Sahara)
** Unified job interface map. This is a human readable method for passing in configuration data that a job may require or accept (https://blueprints.launchpad.net/horizon/+spec/unified-job-interface-map-ui).
** Added editing capabilities for job binaries (https://blueprints.launchpad.net/horizon/+spec/allow-editing-of-job-binaries).
** Added editing capabilities for data sources (https://blueprints.launchpad.net/horizon/+spec/allow-editing-of-data-sources).
** Added editing capabilities for job templates (https://blueprints.launchpad.net/horizon/+spec/data-processing-edit-templates).
** Exposed event log for clusters (https://blueprints.launchpad.net/horizon/+spec/sahara-event-log).
** Added support for shell job types (https://blueprints.launchpad.net/horizon/+spec/sahara-shell-action-form).

<!--T:121-->
* Databases (Trove)
** Added initial support for database cluster creation and management. Vertica and MongoDB are currently supported (https://blueprints.launchpad.net/horizon/+spec/database-clustering-support).

<!--T:122-->
* Identity (Keystone)
** Added mapping for Identity Provider and Protocol specific WebSSO (https://github.com/openstack/horizon/commit/3b4021c0ad0e8d7b10aa8c2dcd8c13a5717c450c).
** Configurable token hashing (https://github.com/openstack/django_openstack_auth/commit/ece924a79d27ede1a8475d7f98e6d66bc3cffd6c and https://github.com/openstack/horizon/commit/48e651d05cbe9366884868c5331d49a501945adc).

<!--T:123-->
* Horizon (internal improvements)
** Full support for translation in AngularJS, along with simpler tooling (https://blueprints.launchpad.net/horizon/+spec/angular-translate-makemessages).
** Added Karma for JavaScript testing (https://blueprints.launchpad.net/horizon/+spec/karma).
** Added ESLint for JavaScript linting, using the eslint-config-openstack rules (https://blueprints.launchpad.net/horizon/+spec/jscs-cleanup).
** Horizon now supports overriding of existing Django templates (https://blueprints.launchpad.net/horizon/+spec/horizon-theme-templates).
** JavaScript files are now automatically included (https://blueprints.launchpad.net/horizon/+spec/auto-js-file-finding).

=== Upgrade Notes === <!--T:124-->

<!--T:125-->
* Django 1.8 is now supported, and Django 1.7 is our minimum supported version (https://blueprints.launchpad.net/horizon/+spec/drop-django14-support).
* Database-backed sessions will likely not persist across upgrades due to a change in their structure (https://github.com/openstack/django_openstack_auth/commit/8c64de92f4148d85704b10ea1f7bc441db2ddfee and https://github.com/openstack/horizon/commit/ee2771ab1a855342089abe5206fc6a5071a6d99e).
* Horizon no longer uses QUnit in testing, and it has been removed from our requirements (https://blueprints.launchpad.net/horizon/+spec/replace-qunit-tests-with-jasmine).
* Horizon now has multiple configuration options for the default web URL (<code>WEBROOT</code>), static file location (<code>STATIC_ROOT</code>) and static file URL (<code>STATIC_URL</code>) in its settings files (an example settings fragment follows this list).
* Themes have moved location from <code>openstack_dashboard/static/themes</code> to <code>openstack_dashboard/themes</code>. Paths may need to be updated accordingly. Furthermore, Horizon is aligning closer with Bootstrap markup, and themes should be built around this ideology; see the top bar and side navigation for details.
* The deprecated <code>OPENSTACK_QUANTUM_NETWORK</code> configuration option has been removed. If you still use it, replace it with <code>OPENSTACK_NEUTRON_NETWORK</code>.
* There is now an <code>OPENSTACK_NOVA_EXTENSIONS_BLACKLIST</code> option in the settings, to disable selected extensions for performance reasons (https://github.com/openstack/horizon/commit/18f4b752b8653c9389f8b0471eccaa0659707ebe).
* Trove and Sahara panels now reside in <code>openstack_dashboard/contrib</code>. This is to provide separation for reviews provided mostly by the service teams. In the future, these panels may become plugins rather than being kept in Horizon (https://blueprints.launchpad.net/horizon/+spec/plugin-sanity).
* Horizon requires both a <code>volume</code> and <code>volumev2</code> endpoint for Cinder, even if only using v2.
* Many JavaScript files, and most notably the base page template (<code>horizon/templates/base.html</code>), have moved from the framework portion of the repo (<code>horizon</code>) to the application side (<code>openstack_dashboard</code>) to better separate the framework from the application.
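The sketch below shows how the settings mentioned above might look in Horizon's <code>local_settings.py</code> (a Django settings file, so plain Python). The values are placeholders chosen for illustration, not recommendations.

<pre>
# Example local_settings.py fragment for the options mentioned in the
# upgrade notes above. All values are illustrative placeholders.

# Serve the dashboard from a non-root URL with matching static locations.
WEBROOT = '/dashboard/'
STATIC_ROOT = '/var/www/horizon/static'
STATIC_URL = '/dashboard/static/'

# Replacement for the removed OPENSTACK_QUANTUM_NETWORK option.
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
}

# Skip selected Nova API extensions for performance reasons.
OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = ['SimpleTenantUsage']
</pre>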
  
== OpenStack Trove (DBaaS) == <!--T:126-->

=== Key New Features === <!--T:127-->

<!--T:128-->
* Redis
** Configuration Groups for Redis
* Ability to deploy Trove instances in a single admin tenant, so that the nova instances are hidden from the user
  
== OpenStack Bare metal (Ironic) == <!--T:129-->

<!--T:130-->
Ironic has switched to an [http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/release_cycle-with-intermediary.rst intermediate release model] and released version 4.0 during Liberty, followed by two minor updates. Version 4.2 forms the basis for the OpenStack Integrated Liberty release and will receive stable updates.

<!--T:131-->
Please see full release notes here: http://docs.openstack.org/developer/ironic/releasenotes/index.html

=== New Features === <!--T:132-->

<!--T:133-->
* Added "ENROLL" hardware state, which is the default state for newly created nodes.
* Added "abort" verb, which allows a user to interrupt certain operations while they are in progress.
* Improved query and filtering support in the REST API.
* Added support for CORS middleware.

==== Hardware Drivers ==== <!--T:134-->

<!--T:135-->
* Added a new BootInterface for hardware drivers, which splits functionality out of the DeployInterface.
* iLO virtual media drivers can work without Swift.
* Added Cisco IMC driver.
* Added OCS Driver.
* Added UCS Driver.
* Added Wake-On-Lan Power Driver.
* ipmitool driver supports IPMI v1.5.
* Added support to SNMP driver for “APC MasterSwitchPlus” series PDUs.
* pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO driver only supported this for agent_ilo and iscsi_ilo).
* Added Virtual Media support to iRMC Driver.
* Added BIOS configuration to DRAC Driver.
* PXE drivers now support GRUB2.

=== Deprecated Features === <!--T:136-->

<!--T:137-->
* The "vendor_passthru" and "driver_vendor_passthru" methods of the DriverInterface have been removed. These were deprecated in Kilo and replaced with the @passthru decorator.
* The migration tools to import data from a Nova "baremetal" deployment have been removed.
* Deprecated the "parallel" option to periodic task decorator.
* Removed deprecated "admin_api" policy rule.
* Support for the original "bash" deploy ramdisk is deprecated and will be removed in two cycles. The ironic-python-agent project should be used for all deploy drivers.

=== Upgrade Notes === <!--T:138-->

<!--T:139-->
* Newly created nodes default to the new ENROLL state. Previously, nodes defaulted to AVAILABLE, which could lead to hardware being exposed prematurely to Nova.
* The addition of API version headers in Kilo means that any client wishing to interact with the Liberty API must pass the appropriate version string in each HTTP request. Current API version is 1.14.
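The version string is passed via a request header. Below is a minimal sketch using plain HTTP with the <code>requests</code> library; the endpoint and token are assumptions, while 1.14 is the Liberty API version noted above.

<pre>
# Minimal sketch of sending the Ironic API microversion on every request.
# The endpoint and token are placeholders.
import requests

IRONIC_URL = "http://controller:6385"   # assumed bare metal API endpoint
TOKEN = "<keystone-token>"

resp = requests.get(
    IRONIC_URL + "/v1/nodes",
    headers={
        "X-Auth-Token": TOKEN,
        "X-OpenStack-Ironic-API-Version": "1.14",  # request Liberty-era API behaviour
    },
)
print(resp.status_code, resp.json())
</pre>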
  
== OpenStack Key Manager (Barbican) == <!--T:140-->

=== New Features === <!--T:141-->

<!--T:142-->
* Added the ability for project administrators to create certificate authorities per project. Also, project administrators are able to define and manage a set of preferred certificate authorities (CAs) per project. This allows projects to achieve project specific security domains.
* Barbican now has per project quota support for limiting the number of Barbican resources that can be created under a project. By default the quota is set to unlimited and can be overridden in Barbican configuration.
* Support for a rotating master key which is used for wrapping project level keys. In this lightweight approach, only the project level key (KEK) is re-wrapped with the new master key (MKEK). This is currently applicable only for the PKCS11 plug-in. (http://specs.openstack.org/openstack/barbican-specs/specs/liberty/add-crypto-mkek-rotation-support-lightweight.html)
* Updated Barbican's root resource to return version information matching Keystone, Nova and Manila format. This is used by keystoneclient's versioned endpoint discovery feature.
* Removed administrator endpoint as all operations are available on a regular endpoint. No separate endpoint is needed as access restrictions are enforced via Oslo policy.
* Added configuration for enabling sqlalchemy pool for the management of SQL connections.
* Added ability to list secrets which are accessible via ACL using a GET /v1/secrets?acl-only=true request (see the sketch after this list).
* Improved functional test coverage around Barbican APIs related to ACL operations, RBAC policy and secrets.
* Fixed issues around creation of SnakeOil CA plug-in instance.
* Barbican client CLI can now take a Keystone token for authentication. Earlier only username and password based authentication was supported.
* Barbican client now has the ability to create and list certificate orders.
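A minimal sketch of the ACL-only secret listing mentioned above, using plain HTTP with the <code>requests</code> library. The endpoint and token are placeholders.

<pre>
# Sketch of listing only the secrets shared with the caller via ACL,
# i.e. the GET /v1/secrets?acl-only=true request described above.
import requests

BARBICAN_URL = "http://controller:9311"   # assumed key manager endpoint
TOKEN = "<keystone-token>"

resp = requests.get(
    BARBICAN_URL + "/v1/secrets",
    params={"acl-only": "true"},
    headers={"X-Auth-Token": TOKEN},
)
for secret in resp.json().get("secrets", []):
    print(secret.get("name"), secret.get("secret_ref"))
</pre>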
  
=== Upgrade Notes === <!--T:143-->

<!--T:144-->
* Removed project secret association table. Secret project relationship is maintained by foreign key. For more detail, see http://specs.openstack.org/openstack/barbican-specs/specs/liberty/data-remove-tenant-secret-assoc.html .
* Renamed barbican configuration file to <code>barbican.conf</code>.
  
== OpenStack Image Service (Glance) == <!--T:145-->

<!--T:146-->
Updated project guide that includes some details on operating, installing, configuring, developing for, and using the service: http://docs.openstack.org/developer/glance/

=== Key New Features === <!--T:147-->

<!--T:148-->
* Added support for uploading signed images. For more information, see http://specs.openstack.org/openstack/glance-specs/specs/liberty/image-signing-and-verification-support.html .
* Scrubbing of images in parallel is now possible. For more information, see http://specs.openstack.org/openstack/glance-specs/specs/liberty/scrub-images-in-parallel.html .
* The health of a Glance node can be monitored using the healthcheck middleware. For more information, see http://specs.openstack.org/openstack/glance-specs/specs/liberty/healtcheck-middleware.html .
* The EXPERIMENTAL Artifacts API is now available for use. Please note, it is subject to change in the future until it becomes a standard API.
* S3 store now has proxy support. For more information, see http://specs.openstack.org/openstack/glance-specs/specs/liberty/http-proxy-support-for-s3.html .
* Swift store now has v3 authentication support.
* python-glanceclient now supports some advanced aspects of keystone sessions (see the example after this list).
* python-glanceclient now supports tags for Metadata Definition Catalog.
* Uploading and downloading is now supported in the Glance Cinder back end; specifically, the 'get', 'add', and 'delete' methods for cinder storage volumes enable users to upload and download images to and from volumes.
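A sketch of using python-glanceclient with a keystoneauth session, as mentioned in the client bullets above. The credentials, URLs and domain ids are placeholders.

<pre>
# Sketch of python-glanceclient with a keystoneauth session. All credentials
# and URLs are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from glanceclient import Client

auth = v3.Password(
    auth_url="http://controller:5000/v3",
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_id="default",
    project_domain_id="default",
)
sess = session.Session(auth=auth)

glance = Client("2", session=sess)   # the client now defaults to the v2 API
for image in glance.images.list():
    print(image.id, image.name)
</pre>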
 
  
=== Upgrade Notes === <!--T:149-->

<!--T:150-->
* python-glanceclient now defaults to using Glance API v2 and if v2 is unavailable, it will fall back to v1.
* Dependencies for backend stores are now optionally installed corresponding to each store specified.
* Some stores like swift, s3, vmware now have python 3 support.
* Some new as well as updated default metadata definitions ship with the source code.
* More python 3 support added to Glance API, and now continuous support is extended by the means of tests to ensure compatibility.
* utf-8 is now the default charset for the backend MySQL DB.
* Migration scripts have been updated to perform a sanity check for the table charset.
* 'ram_disk' and 'kernel' properties can now be null in the schema and 'id' is now a read-only attribute for the v2 API.
* A configuration option <code>client_socket_timeout</code> has been added to take advantage of the recent eventlet socket timeout behaviour.
* A configuration option <code>scrub_pool_size</code> has been added to set the number of parallel threads that a scrubber should run; it defaults to 1.
* An important bug that allowed changing the image status using the Glance v1 API has now been fixed.
* The Glance /versions endpoint now returns an HTTP 200 code, whereas it used to return 300.

=== Deprecated Features === <!--T:151-->

<!--T:152-->
* The experimental Catalog Index Service has been removed and is now a separate project called Searchlight.
* The configuration options <code>scrubber_datadir</code>, <code>cleanup_scrubber</code> and <code>cleanup_scrubber_time</code> have been removed following the removal of the file backed queuing for scrubber.
== OpenStack Shared File System (Manila) == <!--T:153-->

=== New Features === <!--T:154-->

<!--T:155-->
* Enabled support for availability zones.
* Added administrator API components to share instances.
* Added pool weigher which allows Manila scheduler to place new shares on pools with existing share servers.
* Support for share migration from one host pool to another (experimental).
* Added share extend capability in the generic driver.
* Support for adding consistency groups, which allow snapshots for multiple filesystem shares to be created at the same point in time (experimental).
* Support for consistency groups in the NetApp cDOT driver and generic driver.
* Support for oversubscription in thin provisioning.
* New Windows SMB driver:
** Support for handling Windows service instances and exporting SMB shares.
* Added new <code>osapi_share_workers</code> configuration option to improve the total throughput of the Manila API service.
* Added share hooks feature, which allows actions to be performed before and after share driver method calls, calls an additional periodic hook every 'N' ticks, and updates the results of a driver's action.
* Improvements to the NetApp cDOT driver:
** Added variables netapp:dedup and netapp:compression when creating the flexvol that backs a new Manila share.
** Added manage/unmanage support and shrink_share support.
** Support for <code>extended_share</code> API component.
** Support for netapp-lib PyPI project to communicate with storage arrays.
* Improvements to the HP 3PAR driver:
** Added reporting of dedupe, thin provisioning and hp3par_flash_cache capabilities. This allows share types and the CapabilitiesFilter to place shares on hosts with the requested capabilities.
** Added share server support.
* Improvements to the Huawei Manila driver:
** Added support for storage pools, extend_share, manage_existing, shrink_share, read-only share, smartcache and smartpartition.
** Added reporting of dedupe, thin provisioning and compression capabilities.
* Added access-level support to the VNX Manila driver.
* Added support for the Manila HDS HNAS driver.
* Added GlusterFS native driver.
** GlusterFS drivers can now specify the list of compatible share layouts.
* Added microversion support (v2 API).

=== Deprecated Features === <!--T:156-->

<!--T:157-->
* The <code>share_reset_status</code> API component is deprecated and replaced by <code>share_instance_reset_status</code>.
 
  
  
 
</translate>

Latest revision as of 16:04, 25 May 2016


OpenStack Liberty Release Notes


OpenStack Object Storage (Swift)

Please see full release notes at https://github.com/openstack/swift/blob/master/CHANGELOG

New Features

  • Allow 1+ object-servers-per-disk deployment enabled by a new > 0 integer config value, "servers_per_port" in the [DEFAULT] config section for object-server and/or replication server configurations. The setting's integer value determines how many different object-server workers handle requests for any single unique local port in the ring. In this mode, the parent swift-object-server process continues to run as the original user (i.e. root if low-port binding is required). It binds to all ports as defined in the ring. It then forks off the specified number of workers per listen socket. The child, per-port servers, drops privileges and behaves pretty much how object-server workers always have with one exception: the ring has unique ports per disk, the object-servers will only handle requests for a single disk. The parent process detects dead servers and restarts them (with the correct listen socket). It starts missing servers when an updated ring file is found with a device on the server with a new port, and kills extraneous servers when their port is no longer found in the ring. The ring files are started at most on the schedule configured in the object-server configuration by every the "ring_check_interval" parameter (same default of 15s). In testing, this deployment configuration (with a value of 3) lowers request latency, improves requests per second, and isolates slow disk IO as compared to the existing "workers" setting. To use this, each device must be added to the ring using a different port.
  • The object server includes a "container_update_timeout" setting (with a default of 1 second). This value is the number of seconds that the object server will wait for the container server to update the listing before returning the status of the object PUT operation. Previously, the object server would wait up to 3 seconds for the container server response. The new behavior dramatically lowers object PUT latency when container servers in the cluster are busy (e.g. when the container is very large). Setting the value too low may result in a client PUT'ing an object and not being able to immediately find it in listings. Setting it too high will increase latency for clients when container servers are busy.
  • Added the ability to specify ranges for Static Large Object (SLO) segments.
  • Allow SLO PUTs to forgo per-segment integrity checks. Previously, each segment referenced in the manifest also needed the correct etag and bytes setting. These fields now allow the "null" value to skip those particular checks on the given segment.
  • Replicator configurations now support an "rsync_module" value to allow for per-device rsync modules. This setting gives operators the ability to fine-tune replication traffic in a Swift cluster and isolate replication disk IO to a particular device. Please see the docs and sample config files for more information and examples.
  • Ring changes
    • Partition placement no longer uses the port number to place partitions. This improves dispersion in small clusters running one object server per drive, and it does not affect dispersion in clusters running one object server per server.
    • Added ring-builder-analyzer tool to more easily test and analyze a series of ring management operations.
    • Ring validation now warns if a placement partition gets assigned to the same device multiple times. This happens when devices in the ring are unbalanced (e.g. two servers where one server has significantly more available capacity).
  • TempURL fixes (closes CVE-2015-5223)

    Do not allow PUT tempurls to create pointers to other data. Specifically, disallow the creation of DLO object manifests via a PUT tempurl. This prevents discoverability attacks which can use any PUT tempurl to probe for private data by creating a DLO object manifest and then using the PUT tempurl to head the object.

  • Swift now emits StatsD metrics on a per-policy basis.
  • Fixed an issue with Keystone integration where a COPY request to a service account may have succeeded even if a service token was not included in the request.
  • Bulk upload now treats user xattrs on files in the given archive as object metadata on the resulting created objects.
  • Emit warning log in object replicator if "handoffs_first" or "handoff_delete" is set.
  • Enable object replicator's failure count in swift-recon.
  • Added storage policy support to dispersion tools.
  • Support keystone v3 domains in swift-dispersion.
  • Added domain_remap information to the /info endpoint.
  • Added support for a "default_reseller_prefix" in domain_remap middleware config.
  • Allow rsync to use compression via a "rsync_compress" config. If set to true, compression is only enabled for an rsync to a device in a different region. In some cases, this can speed up cross-region replication data transfer.
  • Added time synchronization check in swift-recon (the --time option).
  • The account reaper now runs faster on large accounts.
  • Various other minor bug fixes and improvements.

Upgrade Notes

  • Dependency changes
    • Added six requirement. This is part of an ongoing effort to add support for Python 3.
    • Dropped support for Python 2.6.
  • Config changes
    • Recent versions of Python restrict the number of headers allowed in a request to 100. This number may be too low for custom middleware. The new "extra_header_count" config value in swift.conf can be used to increase the number of headers allowed.
    • Renamed "run_pause" setting to "interval" (current configs with run_pause still work). Future versions of Swift may remove the run_pause setting.
  • The versioned writes feature has been refactored and reimplemented as middleware. You should explicitly add the versioned_writes middleware to your proxy pipeline, but do not remove or disable the existing container server config setting ("allow_versions"), if it is currently enabled. The existing container server config setting enables existing containers to continue being versioned. Please see http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster for further upgrade notes.

OpenStack Networking (Neutron)

New Features

  • Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the OpenStack Networking Guide.
  • Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [1].
  • Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [2].
  • VPNaaS reference drivers now work with HA routers.
  • Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [3].
  • The OVS agent may now be restarted without affecting data plane connectivity.
  • Neutron now offers role base access control (RBAC) for networks [4].
  • LBaaS V2 reference driver is now based on Octavia, an operator grade scalable, reliable Load Balancer platform.
  • LBaaS V2 API is no longer experimental. It is now stable.
  • Neutron now provides a way for admins to manually schedule agents, allowing host resources to be tested before they are enabled for tenant use [5].
  • Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.

Deprecated and Removed Plugins and Drivers

  • The metaplugin is removed in the Liberty release.
  • The IBM SDN-VE monolithic plugin is removed in the Liberty release.
  • The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).
  • The Embrane plugin is deprecated and will be removed in the Mitaka release.

Deprecated Features

  • The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API, which the team is in the process of developing.
  • The LBaaS V1 API is marked as deprecated and is planned to be removed in a future release. Going forward, the LBaaS V2 API should be used.
  • The 'external_network_bridge' option for the L3 agent has been deprecated in favor of a bridge_mapping with a physnet. For more information, see the "Network Node" section of this scenario in the networking guide: http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html

Performance Considerations

  • The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases. In cases where scale is important, a later kernel version (e.g. 3.19) should be used.


Note: This regression should be fixed in Trusty Tahr kernels 3.13.0-36.63 and later. For further reference see: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1328088
  • Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator instead of the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes, or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html.

OpenStack Compute (Nova)

New Features

API

Scheduler

Architectural evolution on the scheduler has continued, along with key bug fixes:

Cells v2

Cells v2 is not currently in a usable state, but we have added some more supporting infrastructure:

Compute Driver Features

Libvirt
VMware
Hyper-V
Ironic

Other Features

Upgrade Notes

  • If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.
  • Allocation ratios for RAM and CPU are now defined within the nova-compute service (so per compute node). Ratios also need to be provided for the scheduler service. Depending on whether a compute node is running Kilo or Liberty, the allocation ratios will behave differently: if the compute node is running Kilo, then the CPU and RAM allocation ratios for that compute node will be the ones defaulted in the controller's nova.conf file. If the compute node is Liberty, then you'll be able to set a per-compute allocation ratio for both CPU and RAM. In order to leave it to the operator to provide the allocation ratios for all the compute nodes, the default allocation ratio will be set in nova.conf to 0.0 (even for the controller). That doesn't mean that allocation ratios will actually be 0.0, just that the operator needs to provide those before the next release (i.e. Mitaka). To be clear, the default allocation ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.
  • nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/
  • Rootwrap filters must be updated after release to add the 'touch' command.
    • There is a race condition between imagebackend and imagecache mentioned in the Launchpad Bug 1256838.
    • In this case if base image is deleted by ImageCacheManager while imagebackend is copying the image to instance path, then the instance goes in to error state.
    • In order to resolve this issue, there is a need to add 'touch' command in compute.filters along with the change https://review.openstack.org/#/c/217579/.
    • In case of a race condition, when libvirt has changed the base file ownership to libvirt-qemu while imagebackend is copying the image, then we get permission denied error on updating the file access time using os.utime. To resolve this error we need to update the base file access time with root user privileges using 'touch' command.
  • The DiskFilter is now part of the scheduler_default_filters in Liberty per https://review.openstack.org/#/c/207942/ .
  • Per https://review.openstack.org/#/c/103916/ you can now only map one vCenter cluster to a single nova-compute node.
  • The Libvirt driver "parallels" has been renamed to "virtuozzo".
  • Orphaned tables - iscsi_targets, volumes - have been removed.
  • The default paste.ini has been updated to use the new v2.1 API for all endpoints, and the v3 endpoint has been removed. A compatibility mode middleware is used to relax the v2.1 validation for the /v2 and /v1.1 endpoints.
  • The code for DB schema downgrades has now been removed: https://blueprints.launchpad.net/nova/+spec/nova-no-downward-sql-migration
  • The default DB driver we test against is now pymysql rather than MySQL-Python.
  • The "powervm" hv_type shim has been removed. This only affects users of the PowerVC driver on stackforge which are using older images with hv_type=powervm in the image metadata.
  • The minimum required version of libvirt in the Mitaka release will be 0.10.2. Support for libvirt < 0.10.2 is deprecated in Liberty: https://review.openstack.org/#/c/183220/
  • The libvirt.remove_unused_kernels config option is deprecated for removal and now defaults to True: https://review.openstack.org/#/c/182315/
  • Setting force_config_drive=always in nova.conf is deprecated, use True/False boolean values instead: https://review.openstack.org/#/c/156153/

Deprecated Features

  • The ability to disable in tree API extensions has been deprecated (https://blueprints.launchpad.net/nova/+spec/nova-api-deprecate-extensions)
  • The novaclient.v1_1 module has been deprecated since 2.21.0 and we are going to remove it in the first python-novaclient release in Mitaka.
  • Method `novaclient.client.get_client_class` is deprecated since 2.29.0. The method will be removed in Mitaka.
  • The mute_weight_value option on weighers has been deprecated, including for use with Cells.
  • The remove_unused_kernels configuration option for the Libvirt driver is now deprecated.
  • The minimum recommended version of vCenter for use with the vcenter driver is now 5.1.0. In Liberty this is logged as a warning, in Mitaka support for versions lower than 5.1.0 will be removed.
  • API v3 specific components have all been deprecated and removed from the default paste.ini

OpenStack Telemetry (Ceilometer)

Key New Features

  • Creation of Aodh to handle alarming service.
  • Metadata caching - reduced load of nova API polling.
  • Declarative meters
    • Ability to generate meters by defining meter definition template.
    • Ability to define specific SNMP meters to poll.
  • Support for data publishing from Ceilometer to Gnocchi.
  • Mandatory limit - limit restricted querying is enforced. The limit must be explicitly provided on queries, otherwise the result set is restricted to a default limit.
  • Distributed, coordinated notification agents - support for workload partitioning across multiple notification agents.
  • Events RBAC support.
  • PowerVM hypervisor support.
  • Improved MongoDB query support - performance improvement to statistic calculations.
  • Additional meter support:
    • Magnum meters
    • DBaaS meters
    • DNSaaS meters

Gnocchi Features

  • Initial influxdb driver implemented.

Aodh Features

  • Event alarms - ability to trigger an action when an event is received.
  • Trust support in alarms link.

Upgrade Notes

  • The name of some middleware used by ceilometer changed in a backward incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change oslo.middleware to oslo_middleware. For example, using sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini (a Python equivalent is sketched after this list).
  • The notification agent is a core service to collecting data in Ceilometer. It now handles all transformations and publishing. Polling agents now defer all processing to notification agents, and must be deployed in tandem.
  • A mandatory limit is applied to each request. If no limit is given, it will be restricted to a default limit.
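For operators who prefer Python over sed, the snippet below performs the same oslo.middleware rename in Ceilometer's paste configuration. The file path is an assumption; adjust it for your deployment.

    # Equivalent of the sed one-liner above: rewrite "oslo.middleware" to
    # "oslo_middleware" in Ceilometer's paste configuration.
    path = "/etc/ceilometer/api_paste.ini"   # adjust for your deployment

    with open(path) as f:
        contents = f.read()

    with open(path, "w") as f:
        f.write(contents.replace("oslo.middleware", "oslo_middleware"))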

Deprecated Features

  • Ceilometer Alarms is deprecated in favour of Aodh.
  • RPC publisher and collector is deprecated in favour of a topic based notifier publisher.
  • Non-metric meters are still deprecated, and are to be removed in a future release.

OpenStack Identity (Keystone)

Key New Features

  • Experimental: Domain specific configuration options can be stored in SQL instead of configuration files, using the new REST APIs.
  • Experimental: Keystone now supports tokenless authorization with X.509 SSL client certificate.
  • Configuring per-Identity Provider WebSSO is now supported.
  • openstack_user_domain and openstack_project_domain attributes were added to SAML assertion in order to map user and project domains, respectively.
  • The credentials list call can now have its results filtered by credential type.
  • Support was improved for out-of-tree drivers by defining stable Driver Interfaces.
  • Several features were hardened, including Fernet tokens, Federation, domain specific configurations from database and role assignments.
  • Certain variables in keystone.conf now have options, which determine if the user's setting is valid.

Upgrade Notes

  • The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It has been moved to the keystonemiddleware package.
  • The compute_port configuration option, deprecated in Juno, is no longer available.
  • The XML middleware stub has been removed, so references to it must be removed from the keystone-paste.ini configuration file.
  • stats_monitoring and stats_reporting paste filters have been removed, so references to it must be removed from the keystone-paste.ini configuration file.
  • The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are no longer available.
  • keystone.conf now references entrypoint names for drivers. For example, the drivers are now specified as "sql", "ldap", "uuid", rather than the full module path. See the sample configuration file for other examples.
  • We now expose entrypoints for the keystone-manage command instead of a file.
  • Schema downgrades via keystone-manage db_sync are no longer supported. Only upgrades are supported.
  • Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.
  • A new secure_proxy_ssl_header configuration option is available when running keystone behind a proxy.
  • Several configuration options have been deprecated, renamed, or moved to new sections in the keystone.conf file.
  • Domain name information can now be used in policy rules with the attribute domain_name.

Deprecated Features

  • Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release.
  • Using LDAP as the resource backend, i.e. for projects and domains, is now deprecated and will be removed in the Mitaka release.
  • Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.
  • In the [resource] and [role] sections of the keystone.conf file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the SQL driver.
  • In keystone-paste.ini, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.
  • Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.
  • Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.

OpenStack Block Storage (Cinder)

Key New Features

  • A generic image caching solution, so popular VM images can be cached and copied-on-write to a new volume. Read docs for more info
  • Non-disruptive backups Read docs for more info.
  • Ability to clone consistency groups of volumes Read docs for more info.
  • List capabilities of a volume backend (fetch extra-specs).
  • Nested quotas.
  • Default LVM backends to be thin provisioned if available.
  • Corrected cinder service-list to show as Down when a driver fails to initialize.
  • Improved volume migration management:
    • Able to see if previous migration attempt was successful
    • Admins able to monitor migrations via cinder list
    • New volume status of 'maintenance' to prevent operations being attempted while migration is occurring
    • Improve backend volume name/id consistency after migration completes

Upgrade Notes

  • A change in parameters to RPC APIs and work on object conversion prevent running Liberty c-vol or c-api services with Kilo or earlier versions of either service.

Deprecated Features

  • Removed Simple and Chance Schedulers.
  • Removed deprecated HDS HUS iSCSI driver.
  • Removed Coraid driver.
  • Removed Solaris iSCSI driver.
  • Removed --force option for allowing upload of image to attached volume.
  • Marked the v1 API as deprecated.

OpenStack Orchestration (Heat)

New Features

Convergence

Convergence is a new orchestration engine maturing in the heat tree. In Liberty, the benefits of using the convergence engine are:

  • Greater parallelization of resource actions (for better scaling of large templates)
  • The ability to do a stack-update while there is already an update in-progress
  • Better handling of heat-engine failures (still WIP)

The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.
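
For example, a minimal configuration change (the option comes from the paragraph above; the restart command and service name will vary by distribution):

    # /etc/heat/heat.conf
    [DEFAULT]
    convergence_engine = true

    # then restart the engine; the service name depends on your packaging, e.g.
    #   systemctl restart openstack-heat-engine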

Convergence has not been production tested and thus should be considered beta quality - use with caution. For the Liberty release, we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the convergence-bugs tag.

Conditional resource exposure

Only resources whose backing services are actually installed in the cloud are made available to users. Operators can further control the resources available to users with standard policy rules in policy.json, on a per-resource-type basis.

heat_template_version: 2015-10-15

2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release.

  • Removes the Fn::Select function (path based get_attr/get_param references should be used instead).
  • If no <attribute name> is specified for calls to get_attr, a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}.
  • Adds the new str_split intrinsic function.
  • Adds support for passing multiple lists to the existing list_join function (both are illustrated after this list).
  • Adds support for passing map/list data to str_replace and list_join (it will be JSON-serialized automatically).
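
The snippet below is a minimal, illustrative HOT fragment exercising str_split and the multi-list form of list_join; consult the HOT specification for the authoritative signatures:

    heat_template_version: 2015-10-15

    parameters:
      csv_input:
        type: string
        default: "a,b,c"

    outputs:
      first_item:
        # optional trailing index selects a single element from the split list
        value: { str_split: [ ',', { get_param: csv_input }, 0 ] }
      joined:
        # list_join now accepts more than one list after the delimiter
        value: { list_join: [ ',', [ 'x', 'y' ], [ 'z' ] ] }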

REST API/heatclient additions

  • Stacks can now be assigned a set of tags, and stack-list can filter and sort by those tags
  • "heat stack-preview ..." will return a preview of changes for a proposed stack-update
  • "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces
  • "heat resource-type-template --template-type hot ..." generates a template in HOT format
  • "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status
  • "heat template-version-list" lists available template versions
  • "heat template-function-list ..." lists available functions for a template version

Enhancements to existing resources

New resources

The following new resources are now distributed with the Heat release:

[1] These existed in Kilo as contrib resources because they were for non-integrated projects. Now that those are Big Tent projects, the resources are distributed with Heat.

[2] These existed in Kilo as contrib resources because they require a user with an admin role. They are now distributed with Heat, and operators can hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).

[3] These existed in Kilo as contrib resources because they used an approach not endorsed by the Heat project. They are now distributed with Heat and documented as UNSUPPORTED.

[4] These resources are for projects which are not yet OpenStack Big Tent projects, so they are documented as UNSUPPORTED.

With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.

Deprecated Resource Properties

Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented, but existing stacks and templates will continue to work after a heat upgrade. The Resource Type Reference (http://docs.openstack.org/developer/heat/template_guide/openstack.html) should be consulted to determine available resource properties and attributes.

Upgrade notes

Configuration Changes

Notable changes to the /etc/heat/heat.conf [DEFAULT] section:

  • hidden_stack_tags has been added, and stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides sahara-created stacks)
  • instance_user was deprecated, and is now removed entirely. Nova servers created with the OS::Nova::Server resource will now boot configured with the default user set up in the cloud image. AWS::EC2::Instance still creates "ec2-user".
  • max_resources_per_stack can now be set to -1 to disable enforcement
  • enable_cloud_watch_lite is now false by default as this REST API is deprecated
  • default_software_config_transport has gained the option ZAQAR_MESSAGE
  • default_deployment_signal_transport has gained the option ZAQAR_SIGNAL
  • auth_encryption_key is now documented as requiring exactly 32 characters
  • list_notifier_drivers was deprecated and is now removed
  • policy options have moved to the [oslo_policy] section
  • use_syslog_rfc_format is deprecated and now defaults to true

Notable changes to other sections of heat.conf (a combined sketch of these and the [DEFAULT] options above follows this list):

  • [clients_keystone] auth_uri has been added to specify the unversioned keystone url
  • [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)
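
Putting the options above together, a heat.conf sketch might look like the following; the values and the Keystone URL are illustrative placeholders, not recommendations:

    [DEFAULT]
    hidden_stack_tags = data-processing-cluster
    max_resources_per_stack = -1              # -1 disables enforcement
    default_software_config_transport = ZAQAR_MESSAGE
    default_deployment_signal_transport = ZAQAR_SIGNAL
    auth_encryption_key = <exactly 32 characters>

    [clients_keystone]
    # unversioned Keystone endpoint (placeholder URL)
    auth_uri = http://keystone.example.com:5000

    [heat_api]
    workers = 4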

The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:

   "resource_types:OS::Nova::Flavor": "rule:context_is_admin"

Upgrading from Kilo to Liberty

Progress has been made on supporting live sql migrations, however it is still recommended to bring down the heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported. A rollback to Kilo will require restoring a snapshot of the pre-upgrade database.

OpenStack Data Processing (Sahara)

Key New Features

  • New plugins and versions:
    • Ambari plugin with support for HDP 2.2 / 2.3
    • Apache Hadoop 2.7.1 was added, Apache Hadoop 2.6.0 was deprecated
    • CDH 5.4.0 was added with HA support for NameNode and ResourceManager
    • MapR 5.0.0 was added
    • Spark 1.3.1 was added, Spark 1.0.0 was deprecated
    • HDP 1.3.2 and Apache Hadoop 1.2.1 were removed
  • Added support for using Swift with Spark EDP jobs
  • Added support for Spark EDP jobs in CDH and Ambari plugins
  • Added support for public and protected resources
  • Started integration with OpenStack client
  • Added support for editing all Sahara resources
  • Added automatic Hadoop configuration for clusters
  • Direct engine is deprecated and will be removed in the Mitaka release
  • Added OpenStack manila NFS shares as a storage backend option for job binaries and data sources
  • Added support for definition and use of configuration interfaces for EDP job templates

Deprecated Features

  • Direct provisioning engine
  • Apache Hadoop 2.6.0
  • Spark 1.0.0
  • All Hadoop 1.X removed

OpenStack Search (Searchlight)

This is the first release of Searchlight. Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC-based searches across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable, and full-text search engine with a RESTful web interface.

Key New Features

New Resource Types Indexed

Upgrade Notes

N/A

Deprecated Features

N/A

OpenStack DNS (Designate)

Key New Features

  • Experimental: Hook Point API
  • Horizon Plugin moved out of tree
  • Purging deleted domains
  • Ceilometer "exists" periodic event per domain
  • Async actions
    • Import
    • Export
  • Active / passive failover for designate-pool-manager periodic tasks
  • OpenStack client integration

Additional DNS Server Backends

  • Infoblox
  • Designate

Upgrade Notes

  • New service designate-zone-manager
    • It is recommended to use a supported tooz backend; ZooKeeper is the recommended choice (see the configuration sketch after this list).
    • If a tooz backend is not used, all zone-managers will assume ownership of all zones, and there will be 'n' "exists" messages per hour, where 'n' is the number of zone-manager processes.
  • designate-pool-manager can do active/passive failover for periodic tasks.
    • It is recommended to use a supported tooz backend.
    • If a tooz backend is not used, all pool-managers will assume ownership of the pool, and multiple periodic tasks will run. This can result in unforeseen consequences.
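
A designate.conf sketch for pointing both designate-zone-manager and designate-pool-manager at a shared ZooKeeper cluster is shown below. The [coordination] section and backend_url option are an assumption based on the tooz integration described above, so verify them against the Designate configuration reference; the host name is a placeholder:

    [coordination]
    # tooz backend URL; ZooKeeper shown here as an example
    backend_url = zookeeper://zk.example.com:2181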

Deprecated Features

  • V1 API
    • This is an initial notice of intent: some operations still require the Designate CLI, which talks to the v1 API, and the Horizon panels only talk to v1.

OpenStack Messaging Service (Zaqar)

Key New Features

  • Pre-signed URL - A new REST API endpoint to support pre-signed URLs, which provide controlled access to a shared resource without compromising security.
  • Email Notification - A new task driver for the notification service, which can take a Zaqar subscriber's email address. When a new message is posted to the queue, the subscriber will receive it by email.
  • Policy Support - Support fine-grained permission control with the policy.json file like most of the other OpenStack components.
  • Persistent Transport - Added support for websocket as a persistent transport alternative for Zaqar. Now users will be able to establish long-lived connections between their applications and Zaqar to interchange large amounts of data without the connection setup adding overhead.

OpenStack Dashboard (Horizon)

Key New Features

  • Plugin improvements – Horizon auto-discovers JavaScript files for inclusion, and now has mechanisms for pluggable SCSS and Django template overrides.

Upgrade Notes

OpenStack Trove (DBaaS)

Key New Features

  • Redis
    • Configuration Groups for Redis
    • Cluster support
  • MongoDB
    • Backup and restore for a single instance
    • User and database management
    • Configuration Groups
  • Percona XtraDB Cluster Server
    • Cluster support
  • Allow deployer to associate instance flavors with specific datastores
  • Horizon support for database clusters
  • Management API for datastore and versions
  • Ability to deploy Trove instances in a single admin tenant, so that the nova instances are hidden from the user

OpenStack Bare metal (Ironic)

Ironic has switched to an intermediate release model and released version 4.0 during Liberty, followed by two minor updates. Version 4.2 forms the basis for the OpenStack Integrated Liberty release and will receive stable updates.

Please see full release notes here: http://docs.openstack.org/developer/ironic/releasenotes/index.html

New Features

  • Added "ENROLL" hardware state, which is the default state for newly created nodes.
  • Added "abort" verb, which allows a user to interrupt certain operations while they are in progress.
  • Improved query and filtering support in the REST API.
  • Added support for CORS middleware.

Hardware Drivers

  • Added a new BootInterface for hardware drivers, which splits functionality out of the DeployInterface.
  • iLO virtual media drivers can work without Swift.
  • Added Cisco IMC driver.
  • Added OCS Driver.
  • Added UCS Driver.
  • Added Wake-On-Lan Power Driver.
  • ipmitool driver supports IPMI v1.5.
  • Added support to the SNMP driver for "APC MasterSwitchPlus" series PDUs.
  • pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO driver only supported this for agent_ilo and iscsi_ilo).
  • Added Virtual Media support to iRMC Driver.
  • Added BIOS configuration to DRAC Driver.
  • PXE drivers now support GRUB2.

Deprecated Features

  • The "vendor_passthru" and "driver_vendor_passthru" methods of the DriverInterface have been removed. These were deprecated in Kilo and replaced with the @passthru decorator.
  • The migration tools to import data from a Nova "baremetal" deployment have been removed.
  • Deprecated the "parallel" option to the periodic task decorator.
  • Removed deprecated ‘admin_api’ policy rule.
  • Support for the original "bash" deploy ramdisk is deprecated and will be removed in two cycles. The ironic-python-agent project should be used for all deploy drivers.

Upgrade Notes

  • Newly created nodes default to the new ENROLL state. Previously, nodes defaulted to AVAILABLE, which could lead to hardware being exposed prematurely to Nova.
  • The addition of API version headers in Kilo means that any client wishing to interact with the Liberty API must pass the appropriate version string in each HTTP request; the current API version is 1.14 (see the example below).
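
For example, a raw request would pin the version with the X-OpenStack-Ironic-API-Version header; the endpoint URL and token below are placeholders:

    curl -H "X-OpenStack-Ironic-API-Version: 1.14" \
         -H "X-Auth-Token: $TOKEN" \
         http://ironic.example.com:6385/v1/nodes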

OpenStack Key Manager (Barbican)

New Features

  • Added the ability for project administrators to create certificate authorities per project. Project administrators are also able to define and manage a set of preferred certificate authorities (CAs) per project. This allows projects to achieve project-specific security domains.
  • Barbican now has per-project quota support for limiting the number of Barbican resources that can be created under a project. By default the quota is set to unlimited and can be overridden in the Barbican configuration (see the sketch after this list).
  • Support for a rotating master key which is used for wrapping project level keys. In this lightweight approach, only the project level key (KEK) is re-wrapped with new master key (MKEK). This is currently applicable only for the PKCS11 plug-in. (http://specs.openstack.org/openstack/barbican-specs/specs/liberty/add-crypto-mkek-rotation-support-lightweight.html)
  • Updated Barbican's root resource to return version information matching Keystone, Nova and Manila format. This is used by keystoneclient's versioned endpoint discovery feature.
  • Removed administrator endpoint as all operations are available on a regular endpoint. No separate endpoint is needed as access restrictions are enforced via Oslo policy.
  • Added configuration for enabling sqlalchemy pool for the management of SQL connections.
  • Added ability to list secrets which are accessible via ACL using GET /v1/secrets?acl-only=true request.
  • Improved functional test coverage around Barbican APIs related to ACL operations, RBAC policy and secrets.
  • Fixed issues around creation of SnakeOil CA plug-in instance.
  • The Barbican client CLI can now take a Keystone token for authentication. Previously, only username- and password-based authentication was supported.
  • Barbican client now has ability to create and list certificate orders.
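
As an illustration of the per-project quotas, a barbican.conf sketch might look like the following. The [quotas] option names are an assumption and should be checked against the Barbican configuration reference; the values are examples (-1 means unlimited, which is the default):

    [quotas]
    quota_secrets = 500
    quota_orders = 100
    quota_containers = 100
    quota_consumers = 100
    quota_cas = 5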

Upgrade Notes

OpenStack Image Service (Glance)

Updated project guide that includes details on operating, installing, configuring, developing for, and using the service: http://docs.openstack.org/developer/glance/

Key New Features

Upgrade Notes

  • python-glanceclient now defaults to using Glance API v2 and, if v2 is unavailable, it will fall back to v1.
  • Dependencies for backend stores are now optionally installed corresponding to each store specified.
  • Some stores, such as Swift, S3, and VMware, now have Python 3 support.
  • Some new as well as updated default metadata definitions ship with the source code.
  • More Python 3 support has been added to the Glance API, with tests in place to ensure continued compatibility.
  • utf-8 is now the default charset for the backend MySQL DB.
  • Migration scripts have been updated to perform a sanity check for the table charset.
  • 'ram_disk' and 'kernel' properties can now be null in the schema, and 'id' is now a read-only attribute for the v2 API.
  • A configuration option client_socket_timeout has been added to take advantage of the recent eventlet socket timeout behaviour.
  • A configuration option scrub_pool_size has been added to set the number of parallel threads that the scrubber should run; it defaults to 1 (see the sketch after this list).
  • An important bug that allowed the image status to be changed via the Glance v1 API has now been fixed.
  • The Glance /versions endpoint now returns an HTTP 200 code, whereas it used to return 300.
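
A minimal glance-api.conf sketch for the two new options mentioned above; the values are illustrative, not recommendations:

    [DEFAULT]
    # seconds of client inactivity before the socket is closed
    client_socket_timeout = 900
    # number of parallel scrubber threads (default is 1)
    scrub_pool_size = 2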

Deprecated Features

  • The experimental Catalog Index Service has been removed and now is a separate project called Searchlight.
  • The configuration options scrubber_datadir, cleanup_scrubber and cleanup_scrubber_time have been removed following the removal of the file backed queuing for scrubber.

OpenStack Shared File System (Manila)

New Features

  • Enabled support for availability zones.
  • Added administrator API components for share instances.
  • Added a pool weigher which allows the Manila scheduler to place new shares on pools with existing share servers.
  • Support for share migration from one host/pool to another (experimental).
  • Added share extend capability in the generic driver.
  • Support for adding consistency groups, which allow snapshots for multiple filesystem shares to be created at the same point in time (experimental).
  • Support for consistency groups in the NetApp cDOT driver and generic driver.
  • Support for oversubscription in thin provisioning.
  • New Windows SMB driver:
    • Support for handling Windows service instances and exporting SMB shares.
  • Added a new osapi_share_workers configuration option to improve the total throughput of the Manila API service (see the sketch after this list).
  • Added a share hooks feature, which allows actions to be performed before and after share driver method calls, an additional periodic hook to be called every 'N' ticks, and the results of a driver's action to be updated.
  • Improvements to the NetApp cDOT driver:
    • Added netapp:dedup and netapp:compression variables when creating the flexvol that backs a new Manila share.
    • Added manage/unmanage support and shrink_share support.
    • Support for extended_share API component.
    • Support for netapp-lib PyPI project to communicate with storage arrays.
  • Improvements to the HP 3PAR driver:
    • Added reporting of dedupe, thin provisioning and hp3par_flash_cache capabilities. This allows share types and the CapabilitiesFilter to place shares on hosts with the requested capabilities.
    • Added share server support.
  • Improvements to the Huawei Manila driver:
    • Added support for storage pools, extend_share, manage_existing, shrink_share, read-only share, smartcache and smartpartition.
    • Added reporting of dedupe, thin provisioning and compression capabilities.
  • Added access-level support to the VNX Manila driver.
  • Added support for the Manila HDS HNAS driver.
  • Added GlusterFS native driver.
    • GlusterFS drivers can now specify the list of compatible share layouts.
  • Added microversion support (v2 API).
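
A minimal manila.conf sketch for the new API worker option referenced above; the value is illustrative, with one worker per CPU core being a common starting point:

    [DEFAULT]
    # number of manila-api worker processes
    osapi_share_workers = 4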

Deprecated Features

  • The share_reset_status API component is deprecated and replaced by share_instance_reset_status.