[[Category:Release Note|Juno]]
[[Category:Releases]]
[[Category:Juno]]

= OpenStack 2014.2 (Juno) Release Notes =

<div style="column-count:3;-moz-column-count:3;-webkit-column-count:3">
__TOC__
</div>

== General Upgrade Notes ==

* The simplejson package is an optional requirement in most projects, so it is not listed in every project's requirements.txt file. However, if you are using it (for example, for better performance with Python 2.6 on RHEL 6), you will need simplejson >= 2.2.0. See https://bugs.launchpad.net/oslo-incubator/+bug/1361230 for details.

== OpenStack Object Storage (Swift) ==

=== Key New Features ===

The Juno integrated release includes three releases of OpenStack Swift: 2.0.0, 2.1.0, and 2.2.0. The changelog for these releases is available at https://github.com/openstack/swift/blob/2.2.0.rc1/CHANGELOG#L1-L173. Please refer to that document for release details.

Important new features are highlighted below. Please read the CHANGELOG and associated documentation.

* Storage policies
* Keystone v3 support
* Server-side account-to-account copy
* Better partition placement when adding a new server, zone, or region
* Zero-copy GET responses using splice()
* Parallel object auditor

=== Known Issues ===

* None at this time

=== Upgrade Notes ===

As always, you can upgrade your Swift cluster with no downtime for end users. Please refer to the sample config files and documentation before every release.

* There have been some logging changes that need to be called out. In all cases, well-behaved log processors will not be affected.
** Storage node (account, container, object) logs now have the PID logged at the end of the log line.
** Object daemons now send a user-agent string with their full name (e.g. "obj" is now "object").
* Once an additional storage policy has been enabled, downgrading to Swift pre-2.0.0 will cause any additional storage policies to become unavailable.
* As part of an effort to eventually move Swift's default ports to a non-IANA-assigned range, bind_port is now a required setting (see the sketch after this list). Anyone currently setting the ports explicitly will not be affected. However, if you do not currently set the ports, please ensure that your *_server.conf has bind_port set to match your ring as part of your upgrade.
* Note that storage policies include a new daemon, the container-reconciler.
* The TempURL default allowed methods config setting now also allows POST and DELETE, which means tempurls can be created for these verbs. It does not affect any existing tempurls.
* A list of all updated, deprecated or removed options in swift can be found at: http://docs.openstack.org/trunk/config-reference/content/swift-conf-changes-master.html
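
For operators who previously relied on the implicit defaults, a minimal sketch of setting the port explicitly in a storage server config; the value shown is the traditional object server default and must match the port recorded in your ring, so treat it as an example only:
 # /etc/swift/object-server.conf (repeat the idea for the container and account servers)
 [DEFAULT]
 bind_ip = 0.0.0.0
 bind_port = 6000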

== OpenStack Compute (Nova) ==

=== Instance features ===

* Allow users to specify an image to use for rescue instead of the original base image. [https://blueprints.launchpad.net/nova/+spec/allow-image-to-be-specified-during-rescue launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/allow-image-to-be-specified-during-rescue specification]
* Allow images to specify whether a config drive should be used. [https://blueprints.launchpad.net/nova/+spec/config-drive-image-property launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/config-drive-image-property specification]
* Give users and administrators the ability to control the vCPU topology exposed to guests via flavors (see the example after this list). [https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/virt-driver-vcpu-topology specification]
* Attach all local disks during rescue. [https://blueprints.launchpad.net/nova/+spec/rescue-attach-all-disks launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/rescue-attach-all-disks specification]
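
As an illustration of the vCPU topology control, a hypothetical flavor could expose a two-socket, four-core topology through extra specs; the flavor name is an example and the exact keys should be checked against the virt-driver-vcpu-topology specification:
 # hypothetical flavor used for illustration only
 nova flavor-key m1.xlarge set hw:cpu_sockets=2 hw:cpu_cores=4 hw:cpu_threads=1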

=== Networking ===

* Improve the nova-network code to allow per-network settings. [https://blueprints.launchpad.net/nova/+spec/better-support-for-multiple-networks launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/better-support-for-multiple-networks specification]
* Allow deployers to add hooks that are informed as soon as the networking information for an instance changes. [https://blueprints.launchpad.net/nova/+spec/instance-network-info-hook launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/instance-network-info-hook specification]
* Enable nova instances to be booted with SR-IOV neutron ports. [https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/pci-passthrough-sriov specification]
* Permit VMs to attach multiple interfaces to the same network. [https://blueprints.launchpad.net/nova/+spec/multiple-if-1-net launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/nfv-multiple-if-1-net specification]
* Preserve Neutron ports attached using "--nic port-id" when the instance is terminated (see the example after this list). [https://review.openstack.org/#/c/126309/ review]
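
For example, booting an instance on a pre-created Neutron port (so the port can outlive the instance) might look like the following sketch; the network, image, and flavor names and the port UUID are placeholders:
 # create a port on an existing network and note the returned port id
 neutron port-create private
 # boot an instance attached to that specific port
 nova boot --image cirros-0.3.3 --flavor m1.tiny --nic port-id=PORT_UUID test-vm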

=== Scheduling ===

* Extensible resource tracking. The set of resources tracked by nova was hard coded; this change makes it extensible, allowing plug-ins to track new types of resources for scheduling. [https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/extensible-resource-tracking specification]
* Allow a host to be evacuated with the scheduler selecting destination hosts for the instances being moved. [https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/find-host-and-evacuate-instance specification]
* Add support for host aggregates to scheduler filters (see the example after this list). launchpad: [https://blueprints.launchpad.net/nova/+spec/per-aggregate-disk-allocation-ratio disk]; [https://blueprints.launchpad.net/nova/+spec/per-aggregate-max-instances-per-host instances]; and [https://blueprints.launchpad.net/nova/+spec/per-aggregate-max-io-ops-per-host IO ops] [http://specs.openstack.org/openstack/nova-specs/specs/juno/per-aggregate-filters specification]
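
A sketch of how the per-aggregate filters are typically driven, assuming the corresponding Aggregate* scheduler filters are enabled; the aggregate name, host name, and values are examples only:
 # group some hosts into an aggregate and cap scheduling pressure on them
 nova aggregate-create io-limited nova
 nova aggregate-add-host io-limited compute1
 nova aggregate-set-metadata io-limited max_io_ops_per_host=10 max_instances_per_host=50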

=== Other ===

* Offload periodic task SQL query load to a slave SQL server if one is configured. [https://blueprints.launchpad.net/nova/+spec/juno-slaveification launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/juno-slaveification specification]
* Only update the status of a host in the SQL database when the status changes, instead of every 60 seconds. [https://blueprints.launchpad.net/nova/+spec/on-demand-compute-update launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/on-demand-compute-update specification]
* Include status information in API listings of hypervisor hosts. [https://blueprints.launchpad.net/nova/+spec/return-status-for-hypervisor-node launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/return-status-for-hypervisor-node specification]
* Allow API callers to specify more than one status to filter by when listing services. [https://blueprints.launchpad.net/nova/+spec/servers-list-support-multi-status launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/servers-list-support-multi-status specification]
* Add quota values to constrain the number and size of server groups a user can create. [https://blueprints.launchpad.net/nova/+spec/server-group-quotas launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/server-group-quotas specification]

=== Hypervisor driver specific ===

==== Hyper-V ====

* Support for differencing vhdx images. [https://blueprints.launchpad.net/nova/+spec/add-differencing-vhdx-resize-support launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/add-differencing-vhdx-resize-support specification]
* Support for console serial logs. [https://blueprints.launchpad.net/nova/+spec/hyper-v-console-log launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/hyper-v-console-log specification]
* Support for soft reboot. [https://blueprints.launchpad.net/nova/+spec/hyper-v-soft-reboot launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/hyper-v-soft-reboot specification]

==== Ironic ====

* Add a virt driver for Ironic. [https://blueprints.launchpad.net/nova/+spec/add-ironic-driver launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/add-ironic-driver specification]

==== libvirt ====

* Performance improvements to listing instances on modern libvirt versions. [https://blueprints.launchpad.net/nova/+spec/libvirt-domain-listing-speedup launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/libvirt-domain-listing-speedup specification]
* Allow snapshots of network-backed disks. [https://blueprints.launchpad.net/nova/+spec/libvirt-volume-snap-network-disk launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/libvirt-volume-snap-network-disk specification]
* Enable qemu memory balloon statistics for ceilometer reporting. [https://blueprints.launchpad.net/nova/+spec/enabled-qemu-memballoon-stats launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/enabled-qemu-memballoon-stats specification]
* Add support for handing back unused disk blocks to the underlying storage system (see the configuration sketch after this list). [https://blueprints.launchpad.net/nova/+spec/libvirt-disk-discard-option launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/libvirt-disk-discard-option specification]
* Metadata about an instance is now recorded in the libvirt domain XML. This is intended to help administrators while debugging problems. [https://blueprints.launchpad.net/nova/+spec/libvirt-driver-domain-metadata launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/libvirt-driver-domain-metadata specification]
* Support user namespaces for LXC containers. [https://blueprints.launchpad.net/nova/+spec/libvirt-lxc-user-namespaces launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/libvirt-lxc-user-namespaces specification]
* Copy-on-write cloning for RBD-backed disks. [https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/rbd-clone-image-handler specification]
* Expose interactive serial consoles. [https://blueprints.launchpad.net/nova/+spec/serial-ports launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/serial-ports specification]
* Allow controlled shutdown of guest operating systems during VM power off. [https://blueprints.launchpad.net/nova/+spec/user-defined-shutdown launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/user-defined-shutdown specification]
* Intelligent NUMA node placement for guests. [https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/virt-driver-numa-placement specification]
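
A minimal nova.conf sketch for the discard option; whether unmap actually releases blocks also depends on the image format, disk bus, and the qemu/libvirt versions in use, so verify against your environment:
 [libvirt]
 # pass discard/TRIM requests from the guest through to the underlying storage
 hw_disk_discard = unmap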

==== VMware ====

* Move the VMware driver to using the oslo.vmware helper library. [https://blueprints.launchpad.net/nova/+spec/use-oslo-vmware launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/use-oslo-vmware specification]
* Add support for network interface hot plugging to the VMware driver. [https://blueprints.launchpad.net/nova/+spec/vmware-hot-plug launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/vmware-hot-plug specification]
* Refactor the VMware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMware driver. [https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor launchpad] [http://specs.openstack.org/openstack/nova-specs/specs/juno/vmware-spawn-refactor specification]

=== Known Issues ===

* When using libvirt, live snapshots are effectively disabled due to this difficult-to-reproduce bug: https://bugs.launchpad.net/nova/+bug/1334398 (https://review.openstack.org/#/c/102643/)
* Glance v2 and Keystone v3 are not tested with Nova in Juno.

=== Upgrade Notes ===

* A list of all updated, deprecated or removed options in Nova can be found at: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html
* The nova-manage flavor subcommand is deprecated in Juno and will be removed in the 2015.1 (Kilo) release: https://review.openstack.org/#/c/86122/
* https://review.openstack.org/#/c/102212/
* The minimum required libvirt version is now 0.9.11: https://review.openstack.org/#/c/58494/
* Nova now supports the [https://review.openstack.org/#/c/43986/ Cinder V2 API]. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the Kilo release.
** Nova talks to Cinder V1 in the gate (continuous integration testing).
* Debug log output in python-novaclient has [https://review.openstack.org/#/c/98443/ changed slightly] to improve readability. The sha1 hash of the keystone token is now printed instead of the token itself, greatly shortening the amount of content being printed while still retaining the ability to detect token mismatch scenarios. In addition, some extra '\n' characters that were being added have been removed. ''Double-check any log parsers!''
* The libvirt.volume_drivers config option in nova.conf is deprecated and will be removed in the L release. In general, this should affect only a small number of developers working on drivers. If this is you, the recommended approach is to continue your work inside a nova tree.
* Python 2.6 support is deprecated in Juno and will be removed in the Kilo (2015.1) release.

== OpenStack Image Service (Glance) ==

=== Key New Features ===

* Asynchronous Processing
* glance.store split out into its own library
* [http://docs.openstack.org/developer/glance/metadefs-concepts.html Metadata Definitions Catalog]
* Restricted policy for downloading images
* Enhanced scrubber service: a single scrubber instance can now serve multiple glance-api servers across nodes

=== Known Issues ===

=== Upgrade Notes ===

* A list of all updated, deprecated or removed options in Glance can be found at: http://docs.openstack.org/juno/config-reference/content/glance-conf-changes-juno.html
* The ability to upload a public image is now admin-only by default. To continue using the previous behaviour, edit the publicize_image flag in etc/policy.json to remove the role restriction (see the example after this list).
* The UTF-8 character set requirement for database tables is now checked and enforced; operators need to migrate tables and existing data to UTF-8 manually if glance-manage complains about it during the database sync.
* The number of glance workers now defaults to the number of CPUs available if not explicitly specified in glance-api.conf and/or glance-registry.conf.
** There is no upgrade impact for glance-api workers, since glance-api.conf previously hard-coded the workers value to 1, so anyone upgrading will still get whatever value was set in glance-api.conf before this change. There is an upgrade impact for glance-registry workers, since glance-registry.conf did not hard-code the workers value to 1 before this change; anyone upgrading without workers specified in glance-registry.conf will now be running multiple workers by default when they restart the glance registry service.
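
A sketch of the relevant line in Glance's etc/policy.json; the first form is the new Juno default and the second restores the previous behaviour (only one of the two should be present):
 "publicize_image": "role:admin",
To remove the role restriction and allow any user to publicize images:
 "publicize_image": "",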

== OpenStack Dashboard (Horizon) ==

=== Key New Features ===

==== Sahara ====

The OpenStack Data Processing project (Sahara) was formally included in the integrated release in Juno, and Horizon includes broad support for managing your data processing: you can specify and build clusters that use several data types with user-specified jobs, while tracking the progress of those jobs.

==== Neutron Features ====

Neutron added several new features in Juno, including:
* DVR (Distributed Virtual Routing)
* IPv6 subnet modes

Horizon supports these new features in the Juno release. They provide much greater flexibility when specifying software-defined networks.

An existing Neutron feature that Horizon now supports is the MAC learning extension.

==== Glance Features ====

In Juno, Glance introduced the ability to manage a catalog of metadata definitions, where users can register metadata definitions to be used on various resource types, including images, aggregates, and flavors. Support for viewing and editing the assignment of these metadata tags is included in Horizon.

==== Cinder Features ====

In a continued effort to provide fuller API support, several features supported by Cinder are now available in Horizon in the Juno release. Users can now use Swift to store volume backups from Horizon, as well as restore volumes from these backups.

Other Cinder API features not previously supported by Horizon that were added in Juno include:
* Enabling resetting the state of a snapshot
* QoS (quality of service) support

==== Trove ====

Trove supports a number of different datastores, for example MySQL, Redis, and MongoDB. Users can now select from the list of datastores supported by the cloud operator when creating their database instances.

Another addition is support for taking and restoring from incremental database backups.

To improve support for Neutron-based clouds, the user can now specify the NIC for a database instance on creation, allowing direct access to the instance by the user.

==== Nova ====

The new Nova instance actions panel provides a list of all actions taken on all instances in the current project, allowing users to view resulting errors or actions taken by other users on those instances.

Administrators now have the ability to evacuate instances off hypervisors, which can aid in system maintenance by providing a mechanism to migrate all instances to other hosts.

==== Improved Plugin Support ====

The plugin system in Horizon continued to improve in the Juno release. Some of those improvements:
* Support for adding plugin-specific AngularJS modules
* Support for adding static files, e.g., CSS, JS, and images
* Numerous other bug fixes

==== Enhanced RBAC support ====

In an ongoing effort to support richer role-based access control (RBAC) in Horizon, the views for several more services were enhanced with RBAC checks to determine user access to actions. The newly supported services are compute, network, and orchestration. These changes allow operators to implement finer-grained access control than just "member" and "admin".

The identity panels (domains, projects, users, roles, groups) have also been converted to support RBAC at the view level. The identity panels have been moved from the admin dashboard into their own 'Identity' dashboard, and accessibility is determined by policies alone. This is the first step toward consolidating the nearly duplicate content of the project and admin dashboards into single views supporting a wide range of roles.

==== UX Changes ====

In Juno, Horizon transitioned to Bootstrap v3. Horizon had been pinned to an older version of Bootstrap for several releases; this change allows Horizon to pick up numerous bug fixes and overall improvements in the Bootstrap framework. The look and feel remains largely consistent with the Havana release.

==== JavaScript Libraries Extracted ====

As part of the Horizon team's ongoing effort to split the repository into more logical pieces, all of the third-party JavaScript libraries that Horizon depends on have been removed from the Horizon code base and are now consumed as Python xstatic packages. The xstatic format allows easy consumption by the Django framework that Horizon is built on, so JavaScript libraries are now handled like any other Python dependency in Horizon.

==== Conversion from LESS to SCSS ====

The supported stylesheets in Horizon have been converted from LESS to SCSS. The change was necessary due to the lack of well-supported LESS compilers in Python. This change also allowed the upgrade to Bootstrap 3, as parts of the Bootstrap 3 LESS stylesheets were not supported by existing Python-based LESS compilers.

=== Known Issues ===

==== Rendering issues in extensions ====

The conversion to Bootstrap v3 can cause content extensions written on top of Horizon to have rendering issues. Most of these are fixed by simple CSS class name substitutions. These issues are primarily seen with buttons and panel content widths.

==== Online Compression ====

With the move to SCSS, there may be issues with online compression in non-DEBUG mode in Horizon. Offline compression continues to work as in previous releases.

==== Neutron L3 HA ====

The HA property appears to be updateable in the UI; however, the Neutron API does not allow the update operation because toggling HA support does not work. See https://bugs.launchpad.net/horizon/+bug/1378525

=== Upgrade Notes ===

* The FLAVOR_EXTRA_KEYS setting is deprecated. The use of this key has been replaced with direct calls to the Nova and [http://docs.openstack.org/developer/glance/metadefs-concepts.html Glance API] as appropriate.

== OpenStack Identity (Keystone) ==

=== Key New Features ===

* Keystone now has experimental support for [http://docs.openstack.org/developer/keystone/configure_federation.html#keystone-as-an-identity-provider-idp Keystone-to-Keystone federation], where one instance acts as an Identity Provider and the other as a Service Provider.
* PKIZ is a new token provider available for users of PKI tokens; it simply adds zlib-based compression to traditional PKI tokens.
* Services can now be filtered by name (<code>GET /v3/services?name={service_name}</code>).

=== Known Issues ===

==== LDAP paged search results don't work with python-ldap 2.4 ====

When using an LDAP backend with paged search results enabled, AttributeErrors will be encountered if python-ldap 2.4 is in use. This is due to a backwards-incompatible API change in python-ldap. The issue can be worked around in a few ways (see the configuration sketch below):
* Disable paging of search results by setting ''page_size'' to ''0'' in the ''[ldap]'' section of keystone.conf.
* Downgrade python-ldap to version 2.3.x.
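
A minimal keystone.conf sketch for the first workaround:
 [ldap]
 # 0 disables paged search results, avoiding the python-ldap 2.4 incompatibility
 page_size = 0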

A fix for this issue has been proposed, and is expected to be made available in a stable update for Juno. For more details see https://bugs.launchpad.net/keystone/+bug/1381768

=== Upgrade Notes ===

* To provide a simpler out-of-the-box experience, the default token provider is now UUID instead of PKI (see the configuration sketch at the end of this list).
* Database migrations for releases prior to Havana have been dropped, meaning that you must upgrade to the Juno release from either a Havana or Icehouse deployment.
* A comprehensive list of all updated, deprecated or removed options in Keystone can be found at: http://docs.openstack.org/juno/config-reference/content/keystone-conf-changes-juno.html
** All <code>token_api</code> methods are now deprecated.
** LDAP configuration options that previously contained the deprecated <code>tenant</code> terminology have been superseded by options using the term <code>project</code>.
* LDAP/AD configuration: all configuration options containing the term "tenant" have been deprecated in favor of similarly named configuration options using the term "project" (for example, <code>tenant_id_attribute</code> has been replaced by <code>project_id_attribute</code>).
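
Deployments that prefer to keep PKI tokens can pin the provider explicitly in keystone.conf. This is only a sketch using the Juno-era provider class path; check it against the keystone.conf sample shipped with your packages:
 [token]
 # Juno defaults to the UUID provider; set the PKI provider explicitly to keep the old behaviour
 provider = keystone.token.providers.pki.Provider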

== OpenStack Network Service (Neutron) ==

=== Key New Features ===

* DB migration refactor and new timeline
** Nuage Networks ML2 Mechanism Driver
** SR-IOV capable NIC ML2 Mechanism Driver
** OpenContrail Neutron Plugin

=== Known Issues ===

* This is the first release for DVR and HA L3. The Neutron team intends to designate these features as production-ready in Kilo, and asks that deployers test them on non-critical workloads and report any issues.
* FWaaS is still labeled as experimental, as it does not allow more than one firewall per tenant.

=== Upgrade Notes ===

* DB migration from the previous releases (Icehouse or Havana):
** In the Icehouse and Havana releases, the db migration operation was optional. If your Neutron database is not stamped (i.e., it carries no db migration version info), please make sure to "stamp icehouse" before running the upgrade db migration to Juno.
** To check whether your database is stamped, run the following command:
 neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> current
** If the output of the current version is '''None''', please run:
 neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> stamp icehouse
** Then run the db migration to upgrade to Juno:
 neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> upgrade juno
* A list of all updated, deprecated or removed options in neutron can be found at: http://docs.openstack.org/juno/config-reference/content/neutron-conf-changes-juno.html
* Attribute-level policies dependent on resources are not enforced anymore, meaning that some older policies from Icehouse are no longer needed (e.g. "get_port:binding:vnic_type": "rule:admin_or_owner").
* The following plugins are deprecated in Juno:
* XML support in the API is deprecated. Users and deployers should migrate to JSON for API interactions as soon as possible, since XML support will be removed in the Kilo (2015.1) release.

== OpenStack Block Storage (Cinder) ==

=== Key New Features ===

* Support for Volume Replication.
* Support for Consistency Groups and Snapshots of Consistency Groups.
* Support for Volume Pools.
* Completion of i18n enablement.
* Honor Glance protected properties on image upload.
* Ability to restrict bandwidth usage on volume-copy operations.
* Volume Number Weigher scheduling.

=== New Drivers/Plugins ===

* Datera
* Fujitsu ETERNUS
* Fusion IO
* Hitachi HBSD
* Huawei
* Nimble
* Prophetstor
* Pure
* XtremIO
* Oracle ZFS

=== Limitations/Known Issues ===

* The newly introduced 'Pool' concept is a logical way to describe a set of storage resources that can serve core Cinder requests, e.g. volumes and snapshots. The notion is almost identical to a Cinder volume backend, as it has similar attributes (capacity, capabilities). The main difference is that a Pool cannot exist on its own; it must reside in a volume backend. One volume backend can have multiple Pools, but Pools do not have sub-Pools (even if a backend has them, sub-Pools are not exposed to Cinder yet). A Pool has a unique name within its backend namespace, which means a volume backend cannot have two pools with the same name. The introduction of Pools has some user-visible impact because it changes the granularity of scheduling a volume from 'Backend' to 'Pool'. For example, migrating or managing a volume now has to include the pool in the 'host' parameter in order to work:
  cinder manage --source-name X --name newX host@backend#POOL
  cinder migrate UUID host@backend#POOL

* To find out what pools a backend has, use the following API extension to query the info (admin role required):
  Pool names only:
    GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools
  Detailed pool info:
    GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools?detail=True

* The 'retyping' or affinity filter hint ''may not'' work as before. Cinder has a special code path for legacy volumes (volumes created before Juno) to allow potential migration between pools even when migration_policy is set to 'never'. However, not every driver can move volumes from one pool to another at minimum cost, so behavior can be inconsistent between drivers (the same command may take very different amounts of time to finish), which can be confusing.

=== Upgrade Notes ===

* A list of all updated, deprecated or removed options in Cinder can be found at: http://docs.openstack.org/trunk/config-reference/content/cinder-conf-changes-juno.html
* Nova now supports the [https://review.openstack.org/#/c/43986/ Cinder V2 API]. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the "L" release. To have Nova use the Cinder v2 endpoint, you need to update the cinder_catalog_info config option in nova to 'volumev2:cinder:internalURL' (see the sketch below), in addition to the Cinder v2 endpoint being available in the Keystone catalog.
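
A sketch of the corresponding nova.conf change, assuming the Juno option layout where cinder_catalog_info lives in the [DEFAULT] section:
 [DEFAULT]
 # look up the Cinder v2 endpoint (service type volumev2) in the Keystone catalog
 cinder_catalog_info = volumev2:cinder:internalURL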

== OpenStack Telemetry (Ceilometer) ==

=== Key New Features ===

* Support for partitioning metric collection load across horizontally scaled-out central agents
** the ability to more easily extend the range of SNMP metrics that ceilometer gathers
** the ability to derive new metrics from arithmetic transformations applied to multiple primary metrics
* Option to split off the alarms persistence into a separate database
* Option to use notifications instead of RPC for metering messages
* Metering of Neutron networking services: LBaaS, FWaaS & VPNaaS
* New XenAPI compute inspector
* New Telemetry section of the [http://docs.openstack.org/admin-guide-cloud/content/ch_admin-openstack-telemetry.html Cloud Administrator Guide]

=== Known Issues ===

* [https://bugs.launchpad.net/ceilometer/+bug/1381600 1381600] The new <code>ceilometer-agent-ipmi</code> fails to emit any samples when it encounters unparseable data from <code>ipmitool</code>.

=== Upgrade Notes ===

* A list of all updated, deprecated or removed options in ceilometer can be found at: http://docs.openstack.org/trunk/config-reference/content/ceilometer-conf-changes-master.html

== OpenStack Orchestration (Heat) ==

=== Key New Features ===

* Recovery from failures during stack updates
* API to cancel and roll back an in-progress stack update
* Improved visibility into trees of nested stacks

=== Known Issues ===

None yet

=== Upgrade Notes ===

* A list of all updated, deprecated or removed options in heat can be found at: http://docs.openstack.org/juno/config-reference/content/heat-conf-changes-master.html

== OpenStack Database service (Trove) ==

=== Key New Features ===

* Support for asynchronous replication (master-slave replicas) between provisioned MySQL instances.
* Introduction of a new Clustering API with initial support for MongoDB clusters.
* Support for deploying Trove on an OpenStack deployment that uses Neutron for networking. Prior to this, only nova-network was supported.
* Support for provisioning PostgreSQL datastore instances.
* Backup and restore support for Couchbase.
* Support to optionally restrict the Cinder backend used for Trove volumes.
* Support for defining custom datastore configuration parameters in the Trove database (using the mgmt API).
* The ability to list all datastore types and versions in a single call.

=== Other Incremental Improvements ===

* Logging audit to improve log levels throughout the Trove components.
* The extensions loading mechanism was improved by adding support for stevedore.
* The ability to support volumes for data is now configurable on a per-datastore basis.
* Created and updated timestamps and an instance count were added to the configuration groups list and details calls.

=== Known Issues ===

* [https://bugs.launchpad.net/trove/+bug/1333852 1333852]: Trove does not support flavor UUIDs; the Trove flavors API requires flavors with a numerical ID in order to be consistent with the API response for Icehouse Trove.

=== Upgrade Notes ===

* trove_api_workers and trove_conductor_workers will now default to the number of CPUs available if not explicitly specified in the Trove configuration files (see the sketch below).
** Anyone upgrading without trove_api_workers or trove_conductor_workers specified in the Trove configuration files will now be running multiple API and conductor workers by default when they restart the respective Trove services.
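
A sketch for pinning the previous single-worker behaviour; it assumes the options live in the [DEFAULT] section of the respective Trove configuration files:
 [DEFAULT]
 # pin the worker counts instead of defaulting to the number of CPUs
 trove_api_workers = 1
 trove_conductor_workers = 1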
 

== OpenStack Data Processing (Sahara) ==

=== Key New Features ===

* The Data Processing UI was fully merged into the OpenStack Dashboard (Horizon).
* Support for CDH 5.x was added.
* Support for Apache Spark was added. Supported versions are 0.9.1 and 1.0.0. The Elastic Data Processing (EDP) engine was heavily refactored to support non-Oozie workflow engines.
* Support for Apache Hadoop 2.4.1 was added in addition to the existing 1.2.1 and 2.3.0. Version 2.3.0 is deprecated in Juno.
* Support for multi-region deployments.
* Hadoop Swift authentication using the [http://docs.openstack.org/developer/sahara/userdoc/advanced.configuration.guide.html#domain-usage-for-swift-proxy-users keystone trust mechanism]. Hadoop can now access data in Swift without storing credentials in config files.
* [http://docs.openstack.org/developer/sahara/userdoc/configuration.guide.html#sahara-notifications-configuration Ceilometer integration] was added. Sahara now notifies Ceilometer about all cluster state changes.
* Cluster provisioning error handling was improved. If something goes wrong during scaling, the cluster will roll back to its original state.
* Added the ability to [http://docs.openstack.org/developer/sahara/userdoc/features.html#security-group-management specify security groups for a node group]. Sahara can also automatically create a security group with only the required ports open.
* Implemented [http://docs.openstack.org/developer/sahara/userdoc/features.html#running-sahara-in-distributed-mode distributed mode] for Sahara: the sahara-all process is decoupled into sahara-api and sahara-engine. You can run several instances of sahara-api and sahara-engine on different hosts. Note that this feature is considered to be in an alpha state.

=== Known Issues ===

* [https://bugs.launchpad.net/sahara/+bug/1271349 Bug 1271349]: Sahara requires root privileges to access VMs via namespaces.

=== Upgrade Notes ===

==== Main binary renamed to sahara-all ====

Please note that you should use `sahara-all` instead of `sahara-api` to start the all-in-one Sahara.

==== sahara.conf upgrade ====

We have migrated from custom auth_token middleware config options to the common config options. To update your config file, replace the following old config options with the new ones (a before/after sketch follows this list):

* "os_auth_protocol", "os_auth_host", "os_auth_port" -> "[keystone_authtoken]/auth_uri" and "[keystone_authtoken]/identity_uri"
* "os_admin_username" -> "[keystone_authtoken]/admin_user"
* "os_admin_password" -> "[keystone_authtoken]/admin_password"
* "os_admin_tenant_name" -> "[keystone_authtoken]/admin_tenant_name"
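
A before/after sketch of the affected sahara.conf options; the host, port, and credential values are placeholders, not recommendations:
 # old style (pre-Juno), top-level options
 os_auth_protocol = http
 os_auth_host = 192.168.0.10
 os_auth_port = 35357
 os_admin_username = admin
 os_admin_password = secret
 os_admin_tenant_name = admin
 
 # new style (Juno), common keystone_authtoken options
 [keystone_authtoken]
 auth_uri = http://192.168.0.10:5000/v2.0/
 identity_uri = http://192.168.0.10:35357/
 admin_user = admin
 admin_password = secret
 admin_tenant_name = admin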
 

We have replaced the oslo-incubator code from sahara.openstack.common.db with the oslo.db library.

Also, the sqlite database is no longer supported. Please use the MySQL or PostgreSQL database backends for Sahara. Sqlite support was dropped because it does not support (and is not going to support, see http://www.sqlite.org/omitted.html) the ALTER COLUMN and DROP COLUMN commands required for DB migrations between versions.

You can find more information about config file options in the Sahara repository in the file "etc/sahara/sahara.conf.sample".
 

==== Sahara Dashboard was merged into OpenStack Dashboard ====

The separate Sahara Dashboard is not available in the Juno release. Instead, its functionality is provided by the OpenStack Dashboard out of the box. The Sahara UI is available in the OpenStack Dashboard under the "Project" -> "Data Processing" tab.

Note that you have to properly register Sahara in Keystone in order for the Sahara UI in the Dashboard to work.
 

==== VM user name changed for the Heat infrastructure engine ====

We have updated the Heat infrastructure engine ("infrastructure_engine=heat") to use the same rules for the instance user name as the direct engine. Before this change, the user name for VMs created by Sahara using the Heat engine was always 'ec2-user'. Now the user name is taken from the image registry, as described in the documentation.

Note that this change breaks Sahara backward compatibility for clusters created using the Heat infrastructure engine before the change. Such clusters will continue to operate, but it is not recommended to perform scale operations on them.
 

==== Anti-affinity implementation changed ====

Starting with the Juno release, the anti-affinity feature is implemented using server groups. There should not be much difference in Sahara behavior from the user's perspective, but there are internal changes:

* A server group object will be created if the anti-affinity feature is enabled.
* The new implementation does not allow several affected instances on the same host even if they do not have common processes. So, if anti-affinity is enabled for the 'datanode' and 'tasktracker' processes, the previous implementation allowed an instance with the 'datanode' process and another instance with the 'tasktracker' process on one host; the new implementation guarantees that the instances will be on different hosts.

Note that the new implementation is applied to new clusters only. The old implementation is still applied if a user scales a cluster created in Icehouse.

== OpenStack Documentation ==

* This release, the OpenStack Foundation funded a five-day book sprint to write the new [http://docs.openstack.org/arch-design/content/arch-guide-how-this-book-is-organized.html OpenStack Architecture Design Guide]. It offers architectures for general-purpose, compute-focused, storage-focused, network-focused, multi-site, hybrid, massively scalable, and specialized clouds.
* The Install Guides have had a lot of clean-up and standardization: they use a common message queue (RabbitMQ), replace openstack-config (crudini) commands with config file editing for improved learning opportunities and consistency, reference a generic SQL database so that MariaDB or MySQL can be substituted, and replace auth_port and auth_protocol with identity_uri, and auth_host with auth_uri, throughout. The Install Guides are thoroughly tested on each distribution and continuously published until the official release packages are available to everyone.
* The [http://docs.openstack.org/high-availability-guide/content/index.html High Availability Guide] now has a separate review team and has moved into a separate repository.
* The [http://docs.openstack.org/security-guide/content/ Security Guide] now has a specialized review team and has moved into a separate repository.
* The Command-Line Reference has been updated with new client releases and now contains additional chapters for the common OpenStack client, the trove-manage client, and the Data Processing client (sahara).
* The [http://docs.openstack.org/admin-guide-cloud/content/ OpenStack Cloud Administrator Guide] now contains information about Telemetry (ceilometer).

Latest revision as of 17:34, 3 December 2015


OpenStack 2014.2 (Juno) Release Notes


General Upgrade Notes

  • The simplejson package is an optional requirement in most projects, so it is not listed in every project's requirements.txt file. However, if you are using it (e.g., for better performance with Python 2.6 on RHEL 6), you will need simplejson >= 2.2.0. See https://bugs.launchpad.net/oslo-incubator/+bug/1361230 for details.

OpenStack Object Storage (Swift)

Key New Features

The Juno integrated release includes three releases of OpenStack Swift: 2.0.0, 2.1.0, and 2.2.0. The changelog for these releases is available at https://github.com/openstack/swift/blob/2.2.0.rc1/CHANGELOG#L1-L173. Please refer to that document for release details.

Important new features are highlighted below. Please read the CHANGELOG and associated documentation.

  • Storage policies
  • Keystone v3 support
  • Server-side account-to-account copy
  • Better partition placement when adding a new server, zone, or region.
  • Zero-copy GET responses using splice()
  • Parallel object auditor


Known Issues

  • None at this time

Upgrade Notes

As always, you can upgrade your Swift cluster with no downtime for end-users. Please refer to sample config files and documentation before every release.

  • There have been some logging changes that need to be called out. In all cases, well-behaved log processors will not be affected.
    • Storage node (account, container, object) logs now have the PID logged at the end of the log line.
    • Object daemons now send a user-agent string with their full name (e.g. "obj" is now "object").
  • Once an additional storage policy has been enabled, downgrading to Swift pre-2.0.0 will cause any additional storage policies to become unavailable.
  • As part of an effort to eventually move the default Swift ports to a non-IANA-assigned range, bind_port is now a required setting. Anyone currently setting the ports explicitly will not be affected. However, if you do not currently set the ports, please ensure that each *_server.conf has bind_port set to match your ring as part of your upgrade (see the example after this list).
  • Note that storage policies include a new daemon, the container-reconciler.
  • TempURL default allowed methods config setting now also allows POST and DELETE. This means tempurls can be created for these verbs. It does not affect any existing tempurls.
  • A list of all updated, deprecated or removed options in swift can be found at: http://docs.openstack.org/trunk/config-reference/content/swift-conf-changes-master.html
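For example (a minimal, illustrative fragment only; the port shown assumes the conventional pre-Juno object-server port and must match the port your ring was built with), an object server configuration would carry an explicit bind_port:

  # /etc/swift/object-server.conf -- illustrative fragment
  [DEFAULT]
  bind_ip = 0.0.0.0
  # Must match the port recorded in your object ring; container and account
  # servers need the same treatment in their respective *_server.conf files.
  bind_port = 6000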

OpenStack Compute (Nova)

Instance features

Networking

  • Improve the nova-network code to allow per-network settings. launchpad specification
  • Allow deployers to add hooks which are informed as soon as networking information for an instance is changed. launchpad specification
  • Enable nova instances to be booted up with SR-IOV neutron ports. launchpad specification
  • Permit VMs to attach multiple interfaces to one network. launchpad specification
  • Preserve Neutron ports attached using "--nic port-id" when instance is terminated. review

Scheduling

  • Extensible Resource Tracking. The set of resources tracked by nova is hard coded, this change makes that extensible, which will allow plug-ins to track new types of resources for scheduling. launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad: disk; instances; and IO ops specification

Other

Hypervisor driver specific

Hyper-V

Ironic

libvirt

vmware

  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMware driver. launchpad specification

Known Issues

Upgrade Notes

  • A list of all updated, deprecated or removed options in Nova can be found at: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html
  • The nova-manage flavor subcommand is deprecated in Juno and will be removed in the 2015.1 (K) release: https://review.openstack.org/#/c/86122/
  • https://review.openstack.org/#/c/102212/
  • Minimum required libvirt version is now 0.9.11: https://review.openstack.org/#/c/58494/
  • Nova is now supporting the Cinder V2 API. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the Kilo release.
    • Nova talks to Cinder V1 in the gate (continuous integration testing).
  • Debug log output in python-novaclient has changed slightly to improve readability. The sha1 hash of the keystone token is now printed instead of the token itself - greatly shortening the amount of content being printed while still retaining the ability to determine token mismatch scenarios. In addition, some extra '\n' characters that were being added are removed. Double-check any log parsers!
  • libvirt.volume_drivers config param for nova.conf is deprecated, to be removed in the Lxxxx release. In general, this should affect only a small number of developers working on drivers. If this is you, the recommended approach is to continue your work inside a nova tree.
  • python 2.6 support is deprecated in Juno and will be removed in the Kilo 2015.1 release.

OpenStack Image Service (Glance)

Key New Features

  • Asynchronous Processing
  • Pull of glance.store into its own library
  • Metadata Definitions Catalog
  • Restricted policy for downloading images.
  • The enhanced Scrubber service allows a single scrubber instance to service multiple glance-api servers across nodes.

Known Issues

Upgrade Notes

  • A list of all updated, deprecated or removed options in Glance can be found at: http://docs.openstack.org/juno/config-reference/content/glance-conf-changes-juno.html
  • The ability to upload a public image is now admin-only by default. To continue to use the previous behaviour, edit the publicize_image flag in etc/policy.json to remove the role restriction (see the example after this list).
  • The requirement for, and check of, the UTF-8 charset on DB tables is now enforced; operators need to migrate tables and existing data to UTF-8 manually if glance-manage complains about it during the sync.
  • Glance workers will now default to the number of CPUs available if not explicitly specified in glance-api.conf and/or glance-registry.conf.
    • There is no upgrade impact for glance-api workers, since glance-api.conf previously hard-coded the workers value to 1, so anyone upgrading will still get whatever value was set in glance-api.conf prior to this change. There is an upgrade impact for glance-registry workers, since glance-registry.conf did not hard-code the workers value before this change: anyone upgrading who does not have workers specified in glance-registry.conf will now be running multiple workers by default when they restart the glance registry service.
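For example (a sketch; it assumes the shipped Juno default of "role:admin" and should be merged into your existing etc/policy.json), relaxing the rule back to an empty string restores the previous behaviour of letting image owners publicize their own images:

  "publicize_image": "role:admin",    (new Juno default: admin-only)
  "publicize_image": "",              (previous behaviour: no role restriction)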

OpenStack Dashboard (Horizon)

Key New Features

Sahara

The OpenStack Data Processing project (Sahara) was formally included in the integrated release in Juno, and Horizon includes broad support for managing your data processing. You can specify and build clusters that utilize several data types with user-specified jobs while tracking the progress of those jobs.

Neutron Features

Neutron added several new features in Juno, including:

  • DVR (Distributed Virtual Routing)
  • L3 HA support
  • IPv6 subnet modes

Horizon provides support for these new features with the Juno release. These features provide much greater flexibility in specifying software defined networks.

An existing feature in Neutron that Horizon now supports is the MAC learning extension.

Glance Features

In Juno, Glance introduced the ability to manage a catalog of metadata definitions where users can register the metadata definitions to be used on various resource types including images, aggregates, and flavors. Support for viewing and editing the assignment of these metadata tags is included in Horizon.

Cinder Features

In a continued effort to provide fuller API support, several features supported by Cinder are now supported in Horizon in the Juno release. Users can now utilize swift to store volume backups from Horizon as well as restore volumes from these backups.

Other features of the Cinder API not previously supported by Horizon added in Juno include:

  • Enabling resetting the state of a snapshot
  • Enabling resetting the state of a volume
  • Supporting upload-to-image
  • Volume retype
  • QoS (quality of service) support

Trove

Trove can support numerous different datastores, e.g., MySQL, Redis, and MongoDB. Users can now select from the list of datastores supported by the cloud operator when creating their database instances.

Another addition is support for utilizing and restoring from incremental database backups.

To improve support for Neutron based clouds, when creating a database instance, the user can now specify the NIC for the database instance on creation allowing direct access to the instance by the user.

Nova

The new Nova instance actions panel provides a list of all actions taken on all instances in the current project, allowing users to view resulting errors or actions taken by other users on those instances.

Administrators now have the ability to evacuate instances off hypervisors which can aid in system maintenance by providing a mechanism to migrate all instances to other hosts.

Improved Plugin Support

The plugin system in Horizon continued to improve in the Juno release. Some of those improvements:

  • Support for adding plugin specific AngularJS modules
  • Support for adding static files, e.g., CSS, JS, images
  • Ability to add exceptions
  • Fixing ordering issues
  • Numerous other bug fixes

Enhanced RBAC support

In an ongoing effort to support richer role based access control (RBAC) in Horizon, the views for several more services were enhanced with RBAC checks to determine user access to actions. The newly supported services are compute, network and orchestration. These changes allow operators to implement finer grained access control than just "member" and "admin".

The identity panels (domains, projects, users, roles, groups) have also been converted to support RBAC at the view level. The identity panels have been moved from the admin dashboard into their own 'Identity' dashboard and accessibility is determined by policies alone. This is the first step toward consolidating the near duplicate content of the project and admin dashboards into single views supporting a wide range of roles.

UX Changes

In Juno, Horizon transitioned to utilizing Bootstrap v3. Horizon had been pinned to an older version of Bootstrap for several releases. This change now allows Horizon to pick up numerous bug fixes and overall improvements in the Bootstrap framework. The look and feel remains mainly consistent with the Havana release.

JavaScript Libraries Extracted

As part of the Horizon team's ongoing efforts to split the repository into more logical pieces, all the 3rd party JavaScript libraries that Horizon depends on have been removed from the Horizon code base and python xstatic packages have been utilized instead. The xstatic format allows for easy consumption by the Django framework Horizon is built on. Now JavaScript libraries are utilized like any other python dependency in Horizon.

Conversion from LESS to SCSS

The supported stylesheets in Horizon have been converted to utilize SCSS rather than LESS. The change was necessary due to a prevalent lack of support for LESS compilers in python. This change also allowed us to upgrade to Bootstrap 3, as parts of the Bootstrap 3 LESS stylesheets were not supported by existing python based LESS compilers.

Known Issues

Rendering issues in extensions

The conversion to utilizing Bootstrap v3 can cause content extensions written on top of Horizon to have rendering issues. Most of these are fixed by simple CSS class name substitutions. These issues are primarily seen with buttons and panel content widths.

Online Compression

With the move to SCSS, there may be issues with utilizing online compression in non-DEBUG mode in Horizon. Offline compression continues to work as in previous releases.

Neutron L3 HA

The HA property can be edited in the UI; however, the Neutron API does not allow the update operation because toggling HA support does not work. https://bugs.launchpad.net/horizon/+bug/1378525

Upgrade Notes

  • The FLAVOR_EXTRA_KEYS setting is deprecated. Its use has been replaced with direct calls to the Nova and Glance APIs as appropriate.

OpenStack Identity (Keystone)

Key New Features

  • Keystone now has experimental support for Keystone-to-Keystone federation, where one instance acts as an Identity Provider, and the other a Service Provider.
  • PKIZ is a new token provider available for users of PKI tokens, which simply adds zlib-based compression to traditional PKI tokens.
  • The hashing algorithm used for PKI tokens has been made configurable (the default is still MD5, but the Keystone team recommends that deployments migrate to SHA256).
  • Identity-driver-configuration-per-domain now supports Internet domain names of arbitrary hierarchical complexity (for example, customer.cloud.example.com).
  • The LDAP identity backend now supports description as an attribute of users.
  • Identity API v3 requests are now validated via JSON Schema.
  • In the case of multiple identity backends, Keystone can now map arbitrary resource IDs to arbitrary backends.
  • keystoneclient.middleware.auth_token has been moved into its own repository, keystonemiddleware.auth_token.
  • Identity API v3 now supports a discrete call to retrieve a service catalog, GET /v3/auth/catalog.
  • Federated authentication events and local role assignment operations now result in CADF (audit) notifications.
  • Keystone can now associate a given policy blob with one or more endpoints.
  • Keystone now provides JSON Home documents on the root API endpoints in response to Accept: application/json-home headers.
  • Hiding endpoints from clients' service catalogs is now more easily manageable via OS-EP-FILTER.
  • The credentials collection API is now filterable per associated user (GET /v3/credentials?user_id={user_id}).
  • New, generic API endpoints are available for retrieving authentication-related data, such as a service catalog, available project scopes, and available domain scopes.
  • Keystone now supports mapping the user enabled attribute to the lock attribute in LDAP (and inverting the corresponding boolean value accordingly).
  • A CA certificate file is now configurable for LDAPS connections.
  • The templated catalog backend now supports generating service catalogs for Identity API v3.
  • Service names were added to the v3 service catalog.
  • Services can now be filtered by name ( GET /v3/services?name={service_name}).

Known Issues

LDAP paged search results don't work with python-ldap 2.4

When using an LDAP backend with paged search results enabled, AttributeErrors will be encountered if python-ldap 2.4 is being used. This is due to a backwards incompatible API change in python-ldap. The issue can be worked around in a few ways:

  • Disabling paging of search results by setting page_size to 0 in the [ldap] section of keystone.conf (see the example after this list).
  • Downgrading python-ldap to version 2.3.x.
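The first workaround above amounts to the following keystone.conf fragment:

  [ldap]
  # 0 disables paged search results and avoids the python-ldap 2.4 AttributeError
  page_size = 0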

A fix for this issue has been proposed, which is expected to be made available in a stable update for Juno. For more details see https://bugs.launchpad.net/keystone/+bug/1381768

Upgrade Notes

  • Due to the simpler out-of-the-box experience, the default token provider is now UUID instead of PKI.
  • Database migrations for releases prior to Havana have been dropped, meaning that you must upgrade to the Juno release from either a Havana or Icehouse deployment.
  • A comprehensive list of all updated, deprecated or removed options in Keystone can be found at: http://docs.openstack.org/juno/config-reference/content/keystone-conf-changes-juno.html
    • All token_api methods are now deprecated.
    • LDAP configuration options that previously contained the deprecated tenant terminology have been superseded by options using the term project.
    • Proxy methods from the identity backend to the assignment backend (created to provide backwards compatibility as a result of the split of the Assignment backend from the Identity backend), have been removed. This should only affect custom, out-of-tree API extensions.
    • Loading authentication plugins solely by class name in keystone.conf is now deprecated in favor of loading them by custom-method-name = custom_package.CustomClass pairs, and then defining the sequence of authentication methods as a list (methods = custom-method-name, password); see the keystone.conf sketch after this list.
    • In-tree token drivers (keystone.token.backends) have been moved to keystone.token.persistence.backends. Proxy objects exist to maintain compatibility. If a non-default value is used, it is recommended the value of the driver option in the [token] section of keystone.conf is updated to use the new location.
  • All KVS backends besides the token driver have been formally deprecated.
  • LDAP/AD configuration: All configuration options containing the term "tenant" have been deprecated in favor of similarly named configuration options using the term "project" (for example, tenant_id_attribute has been replaced by project_id_attribute).
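As an illustration of the new plugin-loading style described above (a sketch only; custom-method-name and custom_package.CustomClass are the placeholder names from the note above, not real code, and the built-in class paths should be checked against your keystone.conf.sample):

  [auth]
  # The sequence of enabled authentication methods...
  methods = password,token,custom-method-name
  # ...and the class that implements each method
  password = keystone.auth.plugins.password.Password
  token = keystone.auth.plugins.token.Token
  custom-method-name = custom_package.CustomClass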

OpenStack Network Service (Neutron)

Key New Features

  • DB migration refactor and new timeline
  • Distributed Virtual Router Support (DVR)
  • Full IPv6 support for tenant networks
  • High Availability for the L3 Agent
  • ipset support for security groups in place of iptables (this option is configurable)
  • L3 agent performance improvements
  • Migration to oslo.messaging library for RPC communication.
  • Security group rules for devices RPC call refactoring (a huge performance improvement)
  • New Plugins supported in Juno include the following:
    • A10 Networks LBaaS driver for the LBaaS V1 API
    • Arista L3 routing plugin
    • Big Switch L3 routing plugin
    • Brocade L3 routing plugin
    • Cisco APIC ML2 Driver (including a L3 routing plugin).
    • Cisco CSR L3 routing plugin
    • Freescale SDN ML2 Mechanism Driver
    • Nuage Networks ML2 Mechanism Driver
    • SR-IOV capable NIC ML2 Mechanism Driver
    • OpenContrail Neutron Plugin

Known Issues

  • This is the first release for DVR and HA L3. The Neutron team desires to designate these features as production ready in Kilo and requests that deployers test on non-critical workloads and report any issues.
  • FWaaS is still labeled as experimental, as it does not allow you to have more than one FW per tenant.

Upgrade Notes

  • DB migration from the previous releases (icehouse or havana)
    • In the Icehouse and Havana releases, the db migration operation was optional. If your Neutron database is not stamped (i.e., there is no db migration version info), please make sure to "stamp icehouse" before running the upgrade db migration to Juno.
    • To check if your database is stamped, run the following command:
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> current
    • If the output of the current version is None, please run:
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> stamp icehouse
    • and then run the db migration for upgrading Juno:
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file <your plugin config file> upgrade juno
  • A list of all updated, deprecated or removed options in neutron can be found at: http://docs.openstack.org/juno/config-reference/content/neutron-conf-changes-juno.html
  • Attribute-level policies dependent on resources are no longer enforced, meaning that some older policies from Icehouse are not needed (e.g. "get_port:binding:vnic_type": "rule:admin_or_owner").
  • The following plugins are deprecated in Juno:
    • Cisco Nexus Sub-Plugin (The Nexus 1000V Sub-Plugin is still retained and supported in Juno).
    • Mellanox Plugin
    • Ryu Plugin
  • XML support in the API is deprecated. Users and deployers should migrate to JSON for API interactions as soon as possible since the XML support will be removed in the Kilo (2015.1) release.

OpenStack Block Storage (Cinder)

Key New Features

  • Support for Volume Replication.
  • Support for Consistency Groups and Snapshots of Consistency Groups.
  • Support for Volume Pools.
  • Completion of i18n-enablement
  • Honor Glance protected properties in Image Upload
  • Enable ability to restrict bandwidth usage on volume-copy operations
  • Add Volume Num Weighter Scheduling

New Drivers/Plugins

  • Datera
  • Fujitsu ETERNUS
  • Fusion IO
  • Hitachi HBSD
  • Huawei
  • Nimble
  • Prophetstor
  • Pure
  • XtremIO
  • Oracle ZFS

Limitations/Known Issues

  • The newly introduced 'Pool' terminology is a logical concept describing a set of storage resources that can be used to serve core Cinder requests, e.g. volumes/snapshots. This notion is almost identical to a Cinder Volume Backend, as it has similar attributes (capacity, capabilities). The main difference is that a Pool cannot exist on its own; it must reside in a Volume Backend. One Volume Backend can have multiple Pools, but Pools do not have sub-Pools (even if a backend has them internally, sub-Pools are not exposed to Cinder yet). A Pool has a unique name within its backend namespace, which means a Volume Backend cannot have two pools with the same name. The introduction of Pools has some user-visible impact because it changes the granularity of scheduling a volume from 'Backend' to 'Pool'. For example, migrating or managing a volume now has to include the pool in the 'host' parameter in order to work:
 cinder manage --source-name X --name newX host@backend#POOL
 cinder migrate UUID host@backend#POOL
  • To find out what pools a backend has, use the following API extension to query the info (requires the admin role); a curl example follows this list:
 -  Pool name only:
    GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools
 -  Detailed Pool info:
    GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools\?detail\=True
  • The 'retyping' or affinity filter hint *may not* work as it did before. Cinder has a special code path for legacy volumes (volumes created before Juno) to allow (potential) migration between pools even when migration_policy is set to 'never'. However, not every driver can move volumes from one pool to another at minimal cost, so behavior is inconsistent between drivers (the same command may take a totally different amount of time to finish), which can be confusing.
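For instance, the detailed pool query above can be issued with curl (CINDER_API_ENDPOINT, TENANT_ID and $TOKEN are placeholders for your own endpoint, tenant and admin-scoped token):

  curl -s -H "X-Auth-Token: $TOKEN" \
       "http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools?detail=True"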

Upgrade Notes

  • A list of all updated, deprecated or removed options in Cinder can be found at: http://docs.openstack.org/trunk/config-reference/content/cinder-conf-changes-juno.html
  • Nova is now supporting the Cinder V2 API. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the "L" release. You need to update the cinder_catalog_info config option in nova to 'volumev2:cinder:internalURL' to have Nova use the cinder v2 endpoint, in addition to the cinder v2 endpoint being available in the keystone catalog.
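A minimal nova.conf fragment for that switch might look like the following (a sketch; it only takes effect once the volumev2 endpoint is actually registered in the Keystone catalog):

  [DEFAULT]
  # Point Nova at the Cinder v2 endpoint instead of the v1 default
  cinder_catalog_info = volumev2:cinder:internalURL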

OpenStack Telemetry (Ceilometer)

Key New Features

  • Support for partitioning metric collection load across horizontally scaled-out central agents
  • New method of partitioning alarm evaluation load using tooz coordination, as opposed to a hand-crafted protocol
  • Much improved SQLAlchemy storage performance & scalability, so that MySQL or PostgreSQL can be used as the metering store for PoCs or small deployments
  • Support for hardware-oriented monitoring of IPMI sensors via notifications from either Ironic or a new standalone agent
  • More flexible & efficient SNMP monitoring:
    • batching queries for multiple SNMP metrics into a single call to each daemon
    • dynamic discovery of nodes deployed by TripleO for SNMP polling
    • the ability to more easily extend the range of SNMP metrics that ceilometer gathers
    • the ability to derive new metrics from arithmetic transformations applied to multiple primary metrics
  • Option to split off the alarms persistence into a separate database
  • Option to use notifications instead of RPC for metering messages
  • Metering of Neutron networking services: LBaaS, FWaaS & VPNaaS
  • New XenAPI compute inspector
  • Support for persisting events via the MongoDB & Hbase storage drivers (previously limited to SQLAlchemy)
  • Support for per-device metering of instance disks
  • Use of ceilometer as a collector for os-profiler data
  • New Telemetry section of the Cloud Administrator Guide

Known Issues

  • Bug 1381600: The new ceilometer-agent-ipmi fails to emit any samples when it encounters unparseable data from ipmitool.

Upgrade Notes

OpenStack Orchestration (Heat)

Key New Features

  • Recovery from failures during stack updates
  • API to cancel and roll back an in-progress stack update
  • Implementation of new resource types:
    • OS::Glance::Image
    • OS::Heat::SwiftSignal
      • Provides the option to store Wait Condition (and Software Deployment) data in Swift
    • OS::Heat::StructuredDeployments
      • Groups code for multiple lifecycle events into a single deployment resource
    • OS::Heat::SoftwareDeployments
      • Provides a way of avoiding circular dependencies when deploying an interdependent cluster of servers
    • OS::Heat::SoftwareComponent
    • OS::Nova::ServerGroup
    • OS::Sahara::NodeGroupTemplate
    • OS::Sahara::ClusterTemplate
  • Remember the previously-supplied parameters when updating a stack
  • Improved scalability
  • Improved visibility into trees of nested stacks

Known Issues

None yet

Upgrade Notes

OpenStack Database service (Trove)

Key New Features

  • Support for Asynchronous Replication (master-slave replicas) between provisioned mysql instances.
  • Introduction of a new Clustering API with initial support for MongoDB clusters.
  • Support for deploying Trove on an OpenStack solution that is using Neutron for networking. Prior to this, only nova-network was supported.
  • Support for provisioning PostgreSQL datastore instances.
  • Backup and Restore support for Couchbase.
  • Support to optionally restrict the Cinder backend used for Trove volumes.
  • Support for defining custom datastore configuration parameters in the Trove database (using mgmt API).
  • The ability to list all datastore types and versions in a single call

Other Incremental Improvements

  • Logging audit to improve log levels throughout the trove components.
  • The extensions loading mechanism was improved by adding support for stevedore.
  • The ability to support volumes for data is now on a per-datastore basis.
  • Created and updated timestamps and instance count were added to configuration groups list and details calls.

Known Issues

  • 1333852: Trove does not support flavor UUIDs -- the Trove flavors API requires flavors with a numerical ID in order to be consistent with the API response for icehouse Trove.

Upgrade Notes

  • trove_api_workers and trove_conductor_workers will now be equal to the number of CPUs available by default if not explicitly specified in the trove configuration files.
    • Anyone upgrading to this change that does not have trove_api_workers or trove_conductor_workers specified in the trove configuration files will now be running multiple API and conductor workers by default when they restart the respective trove services.
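If you want to keep the previous behaviour, the worker counts can be pinned explicitly in the relevant trove configuration files (a sketch; 1 is shown only as an example of an explicit value):

  [DEFAULT]
  # Explicit values override the new CPU-count-based default
  trove_api_workers = 1
  trove_conductor_workers = 1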

OpenStack Data Processing (Sahara)

New Key Features

  • The Data processing UI was fully merged into the OpenStack Dashboard (horizon).
  • Support for CDH 5.x was added.
  • Support for Apache Spark was added. Supported versions are 0.9.1 and 1.0.0. The elastic data processing (EDP) engine was heavily refactored to support non-Oozie workflow engines.
  • Support for Apache Hadoop 2.4.1 was added in addition to the existing 1.2.1 and 2.3.0. Version 2.3.0 is deprecated in Juno.
  • Support for multi-region deployments.
  • Hadoop-to-Swift authentication now uses the Keystone trust mechanism, so Hadoop can access data in Swift without storing credentials in config files.
  • Ceilometer integration was added. Sahara now notifies Ceilometer about all cluster state changes.
  • Cluster provisioning error handling was improved. If something goes wrong during scaling, the cluster will roll back to its original state.
  • Added the ability to specify security groups for a node group. Sahara can also automatically create a security group with only the required ports open.
  • Implemented a distributed mode for Sahara: the sahara-all process is decoupled into sahara-api and sahara-engine. You can run several instances of sahara-api and sahara-engine on different hosts. Note that this feature's implementation is considered to be in an alpha state.

Known Issues

  • Bug 1271349: Sahara requires root privileges to access VMs via namespaces.

Upgrade Notes

Main binary renamed to sahara-all

Please note that you should use `sahara-all` instead of `sahara-api` to start the All-In-One Sahara.
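For example (the config-file path is only an assumption based on the sample file location mentioned below):

  sahara-all --config-file /etc/sahara/sahara.conf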

sahara.conf upgrade

We've migrated from custom auth_token middleware config options to the common config options. To update your config file, replace the following old config opts with the new ones (an example fragment follows this list).

  • "os_auth_protocol", "os_auth_host", "os_auth_port" -> "[keystone_authtoken]/auth_uri" and "[keystone_authtoken]/identity_uri"
  • "os_admin_username" -> "[keystone_authtoken]/admin_user"
  • "os_admin_password" -> "[keystone_authtoken]/admin_password"
  • "os_admin_tenant_name" -> "[keystone_authtoken]/admin_tenant_name"

We've replaced the oslo code in sahara.openstack.common.db with the oslo.db library.

The SQLite database is also no longer supported; please use the MySQL or PostgreSQL backends for Sahara. SQLite support was dropped because it does not support (and is not going to support; see http://www.sqlite.org/omitted.html) the ALTER COLUMN and DROP COLUMN commands required for DB migrations between versions.

You can find more info about config file options in Sahara repository in file "etc/sahara/sahara.conf.sample".

Sahara Dashboard was merged into OpenStack Dashboard

The Sahara Dashboard is not available in the Juno release. Instead, its functionality is provided by the OpenStack Dashboard out of the box. The Sahara UI is available in the OpenStack Dashboard under the "Project" -> "Data Processing" tab.

Note that you have to properly register Sahara in Keystone in order for Sahara UI in the Dashboard to work.

VM user name changed for HEAT infrastructure engine

We've updated the HEAT infrastructure engine ("infrastructure_engine=heat") to use the same rules for the instance user name as the direct engine. Before this change, the user name for VMs created by Sahara using the HEAT engine was always 'ec2-user'. Now the user name is taken from the image registry, as described in the documentation.

Note that this change breaks Sahara backward compatibility for clusters created using the HEAT infrastructure engine before the change. Such clusters will continue to operate, but it is not recommended to perform scaling operations on them.

Anti affinity implementation changed

Starting with the Juno release, the anti-affinity feature is implemented using server groups. There should not be much difference in Sahara behavior from the user's perspective, but there are internal changes:

  • A server group object will be created if the anti-affinity feature is enabled.
  • The new implementation does not allow several affected instances on the same host even if they have no processes in common. For example, if anti-affinity is enabled for the 'datanode' and 'tasktracker' processes, the previous implementation allowed an instance with the 'datanode' process and another instance with the 'tasktracker' process to run on the same host; the new implementation guarantees that such instances will be placed on different hosts.

Note that the new implementation applies to new clusters only. The old implementation is still used when a user scales a cluster created in Icehouse.


OpenStack Documentation

  • This release, the OpenStack Foundation funded a five-day book sprint to write the new OpenStack Architecture Design Guide. It offers architectures for general purpose, compute-focused, storage-focused, network-focused, multi-site, hybrid, massively scalable, and specialized clouds.
  • The Install Guides have had a lot of cleanup and standardization: they use a common message queue (RabbitMQ), replace openstack-config (crudini) commands with config file editing for improved learning opportunities and consistency, reference a generic SQL database so that MariaDB or MySQL can be substituted, and replace the auth_host, auth_port, and auth_protocol options with identity_uri and auth_uri throughout. The Install Guides are thoroughly tested on each distribution and continuously published until the official release packages are available to everyone.
  • The High Availability Guide now has a separate review team and has moved into a separate repository.
  • The Security Guide now has a specialized review team and has moved into a separate repository.
  • The long-form API reference documents have been re-purposed to focus on the API Complete Reference.
  • The User Guide now contains Database Service for OpenStack information.
  • The Command-Line Reference has been updated with new client releases and now contains additional chapters for the common OpenStack client, the trove-manage client, and the Data processing client (sahara).
  • The OpenStack Cloud Administrator Guide now contains information about Telemetry (ceilometer).