ReleaseNotes/Juno

= OpenStack 2014.2 (Juno) Release Notes =

General Upgrade Notes

 * The simplejson package is an optional requirement in most projects, so it is not listed in every project's requirements.txt file. However, if you are using it, e.g. for better performance with Python 2.6 on RHEL 6, you will need simplejson >= 2.2.0. See https://bugs.launchpad.net/oslo-incubator/+bug/1361230 for details.

Key New Features
The Juno integrated release includes three releases of OpenStack Swift: 2.0.0, 2.1.0, and 2.2.0. The changelog for these releases is available at https://github.com/openstack/swift/blob/2.2.0.rc1/CHANGELOG#L1-L173. Please refer to that document for release details.

Important new features are highlighted below. Please read the CHANGELOG and associated documentation.


 * Storage policies
 * Keystone v3 support
 * Server-side account-to-account copy
 * Better partition placement when adding a new server, zone, or region.
 * Zero-copy GET responses using splice
 * Parallel object auditor

Known Issues

 * None at this time

Upgrade Notes
As always, you can upgrade your Swift cluster with no downtime for end-users. Please refer to sample config files and documentation before every release.


 * There have been some logging changes that need to be called out. In all cases, well-behaved log processors will not be affected.
 * Storage node (account, container, object) logs now have the PID logged at the end of the log line.
 * Object daemons now send a user-agent string with their full name (e.g. "obj" is now "object").
 * Once an additional storage policy has been enabled, downgrading to Swift pre-2.0.0 will cause any additional storage policies to become unavailable.
 * As part of an effort to eventually move the default Swift ports to a non-IANA-assigned range, bind_port is now a required setting. Anyone currently setting the ports explicitly will not be affected. However, if you do not currently set the ports, please ensure that your *_server.conf has bind_port set to match your ring as part of your upgrade.
 * Note that storage policies include a new daemon, the container-reconciler.
 * TempURL default allowed methods config setting now also allows POST and DELETE. This means tempurls can be created for these verbs. It does not affect any existing tempurls.
 * A list of all updated, deprecated or removed options in swift can be found at: http://docs.openstack.org/trunk/config-reference/content/swift-conf-changes-master.html
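To illustrate the bind_port note above, a minimal sketch of the relevant setting (the file name and port value are examples; the port must match the one recorded in your ring):

```ini
# object-server.conf (example; applies likewise to account and container servers)
[DEFAULT]
# bind_port is now a required setting in Juno; set it to match your ring
bind_port = 6000
```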

Instance features

 * Allow users to specify an image to use for rescue instead of the original base image. launchpad specification
 * Allow images to specify if a config drive should be used. launchpad specification
 * Give users and administrators the ability to control the vCPU topology exposed to guests via flavors. launchpad specification
 * Attach all local disks during rescue. launchpad specification

Networking

 * Improve the nova-network code to allow per-network settings. launchpad specification
 * Allow deployers to add hooks which are informed as soon as networking information for an instance is changed. launchpad specification
 * Enable nova instances to be booted up with SR-IOV neutron ports. launchpad specification
 * Permit VMs to attach multiple interfaces to one network. launchpad specification
 * Preserve Neutron ports attached using "--nic port-id" when instance is terminated. review

Scheduling

 * Extensible Resource Tracking. The set of resources tracked by nova was hard-coded; this change makes it extensible, allowing plug-ins to track new types of resources for scheduling. launchpad specification
 * Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
 * Add support for host aggregates to scheduler filters. launchpad: disk, instances, and IO ops specification

Other

 * Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
 * Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
 * Include status information in API listings of hypervisor hosts. launchpad specification
 * Allow API callers to specify more than one status to filter by when listing services. launchpad specification
 * Add quota values to constrain the number and size of server groups a user can create. launchpad specification

Hyper-V

 * Support for differencing vhdx images. launchpad specification
 * Support for console serial logs. launchpad specification
 * Support soft reboot. launchpad specification

Ironic

 * Add a virt driver for Ironic. launchpad specification

libvirt

 * Performance improvements to listing instances on modern libvirts. launchpad specification
 * Allow snapshots of network backed disks. launchpad specification
 * Enable qemu memory balloon statistics for ceilometer reporting. launchpad specification
 * Add support for handing back unused disk blocks to the underlying storage system. launchpad specification
 * Meta-data about an instance is now recorded in the libvirt domain XML. This is intended to help administrators while debugging problems. launchpad specification
 * Support namespaces for LXC containers. launchpad specification
 * Copy-on-write cloning for RBD-backed disks. launchpad specification
 * Expose interactive serial consoles. launchpad specification
 * Allow controlled shutdown of guest operating systems during VM power off. launchpad specification
 * Intelligent NUMA node placement for guests. launchpad specification

vmware

 * Move the vmware driver to using the oslo vmware helper library. launchpad specification
 * Add support for network interface hot plugging to vmware. launchpad specification
 * Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMware driver. launchpad specification

Known Issues

 * When using libvirt, live snapshots are effectively disabled, due to this difficult-to-reproduce bug: https://bugs.launchpad.net/nova/+bug/1334398 (https://review.openstack.org/#/c/102643/)
 * Glance v2 and Keystone v3 are not tested with Nova in Juno.

Upgrade Notes

 * A list of all updated, deprecated or removed options in Nova can be found at: http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html
 * The nova-manage flavor subcommand is deprecated in Juno and will be removed in the 2015.1 (K) release: https://review.openstack.org/#/c/86122/
 * https://review.openstack.org/#/c/102212/
 * Minimum required libvirt version is now 0.9.11: https://review.openstack.org/#/c/58494/
 * Nova is now supporting the Cinder V2 API. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the Kilo release.
 * Nova talks to Cinder V1 in the gate (continuous integration testing).
 * Debug log output in python-novaclient has changed slightly to improve readability. The sha1 hash of the keystone token is now printed instead of the token itself - greatly shortening the amount of content being printed while still retaining the ability to determine token mismatch scenarios. In addition, some extra '\n' characters that were being added are removed. Double-check any log parsers!
 * libvirt.volume_drivers config param for nova.conf is deprecated, to be removed in the Lxxxx release. In general, this should affect only a small number of developers working on drivers. If this is you, the recommended approach is to continue your work inside a nova tree.
 * python 2.6 support is deprecated in Juno and will be removed in the Kilo 2015.1 release.

Key New Features

 * Asynchronous Processing
 * Pull of glance.store into its own library
 * Metadata Definitions Catalog
 * Restricted policy for downloading images.
 * Enhanced scrubber service that allows a single scrubber instance to serve multiple glance-api servers across nodes.

Upgrade Notes

 * A list of all updated, deprecated or removed options in Glance can be found at: http://docs.openstack.org/juno/config-reference/content/glance-conf-changes-juno.html
 * The ability to upload a public image is now admin-only by default. To continue to use the previous behaviour, edit the publicize_image flag in etc/policy.json to remove the role restriction.
 * The requirement and check for the UTF-8 charset on DB tables is now enforced; operators need to migrate tables and existing data to UTF-8 manually if glance-manage complains about it during the sync.
 * glance workers now default to the number of CPUs available if not explicitly specified in glance-api.conf and/or glance-registry.conf.
 * There is no upgrade impact for glance-api workers, since glance-api.conf previously hard-coded the workers value to 1; anyone upgrading will still get whatever value was set in glance-api.conf prior to this change. There is an upgrade impact for glance-registry workers, since glance-registry.conf did not hard-code the workers value to 1 before this change. Anyone upgrading without workers specified in glance-registry.conf will now be running multiple workers by default when they restart the glance registry service.
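As a sketch of the publicize_image note above, the etc/policy.json edit that restores the pre-Juno behaviour (an empty rule removes the role restriction; the Juno default restricts it to admin):

```json
{
    "publicize_image": ""
}
```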

Sahara
The OpenStack Data Processing project (Sahara) was formally included into the integrated release in Juno and Horizon includes broad support for managing your data processing. You can specify and build clusters to utilize several data types with user specified jobs while tracking the progress of those jobs.

Neutron Features
Neutron added several new features in Juno, including:
 * DVR (Distributed Virtual Routing)
 * L3 HA support
 * IPv6 subnet modes

Horizon provides support for these new features with the Juno release. These features provide much greater flexibility in specifying software defined networks.

An existing feature in Neutron that Horizon now supports is the MAC learning extension.

Glance Features
In Juno, Glance introduced the ability to manage a catalog of metadata definitions where users can register the metadata definitions to be used on various resource types including images, aggregates, and flavors. Support for viewing and editing the assignment of these metadata tags is included in Horizon.

Cinder Features
In a continued effort to provide fuller API support, several features supported by Cinder are now supported in Horizon in the Juno release. Users can now utilize swift to store volume backups from Horizon as well as restore volumes from these backups.

Other features of the Cinder API not previously supported by Horizon added in Juno include:
 * Enabling resetting the state of a snapshot
 * Enabling resetting the state of a volume
 * Supporting upload-to-image
 * Volume retype
 * QoS (quality of service) support

Trove
Trove supports potentially using numerous different datastores, e.g., mysql, redis, mongodb. Users can now select from the list of datastores supported by the cloud operator when creating their database instances.

Another addition is support for utilizing and restoring from incremental database backups.

To improve support for Neutron based clouds, when creating a database instance, the user can now specify the NIC for the database instance on creation allowing direct access to the instance by the user.

Nova
The new nova instance actions panel provides a list of all actions taken on all instances in the current project allowing users to view resulting errors or actions taken by other users on those instances.

Administrators now have the ability to evacuate instances off hypervisors which can aid in system maintenance by providing a mechanism to migrate all instances to other hosts.

Improved Plugin Support
The plugin system in Horizon continued to improve in the Juno release. Some of those improvements:


 * Support for adding plugin specific AngularJS modules
 * Support for adding static files, e.g., CSS, JS, images
 * Ability to add exceptions
 * Fixing ordering issues
 * Numerous other bug fixes

Enhanced RBAC support
In an ongoing effort to support richer role based access control (RBAC) in Horizon, the views for several more services were enhanced with RBAC checks to determine user access to actions. The newly supported services are compute, network and orchestration. These changes allow operators to implement finer grained access control than just "member" and "admin".

The identity panels (domains, projects, users, roles, groups) have also been converted to support RBAC at the view level. The identity panels have been moved from the admin dashboard into their own 'Identity' dashboard and accessibility is determined by policies alone. This is the first step toward consolidating the near duplicate content of the project and admin dashboards into single views supporting a wide range of roles.

UX Changes
In Juno, Horizon transitioned to utilizing Bootstrap v3. Horizon had been pinned to an older version of Bootstrap for several releases. This change now allows Horizon to pick up numerous bug fixes and overall improvements in the Bootstrap framework. The look and feel remains mainly consistent with the Havana release.

JavaScript Libraries Extracted
As part of the Horizon team's ongoing efforts to split the repository into more logical pieces, all the 3rd party JavaScript libraries that Horizon depends on have been removed from the Horizon code base and python xstatic packages have been utilized instead. The xstatic format allows for easy consumption by the Django framework Horizon is built on. Now JavaScript libraries are utilized like any other python dependency in Horizon.

Conversion from LESS to SCSS
The supported stylesheets in Horizon have been converted to utilize SCSS rather than LESS. The change was necessary due to a prevalent lack of support for LESS compilers in python. This change also allowed us to upgrade to Bootstrap 3, as parts of the Bootstrap 3 LESS stylesheets were not supported by existing python based LESS compilers.

Rendering issues in extensions
The conversion to utilizing Bootstrap v3 can cause content extensions written on top of Horizon to have rendering issues. Most of these are fixed by a simple CSS class name substitutions. These issues are primarily seen with buttons and panel content widths.

Online Compression
With the move to SCSS, there may be issues with utilizing online compression in non-DEBUG mode in Horizon. Offline compression continues to work as in previous releases.

Neutron L3 HA
The HA property is updatable in the UI; however, the Neutron API does not allow the update operation because toggling HA support does not work. https://bugs.launchpad.net/horizon/+bug/1378525

Upgrade Notes

 * The FLAVOR_EXTRA_KEYS setting is deprecated. Its use has been replaced with direct calls to the nova and glance APIs as appropriate.

Key New Features

 * Keystone now has experimental support for Keystone-to-Keystone federation, where one instance acts as an Identity Provider, and the other a Service Provider.
 * PKIZ is a new token provider available for users of PKI tokens, which simply adds zlib-based compression to traditional PKI tokens.
 * The hashing algorithm used for PKI tokens has been made configurable (the default is still MD5, but the Keystone team recommends that deployments migrate to SHA256).
 * Identity-driver-configuration-per-domain now supports Internet domain names of arbitrary hierarchical complexity (for example, ).
 * The LDAP identity backend now supports  as an attribute of users.
 * Identity API v3 requests are now validated via JSON Schema.
 * In the case of multiple identity backends, Keystone can now map arbitrary resource IDs to arbitrary backends.
 * has been moved into its own repository.
 * Identity API v3 now supports a discrete call to retrieve a service catalog.
 * Federated authentication events and local role assignment operations now result in CADF (audit) notifications.
 * Keystone can now associate a given policy blob with one or more endpoints.
 * Keystone now provides JSON Home documents on the root API endpoints in response to  headers.
 * Hiding endpoints from client's service catalogs is now more easily manageable via.
 * The credentials collection API is now filterable per associated user.
 * New, generic API endpoints are available for retrieving authentication-related data, such as a service catalog, available project scopes, and available domain scopes.
 * Keystone now supports mapping the user  attribute to the   attribute in LDAP (and inverting the corresponding boolean value accordingly).
 * A CA certificate file is now configurable for LDAPS connections.
 * The templated catalog backend now supports generating service catalogs for Identity API v3.
 * Service names were added to the v3 service catalog.
 * Services can now be filtered by name.

LDAP paged search results don't work with python-ldap 2.4
When using an LDAP backend with paged search results enabled, AttributeErrors will be encountered if python-ldap 2.4 is being used. This is due to a backwards incompatible API change in python-ldap. The issue can be worked around in a few ways:
 * Disabling paging of search results by setting page_size to 0 in the [ldap] section of keystone.conf.
 * Downgrade python-ldap to version 2.3.x.

A fix for this issue has been proposed, which is expected to be made available in a stable update for Juno. For more details see https://bugs.launchpad.net/keystone/+bug/1381768
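The first workaround above can be sketched as a keystone.conf fragment:

```ini
[ldap]
# Disable paged search results to avoid the AttributeError seen with
# python-ldap 2.4 (alternative: downgrade to python-ldap 2.3.x)
page_size = 0
```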

Upgrade Notes

 * To provide a simpler out-of-the-box experience, the default token provider is now UUID instead of PKI.
 * Database migrations for releases prior to Havana have been dropped, meaning that you must upgrade to the Juno release from either a Havana or Icehouse deployment.
 * A comprehensive list of all updated, deprecated or removed options in Keystone can be found at: http://docs.openstack.org/juno/config-reference/content/keystone-conf-changes-juno.html
 * All  methods are now deprecated.
 * LDAP configuration options that previously contained the deprecated  terminology have been superseded by options using the term.
 * Proxy methods from the identity backend to the assignment backend (created to provide backwards compatibility as a result of the split of the Assignment backend from the Identity backend), have been removed. This should only affect custom, out-of-tree API extensions.
 * Loading authentication plugins solely by class name in  is now deprecated in favor of loading them by   pairs, and then defining the sequence of authentication methods as a list.
 * In-tree token drivers have been moved to  . Proxy objects exist to maintain compatibility. If a non-default value is used, it is recommended the value of the   option in the   section of   is updated to use the new location.
 * All KVS backends besides the  driver have been formally deprecated.
 * LDAP/AD configuration: All configuration options containing the term "tenant" have been deprecated in favor of similarly named configuration options using the term "project" (for example,  has been replaced by  ).

Key New Features

 * DB migration refactor and new timeline
 * Distributed Virtual Router Support (DVR)
 * Full IPv6 support for tenant networks
 * High Availability for the L3 Agent
 * ipset support for security groups in place of iptables (this option is configurable)
 * L3 agent performance improvements
 * Migration to oslo.messaging library for RPC communication.
 * Security group rules for devices RPC call refactoring (a huge performance improvement)
 * New Plugins supported in Juno include the following:
 * A10 Networks LBaaS driver for the LBaaS V1 API
 * Arista L3 routing plugin
 * Big Switch L3 routing plugin
 * Brocade L3 routing plugin
 * Cisco APIC ML2 Driver (including a L3 routing plugin).
 * Cisco CSR L3 routing plugin
 * Freescale SDN ML2 Mechanism Driver
 * Nuage Networks ML2 Mechanism Driver
 * SR-IOV capable NIC ML2 Mechanism Driver
 * OpenContrail Neutron Plugin

Known Issues

 * This is the first release for DVR and HA L3. The Neutron team desires to designate these features as production ready in Kilo and requests that deployers test on non-critical workloads and report any issues.
 * FWaaS is still labeled as experimental, as it does not allow you to have more than one FW per tenant.

Upgrade Notes
 * DB migration from the previous releases (Icehouse or Havana):
 * In the Icehouse and Havana releases, the db migration operation was optional. If your Neutron database is not stamped (i.e., there is no db migration version info), please make sure to "stamp icehouse" before running the upgrade db migration to Juno.
 * To check if your database is stamped, run the following command: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file current
 * If the output of the current version is None, please run: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file stamp icehouse
 * and then run the db migration for upgrading to Juno: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file upgrade juno
 * A list of all updated, deprecated or removed options in neutron can be found at: http://docs.openstack.org/juno/config-reference/content/neutron-conf-changes-juno.html
 * Attribute-level policies dependent on resources are no longer enforced, meaning some older policies from Icehouse are no longer needed (e.g. "get_port:binding:vnic_type": "rule:admin_or_owner").
 * The following plugins are deprecated in Juno:
 * Cisco Nexus Sub-Plugin (The Nexus 1000V Sub-Plugin is still retained and supported in Juno).
 * Mellanox Plugin
 * Ryu Plugin
 * XML support in the API is deprecated. Users and deployers should migrate to JSON for API interactions as soon as possible since the XML support will be removed in the Kilo (2015.1) release.
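The migration steps above, collected into one sequence (the second --config-file argument, the plugin configuration file, is a placeholder; substitute the one your deployment uses):

```shell
# Check the current migration revision of the Neutron database
neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file <plugin-config-file> current

# If the reported version is None, stamp the database as Icehouse first
neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file <plugin-config-file> stamp icehouse

# Then run the upgrade to Juno
neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file <plugin-config-file> upgrade juno
```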

Key New Features

 * Support for Volume Replication.
 * Support for Consistency Groups and Snapshots of Consistency Groups.
 * Support for Volume Pools.
 * Completion of i18n-enablement
 * Honor Glance protected properties in Image Upload
 * Enable ability to restrict bandwidth usage on volume-copy operations
 * Add Volume Num Weighter Scheduling

New Drivers/Plugins

 * Datera
 * Fujitsu ETERNUS
 * Fusion IO
 * Hitachi HBSD
 * Huawei
 * Nimble
 * Prophetstor
 * Pure
 * XtremIO
 * Oracle ZFS

Limitations/Known Issues
 * The newly introduced 'Pool' terminology is a logical concept describing a set of storage resources that can be used to serve core Cinder requests, e.g. volumes/snapshots. This notion is almost identical to a Cinder Volume Backend, as it has similar attributes (capacity, capability). The main difference is that a Pool cannot exist on its own; it must reside in a Volume Backend. One Volume Backend can have multiple Pools, but Pools don't have sub-Pools (and even if they did, sub-Pools would not be exposed to Cinder yet). A Pool has a unique name within its backend's namespace, which means a Volume Backend can't have two Pools using the same name. The introduction of Pools has some user-visible impact because it changes the granularity of scheduling a volume from 'Backend' to 'Pool'. For example, migrating or managing a volume now has to include the pool in the 'host' parameter in order to work: cinder manage --source-name X --name newX host@backend#POOL and cinder migrate UUID host@backend#POOL

 * To find out what pools a backend has, use the following API extension to query the info (admin role required). Pool names only: GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools - Detailed pool info: GET http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools\?detail\=True


 * The 'retyping' or affinity filter hint *may not* work as before. Cinder has a special code path for legacy volumes (volumes created before Juno) to allow potential migration between pools even when migration_policy is set to 'never'. But not every driver can move volumes from one pool to another at minimum cost, so behavior is inconsistent between drivers (the same command may take a totally different amount of time to finish), which can be confusing.
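The pool-listing calls described in this section can be sketched with curl (the endpoint, tenant ID, and token variable are placeholders; admin role required):

```shell
# Pool names only
curl -H "X-Auth-Token: $TOKEN" \
    "http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools"

# Detailed pool info (capacity, capabilities)
curl -H "X-Auth-Token: $TOKEN" \
    "http://CINDER_API_ENDPOINT/v2/TENANT_ID/scheduler-stats/get_pools?detail=True"
```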

Upgrade Notes

 * A list of all updated, deprecated or removed options in Cinder can be found at: http://docs.openstack.org/trunk/config-reference/content/cinder-conf-changes-juno.html
 * Nova is now supporting the Cinder V2 API. The Cinder V1 API is deprecated in Juno and Nova will switch over to Cinder V2 by default in the "L" release. You need to update the cinder_catalog_info config option in nova to 'volumev2:cinder:internalURL' to have Nova use the cinder v2 endpoint, in addition to the cinder v2 endpoint being available in the keystone catalog.

Key New Features

 * Support for partitioning metric collection load across horizontally scaled-out central agents
 * New method of partitioning alarm evaluation load using tooz coordination, as opposed to a hand-crafted protocol
 * Much improved SQLAlchemy storage performance & scalability, so that MySQL or PostgreSQL can be used as the metering store for PoCs or small deployments
 * Support for hardware-oriented monitoring of IPMI sensors via notifications from either Ironic or a new standalone agent
 * More flexible & efficient SNMP monitoring:
 * batching queries for multiple SNMP metrics into a single call to each daemon
 * dynamic discovery of nodes deployed by TripleO for SNMP polling
 * the ability to more easily extend the range of SNMP metrics that ceilometer gathers
 * the ability to derive new metrics from arithmetic transformations applied to multiple primary metrics
 * Option to split off the alarms persistence into a separate database
 * Option to use notifications instead of RPC for metering messages
 * Metering of Neutron networking services: LBaaS, FWaaS & VPNaaS
 * New XenAPI compute inspector
 * Support for persisting events via the MongoDB & Hbase storage drivers (previously limited to SQLAlchemy)
 * Support for per-device metering of instance disks
 * Use of ceilometer as a collector for os-profiler data
 * New Telemetry section of the Cloud Administrator Guide

Known Issues

 * 1381600: The new  fails to emit any samples when it encounters unparseable data from.

Upgrade Notes

 * A list of all updated, deprecated or removed options in ceilometer can be found at: http://docs.openstack.org/trunk/config-reference/content/ceilometer-conf-changes-master.html

Key New Features

 * Recovery from failures during stack updates
 * API to cancel and roll back an in-progress stack update
 * Implementation of new resource types:
 * OS::Glance::Image
 * OS::Heat::SwiftSignal
 * Provides the option to store Wait Condition (and Software Deployment) data in Swift
 * OS::Heat::StructuredDeployments
 * Groups code for multiple lifecycle events into a single deployment resource
 * OS::Heat::SoftwareDeployments
 * Provides a way of avoiding circular dependencies when deploying an interdependent cluster of servers
 * OS::Heat::SoftwareComponent
 * OS::Nova::ServerGroup
 * OS::Sahara::NodeGroupTemplate
 * OS::Sahara::ClusterTemplate
 * Remember the previously-supplied parameters when updating a stack
 * Improved scalability
 * Improved visibility into trees of nested stacks

Known Issues
None yet

Upgrade Notes

 * A list of all updated, deprecated or removed options in heat can be found at: http://docs.openstack.org/juno/config-reference/content/heat-conf-changes-master.html

Key New Features

 * Support for Asynchronous Replication (master-slave replicas) between provisioned mysql instances.
 * Introduction of a new Clustering API with initial support for MongoDB clusters.
 * Support for deploying Trove on an OpenStack solution that is using Neutron for networking. Prior to this, only nova-network was supported.
 * Support for provisioning PostgreSQL datastore instances.
 * Backup and Restore support for Couchbase.
 * Support to optionally restrict the Cinder backend used for Trove volumes.
 * Support for defining custom datastore configuration parameters in the Trove database (using mgmt API).
 * The ability to list all datastore types and versions in a single call

Other Incremental Improvements

 * Logging audit to improve log levels throughout the trove components.
 * The extensions loading mechanism was improved by adding support for stevedore.
 * The ability to support volumes for data is now on a per-datastore basis.
 * Created and updated timestamps and instance count were added to configuration groups list and details calls.

Known Issues

 * 1333852: Trove does not support flavor UUIDs -- the Trove flavors API requires flavors with a numerical ID in order to be consistent with the API response for icehouse Trove.

Upgrade Notes

 * trove_api_workers and trove_conductor_workers will now be equal to the number of CPUs available by default if not explicitly specified in the trove configuration files.
 * Anyone upgrading to this change that does not have trove_api_workers or trove_conductor_workers specified in the trove configuration files will now be running multiple API and conductor workers by default when they restart the respective trove services.
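To keep the pre-Juno single-worker behaviour described above, the worker counts can be pinned explicitly; a sketch, assuming the options live in the [DEFAULT] section of the respective trove configuration files:

```ini
# trove.conf / trove-conductor.conf (example)
[DEFAULT]
trove_api_workers = 1
trove_conductor_workers = 1
```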

New Key Features

 * Data processing UI was fully merged into OpenStack Dashboard (horizon).
 * Support of CDH 5.x was added.
 * Support of Apache Spark was added; supported versions are 0.9.1 and 1.0.0. The elastic data processing (EDP) engine was significantly refactored to support non-Oozie workflow engines.
 * Support of Apache Hadoop 2.4.1 was added in addition to existing 1.2.1 and 2.3.0. Version 2.3.0 is deprecated in Juno.
 * Support of multi region deployments.
 * Hadoop Swift authentication using keystone trust mechanism. Now Hadoop can access data in Swift without storing credentials in config files.
 * Ceilometer integration was added. Now Sahara notifies Ceilometer about all cluster state changes.
 * Cluster provisioning error handling was improved. If something goes wrong during scaling, the cluster will roll back to its original state.
 * Added the ability to specify security groups for a node group. Sahara can also automatically create a security group with only the required ports open.
 * Implemented distributed mode for Sahara: the sahara-all process is decoupled into sahara-api and sahara-engine. You can run several instances of sahara-api and sahara-engine on different hosts. Note that this feature's implementation is considered to be in an alpha state.

Known Issues

 * Bug 1271349: Sahara requires root privileges to access VMs via namespaces.

Main binary renamed to sahara-all
Please, note that you should use `sahara-all` instead of `sahara-api` to start the All-In-One Sahara.

sahara.conf upgrade
We've migrated from custom auth_token middleware config options to the common config options. To update your config file you should replace the following old config opts with the new ones.


 * "os_auth_protocol", "os_auth_host", "os_auth_port" -> "[keystone_authtoken]/auth_uri" and "[keystone_authtoken]/identity_uri"
 * "os_admin_username" -> "[keystone_authtoken]/admin_user"
 * "os_admin_password" -> "[keystone_authtoken]/admin_password"
 * "os_admin_tenant_name" -> "[keystone_authtoken]/admin_tenant_name"
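The option migration above can be sketched as a sahara.conf fragment (host names and credentials are placeholders):

```ini
[keystone_authtoken]
# Replaces os_auth_protocol / os_auth_host / os_auth_port
auth_uri = http://127.0.0.1:5000/v2.0/
identity_uri = http://127.0.0.1:35357/
# Replaces os_admin_username / os_admin_password / os_admin_tenant_name
admin_user = sahara
admin_password = secret
admin_tenant_name = service
```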

We've replaced the oslo code from sahara.openstack.common.db with usage of the oslo.db library.

Also, the sqlite database is no longer supported. Please use the MySQL or PostgreSQL db backends for Sahara. Sqlite support was dropped because it doesn't support (and is not going to support, see http://www.sqlite.org/omitted.html) the ALTER COLUMN and DROP COLUMN commands required for DB migrations between versions.

You can find more info about config file options in Sahara repository in file "etc/sahara/sahara.conf.sample".

Sahara Dashboard was merged into OpenStack Dashboard
The Sahara Dashboard is not available in the Juno release. Instead, its functionality is provided by the OpenStack Dashboard out of the box. The Sahara UI is available in the OpenStack Dashboard under the "Project" -> "Data Processing" tab.

Note that you have to properly register Sahara in Keystone in order for Sahara UI in the Dashboard to work.

VM user name changed for HEAT infrastructure engine
We've updated the Heat infrastructure engine ("infrastructure_engine=heat") to use the same rules for the instance user name as the direct engine. Before this change, the user name for VMs created by Sahara using the Heat engine was always 'ec2-user'. Now the user name is taken from the image registry, as described in the documentation.

Note that this change breaks Sahara backward compatibility for clusters created using the Heat infrastructure engine before the change. Those clusters will continue to operate, but it is not recommended to perform scale operations on them.

Anti affinity implementation changed
Starting with the Juno release, the anti-affinity feature is implemented using server groups. There should not be much difference in Sahara behavior from the user's perspective, but there are internal changes:


 * A server group object will be created if the anti-affinity feature is enabled.
 * The new implementation doesn't allow several affected instances on the same host even if they don't have common processes. So, if anti-affinity is enabled for the 'datanode' and 'tasktracker' processes, the previous implementation allowed an instance with the 'datanode' process and another instance with the 'tasktracker' process on one host; the new implementation guarantees that the instances will be on different hosts.

Note that the new implementation applies to new clusters only. The old implementation will still be applied if a user scales a cluster created in Icehouse.

OpenStack Documentation

 * This release, the OpenStack Foundation funded a five-day book sprint to write the new OpenStack Architecture Design Guide. It offers architectures for general purpose, compute-focused, storage-focused, network-focused, multi-site, hybrid, massively scalable, and specialized clouds.
 * The Install Guides have had a lot of cleanup and standardization: they use a common message queue (RabbitMQ), replace openstack-config (crudini) commands with config file editing for improved learning opportunities and consistency, reference a generic SQL database so that MariaDB or MySQL can be substituted, and replace auth_port and auth_protocol with identity_uri, and auth_host with auth_uri, throughout. The Install Guides are thoroughly tested on each distribution and continuously published until the official release packages are available to everyone.
 * The High Availability Guide now has a separate review team and has moved into a separate repository.
 * The Security Guide now has a specialized review team and has moved into a separate repository.
 * The long-form API reference documents have been re-purposed to focus on the API Complete Reference.
 * The User Guide now contains Database Service for OpenStack information.
 * The Command-Line Reference has been updated with new client releases and now contains additional chapters for the common OpenStack client, the trove-manage client, and the Data processing client (sahara).
 * The OpenStack Cloud Administrator Guide now contains information about Telemetry (ceilometer).