
= OpenStack 2014.1 (Icehouse) Release Notes =

Translated release notes are available in Japanese. More translations are on the way.

General Upgrade Notes

 * Windows packagers should use pbr 0.8 to avoid bug 1294246
 * The log-config option has been renamed log-config-append, and will now append any configuration specified, rather than completely overriding any other settings as currently occurs. (https://bugs.launchpad.net/oslo/+bug/1169328, https://bugs.launchpad.net/oslo/+bug/1238349)
 * To minimize downtime, OpenStack Networking must be upgraded and neutron-metadata-agent restarted before OpenStack Compute is upgraded. Compute must be able to verify the X-Tenant-ID which is now passed by the neutron-metadata-agent service. (https://bugs.launchpad.net/neutron/+bug/1235450)

Key New Features

 * Discoverable capabilities: A Swift proxy server now responds to requests to /info by default (this can be turned off). The response includes information about the cluster and can be used by clients to determine which features the cluster supports. A single client can therefore communicate with multiple Swift clusters and take advantage of the features available in each.

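As a quick illustration, a cluster's capabilities can be inspected with a plain unauthenticated GET against the proxy (the host and port below are made up):

```shell
# Ask the proxy which features this cluster supports
curl -s http://swift.example.com:8080/info
```

The JSON returned describes the cluster (e.g. limits and enabled middleware), which clients can use for feature detection.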

 * Generic way to persist system metadata: Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.


 * Account-level ACLs and ACL format v2: Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html

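As a sketch of the TempAuth reference implementation described above (the header name and JSON keys follow TempAuth's v2 ACL format; the account and user names are made up):

```shell
# Grant another account's user read-only access via the account-level ACL header
curl -i -X POST http://swift.example.com:8080/v1/AUTH_myaccount \
  -H "X-Auth-Token: $ADMIN_TOKEN" \
  -H 'X-Account-Access-Control: {"read-only": ["otheracct:user"]}'
```

Other auth systems are free to interpret the JSON dictionary differently, as the bullet above notes.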

 * Object replication ssync (an rsync alternative): A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.


 * Automatic retry on read failures: If a source object server times out during a read, the proxy retries the request against another replica with a modified range. Drive failures during a client request are therefore not visible to the end-user client.


 * Work on upcoming storage policies

Known Issues
None known at this time

Upgrade Notes
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.

As always, Swift can be upgraded with no downtime.

Upgrade Support

 * Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.

Hyper-V

 * Added RDP console support.

Libvirt (KVM)

 * The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the os_command_line key in the image metadata as stored in Glance, if a value for the key was provided. Otherwise the default kernel arguments are used.
 * The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. Virtio SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block and aiming to provide improved scalability and performance.
 * The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device. It allows the compute node to provide entropy to the compute instances in order to fill their entropy pool. The default entropy device used is /dev/random, however use of a physical hardware RNG device attached to the host is also possible. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.
 * The Libvirt driver now allows instances to be configured with a video driver other than the default (cirrus). This allows the specification of different video driver models, different amounts of video RAM, and different numbers of heads. These values are configured by setting the <tt>hw_video_model</tt>, <tt>hw_video_vram</tt>, and <tt>hw_video_head</tt> properties in the image metadata. Currently supported video driver models are <tt>vga</tt>, <tt>cirrus</tt>, <tt>vmvga</tt>, <tt>xen</tt> and <tt>qxl</tt>.
 * Watchdog support has been added to the Libvirt driver. The watchdog device used is <tt>i6300esb</tt>. It is enabled by setting the <tt>hw_watchdog_action</tt> property in the image properties or flavor extra specifications (<tt>extra_specs</tt>) to a value other than <tt>disabled</tt>. Supported <tt>hw_watchdog_action</tt> property values, which denote the action for the watchdog device to take in the event of an instance failure, are <tt>poweroff</tt>, <tt>reset</tt>, <tt>pause</tt>, and <tt>none</tt>.
 * The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.
 * The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.
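The image-metadata keys described in the bullets above are set through Glance; a sketch with a made-up image ID and illustrative values:

```shell
# Kernel arguments passed to instances booted from this image (value is illustrative)
glance image-update $IMAGE_ID --property os_command_line='console=ttyS0 root=/dev/vda'

# Select the qxl video model and a watchdog that resets the instance on failure
glance image-update $IMAGE_ID --property hw_video_model=qxl \
    --property hw_watchdog_action=reset
```

The Virtio RNG device is enabled the same way, via the hw_rng image property mentioned above.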

VMware

 * The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.
 * The VMware Compute drivers now support booting an instance from an ISO image.
 * The VMware Compute drivers now support the aging of cached images.

XenServer

 * All XenServer-specific configuration items have been renamed and moved to a [xenserver] section in nova.conf. The old names still work in this release but are now deprecated, and support for them may be removed in a future release of Nova.
 * Added initial support for PCI passthrough
 * Maintained group B status through the introduction of the XenServer CI
 * Improved support for ephemeral disks (including migration and resize up of multiple ephemeral disks)
 * Support for vcpu_pin_set, essential when you pin CPU resources to Dom0
 * Numerous performance and stability enhancements

API

 * In OpenStack Compute, the <tt>OS-DCF:diskConfig</tt> API attribute is no longer supported in V3 of the nova API.
 * The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.
 * The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the <tt>ExtendedServicesDelete</tt> API extension.
 * Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.
 * The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3, which allows non-unique tenant names.
 * The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the <tt>nova hypervisor-show</tt> command.

Scheduler

 * The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.
 * A new scheduler filter, <tt>AggregateImagePropertiesIsolation</tt>, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys <tt>aggregate_image_properties_isolation_namespace</tt> and <tt>aggregate_image_properties_isolation_separator</tt> determine which image properties are examined by the filter.
 * Weight normalization in OpenStack Compute (see https://review.openstack.org/#/c/27160/): weights are now normalized, so there is no need to inflate multipliers artificially. The maximum weight a weigher will assign to a node is 1.0 and the minimum is 0.0.
 * The scheduler now supports server groups with affinity and anti-affinity policies; servers deployed as part of a group are placed according to the group's predefined policy.
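The normalization described above maps raw weigher outputs onto [0.0, 1.0]; a minimal sketch of the idea (illustrative only, not the actual Nova code):

```python
def normalize(weights):
    """Map raw weights onto [0.0, 1.0]; the best host gets 1.0, the worst 0.0."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        # All hosts weigh the same: no host is preferable to another.
        return [0.0 for _ in weights]
    return [(w - lo) / float(hi - lo) for w in weights]

# Raw weigher outputs of any magnitude end up on the same 0.0-1.0 scale,
# so a multiplier now expresses relative importance, not magnitude correction.
print(normalize([10.0, 250.0, 130.0]))
```

Because every weigher's output lands on the same scale, multipliers only need to express the relative importance of each weigher.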

Other Features

 * Notifications are now generated upon the creation and deletion of keypairs.
 * Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode or taken out of maintenance mode.
 * Compute services are now able to shut down gracefully: when a service shutdown is requested, processing of new requests is disabled while requests already in progress are allowed to complete before the service terminates.
 * The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the <tt>running_deleted_instance_action</tt> configuration key. A new <tt>shutdown</tt> value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.
 * File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the <tt>inject_key</tt> and <tt>inject_partition</tt> configuration keys in <tt>/etc/nova/nova.conf</tt> and restart the Compute services. The file injection mechanism is likely to be disabled in a future release.
 * A number of changes have been made to the expected format <tt>/etc/nova/nova.conf</tt> configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.
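One of the bullets above notes that file injection is now disabled by default; if it is still needed, it can be re-enabled in /etc/nova/nova.conf. A sketch (whether these keys live in [DEFAULT] or the new [libvirt] group depends on your configuration layout, per the option-group changes described above):

```ini
[libvirt]
# Re-enable SSH key injection
inject_key = true
# -1 lets the driver locate the partition to inject into; -2 disables injection
inject_partition = -1
```

Restart the Compute services after changing these keys, as the note above describes.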

Known Issues

 * OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:
 * Keystone v2
 * Cinder v1
 * Glance v1

Upgrade Notes

 * Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells schedulers used raw weights (i.e. the weighers returned any value, and that was the value used by the weighing process).
 * If you were using several weighers for Compute:
 * If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were inflated artificially in order to make an important weigher prevail against any other weigher that returned large raw values. You need to check your weighers and take into account that now the maximum and minimum weights for a host will always be <tt>1.0</tt> and <tt>0.0</tt>.
 * If you are using cells:
 * <tt>nova.cells.weights.mute_child.MuteChild</tt>: The weigher returned the value <tt>mute_weight_value</tt> as the weight assigned to a child that didn't update its capabilities in a while. It can still be used, but will have no effect on the final weight that will be computed by the weighing process, that will be <tt>1.0</tt>. If you are using this weigher to mute a child cell you need to adjust the <tt>mute_weight_multiplier</tt>.
 * <tt>nova.cells.weights.weight_offset.WeightOffsetWeigher</tt> introduces a new configuration option, <tt>offset_weight_multiplier</tt>, which has to be adjusted. In previous releases the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will now get a weight of <tt>1.0</tt>. If you were using this weigher and relying on its value to make it prevail over other weighers, you need to adjust its multiplier accordingly.
 * An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker
 * https://review.openstack.org/50668 - The <tt>compute_api_class</tt> configuration option has been removed.
 * https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:
 * <tt>service_quantum_metadata_proxy</tt>
 * <tt>quantum_metadata_proxy_shared_secret</tt>
 * <tt>use_quantum_default_nets</tt>
 * <tt>quantum_default_tenant_id</tt>
 * <tt>vpn_instance_type</tt>
 * <tt>default_instance_type</tt>
 * <tt>quantum_url</tt>
 * <tt>quantum_url_timeout</tt>
 * <tt>quantum_admin_username</tt>
 * <tt>quantum_admin_password</tt>
 * <tt>quantum_admin_tenant_name</tt>
 * <tt>quantum_region_name</tt>
 * <tt>quantum_admin_auth_url</tt>
 * <tt>quantum_api_insecure</tt>
 * <tt>quantum_auth_strategy</tt>
 * <tt>quantum_ovs_bridge</tt>
 * <tt>quantum_extension_sync_interval</tt>
 * <tt>vmwareapi_host_ip</tt>
 * <tt>vmwareapi_host_username</tt>
 * <tt>vmwareapi_host_password</tt>
 * <tt>vmwareapi_cluster_name</tt>
 * <tt>vmwareapi_task_poll_interval</tt>
 * <tt>vmwareapi_api_retry_count</tt>
 * <tt>vnc_port</tt>
 * <tt>vnc_port_total</tt>
 * <tt>use_linked_clone</tt>
 * <tt>vmwareapi_vlan_interface</tt>
 * <tt>vmwareapi_wsdl_loc</tt>
 * The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/
 * The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/
 * libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.
 * rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)
 * Icehouse brings libguestfs as a requirement. Installing icehouse dependencies on a system currently running havana may cause the havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.
 * Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.
 * Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly.  If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.
 * Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting vif_plugging_is_fatal=False and vif_plugging_timeout=0. Recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.
 * Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade the controller infrastructure (everything except nova-compute) first, but set the [upgrade_levels]/compute=icehouse-compat option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to retain the default and restart the controller services.
 * The following configuration options are deprecated in this release; see <tt>nova.conf.sample</tt> for their replacements. They are listed as <tt>[GROUP]/option</tt>:
 * <tt>[DEFAULT]/rabbit_durable_queues</tt>
 * <tt>[rpc_notifier2]/topics</tt>
 * <tt>[DEFAULT]/log_config</tt>
 * <tt>[DEFAULT]/logfile</tt>
 * <tt>[DEFAULT]/logdir</tt>
 * <tt>[DEFAULT]/base_dir_name</tt>
 * <tt>[DEFAULT]/instance_type_extra_specs</tt>
 * <tt>[DEFAULT]/db_backend</tt>
 * <tt>[DEFAULT]/sql_connection</tt>
 * <tt>[DATABASE]/sql_connection</tt>
 * <tt>[sql]/connection</tt>
 * <tt>[DEFAULT]/sql_idle_timeout</tt>
 * <tt>[DATABASE]/sql_idle_timeout</tt>
 * <tt>[sql]/idle_timeout</tt>
 * <tt>[DEFAULT]/sql_min_pool_size</tt>
 * <tt>[DATABASE]/sql_min_pool_size</tt>
 * <tt>[DEFAULT]/sql_max_pool_size</tt>
 * <tt>[DATABASE]/sql_max_pool_size</tt>
 * <tt>[DEFAULT]/sql_max_retries</tt>
 * <tt>[DATABASE]/sql_max_retries</tt>
 * <tt>[DEFAULT]/sql_retry_interval</tt>
 * <tt>[DATABASE]/reconnect_interval</tt>
 * <tt>[DEFAULT]/sql_max_overflow</tt>
 * <tt>[DATABASE]/sqlalchemy_max_overflow</tt>
 * <tt>[DEFAULT]/sql_connection_debug</tt>
 * <tt>[DEFAULT]/sql_connection_trace</tt>
 * <tt>[DATABASE]/sqlalchemy_pool_timeout</tt>
 * <tt>[DEFAULT]/memcache_servers</tt>
 * <tt>[DEFAULT]/libvirt_type</tt>
 * <tt>[DEFAULT]/libvirt_uri</tt>
 * <tt>[DEFAULT]/libvirt_inject_password</tt>
 * <tt>[DEFAULT]/libvirt_inject_key</tt>
 * <tt>[DEFAULT]/libvirt_inject_partition</tt>
 * <tt>[DEFAULT]/libvirt_vif_driver</tt>
 * <tt>[DEFAULT]/libvirt_volume_drivers</tt>
 * <tt>[DEFAULT]/libvirt_disk_prefix</tt>
 * <tt>[DEFAULT]/libvirt_wait_soft_reboot_seconds</tt>
 * <tt>[DEFAULT]/libvirt_cpu_mode</tt>
 * <tt>[DEFAULT]/libvirt_cpu_model</tt>
 * <tt>[DEFAULT]/libvirt_snapshots_directory</tt>
 * <tt>[DEFAULT]/libvirt_images_type</tt>
 * <tt>[DEFAULT]/libvirt_images_volume_group</tt>
 * <tt>[DEFAULT]/libvirt_sparse_logical_volumes</tt>
 * <tt>[DEFAULT]/libvirt_images_rbd_pool</tt>
 * <tt>[DEFAULT]/libvirt_images_rbd_ceph_conf</tt>
 * <tt>[DEFAULT]/libvirt_snapshot_compression</tt>
 * <tt>[DEFAULT]/libvirt_use_virtio_for_bridges</tt>
 * <tt>[DEFAULT]/libvirt_iscsi_use_multipath</tt>
 * <tt>[DEFAULT]/libvirt_iser_use_multipath</tt>
 * <tt>[DEFAULT]/matchmaker_ringfile</tt>
 * <tt>[DEFAULT]/agent_timeout</tt>
 * <tt>[DEFAULT]/agent_version_timeout</tt>
 * <tt>[DEFAULT]/agent_resetnetwork_timeout</tt>
 * <tt>[DEFAULT]/xenapi_agent_path</tt>
 * <tt>[DEFAULT]/xenapi_disable_agent</tt>
 * <tt>[DEFAULT]/xenapi_use_agent_default</tt>
 * <tt>[DEFAULT]/xenapi_login_timeout</tt>
 * <tt>[DEFAULT]/xenapi_connection_concurrent</tt>
 * <tt>[DEFAULT]/xenapi_connection_url</tt>
 * <tt>[DEFAULT]/xenapi_connection_username</tt>
 * <tt>[DEFAULT]/xenapi_connection_password</tt>
 * <tt>[DEFAULT]/xenapi_vhd_coalesce_poll_interval</tt>
 * <tt>[DEFAULT]/xenapi_check_host</tt>
 * <tt>[DEFAULT]/xenapi_vhd_coalesce_max_attempts</tt>
 * <tt>[DEFAULT]/xenapi_sr_base_path</tt>
 * <tt>[DEFAULT]/target_host</tt>
 * <tt>[DEFAULT]/target_port</tt>
 * <tt>[DEFAULT]/iqn_prefix</tt>
 * <tt>[DEFAULT]/xenapi_remap_vbd_dev</tt>
 * <tt>[DEFAULT]/xenapi_remap_vbd_dev_prefix</tt>
 * <tt>[DEFAULT]/xenapi_torrent_base_url</tt>
 * <tt>[DEFAULT]/xenapi_torrent_seed_chance</tt>
 * <tt>[DEFAULT]/xenapi_torrent_seed_duration</tt>
 * <tt>[DEFAULT]/xenapi_torrent_max_last_accessed</tt>
 * <tt>[DEFAULT]/xenapi_torrent_listen_port_start</tt>
 * <tt>[DEFAULT]/xenapi_torrent_listen_port_end</tt>
 * <tt>[DEFAULT]/xenapi_torrent_download_stall_cutoff</tt>
 * <tt>[DEFAULT]/xenapi_torrent_max_seeder_processes_per_host</tt>
 * <tt>[DEFAULT]/use_join_force</tt>
 * <tt>[DEFAULT]/xenapi_ovs_integration_bridge</tt>
 * <tt>[DEFAULT]/cache_images</tt>
 * <tt>[DEFAULT]/xenapi_image_compression_level</tt>
 * <tt>[DEFAULT]/default_os_type</tt>
 * <tt>[DEFAULT]/block_device_creation_timeout</tt>
 * <tt>[DEFAULT]/max_kernel_ramdisk_size</tt>
 * <tt>[DEFAULT]/sr_matching_filter</tt>
 * <tt>[DEFAULT]/xenapi_sparse_copy</tt>
 * <tt>[DEFAULT]/xenapi_num_vbd_unplug_retries</tt>
 * <tt>[DEFAULT]/xenapi_torrent_images</tt>
 * <tt>[DEFAULT]/xenapi_ipxe_network_name</tt>
 * <tt>[DEFAULT]/xenapi_ipxe_boot_menu_url</tt>
 * <tt>[DEFAULT]/xenapi_ipxe_mkisofs_cmd</tt>
 * <tt>[DEFAULT]/xenapi_running_timeout</tt>
 * <tt>[DEFAULT]/xenapi_vif_driver</tt>
 * <tt>[DEFAULT]/xenapi_image_upload_handler</tt>
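Several of the upgrade notes above boil down to nova.conf settings; a hedged sketch of the relevant fragments (adjust values to your deployment):

```ini
[upgrade_levels]
# Let Icehouse controller services talk to Havana computes during a live upgrade;
# unset once every compute node runs Icehouse.
compute = icehouse-compat

[DEFAULT]
# Disable the Neutron VIF-plugging event requirement until Neutron is upgraded
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
# On Havana nodes, prevent libguestfs-based injection before upgrading packages
libvirt_inject_partition = -2
```

Once Neutron is upgraded and emitting the notifications, set vif_plugging_is_fatal back to true and restore the default vif_plugging_timeout, as described above.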

Key New Features

 * Add VMware Datastore as Storage Backend (See https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend)
 * Adding image location selection strategy (See https://blueprints.launchpad.net/glance/+spec/image-location-selection-strategy)
 * A new field, 'virtual_size', has been added to images (See https://blueprints.launchpad.net/glance/+spec/split-image-size)
 * API message localization (See http://docs.openstack.org/developer/glance/glanceapi.html#api-message-localization)
 * The calculation of storage quotas has been improved. Deleted images are now excluded from the count (https://bugs.launchpad.net/glance/+bug/1261738), which may affect your existing usage figures.
 * Glance has moved to using 0-based indices for location entries, to be in line with JSON-pointer RFC6901 (https://bugs.launchpad.net/glance/+bug/1282437)

Known Issues
None.

Upgrade Notes

 * Glance now uses oslo.messaging in place of its private notifier code; it is recommended to configure notifications with a combination of `notification_driver` and `transport_url`. The old 'notifier_strategy' option is deprecated, though it still works.
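The recommended oslo.messaging-based setup looks roughly like the following in the Glance API configuration (the driver value and transport URL are illustrative assumptions):

```ini
[DEFAULT]
# oslo.messaging notifier instead of the deprecated notifier_strategy
notification_driver = messaging
transport_url = rabbit://guest:guest@localhost:5672/
```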

Language Support

 * Thanks to the I18nTeam, Horizon is now available in Hindi, German and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.

Nova

 * Live Migration Support
 * HyperV console support
 * Disk config option support
 * Improved support for managing host aggregates and availability zones.
 * Support for easily setting flavor extra specs

Cinder

 * Role based access support for Cinder views
 * v2 API support
 * Extend volume support

Neutron

 * Router Rules Support -- displays router rules on routers when returned by neutron

Swift

 * Support for creating public containers and providing links to those containers
 * Support explicit creation of pseudo directories

Heat

 * Ability to update an existing stack
 * Template validation
 * Support for adding environment files

Ceilometer
Administrators can now view daily usage reports per project across services.

User Experience Enhancements

 * More Extensible Navigation
 * The primary dashboard and panel navigation has been updated from the tab navigation to an accordion implementation. Dashboards and Panel Groups are now expandable and collapsible in the page navigation.  This change allows for the addition of more dashboards as well as accommodates the increasing number of panels in dashboards.
 * Wizard
 * Horizon now provides a Wizard control to complete multi-step interdependent tasks. This is now utilized in the create network action.
 * Inline Table Editing
 * Tables can now be written to support editing fields in the table to reduce the need for opening separate forms. The first sample of this is in the Admin dashboard, Projects panel.
 * Self-Service Password Change
 * Leveraging enhancements to Identity API v3 (Keystone), users can now change their own passwords without the need to involve an administrator. This functionality was previously only available with Identity API v2.0.
 * Server side table filtering
 * Tables can now easily be wired to filter results from underlying API calls based on criteria selected by the user, rather than just performing an on-page search.

Framework

 * JavaScript
 * In a move to provide a better user experience Horizon has adopted AngularJS as the primary JavaScript framework. JavaScript is now a browser requirement to run the Horizon interface.  More to come in Juno.
 * Added reusable charts for use in Horizon
 * Integration of Jasmine testing library


 * Full Django 1.6 support


 * Plugin Architecture
 * Horizon now boasts dynamic loading/disabling of dashboards, panel groups and panels. By merely adding a file in the enabled directory, the selection of items loaded into Horizon can be altered. Editing the Django settings file is no longer required.


 * Integration Test Framework
 * Horizon now supports running integration tests against a working devstack system. There is a limited test suite, but this is a great step forward.

Known Issues
If utilizing multi-domain support in Identity API v3, users will be unable to manage resources in any domain other than the default domain.

Upgrade Notes
Browsers used will now need to support JavaScript.

The default for "can_set_password" is now False. This means that unless the setting is explicitly set to True, the option to set an 'Admin password' for an instance will not be shown in the Launch Instance workflow. Not all hypervisors support this feature which created confusion with users, and there is now a safer way to set and retrieve a password (see LP#1291006).

The default for "can_set_mountpoint" is now False, and should be set to True in the settings in order to add the option to set the mount point for volumes in the dashboard. At this point only the Xen hypervisor supports this feature (see LP#1255136).
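Both defaults above are controlled via the OPENSTACK_HYPERVISOR_FEATURES dictionary in local_settings.py; a sketch re-enabling them (key spellings follow Horizon's settings, which use 'can_set_mount_point'; only opt in if your hypervisor supports the feature):

```python
# local_settings.py fragment: opt back in to the features discussed above
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True,   # volume mount point field; Xen-only at this point
    'can_set_password': True,      # shows 'Admin password' in Launch Instance
}
```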

Key New Features

 * New v3 API features
 * allows Keystone to consume federated authentication via Shibboleth for multiple Identity Providers, and mapping federated attributes into OpenStack group-based role assignments (see documentation).
 * allows API users to update their own passwords (see documentation).
 * allows API users to opt-out of receiving the service catalog when performing online token validation (see documentation).
 * provides a public interface for describing multi-region deployments (see documentation).
 * now publishes the certificates used for PKI token validation (see documentation).
 * is now capable of providing limited-use delegation via the  attribute of trusts.
 * The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data to LDAP, and your authorization data to SQL, for example.
 * The token KVS driver is now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.
 * Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.
 * Keystone's default  has been rewritten in an easier to read format.
 * Notifications are now emitted in response to create, update and delete events on roles, groups, and trusts.
 * Custom extensions and driver implementations may now subscribe to internal-only event notifications, including disable events (which are only exposed externally as part of update events).
 * Keystone now emits Cloud Audit Data Federation (CADF) event notifications in response to authentication events.
 * Additional plugins are provided to handle external authentication via  with respect to single-domain versus multi-domain deployments.
 * can now perform enforcement on the target domain in a domain-aware operation using, for example,.
 * The LDAP driver for the assignment backend now supports group-based role assignment operations.
 * Keystone now publishes token revocation events in addition to providing continued support for token revocation lists. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.
 * Deployers can now define arbitrary limits on the size of collections in API responses (for example,  might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.
 * Lazy translation has been enabled, translating responses according to the requested Accept-Language header.
 * Keystone now emits i18n-ready log messages.
 * Collection filtering is now performed in the driver layer, where possible, for improved performance.

Known Issues

 * Bug 1291157: If using the  extension, deleting an Identity Provider or Protocol does not result in previously-issued tokens being revoked. This will not be fixed in the stable/icehouse branch.
 * Bug 1308218: Duplicate user resources may be returned in response to

Upgrade Notes
 * A new paste filter must be declared and added to the pipeline variable of the appropriate paste section:
   [filter:ec2_extension_v3]
   paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory
 * The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.
 * Backwards compatibility for  has been removed.   middleware module is no longer provided by Keystone itself, and must be imported from   instead.
 * The  middleware module is no longer provided by Keystone itself, and must be imported from   instead. Backwards compatibility for   will be removed in Juno.
 * The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.
 * has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.
 * has been deprecated in favor of external tooling and may be removed in the K release.
 * has been deprecated in favor of support for "application/json" only and may be removed in the K release.
 * A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to :
 * updated to provide rules for the new v3 EC2 Credential CRUD, as shown in the updated sample  and
 * Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.
 * TODO: deprecations for the move to oslo-incubator db
 * A new configuration option,  is   by default to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana. If so, set this value to   and please voice your use case to the Keystone community.
 * TODO: any non-ideal default values that will be changed in the future
 * Keystone's move to oslo.messaging for emitting event notifications has resulted in new configuration options which are potentially incompatible with those from Havana (TODO: enumerate old/new config values)

Key New Features
During the Icehouse cycle the team focused on stability and testing of the Neutron codebase. Many of the existing plugins and drivers were revised to address known performance and stability issues.

New Drivers/Plugins

 * IBM SDN-VE
 * Nuage
 * OneConvergence
 * OpenDaylight

New Load Balancing as a Service Drivers

 * Embrane
 * NetScaler
 * Radware

New VPN Driver

 * Cisco CSR

Known Issues

 * When activating the new Nova callback functionality, the  configuration should contain the version in the URL. For example: "http://127.0.0.1:8774/v2"
 * Midokura maintains its own MidoNet Icehouse plugin in an external public repository. The plugin can be found here: https://github.com/midokura/neutron.  Please contact Midokura for more information (info@midokura.com)
 * Schema migrations with Advanced Service Plugins enabled might not properly update the schema for all configurations. Please test the migration on a copy of the database prior to executing it on a live database. The Neutron team will address this as part of the first stable update.
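The Nova callback bullet above requires the callback URL to carry the API version; a sketch as a neutron.conf fragment (the `nova_url` option name is an assumption based on the Icehouse Neutron tree):

```ini
# neutron.conf (illustrative; option name assumed)
[DEFAULT]
# The URL must include the API version suffix:
nova_url = http://127.0.0.1:8774/v2
```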

Upgrade Notes

 * The OVS plugin and Linux Bridge plugin are deprecated and should not be used for deployments. The ML2 plugin combines OVS and Linux Bridge support into one plugin.  A migration script has been provided for Havana deployments looking to convert to ML2.  The migration does not have a rollback capability, so it is recommended the migration be tested on a copy of the database prior to running on a live system.
 * The Neutron team has extended support for legacy Quantum configuration file options for one more release. Icehouse is the final release in which these options will be supported. Deployers are encouraged to update their configurations to use the proper Neutron options.
 * XML support in the API is deprecated. Users and deployers should migrate to JSON for API interactions as soon as possible since the XML support will be retired in a future release.

Key New Features

 * Ability to change the type of an existing volume (retype)
 * Add volume metadata support to the Cinder Backup Object
 * Implement Multiple API workers
 * Add ability to delete quotas
 * Add ability to import/export backups into Cinder
 * Added Fibre Channel Zone manager for automated FC zoning during volume attach/detach
 * Ability to update a volume type encryption
 * Ceilometer notifications on attach/detach

New Backend Drivers/Plugins

 * EMC VMAX/VNX SMI-S FC Driver
 * EMC VNX iSCSI Direct Driver
 * HP MSA 2040
 * IBM SONAS and Storwize V7000 Unified Storage Systems
 * NetApp ESeries

Known Issues

 * Reconnect on failure for multiple servers always connects to first server (Bug: #1261631)
 * Storwize/SVC driver crashes when checking volume copy status (Bug: #1304115)
 * Glance API v2 not supported (Bug: #1308594)
 * It is recommended you leave Cinder v1 enabled as Nova does not know how to talk to v2.

Upgrade Notes

 * The force-detach API call is now admin-only, rather than the previous policy default of admin and owner. Force detach requires cleanup work by the admin, who would otherwise not know when an owner performed the operation.
 * The Simple and Chance schedulers have been deprecated. The filter scheduler should be used instead for similar functionality: set scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler in cinder.conf.
 * The hp3par_domain config option was deprecated in the Havana release but not officially removed; it now does nothing.
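The scheduler setting from the bullet above, shown as a cinder.conf fragment:

```ini
# cinder.conf
[DEFAULT]
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
```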

Key New Features

 * API additions
 * arbitrarily complex combinations of query constraints for meters, samples and alarms
 * capabilities API for discovery of storage driver specific features
 * selectable aggregates for statistics, including new cardinality and standard deviation functions
 * direct access to samples decoupled from a specific meter
 * events API, in the style of StackTach
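As a minimal sketch of the arithmetic behind the new cardinality and standard deviation aggregate functions (the sample data below is invented; in practice the statistics API computes these server-side):

```python
import math

# Invented samples for one meter across several resources:
# (resource_id, sample value), e.g. cpu_util readings.
samples = [("vm-1", 10.0), ("vm-1", 30.0), ("vm-2", 20.0), ("vm-3", 40.0)]
values = [v for _, v in samples]

# "cardinality" counts distinct values of a field, here the resource ID.
cardinality = len({rid for rid, _ in samples})

# "stddev" is the standard deviation of the sample values.
mean = sum(values) / len(values)
stddev = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

print(cardinality)        # 3
print(round(stddev, 2))   # 11.18
```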


 * Alarming improvements
 * time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week
 * exclusion of weak data points with anomalously low sample counts
 * derived rate-based meters for disk & network, more suited to threshold-oriented alarming


 * Integration touch-points
 * split collector into notification agent solely responsible for consuming external notifications
 * redesign of pipeline configuration for pluggable resource discovery
 * configurable persistence of raw notification payloads, in the style of StackTach


 * Storage drivers
 * approaching feature parity in HBase & SQLAlchemy & DB2 drivers
 * optimization of resource queries
 * HBase: add Alarm support


 * New sources of metrics
 * Neutron north-bound API on SDN controller
 * VMware vCenter Server API
 * SNMP daemons on baremetal hosts
 * OpenDaylight REST APIs

Known Issues

 * SQLAlchemy storage driver is problematic with a scaled out collector service when run against PostgreSQL https://bugs.launchpad.net/ceilometer/+bug/1305332
 * HBase storage driver reports truncated list of meters: https://bugs.launchpad.net/ceilometer/+bug/1288284
 * HBase storage driver doesn't work with HappyBase version 0.7
 * excessive load on nova-api service induced by compute agent: https://bugs.launchpad.net/ceilometer/+bug/1297528

Upgrade Notes

 * the pre-existing collector service has been augmented with a new notification agent that must also be started up post-upgrade
 * MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see upgrade instructions.

Key New Features

 * HOT templates: The HOT template format is now supported as the recommended format for authoring heat templates.
 * OpenStack resources: There is now sufficient coverage of resource types to port any template to native OpenStack resources
 * Software configuration: New API and resources to allow software configuration to be performed using a variety of techniques and tools
 * Non-admin users: It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.
 * Operator API: Cloud operators now have a dedicated admin API to perform operations on all stacks
 * Autoscaling resources: OS::Heat::AutoScalingGroup and OS::Heat::ScalingPolicy now allow the autoscaling of any arbitrary collection of resources
 * Notifications: Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers
 * Heat engine scaling: It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with a SQL based default lock plugin.
 * File inclusion with get_file: The intrinsic function get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files.
 * Cloud-init resources: The OS::Heat::CloudConfig and OS::Heat::MultipartMime resources allow cloud-init user data to be composed and attached to servers
 * Stack abandon and adopt: It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.
 * Stack preview: The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template
 * New resources: The following new resources are implemented in this release:
 * OS::Heat::CloudConfig
 * OS::Heat::MultipartMime
 * OS::Heat::SoftwareConfig
 * OS::Heat::SoftwareDeployment
 * OS::Heat::StructuredConfig
 * OS::Heat::StructuredDeployment
 * OS::Heat::RandomString
 * OS::Heat::ResourceGroup
 * OS::Heat::AutoScalingGroup
 * OS::Heat::ScalingPolicy
 * OS::Neutron::SecurityGroup
 * OS::Neutron::MeteringLabel
 * OS::Neutron::MeteringRule
 * OS::Neutron::ProviderNet
 * OS::Neutron::NetworkGateway
 * OS::Neutron::PoolMember
 * OS::Nova::KeyPair
 * OS::Nova::FloatingIP
 * OS::Nova::FloatingIPAssociation
 * OS::Trove::Instance
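As a minimal HOT sketch exercising one of the new resource types listed above (resource names and property values here are illustrative):

```yaml
heat_template_version: 2013-05-23

description: Minimal HOT sketch using a new Icehouse resource type

resources:
  db_password:
    type: OS::Heat::RandomString
    properties:
      length: 16

outputs:
  db_password_value:
    description: The generated random string
    value: { get_attr: [db_password, value] }
```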

Known Issues

 * Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.
 * The new stack-adopt operation should be considered an experimental feature
 * CFN API returns HTTP status code 500 on all errors (bug 1291079)
 * Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race (bug 1298350)

Upgrade Notes
Please read the general notes on Heat's security model.

See the sections below on Deferred authentication method and Stack domain users.

Deprecated resources
The following resources are deprecated in this release, and may be removed in the future:
 * OS::Neutron::RouterGateway should no longer be used. Use the `external_gateway_info` property of OS::Neutron::Router instead.

Deferred authentication method
The default  of   is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using , which is planned to become the default for Juno. This model has the following benefits:
 * It avoids storing user credentials in the heat database
 * It removes the need to provide a password as well as a token on stack create
 * It limits the actions the heat service user can perform on a user's behalf.

To enable trusts for deferred operations:
 * Ensure the keystone service that heat is configured to use has the OS-TRUST extension enabled
 * Set  in
 * Optionally specify the roles to be delegated to the heat service user ( in , defaults to   which will be referred to in the following instructions.  You may wish to modify this list of roles to suit your local RBAC policies)
 * Ensure the role(s) to be delegated exist, e.g  exists when running
 * All users creating heat stacks should possess this role in the project where they are creating the stack. A trust will be created by heat on stack creation between the stack owner (the user creating the stack) and the heat service user, delegating the  role to the heat service user for the lifetime of the stack.
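A sketch of the corresponding heat.conf fragment, assuming the option names used by the Heat documentation for this feature (`deferred_auth_method`, `trusts_delegated_roles`); verify against your installed release:

```ini
# heat.conf (illustrative; option names assumed)
[DEFAULT]
deferred_auth_method = trusts
trusts_delegated_roles = heat_stack_owner
```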

See this blog post for further details.

Stack domain users
To enable non-admin creation of certain resources, some deployment-time configuration is required to create a keystone domain and a domain-admin user; otherwise Heat falls back to the previous behavior, but this fallback may not be available in Juno.

$OS_TOKEN refers to a token, e.g. the service admin token or some other valid token for a user with sufficient roles to create users and domains. $KEYSTONE_ENDPOINT_V3 refers to the v3 keystone endpoint, e.g. http://<keystone host>:5000/v3, where <keystone host> is the IP address or resolvable name of the keystone service.

Steps in summary:

 * Create a "heat" keystone domain using python-openstackclient (the keystoneclient CLI interface does not support domains):

 openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 domain create heat --description "Owns users and projects created by heat"

 This returns a domain ID, referred to as $HEAT_DOMAIN_ID below.

 * Create a domain-admin user for the "heat" domain:

 openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 user create --password $PASSWORD --domain $HEAT_DOMAIN_ID heat_domain_admin --description "Manages users and projects created by heat"

 This returns a user ID, referred to as $DOMAIN_ADMIN_ID below.

 * Make the user a domain admin by adding the admin role for the domain:

 openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 role add --user $DOMAIN_ADMIN_ID --domain $HEAT_DOMAIN_ID admin

 * Update heat.conf with the domain ID and the username/password for the domain-admin user:

 stack_domain_admin_password = <password for heat_domain_admin>
 stack_domain_admin = heat_domain_admin
 stack_user_domain = <domain id returned from domain create above>

See this blog post for full details.

Key New Features

 * User/Schema management
 * Users can do CRUD management on MySQL users and schemas through the Trove API
 * Flavor / Cinder Volume resizes
 * Resize up/down the flavor that defines the Trove instance
 * Resize up the optional Cinder Volume size if the datastore requires a larger volume
 * Multiple datastore support
 * Full feature support for MySQL and Percona
 * Experimental (not full feature) support for MongoDB, Redis, Cassandra, and Couchbase
 * Configuration groups
 * Define a set of configuration options to attach to new or existing instances
 * Backups and Restore
 * Executes native backup software on a datastore and streams the output to a swift container
 * Full and incremental backups
 * Optional DNS support via designate
 * Flag to define whether to provision DNS for an instance

Known Issues
None yet

Upgrade Notes

 * Trove Conductor is a new daemon to proxy database communication from guests. It needs to be installed and running.
 * The new Datastores feature requires operators to define (or remove) the datastores their installation will support
 * The new Configuration Groups feature allows operators to define a subset of configuration options for a particular datastore

Key New Features

 * The Operations Guide now has an Upgrades chapter and a Network Troubleshooting section.
 * New manual: Command-Line Interface Reference
 * The End User Guide now contains a Python SDK chapter.
 * The Complete API Reference has been updated with a responsive design and you can download a PDF of the complete reference.
 * The four Installation Guides have updated example architectures including OpenStack Networking, improved basic configuration, and have been more completely tested across distributions.