OpenStack 2014.1 (Icehouse) Release Notes

Find the translated release notes in Japanese. More translations will be coming.

General Upgrade Notes

OpenStack Object Storage (Swift)

Key New Features

  • Discoverable capabilities: A Swift proxy server now responds to requests to /info by default (although this can be turned off). The response includes information about the cluster and can be used by clients to determine which features are supported in the cluster. This means that a single client can communicate with multiple Swift clusters and take advantage of the features available in each (an example request follows this list).
  • Generic way to persist system metadata: Swift now supports system-level metadata on accounts and containers. System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.
  • Account-level ACLs and ACL format v2: Accounts now have a new privileged header to represent ACLs or any other form of account-level access control. The value of the header is a JSON dictionary string to be interpreted by the auth system. A reference implementation is given in TempAuth. Please see the full docs at http://swift.openstack.org/overview_auth.html
  • Object replication ssync (an rsync alternative): A Swift storage node can now be configured to use Swift primitives for replication transport instead of rsync.
  • Automatic retry on read failures: If a source object server times out during a read, Swift now retries against another replica with a modified range. This means that drive failures during a client request will not be visible to the end-user client.
  • Work on upcoming storage policies
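
As an example of the capability discovery described above, a client can issue a simple unauthenticated GET against the proxy server; the host and port below are deployment-specific placeholders:

   curl -s http://<proxy-server>:8080/info

The JSON response describes the cluster's enabled capabilities and constraints, letting one client adapt to each cluster it talks to.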

Known Issues

None known at this time

Upgrade Notes

Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.

As always, Swift can be upgraded with no downtime.

OpenStack Compute (Nova)

Key New Features

Upgrade Support

  • Limited live upgrades are now supported. This enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes, without requiring downtime of the entire cloud.
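
As described in the Nova Upgrade Notes below, this works by pinning the compute RPC compatibility level while the controllers run Icehouse. A minimal nova.conf sketch for the controller services during the upgrade window:

   [upgrade_levels]
   compute = icehouse-compat

Once all compute nodes are upgraded, unset the option and restart the controller services.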

Compute Drivers

Hyper-V
  • Added RDP console support.
Libvirt (KVM)
  • The Libvirt compute driver now supports providing modified kernel arguments to booting compute instances. The kernel arguments are retrieved from the os_command_line key in the image metadata as stored in Glance, if a value for the key was provided; otherwise the default kernel arguments are used. (An example of setting such image properties follows this list.)
  • The Libvirt driver now supports using VirtIO SCSI (virtio-scsi) instead of VirtIO Block (virtio-blk) to provide block device access for instances. VirtIO SCSI is a para-virtualized SCSI controller device designed as a future successor to VirtIO Block, aiming to provide improved scalability and performance.
  • The Libvirt Compute driver now supports adding a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtual random number generation device that allows the compute node to provide entropy to compute instances in order to fill their entropy pool. The default entropy device used is /dev/random; however, a physical hardware RNG device attached to the host can also be used. The use of the Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance.
  • The Libvirt driver now allows instances to be configured with a video driver other than the default (cirrus). This allows the specification of different video driver models, amounts of video RAM, and numbers of heads. These values are configured by setting the hw_video_model, hw_video_vram, and hw_video_head properties in the image metadata. Currently supported video driver models are vga, cirrus, vmvga, xen and qxl.
  • Watchdog support has been added to the Libvirt driver. The watchdog device used is i6300esb. It is enabled by setting the hw_watchdog_action property in the image properties or flavor extra specifications (extra_specs) to a value other than disabled. Supported hw_watchdog_action property values, which denote the action for the watchdog device to take in the event of an instance failure, are poweroff, reset, pause, and none.
  • The High Precision Event Timer (HPET) is now disabled for instances created using the Libvirt driver. The use of this option was found to lead to clock drift in Windows guests when under heavy load.
  • The libvirt driver now supports waiting for an event from Neutron during instance boot for better reliability. This requires a suitably new Neutron that supports sending these events, and avoids a race between the instance expecting networking to be ready and the actual plumbing that is required.
  • Enhanced Platform Awareness (EPA) CPU capabilities are now exposed: a bug in the libvirt driver's baselineCPU call has been fixed so that the full CPU feature list is reported, allowing CPU features such as AES-NI to be exposed to guests.
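
As an illustration of the image-metadata driven features above, the following glance commands set a custom kernel command line, video driver model, and watchdog action on an image; the image ID and property values here are illustrative placeholders:

   glance image-update --property os_command_line='console=ttyS0' <image-id>
   glance image-update --property hw_video_model=qxl <image-id>
   glance image-update --property hw_watchdog_action=reset <image-id>
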
VMware
  • The VMware Compute drivers now support the virtual machine diagnostics call. Diagnostics can be retrieved using the "nova diagnostics INSTANCE" command, where INSTANCE is replaced by an instance name or instance identifier.
  • The VMware Compute drivers now support booting an instance from an ISO image.
  • The VMware Compute drivers now support the aging of cached images.
XenServer
  • All XenServer specific configuration items have changed name, and moved to a [xenserver] section in nova.conf. While the old names will still work in this release, the old names are now deprecated, and support for them could well be removed in a future release of Nova.
  • Added initial support for PCI passthrough
  • Maintained group B status through the introduction of the XenServer CI
  • Improved support for ephemeral disks (including migration and resize up of multiple ephemeral disks)
  • Support for vcpu_pin_set, essential when you pin CPU resources to Dom0
  • Numerous performance and stability enhancements

API

  • In OpenStack Compute, the OS-DCF:diskConfig API attribute is no longer supported in V3 of the nova API.
  • The Compute API currently supports both XML and JSON formats. Support for the XML format is now deprecated and will be retired in a future release.
  • The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the ExtendedServicesDelete API extension.
  • Separated the V3 API admin_actions plugin into logically separate plugins so operators can enable subsets of the functionality currently present in the plugin.
  • The Compute service now uses the tenant identifier instead of the tenant name when authenticating with OpenStack Networking (Neutron). This improves support for the OpenStack Identity API v3 which allows non-unique tenant names.
  • The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the nova hypervisor-show command.

Scheduler

  • The scheduler now includes an initial implementation of a caching scheduler driver. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.
  • A new scheduler filter, AggregateImagePropertiesIsolation, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator are used to determine which image properties are examined by the filter.
  • Weight normalization in OpenStack Compute (see https://review.openstack.org/#/c/27160/):
    • Weights are normalized, so there is no need to inflate multipliers artificially. The maximum weight that a weigher will assign to a node is 1.0 and the minimum is 0.0.
  • The scheduler now supports server groups with two policy types: affinity and anti-affinity. Servers deployed in a group are placed according to the group's predefined policy.
  • A new framework supporting utilization-based scheduling has been added (blueprint: https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling). A new CPU monitor tracks runtime CPU utilization for use in more intelligent scheduling decisions, and a new metrics weigher consumes this data. The framework, monitor, and weigher are disabled by default and must be enabled in the Nova configuration file, as sketched below.
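
A minimal nova.conf sketch for enabling this framework; the option and metric names here are given to the best of our knowledge and should be verified against the generated nova.conf.sample:

   # On compute nodes: collect runtime CPU utilization
   compute_monitors = ComputeDriverCPUMonitor

   # On the scheduler: tune how the metrics weigher uses the data
   [metrics]
   weight_setting = cpu.percent=-1.0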

Other Features

  • Notifications are now generated upon the creation and deletion of keypairs.
  • Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode, or taken out of maintenance mode.
  • Compute services are now able to shutdown gracefully by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.
  • The Compute service determines what action to take when instances are found to be running that were previously marked deleted based on the value of the running_deleted_instance_action configuration key. A new shutdown value has been added. Using this new value allows administrators to optionally keep instances found in this state for diagnostics while still releasing the runtime resources.
  • File injection is now disabled by default in OpenStack Compute. Instead it is recommended that the ConfigDrive and metadata server facilities are used to modify guests at launch. To enable file injection modify the inject_key and inject_partition configuration keys in /etc/nova/nova.conf and restart the Compute services (a sketch follows this list). The file injection mechanism is likely to be disabled in a future release.
  • A number of changes have been made to the expected format /etc/nova/nova.conf configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.
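
A minimal sketch of the options involved, as referenced in the file injection item above; in Icehouse these live in the [libvirt] group of /etc/nova/nova.conf, per the deprecation list later in these notes:

   [libvirt]
   inject_partition = -2
   inject_key = false

Setting inject_partition to -2 disables disk injection entirely.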

Known Issues

  • OpenStack Compute has some features that use newer API versions from other projects, but the following are the only API versions tested in Icehouse:
    • Keystone v2
    • Cinder v1
    • Glance v1

Upgrade Notes

  • Scheduler and weight normalization (https://review.openstack.org/#/c/27160/): In previous releases the Compute and Cells schedulers used raw weights (i.e. the weighers returned any value, and that was the value used by the weighing process).
    • If you were using several weighers for Compute:
      • If several weighers were used (in previous releases Nova only shipped one weigher for compute), it is possible that your multipliers were inflated artificially in order to make an important weigher prevail against any other weigher that returned large raw values. You need to check your weighers and take into account that now the maximum and minimum weights for a host will always be 1.0 and 0.0.
    • If you are using cells:
      • nova.cells.weights.mute_child.MuteChild: The weigher returned the value mute_weight_value as the weight assigned to a child that didn't update its capabilities in a while. It can still be used, but will have no effect on the final weight computed by the weighing process, which will be 1.0. If you are using this weigher to mute a child cell you need to adjust the mute_weight_multiplier.
      • nova.cells.weights.weight_offset.WeightOffsetWeigher introduces a new configuration option, offset_weight_multiplier, which has to be adjusted. In previous releases, the weigher returned the value of the configured offset for each of the cells in the weighing process. While the winner of that process will still be the same, it will get a weight of 1.0. If you were using this weigher and relying on its value to make it prevail over other weighers, you need to adjust its multiplier accordingly.
  • An early Docker compute driver was included in the Havana release. This driver has been moved from Nova into its own repository. The new location is http://git.openstack.org/cgit/stackforge/nova-docker
  • https://review.openstack.org/50668 - The compute_api_class configuration option has been removed.
  • https://review.openstack.org/#/c/54290/ - The following deprecated configuration option aliases have been removed in favor of their new names:
    • service_quantum_metadata_proxy
    • quantum_metadata_proxy_shared_secret
    • use_quantum_default_nets
    • quantum_default_tenant_id
    • vpn_instance_type
    • default_instance_type
    • quantum_url
    • quantum_url_timeout
    • quantum_admin_username
    • quantum_admin_password
    • quantum_admin_tenant_name
    • quantum_region_name
    • quantum_admin_auth_url
    • quantum_api_insecure
    • quantum_auth_strategy
    • quantum_ovs_bridge
    • quantum_extension_sync_interval
    • vmwareapi_host_ip
    • vmwareapi_host_username
    • vmwareapi_host_password
    • vmwareapi_cluster_name
    • vmwareapi_task_poll_interval
    • vmwareapi_api_retry_count
    • vnc_port
    • vnc_port_total
    • use_linked_clone
    • vmwareapi_vlan_interface
    • vmwareapi_wsdl_loc
  • The PowerVM driver has been removed: https://review.openstack.org/#/c/57774/
  • The keystone_authtoken defaults changed in nova.conf: https://review.openstack.org/#/c/62815/
  • libvirt lvm names changed from using instance_name_template to instance uuid (https://review.openstack.org/#/c/76968). Possible manual cleanup required if using a non default instance_name_template.
  • rbd disk names changed from using instance_name_template to instance uuid. Manual cleanup required of old virtual disks after the transition. (TBD find review)
  • Icehouse brings libguestfs as a requirement. Installing icehouse dependencies on a system currently running havana may cause the havana node to begin using libguestfs and break unexpectedly. It is recommended that libvirt_inject_partition=-2 be set on havana nodes prior to starting an upgrade of packages on the system if the nova packages will be updated last.
  • Creating a private flavor now adds access to the tenant automatically. This was the documented behavior in Havana, but the actual implementation in Havana and previous versions of Nova did not add the tenant automatically to private flavors.
  • Nova previously included a nova.conf.sample. This file was automatically generated and is no longer included directly. If you are packaging Nova and wish to include the sample config file, see etc/nova/README.nova.conf for instructions on how to generate the file at build time.
  • Nova now defaults to requiring an event from Neutron when booting libvirt guests. If you upgrade Nova before Neutron, you must disable this feature in Nova until Neutron supports it by setting vif_plugging_is_fatal=False and vif_plugging_timeout=0 (a sketch of these interim settings follows the option list below). The recommended order is: Nova (with this disabled), Neutron (with the notifications enabled), and then enable vif_plugging_is_fatal=True with the default value of vif_plugging_timeout.
  • Nova supports a limited live upgrade model for the compute nodes in Icehouse. To do this, upgrade controller infrastructure (everything except nova-compute) first, but set the [upgrade_levels]/compute=icehouse-compat option. This will enable Icehouse controller services to talk to Havana compute services. Upgrades of individual compute nodes can then proceed normally. When all the computes are upgraded, unset the compute version option to retain the default and restart the controller services.
  • The following configuration options are marked as deprecated in this release; see nova.conf.sample for their replacements. They are listed below in the format [GROUP]/option.
    • [DEFAULT]/rabbit_durable_queues
    • [rpc_notifier2]/topics
    • [DEFAULT]/log_config
    • [DEFAULT]/logfile
    • [DEFAULT]/logdir
    • [DEFAULT]/base_dir_name
    • [DEFAULT]/instance_type_extra_specs
    • [DEFAULT]/db_backend
    • [DEFAULT]/sql_connection
    • [DATABASE]/sql_connection
    • [sql]/connection
    • [DEFAULT]/sql_idle_timeout
    • [DATABASE]/sql_idle_timeout
    • [sql]/idle_timeout
    • [DEFAULT]/sql_min_pool_size
    • [DATABASE]/sql_min_pool_size
    • [DEFAULT]/sql_max_pool_size
    • [DATABASE]/sql_max_pool_size
    • [DEFAULT]/sql_max_retries
    • [DATABASE]/sql_max_retries
    • [DEFAULT]/sql_retry_interval
    • [DATABASE]/reconnect_interval
    • [DEFAULT]/sql_max_overflow
    • [DATABASE]/sqlalchemy_max_overflow
    • [DEFAULT]/sql_connection_debug
    • [DEFAULT]/sql_connection_trace
    • [DATABASE]/sqlalchemy_pool_timeout
    • [DEFAULT]/memcache_servers
    • [DEFAULT]/libvirt_type
    • [DEFAULT]/libvirt_uri
    • [DEFAULT]/libvirt_inject_password
    • [DEFAULT]/libvirt_inject_key
    • [DEFAULT]/libvirt_inject_partition
    • [DEFAULT]/libvirt_vif_driver
    • [DEFAULT]/libvirt_volume_drivers
    • [DEFAULT]/libvirt_disk_prefix
    • [DEFAULT]/libvirt_wait_soft_reboot_seconds
    • [DEFAULT]/libvirt_cpu_mode
    • [DEFAULT]/libvirt_cpu_model
    • [DEFAULT]/libvirt_snapshots_directory
    • [DEFAULT]/libvirt_images_type
    • [DEFAULT]/libvirt_images_volume_group
    • [DEFAULT]/libvirt_sparse_logical_volumes
    • [DEFAULT]/libvirt_images_rbd_pool
    • [DEFAULT]/libvirt_images_rbd_ceph_conf
    • [DEFAULT]/libvirt_snapshot_compression
    • [DEFAULT]/libvirt_use_virtio_for_bridges
    • [DEFAULT]/libvirt_iscsi_use_multipath
    • [DEFAULT]/libvirt_iser_use_multipath
    • [DEFAULT]/matchmaker_ringfile
    • [DEFAULT]/agent_timeout
    • [DEFAULT]/agent_version_timeout
    • [DEFAULT]/agent_resetnetwork_timeout
    • [DEFAULT]/xenapi_agent_path
    • [DEFAULT]/xenapi_disable_agent
    • [DEFAULT]/xenapi_use_agent_default
    • [DEFAULT]/xenapi_login_timeout
    • [DEFAULT]/xenapi_connection_concurrent
    • [DEFAULT]/xenapi_connection_url
    • [DEFAULT]/xenapi_connection_username
    • [DEFAULT]/xenapi_connection_password
    • [DEFAULT]/xenapi_vhd_coalesce_poll_interval
    • [DEFAULT]/xenapi_check_host
    • [DEFAULT]/xenapi_vhd_coalesce_max_attempts
    • [DEFAULT]/xenapi_sr_base_path
    • [DEFAULT]/target_host
    • [DEFAULT]/target_port
    • [DEFAULT]/iqn_prefix
    • [DEFAULT]/xenapi_remap_vbd_dev
    • [DEFAULT]/xenapi_remap_vbd_dev_prefix
    • [DEFAULT]/xenapi_torrent_base_url
    • [DEFAULT]/xenapi_torrent_seed_chance
    • [DEFAULT]/xenapi_torrent_seed_duration
    • [DEFAULT]/xenapi_torrent_max_last_accessed
    • [DEFAULT]/xenapi_torrent_listen_port_start
    • [DEFAULT]/xenapi_torrent_listen_port_end
    • [DEFAULT]/xenapi_torrent_download_stall_cutoff
    • [DEFAULT]/xenapi_torrent_max_seeder_processes_per_host
    • [DEFAULT]/use_join_force
    • [DEFAULT]/xenapi_ovs_integration_bridge
    • [DEFAULT]/cache_images
    • [DEFAULT]/xenapi_image_compression_level
    • [DEFAULT]/default_os_type
    • [DEFAULT]/block_device_creation_timeout
    • [DEFAULT]/max_kernel_ramdisk_size
    • [DEFAULT]/sr_matching_filter
    • [DEFAULT]/xenapi_sparse_copy
    • [DEFAULT]/xenapi_num_vbd_unplug_retries
    • [DEFAULT]/xenapi_torrent_images
    • [DEFAULT]/xenapi_ipxe_network_name
    • [DEFAULT]/xenapi_ipxe_boot_menu_url
    • [DEFAULT]/xenapi_ipxe_mkisofs_cmd
    • [DEFAULT]/xenapi_running_timeout
    • [DEFAULT]/xenapi_vif_driver
    • [DEFAULT]/xenapi_image_upload_handler
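
As noted in the vif plugging upgrade item above, a minimal sketch of the interim nova.conf settings while Neutron has not yet been upgraded:

   [DEFAULT]
   vif_plugging_is_fatal = False
   vif_plugging_timeout = 0

Once Neutron is upgraded with notifications enabled, set vif_plugging_is_fatal = True and restore the default vif_plugging_timeout.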

OpenStack Image Service (Glance)

Key New Features

Known Issues

None.

Upgrade Notes

  • Glance now uses oslo.messaging in place of its private notifier code; it is recommended to use a combination of `notification_driver` and `transport_url` (a sketch follows). The old 'notifier_strategy' option is deprecated, though it still works.
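
A minimal glance-api.conf sketch of the recommended combination; the broker URL is a placeholder for your deployment's transport:

   notification_driver = messaging
   transport_url = rabbit://guest:guest@localhost:5672/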

OpenStack Dashboard (Horizon)

Key New Features

Language Support

  • Thanks to the I18nTeam, Horizon is now available in Hindi, German, and Serbian. Translations for Australian English, British English, Dutch, French, Japanese, Korean, Polish, Portuguese, Simplified and Traditional Chinese, Spanish and Russian have also been updated.

Nova

  • Live Migration Support
  • HyperV console support
  • Disk config option support
  • Improved support for managing host aggregates and availability zones.
  • Support for easily setting flavor extra specs

Cinder

  • Role based access support for Cinder views
  • v2 API support
  • Extend volume support

Neutron

  • Router Rules Support -- displays router rules on routers when returned by neutron

Swift

  • Support for creating public containers and providing links to those containers
  • Support explicit creation of pseudo directories

Heat

  • Ability to update an existing stack
  • Template validation
  • Support for adding environment files

Ceilometer

Administrators can now view daily usage reports per project across services.


User Experience Enhancements

  • More Extensible Navigation
    • The primary dashboard and panel navigation has been updated from tab navigation to an accordion implementation. Dashboards and Panel Groups are now expandable and collapsible in the page navigation. This change allows for the addition of more dashboards and accommodates the increasing number of panels in dashboards.
  • Wizard
    • Horizon now provides a Wizard control to complete multi-step interdependent tasks. This is now utilized in the create network action.
  • Inline Table Editing
    • Tables can now be written to support editing fields in the table to reduce the need for opening separate forms. The first sample of this is in the Admin dashboard, Projects panel.
  • Self-Service Password Change
    • Leveraging enhancements to Identity API v3 (Keystone), users can now change their own passwords without the need to involve an administrator. This functionality was previously only available with Identity API v2.0.
  • Server side table filtering
    • Tables can now easily be wired to filter results from underlying API calls based on criteria selected by the user, rather than just performing an on-page search.

Framework

  • JavaScript
    • In a move to provide a better user experience Horizon has adopted AngularJS as the primary JavaScript framework. JavaScript is now a browser requirement to run the Horizon interface. More to come in Juno.
      • Added reusable charts for use in Horizon
      • Integration of Jasmine testing library
  • Full Django 1.6 support
  • Plugin Architecture
    • Horizon now boasts dynamic loading/disabling of dashboards, panel groups and panels. By merely adding a file in the enabled directory, the selection of items loaded into Horizon can be altered. Editing the Django settings file is no longer required.
  • Integration Test Framework
    • Horizon now supports running integration tests against a working devstack system. There is a limited test suite, but this is a great step forward.

Known Issues

If utilizing multi-domain support in Identity API v3, users will be unable to manage resources in any domain other than the default domain.

Upgrade Notes

Browsers used will now need to support JavaScript.

The default for "can_set_password" is now False. This means that unless the setting is explicitly set to True, the option to set an 'Admin password' for an instance will not be shown in the Launch Instance workflow. Not all hypervisors support this feature which created confusion with users, and there is now a safer way to set and retrieve a password (see LP#1291006).

The default for "can_set_mountpoint" is now False, and should be set to True in the settings in order to add the option to set the mount point for volumes in the dashboard. At this point only the Xen hypervisor supports this feature (see LP#1255136).

OpenStack Identity (Keystone)

Key New Features

  • New v3 API features
    • /v3/OS-FEDERATION/ allows Keystone to consume federated authentication via Shibboleth for multiple Identity Providers, and to map federated attributes into OpenStack group-based role assignments (see documentation).
    • POST /v3/users/{user_id}/password allows API users to update their own passwords (see documentation).
    • GET /v3/auth/tokens?nocatalog allows API users to opt out of receiving the service catalog when performing online token validation (see documentation; an example request follows this list).
    • /v3/regions provides a public interface for describing multi-region deployments (see documentation).
    • /v3/OS-SIMPLECERT/ now publishes the certificates used for PKI token validation (see documentation).
    • /v3/OS-TRUST/trusts is now capable of providing limited-use delegation via the remaining_uses attribute of trusts.
  • The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment's identity data to LDAP, and your authorization data to SQL, for example.
  • The token KVS driver is now capable of writing to persistent Key-Value stores such as Redis, Cassandra, or MongoDB.
  • Keystone's driver interfaces are now implemented as Abstract Base Classes (ABCs) to make it easier to track compatibility of custom driver implementations across releases.
  • Keystone's default etc/policy.json has been rewritten in an easier to read format.
  • Notifications are now emitted in response to create, update and delete events on roles, groups, and trusts.
  • Custom extensions and driver implementations may now subscribe to internal-only event notifications, including disable events (which are only exposed externally as part of update events).
  • Keystone now emits Cloud Auditing Data Federation (CADF) event notifications in response to authentication events.
  • Additional plugins are provided to handle external authentication via REMOTE_USER with respect to single-domain versus multi-domain deployments.
  • policy.json can now perform enforcement on the target domain in a domain-aware operation using, for example, %(target.{entity}.domain_id)s.
  • The LDAP driver for the assignment backend now supports group-based role assignment operations.
  • Keystone now publishes token revocation events in addition to providing continued support for token revocation lists. Token revocation events are designed to consume much less overhead (when compared to token revocation lists) and will enable Keystone to eliminate token persistence during the Juno release.
  • Deployers can now define arbitrary limits on the size of collections in API responses (for example, GET /v3/users might be configured to return only 100 users, rather than 10,000). Clients will be informed when truncation has occurred.
  • Lazy translation has been enabled, translating responses according to the requested Accept-Language header.
  • Keystone now emits i18n-ready log messages.
  • Collection filtering is now performed in the driver layer, where possible, for improved performance.
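
As an example of the nocatalog option mentioned in the list above, a token can be validated without returning the service catalog; $OS_TOKEN stands for any token with sufficient roles to perform validation, and the endpoint is your deployment's v3 Keystone URL:

   curl -s -H "X-Auth-Token: $OS_TOKEN" \
        -H "X-Subject-Token: $OS_TOKEN" \
        "http://<keystone>:5000/v3/auth/tokens?nocatalog"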

Known Issues

  • Bug 1291157: If using the OS-FEDERATION extension, deleting an Identity Provider or Protocol does not result in previously-issued tokens being revoked. This will not be fixed in the stable/icehouse branch.
  • Bug 1308218: Duplicate user resources may be returned in response to GET /v2.0/tenants/{tenant_id}/users

Upgrade Notes

  • The v2 API has been prepared for deprecation, but remains stable in the Icehouse release. It may be formally deprecated during the Juno release pending widespread support for the v3 API.
  • Backwards compatibility for keystone.middleware.auth_token has been removed. auth_token middleware module is no longer provided by Keystone itself, and must be imported from keystoneclient.middleware.auth_token instead.
  • The s3_token middleware module is no longer provided by Keystone itself, and must be imported from keystoneclient.middleware.s3_token instead. Backwards compatibility for keystone.middleware.s3_token will be removed in Juno.
  • The default token duration has been reduced from 24 hours to just 1 hour. This effectively reduces the number of tokens that must be persisted at any one time, and (for PKI deployments) reduces the overhead of the token revocation list.
  • keystone.contrib.access.core.AccessLogMiddleware has been deprecated in favor of either the eventlet debug access log or Apache httpd access log and may be removed in the K release.
  • keystone.contrib.stats.core.StatsMiddleware has been deprecated in favor of external tooling and may be removed in the K release.
  • keystone.middleware.XmlBodyMiddleware has been deprecated in favor of support for "application/json" only and may be removed in the K release.
  • A v3 API version of the EC2 Credential system has been implemented. To use this, the following section needs to be added to keystone-paste.ini:
[filter:ec2_extension_v3]
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory

... and ec2_extension_v3 needs to be added to the pipeline variable in the [pipeline:api_v3] section of keystone-paste.ini.
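
A sketch of the resulting section; the other pipeline entries shown are illustrative, so keep whatever your existing [pipeline:api_v3] pipeline contains and only add ec2_extension_v3:

   [pipeline:api_v3]
   pipeline = sizelimit url_normalize token_auth admin_token_auth json_body ec2_extension_v3 s3_extension service_v3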

  • etc/policy.json updated to provide rules for the new v3 EC2 Credential CRUD as shown in the updated sample policy.json and policy.v3cloudsample.json
  • Migration numbers 38, 39 and 40 move all role assignment data into a single, unified table with first-class columns for role references.
  • TODO: deprecations for the move to oslo-incubator db
  • A new configuration option, mutable_domain_id, is false by default to harden security around domain-level administration boundaries. This may break API functionality that you depended on in Havana; if so, set this value to true and please voice your use case to the Keystone community.
  • TODO: any non-ideal default values that will be changed in the future
  • Keystone's move to oslo.messaging for emitting event notifications has resulted in new configuration options which are potentially incompatible with those from Havana (TODO: enumerate old/new config values)

OpenStack Network Service (Neutron)

Key New Features

During the Icehouse cycle the team focused on stability and testing of the Neutron codebase. Many of the existing plugins and drivers were revised to address known performance and stability issues.

New Drivers/Plugins

  • IBM SDN-VE
  • Nuage
  • OneConvergence
  • OpenDaylight

New Load Balancing as a Service Drivers

  • Embrane
  • NetScaler
  • Radware

New VPN Driver

  • Cisco CSR

Known Issues

  • When activating the new Nova callback functionality, the nova_url configuration should contain the version in the URL. For example: "http://127.0.0.1:8774/v2"
  • Midokura maintains its own MidoNet Icehouse plugin in an external public repository. The plugin can be found here: https://github.com/midokura/neutron. Please contact Midokura for more information (info@midokura.com)
  • Schema migrations when advanced service plugins are enabled might not properly update the schema for all configurations. Please test the migration on a copy of the database prior to executing it against a live database. The Neutron team will address this as part of the first stable update.

Upgrade Notes

  • The OVS plugin and Linux Bridge plugin are deprecated and should not be used for deployments. The ML2 plugin combines OVS and Linux Bridge support into one plugin. A migration script has been provided for Havana deployments looking to convert to ML2. The migration does not have a rollback capability, so it is recommended the migration be tested on a copy of the database prior to running on a live system.
  • The Neutron team has extended support for legacy Quantum configuration file options for one more release. The Icehouse release is the final release in which these options will be supported. Deployers are encouraged to update their configurations to use the proper Neutron options.
  • XML support in the API is deprecated. Users and deployers should migrate to JSON for API interactions as soon as possible since the XML support will be retired in a future release.
  • Configure neutron.conf for network event callbacks to Nova by setting the values shown here, sketched after this list: http://git.openstack.org/cgit/openstack/neutron/tree/etc/neutron.conf?id=2014.1.1#n297
    • For more information, see the Upgrade Notes section for Nova.
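
A minimal sketch of the neutron.conf values referenced above, with option names as they appear in the linked Icehouse neutron.conf; credentials and URLs are placeholders:

   notify_nova_on_port_status_changes = True
   notify_nova_on_port_data_changes = True
   nova_url = http://127.0.0.1:8774/v2
   nova_admin_username = nova
   nova_admin_tenant_id = <service tenant id>
   nova_admin_password = <password>
   nova_admin_auth_url = http://127.0.0.1:35357/v2.0

Note that nova_url includes the API version, per the known issue above.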

OpenStack Block Storage (Cinder)

Key New Features

  • Ability to change the type of an existing volume (retype; an example follows this list)
  • Add volume metadata support to the Cinder Backup Object
  • Implement Multiple API workers
  • Add ability to delete Quota
  • Add ability to import/export backups in to Cinder
  • Added Fibre Channel Zone manager for automated FC zoning during volume attach/detach
  • Ability to update a volume type encryption
  • Ceilometer notifications on attach/detach
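
As an example of the retype feature noted above, a volume's type can be changed from the command line; the identifiers are placeholders:

   cinder retype <volume-id> <new-volume-type>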

New Backend Drivers/Plugins

  • EMC VMAX/VNX SMI-S FC Driver
  • EMC VNX iSCSI Direct Driver
  • HP MSA 2040
  • IBM SONAS and Storwize V7000 Unified Storage Systems
  • NetApp ESeries

Known Issues

  • Reconnect on failure for multiple servers always connects to first server (Bug: #1261631)
  • The Storwize/SVC driver crashes when checking volume copy status (Bug: #1304115)
  • Glance API v2 not supported (Bug: #1308594)
  • It is recommended you leave Cinder v1 enabled as Nova does not know how to talk to v2.

Upgrade Notes

  • The force detach API call is now admin-only, rather than the previous policy default of admin and owner. Force detach requires cleanup work by the admin, who would otherwise not know when an owner had performed the operation.
  • The Simple and Chance schedulers have been deprecated; the filter scheduler should be used instead for similar functionality. Set scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler in your cinder.conf.
  • The hp3par_domain config option was deprecated in the Havana release but not officially removed; it now does nothing.

OpenStack Telemetry (Ceilometer)

Key New Features

  • API additions
    • arbitrarily complex combinations of query constraints for meters, samples and alarms
    • capabilities API for discovery of storage driver specific features
    • selectable aggregates for statistics, including new cardinality and standard deviation functions
    • direct access to samples decoupled from a specific meter
    • events API, in the style of StackTach
  • Alarming improvements
    • time-constrained alarms, providing flexibility to set the bar higher or lower depending on time of day or day of the week
    • exclusion of weak data points with anomalously low sample counts
    • derived rate-based meters for disk & network, more suited to threshold-oriented alarming
  • Integration touch-points
    • split collector into notification agent solely responsible for consuming external notifications
    • redesign of pipeline configuration for pluggable resource discovery
    • configurable persistence of raw notification payloads, in the style of StackTach
  • Storage drivers
    • approaching feature parity in HBase & SQLAlchemy & DB2 drivers
    • optimization of resource queries
    • HBase: add Alarm support
  • New sources of metrics
    • Neutron north-bound API on SDN controller
    • VMware vCenter Server API
    • Hardware metrics through SNMP
    • OpenDaylight REST APIs

Known Issues

Upgrade Notes

  • The pre-existing collector service has been augmented with a new notification agent that must also be started post-upgrade
  • MongoDB storage driver now requires the MongoDB installation to be version 2.4 or greater (the lower bound for Havana was 2.2), see upgrade instructions.

OpenStack Orchestration (Heat)

Key New Features

  • HOT templates: The HOT template format is now supported as the recommended format for authoring heat templates.
  • OpenStack resources: There is now sufficient coverage of resource types to port any template to native OpenStack resources
  • Software configuration: New API and resources to allow software configuration to be performed using a variety of techniques and tools
  • Non-admin users: It is now possible to launch any stack without requiring admin user credentials. See the upgrade notes on enabling this by configuring stack domain users.
  • Operator API: Cloud operators now have a dedicated admin API to perform operations on all stacks
  • Autoscaling resources: OS::Heat::AutoScalingGroup and OS::Heat::ScalingPolicy now allow the autoscaling of any arbitrary collection of resources
  • Notifications: Heat now sends RPC notifications for events such as stack state changes and autoscaling triggers
  • Heat engine scaling: It is now possible to share orchestration load across multiple instances of heat-engine. Locking is coordinated by a pluggable distributed lock, with a SQL based default lock plugin.
  • File inclusion with get_file: The intrinsic function get_file is used by python-heatclient and heat to allow files to be attached to stack create and update actions, which is useful for representing configuration files and nested stacks in separate files.
  • Cloud-init resources: The new OS::Heat::CloudConfig and OS::Heat::MultipartMime resources make it easier to build cloud-init configuration for servers.
  • Stack abandon and adopt: It is now possible to abandon a stack, which deletes the stack from Heat without deleting the actual OpenStack resources. The resulting abandon data can also be used to adopt a stack, which creates a new stack based on already existing OpenStack resources. Adopt should be considered an experimental feature for the Icehouse release of Heat.
  • Stack preview: The stack-preview action returns a list of resources which are expected to be created if a stack is created with the provided template
  • New resources: The following new resources are implemented in this release:

Known Issues

  • Any error during a stack-update operation (for example from a transient cloud error, a heat bug, or a user template error) can lead to stacks going into an unrecoverable error state. Currently it is only recommended to attempt stack updates if it is practical to recover from errors by deleting and recreating the stack.
  • The new stack-adopt operation should be considered an experimental feature
  • CFN API returns HTTP status code 500 on all errors (bug 1291079)
  • Deleting stacks containing volume attachments may need to be attempted multiple times due to a volume detachment race (bug 1298350)

Upgrade Notes

Please read the general notes on Heat's security model.

See the sections below on Deferred authentication method and Stack domain users.

Deprecated resources

The following resources are deprecated in this release, and may be removed in the future:

Deferred authentication method

The default deferred_auth_method of password is deprecated as of Icehouse, so although it is still the default, deployers are strongly encouraged to move to using deferred_auth_method=trusts, which is planned to become the default for Juno. This model has the following benefits:

  • It avoids storing user credentials in the heat database
  • It removes the need to provide a password as well as a token on stack create
  • It limits the actions the heat service user can perform on a user's behalf.

To enable trusts for deferred operations:

  • Ensure the Keystone service that Heat is configured to use has the OS-TRUST extension enabled
  • Set deferred_auth_method = trusts in /etc/heat/heat.conf (a sketch follows this list)
  • Optionally specify the roles to be delegated to the heat service user (trusts_delegated_roles in heat.conf; this defaults to heat_stack_owner, which is the role referred to in the following instructions. You may wish to modify this list of roles to suit your local RBAC policies)
  • Ensure the role(s) to be delegated exist, e.g. heat_stack_owner appears when running keystone role-list
  • All users creating heat stacks should possess this role in the project where they are creating the stack. A trust will be created by heat on stack creation between the stack owner (the user creating the stack) and the heat service user, delegating the heat_stack_owner role to the heat service user for the lifetime of the stack.
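
A minimal heat.conf sketch for the trusts configuration described above; the delegated role list may be adapted to local RBAC policies:

   deferred_auth_method = trusts
   trusts_delegated_roles = heat_stack_owner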

See this blog post for further details.

Stack domain users

To enable non-admin creation of certain resources, some deployment-time configuration is required to create a keystone domain and a domain-admin user; otherwise Heat will fall back to the previous behavior, but this fallback may not be available in Juno.

 $OS_TOKEN refers to a token, e.g. the service admin token or some other valid token for a user with sufficient roles to create users and domains.
 $KEYSTONE_ENDPOINT_V3 refers to the v3 keystone endpoint, e.g. http://<keystone>:5000/v3 where <keystone> is the IP address or resolvable name for the keystone service

Steps in summary:

  • Create a "heat" keystone domain using python-openstackclient (the keystoneclient CLI interface does not support domains)
   openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 domain create heat --description "Owns users and projects created by heat"

This returns a domain ID, referred to as $HEAT_DOMAIN_ID below

  • Create a domain-admin user for the "heat" domain
   openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 user create --password $PASSWORD --domain $HEAT_DOMAIN_ID heat_domain_admin --description "Manages users and projects created by heat"
   

This returns a user ID, referred to as $DOMAIN_ADMIN_ID below

  • Make the user a domain admin by adding the admin role for the domain
   openstack --os-token $OS_TOKEN --os-url=$KEYSTONE_ENDPOINT_V3 --os-identity-api-version=3 role add --user $DOMAIN_ADMIN_ID --domain $HEAT_DOMAIN_ID admin
  • Update heat.conf with the domain ID and the username/password for the domain-admin user
   stack_domain_admin_password = <password>
   stack_domain_admin = heat_domain_admin
   stack_user_domain = <domain id returned from domain create above>

See this blog post for full details.

OpenStack Database service (Trove)

Key New Features

  • User/Schema management
    • Users can perform CRUD operations on MySQL users and schemas through the Trove API
  • Flavor / Cinder Volume resizes
    • Resize up/down the flavor that defines the Trove instance
    • Resize up the optional Cinder Volume size if the datastore requires a larger volume
  • Multiple datastore support
    • Full feature support for MySQL and Percona
    • Experimental (not yet full-featured) support for MongoDB, Redis, Cassandra, and Couchbase
  • Configuration groups
    • Define a set of configuration options to attach to new or existing instances
  • Backups and Restore
    • Executes native backup software on a datastore and streams the output to a Swift container
    • Full and incremental backups
  • Optional DNS support via designate
    • Flag to define whether to provision DNS for an instance

Known Issues

None yet

Upgrade Notes

  • Trove Conductor is a new daemon to proxy database communication from guests. It needs to be installed and running.
  • The new Datastores feature requires operators to define (or remove) the datastores their installation will support
  • The new Configuration Groups feature allows operators to define a subset of configuration options for a particular datastore

OpenStack Documentation

Key New Features