
OpenStack 2013.2 (Havana) Release Notes

General Upgrade Notes

tbd

OpenStack Object Storage (Swift)

Key New Features

Known Issues

None

Upgrade Notes

OpenStack Compute (Nova)

Key New Features

API

  • The Compute (Nova) REST API includes an experimental new version (v3). The new version of the API includes a lot of cleanup, as well as a framework for implementing and versioning API extensions. It is expected that this API will be finalized in the Icehouse release. (blueprint).
  • These extensions to the Compute (Nova) REST API have been added:
    • CellCapacities: Adds the ability to determine the amount of RAM in a Cell, and the amount of RAM that is free in a Cell (blueprint).
    • ExtendedFloatingIps: Adds an optional fixed_address parameter to the add floating IP command, allowing a floating IP to be associated with a fixed IP address (blueprint); see the request sketch after this list.
    • ExtendedIpsMac: Adds MAC address(es) to the server response (blueprint).
    • ExtendedQuotas: Adds the ability for administrators to delete a tenant's non-default quotas, reverting them to the configured default quota (blueprint).
    • ExtendedServices: Adds the ability to store and display the reason a service has been disabled (blueprint).
    • ExtendedVolumes: Adds attached volumes to instance information (blueprint).
    • Migrations: Adds the ability to list resize and migration operations in progress by Cell or Region (blueprint).
    • ServerUsage: Adds the launched_at and terminated_at values to the instance show response (blueprint).
    • UsedLimitsForAdmin: Allows for the retrieval of tenant specific quota limits via the administrative API (blueprint).
  • The Compute service's EC2 API has been updated to use error codes that are more consistent with those of the official EC2 API. (blueprint)
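
As an illustrative sketch of the ExtendedFloatingIps extension above, the fixed_address parameter is passed in the addFloatingIp server action; the addresses shown are hypothetical:

    POST /v2/{tenant_id}/servers/{server_id}/action
    {
        "addFloatingIp": {
            "address": "172.24.4.10",
            "fixed_address": "192.168.0.5"
        }
    }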

Cells

  • The Cells scheduler has been updated to support filtering and weighting via the new scheduler_filter_classes and scheduler_weight_classes options in the [cells] configuration group. The new ram_by_instance_type and weight_offset weighting modules have also been added, removing the random selection of Cells used in previous releases. In addition, a filter class, TargetCellFilter, allows administrators to specify a scheduler hint to direct a build to a particular Cell. This makes Cell scheduling conceptually similar to the existing host scheduling; a configuration sketch follows this list. (blueprint)
  • Live migration of virtual machine instances is now supported within a single Cell. Live migration of virtual machine instances between Cells is not supported. (blueprint)
  • Cinder is now supported by Nova when using Cells. (blueprint)
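
A minimal nova.conf sketch of the Cells scheduler options above; the all_filters and all_weighers class paths and the target_cell hint key are assumptions based on the Havana cells code, and the cell path is hypothetical:

    [cells]
    enable = True
    scheduler_filter_classes = nova.cells.filters.all_filters
    scheduler_weight_classes = nova.cells.weights.all_weighers

    # Direct a build to a particular cell via the TargetCellFilter
    # (hypothetical cell path):
    # nova boot --hint target_cell='api!cell01' --image <image> --flavor <flavor> <name>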

Compute

General
  • Added hypervisor support for containers created and managed using Docker (blueprint); see the configuration sketch below.
  • Nova has a new feature that allows you to "shelve" an instance. This allows instances that are stopped for an extended period of time to be moved off of the hypervisors to free up resources (blueprint).
  • A vendor_data section has been added to the metadata service and configuration drive facilities. This allows the extension of the metadata available to guests to include vendor or site specific data (blueprint).
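
A minimal nova.conf sketch for the Docker support above, assuming the driver is enabled by pointing compute_driver at the Docker driver shipped with this release:

    [DEFAULT]
    # Use the Docker container driver instead of the default libvirt driver
    compute_driver = docker.DockerDriver
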
vmwareapi (VMware) driver
  • Support for managing multiple clusters (BP).
  • Cinder driver support (BP).
  • Image clone strategy (BP).
  • Support for using a config drive (LP).
libvirt (KVM) driver
  • Support for the QEMU guest agent (qemu-guest-agent) has been added for guests created with the hw_qemu_guest_agent property set to yes (blueprint); see the image property sketch after this list.
  • Support for passthrough of PCI devices from the physical compute node to virtualized guests has been added to Nova. Currently only the libvirt driver provides a working implementation (base blueprint, libvirt blueprint).
  • Added support for extracting QoS parameters from Cinder and rate limiting disk access based on them when using libvirt-based hypervisors (blueprint).
  • RBD is now supported as a backend for storing images (blueprint).
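
For the QEMU guest agent feature above, the property is set on the image in Glance; a minimal sketch (the image ID is a placeholder):

    # Instances booted from this image get a guest agent channel
    glance image-update --property hw_qemu_guest_agent=yes <image-id>
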
Baremetal driver
  • Added a backend supporting Tilera bare-metal provisioning to the Baremetal driver (blueprint).

Quota

  • The default quota is now editable; previously its value was fixed. Use the nova quota-class-update default <key> <value> command to update the default quota, as sketched after this list (blueprint).
  • Quotas may now be defined on a per-user basis (blueprint).
  • Quotas for a given tenant or user can now be deleted, and their quota will reset to the default (blueprint).
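
A sketch of the quota features above, using the flag form of the python-novaclient options; the --user flag is assumed here, and all values and IDs are illustrative:

    # Raise the default instance and core quotas for all tenants
    nova quota-class-update --instances 20 --cores 40 default

    # Set a per-user quota within a tenant
    nova quota-update --user <user-id> --instances 5 <tenant-id>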

Networking

  • Network and IP address allocation is now performed in parallel to the other operations involved in provisioning an instance, resulting in faster boot times (blueprint).
  • Nova now passes the host name of the Compute node selected to host an instance to Neutron. Neutron plug-ins may now use this information to alter the physical networking configuration of the Compute node as required (blueprint).

Notifications

  • Notifications are now generated when host aggregates are created, deleted, expanded, contracted, or otherwise updated (blueprint).
  • Notifications are now generated when building an instance fails (blueprint).

Scheduler

  • Added force_nodes to filter properties, allowing operators to explicitly specify the node for provisioning when using the baremetal driver (blueprint).
  • Added the ability to make the IsolatedHostsFilter less restrictive, allowing isolated hosts to use all images, by manipulating the value of the new restrict_isolated_hosts_to_isolated_images configuration directive in nova.conf (blueprint); a configuration sketch follows this list.
  • Added a GroupAffinityFilter as a counterpart to the existing GroupAntiAffinityFilter. The new filter allows the scheduling of an instance on a host from a specific group of hosts (blueprint).
  • Added the ability for filters to set the new run_filter_once_per_request parameter to True if their filtering decisions are expected to remain valid for all instances in a request. This prevents the filter having to be re-run for each instance in the request when it is unnecessary. This setting has been applied to a number of existing filters (blueprint).
  • Added per-aggregate filters AggregateRamFilter and AggregateCoreFilter which enforce themselves on host aggregates rather than globally. AggregateDiskFilter will be added in a future release (blueprint).
  • Scheduler performance has been improved by removing the periodic messages that were being broadcasted from all compute nodes to all schedulers (blueprint).
  • Scheduler performance has been improved by allowing filters to specify that they only need to run once for a given request for multiple instances (blueprint).
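
A nova.conf sketch of the scheduler options above; host names, the image UUID, and the allocation ratio are hypothetical, and the aggregate metadata key shown is the one AggregateCoreFilter is expected to read:

    [DEFAULT]
    scheduler_default_filters = AggregateCoreFilter,IsolatedHostsFilter,ComputeFilter
    isolated_hosts = host1,host2
    isolated_images = <image-uuid>
    # Allow isolated hosts to run any image, not only isolated images
    restrict_isolated_hosts_to_isolated_images = False

    # AggregateCoreFilter reads its CPU allocation ratio from aggregate metadata:
    # nova aggregate-set-metadata <aggregate-id> cpu_allocation_ratio=8.0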

Storage

  • Attached Cinder volumes can now be encrypted. Data is decrypted as needed at read and write time while presenting instances with a normal block storage device (blueprint).
  • Added the ability to transparently swap out the Cinder volume attached to an instance. While the instance may pause briefly while the volume is swapped, no reads or writes are lost (blueprint).
  • When connecting to NFS or GlusterFS backed volumes, Nova now uses the mount options set in the Cinder configuration. Previously the mount options had to be set on each Compute node that would access the volumes (blueprint).
  • Added native GlusterFS support. If qemu_allowed_storage_drivers is set to gluster in nova.conf, QEMU is configured to access the volume directly using libgfapi instead of via FUSE (blueprint); see the sketch after this list.
  • QEMU-assisted snapshotting is now used to provide the ability to create Cinder volume snapshots even when the backing storage in use does not support them natively, such as GlusterFS (blueprint).
  • The iSER transport protocol is now supported for accessing storage, providing performance improvements compared to using iSCSI over TCP (blueprint).
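
A nova.conf sketch enabling the native GlusterFS access path described above:

    [DEFAULT]
    # Let QEMU open GlusterFS-backed volumes directly via libgfapi
    # instead of going through a FUSE mount
    qemu_allowed_storage_drivers = gluster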

Conductor

  • The Conductor is now able to spawn multiple worker threads operating in parallel; the number of threads to spawn is determined by the value of workers in nova.conf, as sketched below (blueprint).
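
A sketch of the workers setting, assuming it is read from the [conductor] group of nova.conf (the value is illustrative):

    [conductor]
    # Number of parallel nova-conductor workers
    workers = 4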

Internal Changes

  • Nova now uses the common service infrastructure provided by Oslo (blueprint).
  • Changes have been made that will allow backporting bug fixes that require a database migration (blueprint).
  • A significant amount of progress has been made toward eventually supporting live upgrades of a Nova deployment. In Havana, improvements included additional controls over versions of messages sent between services (see the [upgrade_levels] section of nova.conf, sketched after this list) (blueprint), and a new object layer that helps decouple the code base from the details of the database schema (blueprint).
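
A nova.conf sketch of the message version pinning mentioned above; the release alias shown is illustrative:

    [upgrade_levels]
    # Cap compute RPC messages at the Havana version until every node is upgraded
    compute = havana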

Known Issues

Upgrade Notes

  • Note that periodic tasks will now run more often than before. The frequency of periodic task runs has always been configurable; however, the timer for when to run the task again was previously started after the last run of the task completed. The tasks now run at a constant frequency, regardless of how long a given run takes. This makes it much clearer when tasks are supposed to run, but the side effect is that tasks will now run a bit more often by default. (https://review.openstack.org/#/c/26448/)
  • The security_groups_handler option has been removed from nova.conf. It was added for Quantum and is no longer needed. (https://review.openstack.org/#/c/28384/)
  • This change should not affect upgrades, but it is a change in behavior for all new deployments. Previous versions created the default m1.tiny flavor with a disk size of 0; the default value is now 1. A value of 0 means not to do any disk resizing and just use whatever disk size is set up in the image, while 1 imposes a 1 GB limit. The special value of 0 is still supported if you would like to create or modify flavors to use it, as shown after this list. (https://review.openstack.org/#/c/27991/)
  • A plugins framework has been removed since what it provided was possible via other means. (https://review.openstack.org/#/c/33595)
  • The notify_on_any_change configuration option has been removed. (https://review.openstack.org/#/c/35264/)
  • The compute_api_class option has been deprecated and will be removed in a future release. (https://review.openstack.org/#/c/28750/)
  • Nova now uses the name Neutron, following the renaming of Quantum. (https://review.openstack.org/#/c/35425/)
  • Nova will now reject requests to create a server when there are multiple Neutron networks defined but no networks specified in the server create request. Nova would previously attach the server to all networks, but consensus was that this behavior didn't make sense; see the boot example after this list. (https://review.openstack.org/#/c/33996/)
  • The VMware configuration variable vnc_password is now deprecated. A user will no longer be required to enter a password to have VNC access; this now works like all other virt drivers. (https://review.openstack.org/#/c/43268/)
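
Two illustrative commands for the flavor and networking changes above; all names, IDs, and sizes are hypothetical:

    # The special disk size 0 (resize nothing, use the image's own size) can
    # still be requested explicitly: <name> <id> <ram MB> <disk GB> <vcpus>
    nova flavor-create m1.imagesized auto 512 0 1

    # With multiple Neutron networks defined, the target network must now be
    # named explicitly at boot time
    nova boot --image <image> --flavor m1.tiny --nic net-id=<network-uuid> myserver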

OpenStack Image Service (Glance)

Key New Features

Known Issues

Upgrade Notes

OpenStack Dashboard (Horizon)

Key New Features

Known Issues

Upgrade Notes

OpenStack Identity (Keystone)

Key New Features

  • Improved deployment flexibility
    • Authorization data (tenants/projects, roles, and role assignments), handled by the "assignments" driver (e.g. SQL), can now be stored in a backend separate from authentication data (users and groups), handled by the "identity" driver (e.g. LDAP)
    • Credentials (e.g. ec2 tokens) can now be stored in a separate backend, as determined by the "credentials" driver, from authentication data
    • Ability to specify more granular RBAC policy rules (for example, based on attributes in the API request / response body)
    • Pluggable handling of external authentication using REMOTE_USER
    • Token generation, which is currently either UUID or PKI based, is now pluggable and separated from token persistence. Deployers can write a custom implementation of the keystone.token.provider.Provider interface and configure keystone to use it with [token] provider. As a result, [signing] token_format is now deprecated in favor of this new configuration option; see the sketch after this list.
    • First-class support for deployment behind Apache httpd
  • New deployment features
    • Ability to cache the results of driver calls in a key-value store (for example, memcached or redis)
    • keystone-manage token_flush command to help purge expired tokens
  • New API features
    • Delegated role-based authorization to arbitrary consumers using OAuth 1.0a
    • API clients can now opt out of the service catalog being included in a token response
    • Unicode i18n support for API error messages based on HTTP Accept-Language headers
    • Domain role assignments can now be inherited by that domain's projects
    • Aggregated role assignments API
    • External authentication providers can now embed a binding reference into tokens such that remote services may optionally validate the identity of the user presenting the token against a presented external authentication mechanism. Currently, only Kerberos is supported.
    • Endpoints may now be explicitly mapped to projects, effectively preventing certain endpoints from appearing in the service catalog for certain projects, based on the project scope of a token. This does not prevent end users from accessing or using endpoints they are aware of through some other means.
  • Event notifications emitted for user and project/tenant create, update, and delete operations
  • General performance improvements
  • The v2 and v3 API now use the same logic for computing the list of roles assigned to a user-project pair during authentication, based on user+project, group+project, user+domain-inherited, and group+domain-inherited role assignments (where domain-inherited role assignments allow a domain-level role assignment to apply to all projects owned by that domain). The v3 API now uses a similar approach for computing user+domain role assignments for domain-scoped tokens.
  • Logs are handled using a common logging implementation from Oslo-incubator, consistent with other OpenStack projects
  • SQL migrations for extensions can now be managed independently from the primary migration repository using keystone-manage db_sync --extension=<extension-name>.
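
A sketch of the pluggable token provider and the new keystone-manage commands above; the PKI provider path is the in-tree Havana implementation, and the extension name is an example:

    [token]
    # Replaces the now-deprecated [signing] token_format option
    provider = keystone.token.providers.pki.Provider

Run periodically to purge expired tokens, and to migrate an extension's tables:

    keystone-manage token_flush
    keystone-manage db_sync --extension=oauth1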

Known Issues

  • An experimental implementation of domain-specific identity backends (for example, a unique LDAP configuration per domain) was started in Havana but remains incomplete and will be finished during Icehouse.

Upgrade Notes

OpenStack Network Service (Neutron)

Key New Features

Known Issues

None yet.

Upgrade Notes

  • Changes to neutron-dhcp-agent require you to first upgrade your dhcp-agents, then wait until the dhcp_lease time has expired before updating neutron-server. Failure to do this may lead to cases where an instance is deleted, the dnsmasq process has not released the lease, and Neutron allocates that IP to a new port. (https://review.openstack.org/#/c/37580/)

OpenStack Block Storage (Cinder)

Key New Features

Known Issues

None yet

Upgrade Notes

OpenStack Metering (Ceilometer)

Key New Features

Known Issues

Upgrade Notes

None yet

OpenStack Orchestration (Heat)

Key New Features

  • Concurrent resource operations
  • Much improved networking/Neutron support
  • Initial support for native template language (HOT)
  • Provider and Environment abstractions
  • Ceilometer integration for metrics/monitoring/alarms
  • UpdateStack improvements
  • Initial integration with keystone trusts functionality
  • Many more native resource types
  • Stack suspend/resume

Known Issues

None yet

Upgrade Notes

None yet

OpenStack Documentation

Key New Features

  • Each page now has a bug reporting link so you can easily report bugs against a doc page.
  • The manuals have been completely reorganized. With the Havana release, the following Guides exist:
    • Install OpenStack
      • Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
      • Installation Guide for Ubuntu 12.04 (LTS)
      • Installation Guide for openSUSE
    • Configure and run an OpenStack cloud:
      • Cloud Administrator Guide
      • Configuration Reference
      • Operations Guide
      • High Availability Guide
      • Security Guide
      • Virtual Machine Image Guide
    • Use the OpenStack dashboard and command-line clients
      • API Quick Start
      • End User Guide
      • Admin User Guide

Known Issues

  • Some of the guides are not completely updated and may be missing information

Upgrade Notes

None yet