- 1 OpenStack 2013.2 (Havana) Release Notes
- 1.1 General Upgrade Notes
- 1.2 OpenStack Object Storage (Swift)
- 1.3 OpenStack Compute (Nova)
- 1.3.1 Key New Features
- 1.3.1.1 API
- 1.3.1.2 Cells
- 1.3.1.3 Compute
- 1.3.1.4 Quota
- 1.3.1.5 Networking
- 1.3.1.6 Notifications
- 1.3.1.7 Scheduler
- 1.3.1.8 Storage
- 1.3.1.9 Conductor
- 1.3.1.10 Internal Changes
- 1.3.2 Known Issues
- 1.3.3 Upgrade Notes
- 1.4 OpenStack Image Service (Glance)
- 1.5 OpenStack Dashboard (Horizon)
- 1.6 OpenStack Identity (Keystone)
- 1.7 OpenStack Network Service (Neutron)
- 1.8 OpenStack Block Storage (Cinder)
- 1.9 OpenStack Metering (Ceilometer)
- 1.10 OpenStack Orchestration (Heat)
- 1.11 OpenStack Documentation
OpenStack 2013.2 (Havana) Release Notes
General Upgrade Notes
OpenStack Object Storage (Swift)
Key New Features
OpenStack Compute (Nova)
Key New Features
- The Compute (Nova) REST API includes an experimental new version (v3). The new version of the API includes a lot of cleanup, as well as a framework for implementing and versioning API extensions. It is expected that this API will be finalized in the Icehouse release. (blueprint).
- These extensions to the Compute (Nova) REST API have been added:
- CellCapacities: Adds the ability to determine the amount of RAM in a Cell, and the amount of RAM that is free in a Cell (blueprint).
- ExtendedFloatingIps: Adds optional fixed_address parameter to the add floating IP command, allowing a floating IP to be associated with a fixed IP address (blueprint).
- ExtendedIpsMac: Adds Mac address(es) to server response (blueprint).
- ExtendedQuotas: Adds the ability for administrators to delete a tenant's non-default quotas, reverting them to the configured default quota (blueprint).
- ExtendedServices: Adds the ability to store and display the reason a service has been disabled (blueprint).
- ExtendedVolumes: Adds attached volumes to instance information (blueprint).
- Migrations: Adds the ability to list resize and migration operations in progress by Cell or Region (blueprint).
- ServerUsage: Adds the launched_at and terminated_at values to the instance show response (blueprint).
- UsedLimitsForAdmin: Allows for the retrieval of tenant specific quota limits via the administrative API (blueprint).
- The Compute service's EC2 API has been updated to use error codes that are more consistent with those of the official EC2 API. (blueprint)
- The Cells scheduler has been updated to support filtering and weighting via the new scheduler_filter_classes and scheduler_weight_classes options in the [cells] configuration group. The new ram_by_instance_type and weight_offset weighting modules have also been added, removing the random selection of Cells used in previous releases. In addition a filter class, TargetCellFilter, allows administrators to specify a scheduler hint to direct a build to a particular Cell. This makes Cell scheduling work conceptually similar to the existing host scheduling. (blueprint)
- Live migration of virtual machine instances is now supported within a single Cell. Live migration of virtual machine instances between Cells is not supported. (blueprint)
- Cinder is now supported by Nova when using Cells. (blueprint)
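As a sketch, the Cells scheduler options described above live in the [cells] group of nova.conf. The fragment below illustrates the shape of the configuration; the class paths and the target_cell hint name reflect the Havana defaults as best understood, so treat the exact values as illustrative:

```
[cells]
scheduler_filter_classes = nova.cells.filters.all_filters
scheduler_weight_classes = nova.cells.weights.all_weighers

# A build can be directed at a specific cell with a scheduler hint, e.g.:
#   nova boot --hint target_cell=<cell-path> ...
```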
- Nova has a new feature that allows you to "shelve" an instance. This allows instances that are stopped for an extended period of time to be moved off of the hypervisors to free up resources (blueprint).
- A vendor_data section has been added to the metadata service and configuration drive facilities. This allows the extension of the metadata available to guests to include vendor or site specific data (blueprint).
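As an illustration, a guest can read the vendor_data document (exposed as vendor_data.json through the metadata service or config drive) and parse it as ordinary JSON. The keys below are hypothetical, operator-defined values, not part of any fixed schema:

```python
import json

# Hypothetical vendor_data payload as it might appear at
# openstack/latest/vendor_data.json; the keys are site-specific.
raw = '{"custom_repo": "http://mirror.example.com/havana", "site_id": 42}'


def load_vendor_data(blob):
    """Parse the vendor_data JSON document into a dict."""
    return json.loads(blob)


data = load_vendor_data(raw)
print(data["site_id"])  # whatever site-specific value the operator supplied
```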
hyperv (Hyper-V) driver
- Support for Windows Server / Hyper-V Server 2012 R2 (blueprint).
- VHDX format support (blueprint).
- Dynamic memory support (blueprint).
- Ephemeral storage support (blueprint).
- Compute metrics support for Ceilometer integration (blueprint).
libvirt (KVM) driver
- Support for the QEMU guest agent (qemu-guest-agent) has been added for guests created with the hw_qemu_guest_agent property set to yes (blueprint).
- Support for passthrough of PCI devices from the physical compute node to virtualized guests has been added to Nova. Currently only the libvirt driver provides a working implementation (base blueprint, libvirt blueprint).
- Added support for extracting QoS parameters from Cinder and rate limiting disk access based on them when using libvirt-based hypervisors (blueprint).
- RBD is now supported as a backend for storing images (blueprint).
- Added hard reboot support (change).
vmwareapi (VMware) driver
- Support for managing multiple clusters (blueprint).
- Image clone strategy - allow images to specify if they are to be used as linked clone or full clone images (blueprint)
- Support for using a config drive (blueprint)
- The default quota is now editable; previously its value was fixed. Use the nova quota-class-update default <key> <value> command to update the default quota (blueprint).
- Quotas may now be defined on a per-user basis (blueprint).
- Quotas for a given tenant or user can now be deleted, and their quota will reset to the default (blueprint).
- Network and IP address allocation is now performed in parallel to the other operations involved in provisioning an instance, resulting in faster boot times (blueprint).
- Nova now passes the host name of the Compute node selected to host an instance to Neutron. Neutron plug-ins may now use this information to alter the physical networking configuration of the Compute node as required (blueprint).
- Notifications are now generated when host aggregates are created, deleted, expanded, contracted, or otherwise updated (blueprint).
- Notifications are now generated when building an instance fails (blueprint).
- Added force_nodes to filter properties, allowing operators to explicitly specify the node for provisioning when using the baremetal driver (blueprint).
- Added the ability to make the IsolatedHostsFilter less restrictive, allowing isolated hosts to use all images, by manipulating the value of the new restrict_isolated_hosts_to_isolated_images configuration directive in nova.conf (blueprint).
- Added a GroupAffinityFilter as a counterpart to the existing GroupAntiAffinityFilter. The new filter allows the scheduling of an instance on a host from a specific group of hosts (blueprint).
- Added the ability for filters to set the new run_filter_once_per_request parameter to True if their filtering decisions are expected to remain valid for all instances in a request. This prevents the filter having to be re-run for each instance in the request when it is unnecessary. This setting has been applied to a number of existing filters (blueprint).
- Added per-aggregate filters AggregateRamFilter and AggregateCoreFilter which enforce themselves on host aggregates rather than globally. AggregateDiskFilter will be added in a future release (blueprint).
- Scheduler performance has been improved by removing the periodic messages that were being broadcasted from all compute nodes to all schedulers (blueprint).
- Scheduler performance has been improved by allowing filters to specify that they only need to run once for a given request for multiple instances (blueprint).
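The once-per-request mechanism can be sketched in plain Python. This is a simplified mock of the scheduler loop, not Nova's actual classes:

```python
class BaseFilter:
    # When True, the scheduler runs this filter only for the first
    # instance of a multi-instance request and reuses its decision.
    run_filter_once_per_request = False

    def host_passes(self, host, request):
        raise NotImplementedError


class RamFilter(BaseFilter):
    # Illustrative only: assume the decision stays valid for every
    # instance in the request, so one run is enough.
    run_filter_once_per_request = True

    def host_passes(self, host, request):
        return host["free_ram_mb"] >= request["ram_mb"]


def filter_hosts(filter_list, hosts, request, instance_index):
    """Simplified scheduler loop: skip once-per-request filters for
    every instance after the first."""
    candidates = hosts
    for f in filter_list:
        if instance_index > 0 and f.run_filter_once_per_request:
            continue  # the decision made for instance 0 still holds
        candidates = [h for h in candidates if f.host_passes(h, request)]
    return candidates


hosts = [{"name": "node1", "free_ram_mb": 4096},
         {"name": "node2", "free_ram_mb": 512}]
request = {"ram_mb": 1024}
survivors = filter_hosts([RamFilter()], hosts, request, instance_index=0)
print([h["name"] for h in survivors])  # node2 is filtered out
```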
- Attached Cinder volumes can now be encrypted. Data is decrypted as needed at read and write time while presenting instances with a normal block storage device (blueprint).
- Added the ability to transparently swap out the Cinder volume attached to an instance. While the instance may pause briefly while the volume is swapped no reads or writes are lost (blueprint).
- When connecting to NFS or GlusterFS backed volumes Nova now uses the mount options set in the Cinder configuration. Previously the mount options had to be set on each Compute node that would access the volumes (blueprint).
- Added native GlusterFS support. If qemu_allowed_storage_drivers is set to gluster in nova.conf then QEMU is configured to access the volume directly using libgfapi instead of via fuse (blueprint).
- QEMU assisted snapshotting is now used to provide the ability to create cinder volume snapshots, even when the backing storage in use does not support them natively, such as GlusterFS (blueprint).
- The iSER transport protocol is now supported for accessing storage, providing performance improvements compared to using iSCSI over TCP (blueprint).
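A minimal nova.conf fragment for the native GlusterFS path described above; the option name is as given in the note, and the rest of the fragment is illustrative:

```
# nova.conf on libvirt-based compute nodes
qemu_allowed_storage_drivers = gluster
```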
- The Conductor is now able to spawn multiple worker threads operating in parallel; the number of threads to spawn is determined by the value of workers in nova.conf (blueprint).
- Nova now uses the common service infrastructure provided by Oslo (blueprint).
- Changes have been made that will allow backporting bug fixes that require a database migration (blueprint).
- A significant amount of progress has been made toward eventually supporting live upgrades of a Nova deployment. In Havana, improvements included additional controls over versions of messages sent between services (see the [upgrade_levels] section of nova.conf) (blueprint), and a new object layer that helps decouple the code base from the details of the database schema (blueprint).
Upgrade Notes
- Note that periodic tasks will now run more often than before. The frequency of periodic task runs has always been configurable, but the timer for the next run was previously started only after the last run of the task completed. Tasks now run at a constant frequency, regardless of how long a given run takes. This makes it much clearer when tasks are supposed to run; the side effect is that tasks will now run a bit more often by default. (https://review.openstack.org/#/c/26448/)
- The security_groups_handler option has been removed from nova.conf. It was added for Quantum and is no longer needed. (https://review.openstack.org/#/c/28384/)
- This change should not affect upgrades, but it is a change in behavior for all new deployments. Previous versions created the default m1.tiny flavor with a disk size of 0. The default value is now 1. 0 means not to do any disk resizing and just use whatever disk size is set up in the image. 1 means to impose a 1 GB limit. The special value of 0 is still supported if you would like to create or modify flavors to use it. (https://review.openstack.org/#/c/27991/).
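The root-disk semantics can be expressed as a small sketch (a hypothetical helper, not Nova code):

```python
def effective_root_gb(flavor_root_gb, image_virtual_size_gb):
    """Return the root disk size an instance ends up with.

    A flavor root disk of 0 means "do not resize: use whatever disk
    size the image defines"; any positive value imposes that size.
    """
    if flavor_root_gb == 0:
        return image_virtual_size_gb
    return flavor_root_gb


# Old m1.tiny behavior (root_gb=0): a 2 GB image keeps its size.
print(effective_root_gb(0, 2))
# New m1.tiny default (root_gb=1): a 1 GB limit is imposed.
print(effective_root_gb(1, 2))
```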
- The plugins framework has been removed, since the functionality it provided is available through other means. (https://review.openstack.org/#/c/33595)
- The notify_on_any_change configuration option has been removed. (https://review.openstack.org/#/c/35264/)
- The compute_api_class option has been deprecated and will be removed in a future release. (https://review.openstack.org/#/c/28750/)
- Nova now uses Neutron after Quantum was renamed. (https://review.openstack.org/#/c/35425/)
- Nova will now reject requests to create a server when there are multiple Neutron networks defined, but no networks specified in the server create request. Nova would previously attach the server to *all* networks, but consensus was that behavior didn't make sense. (https://review.openstack.org/#/c/33996/)
- The vmware configuration variable 'vnc_password' is now deprecated. A user is no longer required to enter a password to have VNC access; this now works like all other virt drivers. (https://review.openstack.org/#/c/43268/)
OpenStack Image Service (Glance)
Key New Features
Specific groups of users can now be authorized to create, update, and read different properties of arbitrary entities. There are two types of image properties in the Image Service:
- Core Properties, as specified by the image schema.
- Meta Properties, which are arbitrary key/value pairs that can be added to an image.
Access to meta properties through the Image Service's public API calls can now be restricted to certain sets of users, using a property protections configuration file (specified in the glance-api.conf file). For example:
- Limit all property interactions to admin only.
- Allow both admins and users with the billing role to read and modify all properties prefixed with ``x_billing_code_``.
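The billing example above might look like the following in a property protections file referenced from glance-api.conf. The layout shown is illustrative of the Havana format, where each section header is a property-name regular expression and each key lists the roles allowed to perform that operation:

```
# /etc/glance/property-protections.conf
# (referenced via property_protection_file in glance-api.conf)
[^x_billing_code_.*]
create = admin,billing
read = admin,billing
update = admin,billing
delete = admin,billing

# Everything else is admin-only.
[.*]
create = admin
read = admin
update = admin
delete = admin
```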
A new API has been created for the Registry service (db_api compliant) using RPC-over-HTTP. The API:
- Allows Glance to continue to support legacy deployments that were using the earlier registry service. Glance v2 completely dropped the use of a registry service, which in some cases could result in a security vulnerability (all database parameters had to be present in glance-api.conf if it was deployed as a 'public' service).
- Makes it easier to implement new methods to the database API without having to modify the registry's API.
Updates include a registry database driver that talks to a remote registry service, which in turn talks directly to a database back end. The registry service implements all the database API public functions that are actually used from outside the API. The Image Service's API v2 must be enabled, and the Image Service client must point to this. blueprint
The Image Service now supports the following for back-end storage:
- Sheepdog. The Image Service can now store images in a backend Sheepdog cluster. Sheepdog is an open-source project, offering a distributed storage system for QEMU.
- Cinder. OpenStack Cinder can now be used as a block-storage backend for the Image Service. blueprint
- GridFS. The Image Service now supports the GridFS distributed filesystem. Support is enabled using the new .conf options mongodb_store_uri and mongodb_store_db. GridFS locations with the form `gridfs://<IMAGE>` are supported. blueprint
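A minimal glance-api.conf fragment for the GridFS backend described above; the option names are as given in the note, and the values are illustrative:

```
# glance-api.conf (illustrative values)
default_store = gridfs
mongodb_store_uri = mongodb://localhost:27017/
mongodb_store_db = glance
```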
Multiple Image Locations
Image service images can now be stored in multiple locations. This enables the efficient consumption of image data and the use of backup images in the event of a primary image failure. blueprint
Related updates include:
- A policy layer for locations APIs, to enable the policy checks for changing image locations. blueprint
- Direct URL metadata. Each Image Service storage system can now store location metadata in the image location database, enabling it to return direct-URL-specific metadata to the client when direct_url is enabled. For example, given a file:// URL, the NFS exporting host, the mount point, and the FS type can now be returned to the client. blueprint
- Support for multiple locations when downloading images. This allows API clients to consume images from multiple backend stores. blueprint
- Indexed checksum image property. The checksum image property is now indexed allowing users to search for an image by specifying the checksum. blueprint
- Scrubber update. The scrubber is a utility that cleans up images that have been deleted. The scrubber now supports multiple locations for 'pending_delete' images. blueprint
- Metadata checking. Checking of metadata at the image location proxy layer can now be enabled when the location is changed. blueprint
But wait, there's more!
- Configurable container and disk formats. Glance previously only supported a specific set of container and disk formats, which were rarely the actual set of formats supported by any given deployment. The set of acceptable container and disk formats can now be configured. blueprint
- Storage Quota. Users can now be limited to N bytes (sum total) across all storage systems (total_storage_quota configured in .conf file). blueprint
- Membership Policy. Policy enforcement has been added to membership APIs (similar to image/location policy enforcement). New policies include 'new_member', 'add_member', 'get_member', 'modify_member', 'get_members', and 'delete_member'. blueprint
OpenStack Dashboard (Horizon)
Key New Features
OpenStack Identity (Keystone)
Key New Features
- Improved deployment flexibility
- Authorization data (tenants/projects, roles, role assignments; e.g. in SQL), as determined by the "assignments" driver, can now be stored in a separate backend from authentication data (users, groups; e.g. in LDAP), as determined by the "identity" driver
- Credentials (e.g. EC2 tokens), as determined by the "credentials" driver, can now be stored in a separate backend from authentication data
- Ability to specify more granular RBAC policy rules (for example, based on attributes in the API request / response body)
- Pluggable handling of external authentication
- Token generation, which is currently either UUID or PKI based, is now pluggable and separated from token persistence. Deployers can write a custom implementation of the keystone.token.provider.Provider interface and configure keystone to use it with [token] provider. As a result, [signing] token_format is now deprecated in favor of this new configuration option.
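For example, in keystone.conf (the class paths follow Havana's naming as best understood; treat the fragment as illustrative):

```
[token]
# PKI tokens; the UUID equivalent is keystone.token.providers.uuid.Provider
provider = keystone.token.providers.pki.Provider
```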
- First-class support for deployment behind Apache httpd
- New deployment features
- Ability to cache the results of driver calls in a key-value store (for example, memcached or redis)
- keystone-manage token_flush command to help purge expired tokens
- New API features
- Delegated role-based authorization to arbitrary consumers using OAuth 1.0a
- API clients can now opt out of the service catalog being included in a token response
- Unicode i18n support for API error messages based on HTTP Accept-Language headers
- Domain role assignments can now be inherited by that domain's projects
- Aggregated role assignments API
- External authentication providers can now embed a binding reference into tokens, such that remote services may optionally validate the identity of the user presenting the token against a presented external authentication mechanism.
- Endpoints may now be explicitly mapped to projects, effectively preventing certain endpoints from appearing in the service catalog for certain projects, based on the project scope of a token. This does not prevent end users from accessing or using endpoints they are aware of through some other means.
- Event notifications emitted for user and project/tenant create, update, and delete operations
- General performance improvements
- The v2 and v3 API now use the same logic for computing the list of roles assigned to a user-project pair during authentication, based on user+project, group+project, user+domain-inherited, and group+domain-inherited role assignments (where domain-inherited role assignments allow a domain-level role assignment to apply to all projects owned by that domain). The v3 API now uses a similar approach for computing user+domain role assignments for domain-scoped tokens.
- Logs are handled using a common logging implementation from Oslo-incubator, consistent with other OpenStack projects
- SQL migrations for extensions can now be managed independently from the primary migration repository using keystone-manage db_sync --extension=«extension-name».
- An experimental implementation of domain-specific identity backends (for example, a unique LDAP configuration per domain) was started in Havana but remains incomplete and will be finished during Icehouse.
OpenStack Network Service (Neutron)
Key New Features
- Changes to neutron-dhcp-agent require you to upgrade your dhcp-agents first, then wait until the dhcp_lease time has expired before updating neutron-server. Failing to do so may lead to cases where an instance is deleted, the dnsmasq process has not released the lease, and Neutron allocates that IP to a new port. (https://review.openstack.org/#/c/37580/)
OpenStack Block Storage (Cinder)
Key New Features
- Scheduler hints extension added to V2 API
- Added local block storage driver to allow use of raw disks without LVM
- Added ability to extend the size of an existing volume
- Added ability to transfer volume from one tenant to another
- Added API call to enable editing default quota settings
- Added config option to allow auto-flatten snapshots for back-ends that leave a dependency when creating volume from snapshot
- Allow the API to accept a "host-name" in volume-attach calls, not just an instance UUID
- Enable the generalized backup layer to allow backups from any iSCSI device that doesn't have internal optimizations
- Added a Ceph driver to the backup service (allowing Ceph as a backup target)
- Added rate-limiting information to the provider info that can be passed to Nova and used by the hypervisor
- New Windows Storage Server driver features (blueprint)
New Vendor Drivers
- Dell EqualLogic volume driver
- VMware VMDK cinder driver
- Driver to leverage the features of IBM GPFS file system
Major Additions To Existing Drivers
- Add Fibre Channel drivers for Huawei storage systems
- Add an NFS volume driver to support Nexenta storage in Cinder
- Miscellaneous updates and device-specific additions have also been made to almost every existing vendor driver
New Backup Drivers
- Allow Ceph as an option for volume backup
- IBM Tivoli Storage Manager
Known Issues
- Bug #1237338: Upload volume to image fails with VMware volume driver
Upgrade Notes
- None yet
OpenStack Metering (Ceilometer)
Key New Features
- The statistics endpoint can now be used to group samples by some fields using the groupby argument
- A new alarm API is now available (see Alarms)
- Users can now post their own samples and meters through the API
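The three features above can be illustrated as HTTP requests against the v2 API; the meter names and fields below are examples, not fixed values:

```
# Group statistics for a meter by resource:
GET /v2/meters/cpu_util/statistics?groupby=resource_id

# List alarms:
GET /v2/alarms

# Post user-defined samples for a (possibly new) meter:
POST /v2/meters/my.custom.meter
```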
Alarm is a new service allowing users and operators to trigger actions based on sample thresholds over a period of time. It is composed of the following services:
- ceilometer-api, with the new /v2/alarms endpoint that allows alarms to be manipulated (CRUD);
- ceilometer-alarm-evaluator, which evaluates alarms regularly and sends notifications to trigger them;
- ceilometer-alarm-notifier, which receives the notifications sent by ceilometer-alarm-evaluator when an alarm is triggered and handles the trigger as the end user requested when creating the alarm.
The alarm API also allows the alarm-triggering history to be consulted.
- New HBase driver
- New DB2 (NoSQL) driver
- Improved SQLAlchemy driver
- Improved MongoDB driver
- Added a TTL functionality that allows old samples to be deleted from the database
- Added the ability to store events
- Added event storage feature on SQLAlchemy
- Added a UDP based publisher
- Added a unit transformer
- Added a meter on API requests using a special Python middleware
- Added the ability to record samples from the new Neutron bandwidth metering feature
- Added support for Hyper-V
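The sample TTL mentioned above is controlled from ceilometer.conf; the fragment below is illustrative (a time_to_live option in seconds, with a negative value meaning samples are kept forever):

```
[database]
# Expire samples after 7 days; -1 (the default) keeps them forever.
time_to_live = 604800
```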
OpenStack Orchestration (Heat)
Key New Features
- Concurrent resource operations
- Much improved networking/Neutron support
- Initial support for native template language (HOT)
- Provider and Environment abstractions
- Ceilometer integration for metrics/monitoring/alarms
- UpdateStack improvements
- Initial integration with keystone trusts functionality
- Many more native resource types
- Stack suspend/resume
OpenStack Documentation
Key New Features
- Each page now has a bug reporting link so you can easily report bugs against a doc page.
- The manuals have been completely reorganized. With the Havana release, the following Guides exist:
- Install OpenStack
- Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
- Installation Guide for Ubuntu 12.04 (LTS)
- Installation Guide for openSUSE
- Configure and run an OpenStack cloud:
- Cloud Administrator Guide
- Configuration Reference
- Operations Guide
- High Availability Guide
- Security Guide
- Virtual Machine Image Guide
- Use the OpenStack dashboard and command-line clients
- API Quick Start
- End User Guide
- Admin User Guide
Known Issues
- Some of the guides are not completely updated and might be missing information