Latest revision as of 00:32, 23 September 2014
OpenStack 2013.2 (Havana) Release Notes
In this document you'll find a description of key new features, known bugs and upgrade tips for the 2013.2 (Havana) release of OpenStack.
- 1 OpenStack 2013.2 (Havana) Release Notes
- 1.1 OpenStack Object Storage (Swift)
- 1.2 OpenStack Compute (Nova)
- 1.2.1 Key New Features
- 1.2.1.1 API
- 1.2.1.2 Cells
- 1.2.1.3 Compute
- 1.2.1.4 Quota
- 1.2.1.5 Networking
- 1.2.1.6 Notifications
- 1.2.1.7 Scheduler
- 1.2.1.8 Storage
- 1.2.1.9 Conductor
- 1.2.1.10 Internal Changes
- 1.2.2 Known Issues
- 1.2.3 Upgrade Notes
- 1.3 OpenStack Image Service (Glance)
- 1.4 OpenStack Dashboard (Horizon)
- 1.4.1 Release Overview
- 1.4.2 Highlights
- 1.4.2.1 New Features
- 1.4.2.2 User Experience Improvements
- 1.4.2.3 Community
- 1.4.2.4 Under The Hood
- 1.4.2.5 Other Improvements and Fixes
- 1.4.3 Upgrade Information
- 1.4.4 Known Issues and Limitations
- 1.4.5 Backwards Compatibility
- 1.5 OpenStack Identity (Keystone)
- 1.6 OpenStack Network Service (Neutron)
- 1.6.1 Key New Features
- 1.6.2 Known Issues
- 1.6.3 Upgrade Notes
- 1.6.4 Deprecation Notices
- 1.7 OpenStack Block Storage (Cinder)
- 1.8 OpenStack Metering (Ceilometer)
- 1.9 OpenStack Orchestration (Heat)
- 1.9.1 Key New Features
- 1.9.1.1 Much improved documentation
- 1.9.1.2 Initial integration with Tempest
- 1.9.1.3 Concurrent resource operations
- 1.9.1.4 Much improved networking/Neutron support
- 1.9.1.5 Initial support for native template language (HOT)
- 1.9.1.6 Provider and Environment abstractions
- 1.9.1.7 Ceilometer integration for metrics/monitoring/alarms
- 1.9.1.8 UpdateStack improvements
- 1.9.1.9 Initial integration with keystone trusts functionality
- 1.9.1.10 Improved resource documentation
- 1.9.1.11 Many more native resource types
- 1.9.1.12 New Rackspace resource types
- 1.9.1.13 Stack suspend/resume
- 1.9.1.14 Consolidated configuration to a single heat.conf and a single paste-api.ini
- 1.9.1.15 Heat Standalone Mode
- 1.9.2 Known Issues
- 1.9.3 Upgrade Notes
- 1.10 OpenStack Documentation
OpenStack Object Storage (Swift)
Key New Features
- Global clusters support
The "region" concept introduced in Swift 1.8.0 has been augmented with support for using a separate replication network and configuring read and write affinity. These features combine to offer support for a single Swift cluster spanning a wide geographic area.
- Added config file conf.d support
Allow Swift daemons and servers to optionally accept a directory as the configuration parameter. This allows different parts of the config file to be managed separately; for example, each middleware could use a separate file for its particular config settings.
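As a rough sketch of the layout (file names and option values below are illustrative, not prescribed), a proxy server's configuration could be split across a conf.d directory like this:

```ini
; /etc/swift/proxy-server.conf.d/00-base.conf  (illustrative file name)
[DEFAULT]
bind_port = 8080

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

; /etc/swift/proxy-server.conf.d/10-tempauth.conf  (per-middleware settings)
[filter:tempauth]
use = egg:swift#tempauth
```

The daemon is then pointed at the directory instead of a single file, and the fragments are merged at startup.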
- Disk performance
The object server now can be configured to use threadpools to increase performance and smooth out latency throughout the system. Also, many disk operations were reordered to increase reliability and improve performance.
- Added support for pooling memcache connections
- Much faster calculation for choosing handoff nodes
Significant bugs fixed
- Fixed bug where memcache entries would not expire
- Fixed issue where the proxy would continue to read from a storage server even after a client had disconnected
- Fix issue with UTF-8 handling in versioned writes
Additional operational polish
- Set default wsgi workers to cpu_count
Change the default value of wsgi workers from 1 to auto. The new default value for workers in the proxy, container, account & object wsgi servers will spawn as many workers per process as you have cpu cores. This will not be ideal for some configurations, but it's much more likely to produce a successful out of the box deployment.
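A minimal sketch of the new default (the same option exists in the account, container and object server configs):

```ini
; proxy-server.conf
[DEFAULT]
workers = auto   ; spawn one worker per CPU core (new default)
; workers = 1    ; the previous default, still valid for constrained nodes
```

Operators with many-core boxes running several services on one host may still want an explicit, lower value.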
- Added reveal_sensitive_prefix config setting to filter the auth token logged by the proxy server.
- Added support for replicating handoff partitions first in object replication. You can also configure how many remote nodes a storage node must talk to before removing a local handoff partition.
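Assuming the option names below (check the CHANGELOG for your exact version), the object replicator could be tuned like this:

```ini
; object-server.conf
[object-replicator]
handoffs_first = True   ; replicate handoff partitions before primary ones
handoff_delete = 2      ; remove a local handoff once 2 remote nodes have it
```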
- Added crossdomain.xml middleware.
- Numerous improvements to get Swift running under PyPy
Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.
As always, Swift can be upgraded with no downtime.
OpenStack Compute (Nova)
Key New Features
API
- The Compute (Nova) REST API includes an experimental new version (v3). The new version of the API includes a lot of cleanup, as well as a framework for implementing and versioning API extensions. It is expected that this API will be finalized in the Icehouse release. (blueprint).
- These extensions to the Compute (Nova) REST API have been added:
- CellCapacities: Adds the ability to determine the amount of RAM in a Cell, and the amount of RAM that is free in a Cell (blueprint).
- ExtendedFloatingIps: Adds optional fixed_address parameter to the add floating IP command, allowing a floating IP to be associated with a fixed IP address (blueprint).
- ExtendedIpsMac: Adds Mac address(es) to server response (blueprint).
- ExtendedQuotas: Adds the ability for administrators to delete a tenant's non-default quotas, reverting them to the configured default quota (blueprint).
- ExtendedServices: Adds the ability to store and display the reason a service has been disabled (blueprint).
- ExtendedVolumes: Adds attached volumes to instance information (blueprint).
- Migrations: Adds the ability to list resize and migration operations in progress by Cell or Region (blueprint).
- ServerUsage: Adds the launched_at and terminated_at values to the instance show response (blueprint).
- UsedLimitsForAdmin: Allows for the retrieval of tenant specific quota limits via the administrative API (blueprint).
- The Compute service's EC2 API has been updated to use error codes that are more consistent with those of the official EC2 API. (blueprint)
Cells
- The Cells scheduler has been updated to support filtering and weighting via the new scheduler_filter_classes and scheduler_weight_classes options in the [cells] configuration group. The new ram_by_instance_type and weight_offset weighting modules have also been added, removing the random selection of Cells used in previous releases. In addition, a filter class, TargetCellFilter, allows administrators to specify a scheduler hint to direct a build to a particular Cell. This makes Cell scheduling conceptually similar to the existing host scheduling. (blueprint)
- Live migration of virtual machine instances is now supported within a single Cell. Live migration of virtual machine instances between Cells is not supported. (blueprint)
- Cinder is now supported by Nova when using Cells. (blueprint)
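As a sketch, the new Cells scheduler options are set in the [cells] group of nova.conf; the class paths below are the "enable everything" defaults and should be verified against your release:

```ini
; nova.conf on the cells scheduler host
[cells]
scheduler_filter_classes = nova.cells.filters.all_filters
scheduler_weight_classes = nova.cells.weights.all_weighers
```

With TargetCellFilter enabled, an administrator can then pass a scheduler hint naming the target cell when booting an instance.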
Compute
- Nova has a new feature that allows you to "shelve" an instance. This allows instances that are stopped for an extended period of time to be moved off of the hypervisors to free up resources (blueprint).
- A vendor_data section has been added to the metadata service and configuration drive facilities. This allows the extension of the metadata available to guests to include vendor or site specific data (blueprint).
Hyper-V driver
- Support for Windows Server / Hyper-V Server 2012 R2 (blueprint).
- VHDX format support (blueprint).
- Dynamic memory support (blueprint).
- Ephemeral storage support (blueprint).
- Compute metrics support for Ceilometer integration (blueprint).
libvirt (KVM) driver
- Support for the QEMU guest agent (qemu-guest-agent) has been added for guests created with the hw_qemu_guest_agent property set to yes (blueprint).
- Support for passthrough of PCI devices from the physical compute node to virtualized guests has been added to Nova. Currently only the libvirt driver provides a working implementation (base blueprint, libvirt blueprint).
- Added support for extracting QoS parameters from Cinder and rate limiting disk access based on them when using libvirt-based hypervisors (blueprint).
- RBD is now supported as a backend for storing images (blueprint).
- Added hard reboot support (change).
- Deprecation Note: The PowerVM driver is now deprecated in Havana 2013.2.1 and will be removed in Icehouse. (Mailing List)
vmwareapi (VMWare) driver
- Support for managing multiple clusters (blueprint).
- Image clone strategy - allow images to specify if they are to be used as linked clone or full clone images (blueprint)
- Support for using a config drive (blueprint)
- Supports showing a log of a server console (blueprint)
xenapi (XenServer) driver
- Support for splitting up large ephemeral disks into 1024GB or 2000GB chunks to work around VHD max disk size limitations (blueprint)
- Allow images to have the setting of AutoDiskConfig=disabled, to mean users are unable to set AutoDiskConfig=Manual on these images and servers (blueprint)
- Improvements to how nova communicates with the Nova agent, so you are able to use both cloud-init and the Nova agent in the same cloud. (blueprint)
- Ability to boot VMs into a state where they are running a linux distribution installer, to help users build their own custom images (blueprint)
- Experimental support for XenServer core, and running nova-compute in a XenServer core Dom0. (blueprint)
- Experimental support for using the LVHD storage SR, and support for booting from compressed raw glance images (blueprint)
- Much work to improve stability and supportability. Examples include the ability to configure the compression ratio used for snapshots sent to glance, and automatic rollback of errors during the resize down of servers due to disks being too small for the destination size.
Quota
- The default quota is now editable, previously the value of this quota was fixed. Use the nova quota-class-update default <key> <value> command to update the default quota (blueprint).
- Quotas may now be defined on a per-user basis (blueprint).
- Quotas for a given tenant or user can now be deleted, and their quota will reset to the default (blueprint).
Networking
- Network and IP address allocation is now performed in parallel to the other operations involved in provisioning an instance, resulting in faster boot times (blueprint).
- Nova now passes the host name of the Compute node selected to host an instance to Neutron. Neutron plug-ins may now use this information to alter the physical networking configuration of the Compute node as required (blueprint).
Notifications
- Notifications are now generated when host aggregates are created, deleted, expanded, contracted, or otherwise updated (blueprint).
- Notifications are now generated when building an instance fails (blueprint).
Scheduler
- Added force_nodes to filter properties, allowing operators to explicitly specify the node for provisioning when using the baremetal driver (blueprint).
- Added the ability to make the IsolatedHostsFilter less restrictive, allowing isolated hosts to use all images, by manipulating the value of the new restrict_isolated_hosts_to_isolated_images configuration directive in nova.conf (blueprint).
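A hedged sketch of the relevant nova.conf settings (the host names and image UUID are placeholders; restrict_isolated_hosts_to_isolated_images is the new directive named above):

```ini
; nova.conf on the scheduler
[DEFAULT]
scheduler_default_filters = IsolatedHostsFilter,RamFilter,ComputeFilter
isolated_hosts = host1.example.com,host2.example.com
isolated_images = 00000000-0000-0000-0000-000000000000
; False allows isolated hosts to run any image, not only isolated ones
restrict_isolated_hosts_to_isolated_images = False
```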
- Added a GroupAffinityFilter as a counterpart to the existing GroupAntiAffinityFilter. The new filter allows the scheduling of an instance on a host from a specific group of hosts (blueprint).
- Added the ability for filters to set the new run_filter_once_per_request parameter to True if their filtering decisions are expected to remain valid for all instances in a request. This prevents the filter having to be re-run for each instance in the request when it is unnecessary. This setting has been applied to a number of existing filters (blueprint).
- Added per-aggregate filters AggregateRamFilter and AggregateCoreFilter which enforce themselves on host aggregates rather than globally. AggregateDiskFilter will be added in a future release (blueprint).
- Scheduler performance has been improved by removing the periodic messages that were being broadcasted from all compute nodes to all schedulers (blueprint).
- Scheduler performance has been improved by allowing filters to specify that they only need to run once for a given request for multiple instances (blueprint).
Storage
- Attached Cinder volumes can now be encrypted. Data is decrypted as needed at read and write time while presenting instances with a normal block storage device (blueprint).
- Added the ability to transparently swap out the Cinder volume attached to an instance. While the instance may pause briefly while the volume is swapped, no reads or writes are lost (blueprint).
- When connecting to NFS or GlusterFS backed volumes Nova now uses the mount options set in the Cinder configuration. Previously the mount options had to be set on each Compute node that would access the volumes (blueprint).
- Added native GlusterFS support. If qemu_allowed_storage_drivers is set to gluster in nova.conf then QEMU is configured to access the volume directly using libgfapi instead of via fuse (blueprint).
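A minimal sketch of enabling the libgfapi path, using the option named above:

```ini
; nova.conf on each compute node
[DEFAULT]
qemu_allowed_storage_drivers = gluster
```

With this set, QEMU opens Gluster-backed volumes directly via libgfapi rather than through a FUSE mount.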
- QEMU assisted snapshotting is now used to provide the ability to create cinder volume snapshots, even when the backing storage in use does not support them natively, such as GlusterFS (blueprint).
- The iSER transport protocol is now supported for accessing storage, providing performance improvements compared to using iSCSI over TCP (blueprint).
Conductor
- The Conductor is now able to spawn multiple worker threads operating in parallel; the number of threads to spawn is determined by the value of workers in nova.conf (blueprint).
Internal Changes
- Nova now uses the common service infrastructure provided by Oslo (blueprint).
- Changes have been made that will allow backporting bug fixes that require a database migration (blueprint).
- A significant amount of progress has been made toward eventually supporting live upgrades of a Nova deployment. In Havana, improvements included additional controls over versions of messages sent between services (see the [upgrade_levels] section of nova.conf) (blueprint), and a new object layer that helps decouple the code base from the details of the database schema (blueprint).
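As a sketch of the new [upgrade_levels] controls (the value shown is an illustrative release name; check the nova.conf reference for accepted values):

```ini
; nova.conf
[upgrade_levels]
; cap messages sent to nova-compute at Grizzly-compatible versions
; while compute nodes are still being upgraded
compute = grizzly
```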
Known Issues
- If you're using cells, there's an issue when deleting instances. Unlike in Grizzly, deleting instances will delete them immediately from the top level (API) cell before telling the child cell to delete the instance. This was not intended. A side effect of this is that delete.start and delete.end notifications will be sent. If the instance delete in the child cell succeeds, a 2nd delete.start and delete.end notification will be sent. If it does not, the DBs end up out of sync where the instance appears gone from the API cell, but still exists in the child cell. The natural healing code built into cells will end up correcting this and restoring the instance in the API cell DB after some time, depending on healing configuration. This bug will be corrected in havana-stable shortly after release.
- Check https://bugs.launchpad.net/nova/+bug/1240247
Upgrade Notes
- Note that periodic tasks will now run more often than before. The frequency of periodic task runs has always been configurable. However, the timer for when to run the task again was previously started after the last run of the task completed. The tasks now run at a constant frequency, regardless of how long a given run takes. This makes it much more clear for when tasks are supposed to run. However, the side effect is that tasks will now run a bit more often by default. (https://review.openstack.org/#/c/26448/)
- The security_groups_handler option has been removed from nova.conf. It was added for Quantum and is no longer needed. (https://review.openstack.org/#/c/28384/)
- This change should not affect upgrades, but it is a change in behavior for all new deployments. Previous versions created the default m1.tiny flavor with a disk size of 0. The default value is now 1. 0 means not to do any disk resizing and just use whatever disk size is set up in the image. 1 means to impose a 1 GB limit. The special value of 0 is still supported if you would like to create or modify flavors to use it. (https://review.openstack.org/#/c/27991/).
- A plugins framework has been removed since what it provided was possible via other means. (https://review.openstack.org/#/c/33595)
- The notify_on_any_change configuration option has been removed. (https://review.openstack.org/#/c/35264/)
- The compute_api_class option has been deprecated and will be removed in a future release. (https://review.openstack.org/#/c/28750/)
- Nova now uses Neutron after Quantum was renamed. (https://review.openstack.org/#/c/35425/)
- Nova will now reject requests to create a server when there are multiple Neutron networks defined, but no networks specified in the server create request. Nova would previously attach the server to *all* networks, but consensus was that behavior didn't make sense. (https://review.openstack.org/#/c/33996/)
- The vmware configuration variable 'vnc_password' is now deprecated. A user will no longer be required to enter a password to have VNC access. This now works like all other virt drivers. (https://review.openstack.org/#/c/43268/)
OpenStack Image Service (Glance)
Key New Features
Specific groups of users can now be authorized to create, update, and read different properties of arbitrary entities. There are two types of image properties in the Image Service:
- Core Properties, as specified by the image schema.
- Meta Properties, which are arbitrary key/value pairs that can be added to an image.
Access to meta properties through the Image Service's public API calls can now be restricted to certain sets of users, using a property protections configuration file (specified in the glance-api.conf file). For example:
- Limit all property interactions to admin only.
- Allow both admins and users with the billing role to read and modify all properties prefixed with ``x_billing_code_``.
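The two examples above could be expressed in a property protections file roughly like this (the rule syntax is a sketch from the Havana-era format; rules are matched top-down, so the specific prefix rule comes before the catch-all):

```ini
; property-protections.conf, referenced from glance-api.conf
[^x_billing_code_.*]
create = admin,billing
read = admin,billing
update = admin,billing
delete = admin,billing

[.*]
create = admin
read = admin
update = admin
delete = admin
```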
A new API has been created for the Registry service (db_api compliant) using RPC-over-HTTP. The API:
- Allows Glance to continue to support legacy deployments that were using the earlier registry service. Glance v2 completely dropped the use of a registry service, which in some cases could result in a security vulnerability (all database parameters had to be present in glance-api.conf if deployed as a 'public' service).
- Makes it easier to implement new methods to the database API without having to modify the registry's API.
Updates include a registry database driver that talks to a remote registry service, which in turn talks directly to a database back end. The registry service implements all the database API public functions that are actually used from outside the API. The Image Service's API v2 must be enabled, and the Image Service client must point to this. blueprint
The Image Service now supports the following for back-end storage:
- Sheepdog. The Image Service can now store images in a backend Sheepdog cluster. Sheepdog is an open-source project, offering a distributed storage system for QEMU.
- Cinder. OpenStack Cinder can now be used as a block-storage backend for the Image Service. blueprint
- GridFS. The Image Service now supports the GridFS distributed filesystem. Support is enabled using the new .conf options mongodb_store_uri and mongodb_store_db. GridFS locations with the form `gridfs://<IMAGE>` are supported. blueprint
Multiple Image Locations
Image service images can now be stored in multiple locations. This enables the efficient consumption of image data and the use of backup images in the event of a primary image failure. blueprint
Related updates include:
- A policy layer for locations APIs, to enable the policy checks for changing image locations. blueprint
- Direct URL metadata. Each Image Service storage system can now store location metadata in the image location database, enabling it to return direct URL specific meta-data to the client when direct_url is enabled. For example, given a file://URL, the NFS exporting host, the mount point, and the FS type can now be returned to the client. blueprint
- Support for multiple locations when downloading images. This allows API clients to consume images from multiple backend stores. blueprint
- Indexed checksum image property. The checksum image property is now indexed allowing users to search for an image by specifying the checksum. blueprint
- Scrubber update. The scrubber is a utility that cleans up images that have been deleted. The scrubber now supports multiple locations for 'pending_delete' images. blueprint
- Metadata checking. Can now enable the checking of metadata at the image location proxy layer when the location is changed. blueprint
But wait, there's more!
- Configurable container and disk formats. Glance previously only supported a specific set of container and disk formats, which were rarely the actual set of formats supported by any given deployment. The set of acceptable container and disk formats can now be configured. blueprint
- Storage Quota. Users can now be limited to N bytes (sum total) across all storage systems (total_storage_quota configured in .conf file). blueprint
- Membership Policy. Policy enforcement has been added to membership APIs (similar to image/location policy enforcement). New policies include 'new_member', 'add_member', 'get_member', 'modify_member', 'get_members', and 'delete_member'. blueprint
- Option to skip auth in glance registry. If a deployer has secured communication between the Glance API server and the Glance Registry server, the Registry server can now skip reauthentication as a performance optimization. Select the glance-registry-trusted-auth pipeline in the registry config, and set 'send_identity_headers' to True in the API config. The glance API server then sends the required identity headers, such as user and tenant information, to the glance registry. Be mindful of the security tradeoff if you consider adopting this configuration.
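A hedged sketch of the two settings involved (the exact paste pipeline wiring varies by deployment; verify against your glance-registry-paste.ini):

```ini
; glance-api.conf -- forward identity headers to the registry
send_identity_headers = True

; glance-registry-paste.ini -- use the trusted-auth pipeline,
; which skips reauthentication of incoming requests
[pipeline:glance-registry-trusted-auth]
pipeline = context registryapp
```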
- Option Swift Store SSL Compression. The 'swift_store_ssl_compression' config value makes it possible to disable SSL layer compression of http swift requests. This may improve performance for images which are already in a compressed format.
- Context Admin Policy. A new policy rule `"context_is_admin": "role:admin"` has been added to determine whether a request context is admin. Make sure to update the policy.json file with this rule.
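The new rule should be merged into the existing policy.json rather than replacing it; in isolation it looks like this:

```json
{
    "context_is_admin": "role:admin"
}
```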
OpenStack Dashboard (Horizon)
The Havana release cycle brings support for *three* new projects, plus significant new features for several existing projects. On top of that, many aspects of user experience have been improved for both end users and administrators. The community continues to grow and expand. The Havana release is solidly the best release of the OpenStack Dashboard project yet!
The OpenStack Orchestration project (Heat) debuted in Havana, and Horizon delivers full support for managing your Heat stacks. Highlights include support for dynamic form generation from supported Heat template formats, stack topology visualizations, and full stack resource inspection.
Also debuting in Havana is the OpenStack Metering project (Ceilometer). Initial support for Ceilometer is included in Horizon so that it is possible for an administrator to query the usage of the cloud through the OpenStack Dashboard and better understand how the system is functioning and being utilized.
Domains, Groups, and More: Identity API v3 Support
With the OpenStack Identity Service (Keystone) v3 API fully fledged in the Havana release, Horizon has added full support for all the new features such as Domains and Groups, Role management and assignment to Domains and Groups, Domain-based authentication, and Domain context switching.
The OpenStack Database as a Service project (Trove) graduated from incubation in the Havana cycle, and thanks to their industriousness they delivered a set of panels for the OpenStack dashboard to allow for provisioning and managing your Trove databases and backups. Disclaimer: Given that Trove's first official release as an integrated project will not be until Icehouse this feature should still be considered experimental and may be subject to change.
The number of OpenStack Compute (Nova) features that are supported in Horizon continues to grow. New features in the Havana release include:
- Editable default quotas.
- The ability for an administrator to reset the password of a server/instance.
- Availability zone support.
- Improved region support.
- Instance resizing.
- Improved boot-from-volume support.
- Per-project flavor support.
All of these provide a richer set of options for controlling where, when and how instances are launched, and improving how they're managed once they're up and running.
A number of important new OpenStack Networking (Neutron) features are showcased in the Havana release, most notably:
- VPN as a Service.
- Firewall as a Service.
- Editable and interactive network topology visualizations.
- Full security group and quota parity between Neutron and Nova network.
These features allow for tremendous flexibility when constructing software-defined networks for your cloud using Neutron.
User Experience Improvements
Self-Service Password Change
Empowered by changes to the Identity API v2.0 (Keystone), users can now change their own passwords without the need to involve an administrator. This is more secure and takes the hassle out of things for everyone. This feature is not yet available to users of Identity API v3.
Better Admin Information Architecture
Several sections of the Admin dashboard have been rearranged to more logically group information together. Additionally, new sources of information have been added to allow Admins to better understand the state of the hosts in the cloud and their relationship to host aggregates, availability zones, etc.
Improved Messaging To Users On Logout
Several new indicators have been added to inform users why they've been logged out when they land on the login screen unexpectedly. These indicators make it clear whether the user's session has expired, they timed out due to inactivity, or they are not authorized for the section of the dashboard they attempted to access.
Security Group Rule Templates
Since there are many very common security group rules which users tediously re-add each time (rules for SSH and ping, for example) the Horizon team has added pre-configured templates for common rules which a user can select and add to their security group with two clicks. These rules are configurable via the SECURITY_GROUP_RULES setting.
Community
The OpenStack Translations team came fully into its own during the Havana cycle and the quality of the translations in Horizon are the best yet by far. Congratulations to that team for their success in building the community that started primarily within the OpenStack Dashboard project.
User Experience Group
A fledgling OpenStack User Experience Group formed during the Havana cycle with the mission of improving UX throughout OpenStack. They have quickly made themselves indispensable to the process of designing and improving features in the OpenStack Dashboard. Expect significant future improvement in User Experience now that there are dedicated people actively collaborating in the open to raise the bar.
Under The Hood
Less Complicated LESS Compilation: No More NodeJS
Due to outcry from various parties, and made possible by improvements in the Python community's support for LESS, Horizon has removed all traces of NodeJS from the project. We now use the lesscpy module to compile our LESS into the final stylesheets. This should not affect most users in any way, but it should make life easier for downstream distributions and the like.
Role-Based Access Controls
Horizon has begun the transition to using the other OpenStack projects' policy.json files to enforce access controls in the dashboard if the files are provided. This means access controls are more configurable and can be kept in sync between the originating project and Horizon. Currently this is only supported for Keystone and parts of Nova's policy files. Full support will come in the next release. You will need to set the POLICY_FILES_PATH and POLICY_FILES settings in order to enable this feature.
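A minimal sketch of the two settings in local_settings.py (the path and file names are illustrative, not defaults):

```python
# local_settings.py -- illustrative values, adjust to your deployment

# Directory where the per-service policy files live.
POLICY_FILES_PATH = '/etc/openstack-dashboard'

# Map each service onto its policy file within POLICY_FILES_PATH.
POLICY_FILES = {
    'identity': 'keystone_policy.json',
    'compute': 'nova_policy.json',
}
```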
Other Improvements and Fixes
- Swift container and object metadata are now supported.
- New visualizations for utilization and quotas.
- The Cisco N1K Router plugin's additional features are available through a special additional dashboard when enabled and supported in Neutron.
- Support for self-signed or other specified SSL certificate checking.
- Glance image types are now configurable.
- Sorting has been improved in many places through the dashboard.
- API call efficiency optimizations.
- Required fields in forms are now better indicated.
- Session timeout can now be enabled to log out the user after a period of inactivity as a security feature.
- Significant PEP8 and code quality compliance improvements.
- Hundreds of bugfixes and minor user experience improvements.
Upgrade Information
For production deployments of Horizon you must add the ALLOWED_HOSTS setting to your settings.py or local_settings.py file. This setting was added in Django 1.5 and is an important security feature. For more information on it please consult the local_settings.py.example file or Django's documentation.
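A minimal sketch (the host names are placeholders; list every name the dashboard is served under):

```python
# local_settings.py -- host names below are illustrative
ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
```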
Enabling Keystone and Neutron Features
If you have existing configurations for the OPENSTACK_KEYSTONE_BACKEND or OPENSTACK_NEUTRON_NETWORK settings, you will want to consult the local_settings.example.py file for information on the new options that have been added. Existing configurations will continue to work, but may not have the correct keys to enable some of the new features in Havana.
Known Issues and Limitations
Session Creation and Health Checks
If you use a health monitoring service that pings the home page combined with a database-backed session backend you may experience excessive session creation. This issue is slated to be fixed soon, but in the interim the recommended solution is to write a periodic job that deletes expired sessions from your session store on a regular basis.
Deleting large numbers of resources simultaneously
Using the "select all" checkbox to delete large numbers of resources at once can cause network timeouts (depending on configuration). This is due to the underlying APIs not supporting bulk-deletion natively, and consequently Horizon has to send requests to delete each resource individually behind the scenes.
Conflicting Security Group Names With Neutron
Whereas Nova Network uses only the name of a security group when specifying security groups at instance launch time, Neutron can accept either a name or a UUID. In order to support both, Horizon passes in the name of the selected security groups. However, due to some data-isolation issues in Neutron there is an issue that can arise if an admin user tries to specify a security group with the same name as another security group in a different project which they also have access to. Neutron will find multiple matches for the security group name and will fail to launch the instance. The current workaround is to treat security group names as unique for admin users.
Broken charting for non-compute resources
The line chart on the resource usage page does not show non-compute resources when a "Group By" option is picked -- https://bugs.launchpad.net/horizon/+bug/1243796
Backwards Compatibility
The Havana Horizon release should be fully compatible with both Havana and Grizzly versions of the rest of the OpenStack integrated projects (Nova, Swift, etc.). New features in other OpenStack projects which did not exist in Grizzly will obviously only work in Horizon if the rest of the stack supports them as well.
Overall, great effort has been made to maintain compatibility for third-party developers who have built on Horizon so far.
OpenStack Identity (Keystone)
Key New Features
- Improved deployment flexibility
- Authorization data (tenants/projects, roles, role assignments), stored in the backend selected by the "assignment" driver (e.g. SQL), can now be kept separately from authentication data (users, groups), stored in the backend selected by the "identity" driver (e.g. LDAP)
- Credentials (e.g. ec2 tokens) can now be stored in a separate backend, as determined by the "credentials" driver, from authentication data
- Ability to specify more granular RBAC policy rules (for example, based on attributes in the API request / response body)
- Pluggable handling of external authentication using
- Token generation, which is currently either UUID- or PKI-based, is now pluggable and separated from token persistence. Deployers can write a custom implementation of the keystone.token.provider.Provider interface and configure keystone to use it with [token] provider. As a result, [signing] token_format is now deprecated in favor of this new configuration option.
- First-class support for deployment behind Apache httpd
- New deployment features
- Ability to cache the results of driver calls in a key-value store (for example, memcached or redis)
- New keystone-manage token_flush command to help purge expired tokens
- New API features
- Delegated role-based authorization to arbitrary consumers using OAuth 1.0a
- API clients can now opt out of the service catalog being included in a token response
- Domain role assignments can now be inherited by that domain's projects
- Aggregated role assignments API
- External authentication providers can now embed a binding reference into tokens such that remote services may optionally validate the identity of the user presenting the token against a presented external authentication mechanism. Currently, only
- Endpoints may now be explicitly mapped to projects, effectively preventing certain endpoints from appearing in the service catalog based on the project scope of a token. This does not prevent end users from accessing or using endpoints they are aware of through some other means.
- Event notifications emitted for user and project/tenant create, update, and delete operations
- General performance improvements
- The v2 and v3 API now use the same logic for computing the list of roles assigned to a user-project pair during authentication, based on user+project, group+project, user+domain-inherited, and group+domain-inherited role assignments (where domain-inherited role assignments allow a domain-level role assignment to apply to all projects owned by that domain). The v3 API now uses a similar approach for computing user+domain role assignments for domain-scoped tokens.
- Logs are handled using a common logging implementation from Oslo-incubator, consistent with other OpenStack projects
- SQL migrations for extensions can now be managed independently from the primary migration repository using keystone-manage db_sync --extension=«extension-name».
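A minimal sketch of what the backend split and pluggable token provider look like in keystone.conf. The driver class paths follow the stock Havana layout, but verify them against the sample configuration shipped with your packages; the relative file path here is purely illustrative.

```shell
# Hypothetical keystone.conf fragment: LDAP for authentication data (identity),
# SQL for authorization data (assignment), and an explicit token provider.
cat >> keystone.conf <<'EOF'
[identity]
driver = keystone.identity.backends.ldap.Identity

[assignment]
driver = keystone.assignment.backends.sql.Assignment

[token]
# replaces the deprecated [signing] token_format option
provider = keystone.token.providers.uuid.Provider
EOF
```

With persistent token backends, expired tokens can then be purged periodically (e.g. from cron) using the new keystone-manage token_flush command.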
Known Issues
- With the LDAP assignment backend, attempting to unassign a role from a user who does not actually have that role inadvertently assigns the role to the user instead (see bug 1242855)
- six v1.4.1 or higher is an undocumented requirement (bug 1237089). Without six, Keystone will fail on startup with either
ImportError: No module named six or
- An experimental implementation of domain-specific identity backends (for example, a unique LDAP configuration per domain) was started in Havana but remains incomplete and will be finished during Icehouse.
OpenStack Network Service (Neutron)
Key New Features
The OpenStack Networking project has a new name in this release: Neutron. The Havana release will run using configuration files from Grizzly Quantum; however, the usage of Quantum is deprecated and deployers should update all references at their earliest opportunity. Support for the Quantum configuration files and executable names will be dropped in 2014.1 (Icehouse).
Neutron added two new advanced services during the latest development cycle and revised the load balancer service.
Load Balancer (LBaaS) Previously released as experimental features in the 2013.1 (Grizzly) release, the load balancing service and API extensions are now suitable for deployment. This release ships an updated API with HAProxy driver support. Vendor drivers are expected in Icehouse, and Radware has already made an out-of-tree, Havana-compatible driver available for download. The load balancing service can run on multiple network nodes.
VPN (VPNaaS) Site-to-site IPSec VPNs are now supported via the VPN service plugin. The VPN API supports IPSec, and the L3 agent ships with an OpenSwan driver.
Firewall (FWaaS) A new edge firewall service is included in this release. The firewall service enables tenants to configure security in depth, as rules can be applied both at the edge via the firewall API and on the VIF via the security group API. The FWaaS API and drivers are considered experimental, as Neutron will continue their development during the next release cycle. The team welcomes community feedback on this extension.
Modular Layer 2 (ML2) The Modular Layer 2 (ML2) plugin is a new open source plugin for Neutron. This plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch and Linux Bridge L2 agents. The ML2 plugin supports local, flat, VLAN, GRE, and VXLAN network types via type drivers, and different mechanism drivers. There are vendor drivers available for Arista, Cisco Nexus, Hyper-V, and Tail-f NCS. ML2 is a replacement for the Linux Bridge and Open vSwitch plugins, which are now considered deprecated. More ML2 information
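A sketch of how the type and mechanism drivers described above combine in the ML2 configuration file. The option names are the [ml2] options from the Havana ML2 plugin; the specific driver choices and VNI range are illustrative, and the relative file path is hypothetical.

```shell
# Hypothetical ml2_conf.ini fragment: VXLAN tenant networks served by the
# Open vSwitch and Linux Bridge mechanism drivers.
cat >> ml2_conf.ini <<'EOF'
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vxlan]
# VXLAN network identifier range available for tenant networks
vni_ranges = 1:1000
EOF
```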
- Support for PXE boot options when creating a port
- Improved translation support
Upgrades from Grizzly
Starting neutron-server after upgrading the code, but prior to running the migration, will result in some database models being created without the proper migrations having occurred. The following upgrade steps should be taken to ensure a properly upgraded database.
- Ensure that the database is stamped for Grizzly prior to stopping service.
quantum-db-manage --config-file /path/to/quantum.conf --config-file /path/to/plugin/conf.ini stamp grizzly
- Stop quantum-server and deploy the Neutron code. Do not start neutron-server at this time.
- Run Havana Migration
neutron-db-manage --config-file /path/to/quantum.conf --config-file /path/to/plugin/conf.ini upgrade havana
- Start neutron-server
Agents May Report a Different Host Name
Neutron switched its method of determining a host's name from the host's FQDN to the result returned by the gethostname(2) call. The change was made for consistency with the rest of OpenStack. The hostname reported by an agent may therefore change after the agents are updated to Havana. If so, it is necessary to reschedule all networks onto the new L3 and DHCP agent names to restore service. The change will result in a brief data-plane outage while the deployer reschedules the networks.
L3 Agent No Longer Defaults to Sending Gratuitous ARPs
The L3 agent previously defaulted to sending gratuitous ARPs. The calls to send gratuitous ARPs caused kernel panics on some distributions when deployed using network namespaces, so the default is now off. Deployers may re-enable gratuitous ARP by setting send_arp_for_ha=3 in the L3 agent's configuration file.
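For deployers who are not affected by the namespace kernel panics, restoring the old behavior is a one-option change in the L3 agent's configuration file (the relative path here is illustrative):

```shell
# Hypothetical l3_agent.ini fragment: send three gratuitous ARPs on router
# failover; the new default disables them.
cat >> l3_agent.ini <<'EOF'
[DEFAULT]
send_arp_for_ha = 3
EOF
```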
Firewall as a Service
The experimental FWaaS API extension only supports one active policy per tenant. This behavior will change during the Icehouse development cycle to allow different policies to be attached to different tenant routers.
- Changes to neutron-dhcp-agent require you to upgrade your dhcp-agents first, then wait until the dhcp_lease time has expired before updating neutron-server. Failure to do this may lead to cases where an instance is deleted, the dnsmasq process has not yet released the lease, and Neutron allocates that IP to a new port. (https://review.openstack.org/#/c/37580/)
- There is a new default policy.json file. Deployers with existing deployments should update their files as many options have changed: policy.json
- The usage of "quantum" or "Quantum" in configuration files and executable names is officially deprecated. This is the last release that will support those names; new deployments should use the proper Neutron names for those values.
- The Linux Bridge and Open vSwitch plugins have been feature frozen and will be removed in the J (2014.2) release. New deployments should choose ML2 instead of the individual Open vSwitch or Linux Bridge plugins.
OpenStack Block Storage (Cinder)
Key New Features
- Volume migration - admin API to migrate a volume to a different Cinder back-end
- Scheduler hints extension added to V2 API
- Added local block storage driver to allow use of raw disks without LVM
- Added ability to extend the size of an existing volume
- Added ability to transfer volume from one tenant to another
- Added API call to enable editing default quota settings
- Added config option to allow auto-flatten snapshots for back-ends that leave a dependency when creating volume from snapshot
- Allow API to accept a "host-name" in the volume-attach and not just an instance uuid
- Enable the generalized backup layer to allow backups from any iSCSI device that doesn't have internal optimizations
- Added Ceph driver to backup service (allowing Ceph as a backup target with differential backups from Ceph to Ceph)
- Added rate-limiting information to provider info that can be passed to Nova and used by the hypervisor
- New Windows Storage Server driver features (blueprint)
New Vendor Drivers
- Dell EqualLogic volume driver
- VMware VMDK cinder driver
- IBM General Parallel File System (GPFS)
Major Additions To Existing Drivers
- Add Fibre Channel drivers for Huawei storage systems
- Add an NFS volume driver to support Nexenta storage in Cinder
- Miscellaneous updates and device-specific additions have also been made to almost every existing vendor driver
- Optimized volume migration for IBM Storwize driver
New Backup Drivers
- Allow Ceph as an option for volume backup
- IBM Tivoli Storage Manager (TSM)
- Bug #1237338 : Upload volume to image fails with VMWare volume driver
- Bug #1240299: the clear-volume operation is called on all LVM volume deletes, even thin-provisioned ones. Until the fix is released, be sure to set volume_clear=none in cinder.conf
- The ThinLVM volume driver functionality is now part of the standard LVM ISCSI volume driver. Configuration should be updated to use volume_driver="cinder.volume.drivers.lvm.LVMISCSIDriver" and set the option lvm_type="thin". This will be done automatically for compatibility in Havana if volume_driver is set to "cinder.volume.drivers.lvm.ThinLVMVolumeDriver", but Icehouse will require these options to be updated in cinder.conf.
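A sketch of a cinder.conf fragment applying both of the items above: the driver/lvm_type pair that replaces ThinLVMVolumeDriver, plus the volume_clear workaround for bug #1240299. The relative file path is illustrative; verify the values against your deployed configuration before relying on them.

```shell
# Hypothetical cinder.conf fragment: explicit thin-LVM settings (required in
# Icehouse) and the clear-volume workaround.
cat >> cinder.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
lvm_type = thin
volume_clear = none
EOF
```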
OpenStack Metering (Ceilometer)
Key New Features
- The statistics endpoint can now be used to group samples by some fields using the groupby argument
- A new alarm API is now available (see Alarms)
- Users can now post their own samples and meters through the API
Alarms are a new feature allowing users and operators to trigger actions based on comparing the statistics trend against a threshold over a period of time. It is composed of the following services:
- ceilometer-api now exposes the new /v2/alarms endpoint providing control over alarm lifecycle;
- ceilometer-alarm-evaluator evaluates alarms periodically in order to detect when an alarm state changes;
- ceilometer-alarm-notifier receives notifications sent by ceilometer-alarm-evaluator when an alarm is triggered and executes the associated action
The alarm API also exposes the history of alarm state transitions and rule changes.
- New HBase driver
- New DB2 (NoSQL) driver
- Improved SQLAlchemy driver (not yet feature-complete, due to the lack of the metadata query support needed by Horizon and Ceilometer alarming)
- Improved MongoDB driver
- Added TTL functionality that allows old samples to be deleted from the database
- Added the ability to store events
- Added event storage feature on SQLAlchemy
- Added a UDP based publisher
- Added new unit-scaling and rate-of-change transformers
- Added a meter on API requests using a special Python middleware
- Added the ability to record samples from the new Neutron bandwidth metering feature
- Added support for Hyper-V
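The TTL feature mentioned above is enabled through the metering configuration file. The [database] time_to_live option name is an assumption based on the Havana sample configuration (where -1, the default, keeps samples forever); check the sample config shipped with your packages, and note the file path here is illustrative.

```shell
# Hypothetical ceilometer.conf fragment: purge samples older than seven days.
cat >> ceilometer.conf <<'EOF'
[database]
# seconds; 7 days * 24 hours * 3600 seconds
time_to_live = 604800
EOF
```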
OpenStack Orchestration (Heat)
Key New Features
Much improved documentation
Initial integration with Tempest
Concurrent resource operations
- Non-dependent create, update, and delete operations are now performed in parallel
Much improved networking/Neutron support
- New LBaaS, FWaaS and VPNaaS resources
Initial support for native template language (HOT)
Note: HOT is still under heavy development, and it is possible that the spec may change (we will definitely be adding to it), so please treat this as a preview and continue to use CFN template syntax if interface stability is important to you.
Provider and Environment abstractions
- Abstractions that provide a way to customize resource types using nested templates; see the documentation and blog post
Ceilometer integration for metrics/monitoring/alarms
- Adds a new resource type OS::Ceilometer::Alarm which configures a ceilometer alarm
- AutoScaling actions are now triggered by Ceilometer alarms. The previous mechanism, in which metrics/alarms are processed by Heat itself, still exists but is now deprecated and will most likely be removed during Icehouse.
- Several resource types now provide better support for non-destructive updates, including resizing an Instance. Replacement resources are also now created before the previous resource is deleted, allowing better upgrade continuity and roll-back capability.
Initial integration with keystone trusts functionality
- Initial integration with keystone trusts: when heat.conf specifies deferred_auth_method=trusts, we no longer store encrypted credentials to perform deferred operations (for example, AutoScaling adjustments), but instead create a trust and store its ID.
- (see known issues below)
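Enabling this is a single-option change in heat.conf (see the keystoneclient and keystone version requirements in the notes below before switching; the relative file path here is illustrative):

```shell
# Hypothetical heat.conf fragment: defer operations via keystone trusts
# instead of stored encrypted credentials.
cat >> heat.conf <<'EOF'
[DEFAULT]
deferred_auth_method = trusts
EOF
```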
Improved resource documentation
- Generation of documentation for all resource types http://docs.openstack.org/developer/heat/template_guide/
- Schemata for installed resource types are now available through the API
Many more native resource types
New Rackspace resource types
- Support for a new API "actions" path, which enables stack suspend/resume
Consolidated configuration to a single heat.conf and a single api-paste.ini
- See upgrade notes below
- Also added new config options to provide limitations on:
- template size
- number of stacks per tenant
- number of events per stack
- stack nesting depth
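A sketch of what the new limits might look like in heat.conf. The option names and values shown are assumptions, not confirmed Havana defaults; check the sample configuration shipped with your packages, and note the relative file path is illustrative.

```shell
# Hypothetical heat.conf fragment for the new limit options.
cat >> heat.conf <<'EOF'
[DEFAULT]
max_template_size = 524288
max_stacks_per_tenant = 100
max_events_per_stack = 1000
max_nesting_depth = 3
EOF
```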
Heat Standalone Mode
- Heat can now be configured to run in standalone mode, allowing it to orchestrate against an external OpenStack cloud
- Heat does not support specifying a region name when getting API endpoints from keystone; see bug
- Integration with keystone trusts (deferred_auth_method=trusts) will only work if you have the latest keystoneclient 0.4.1, however this is not reflected in the requirements.txt of Heat yet
- Integration with keystone trusts requires at least RC3 of keystone due to this issue
- Heat now uses one config file (/etc/heat/heat.conf) instead of per-service files.
- The API processes now use a single paste configuration file (/etc/heat/api-paste.ini) instead of per-service files.
- There is now support for a global environment definition in /etc/heat/environment.d/ (see environment documentation)
- All OS::Quantum* resources have been renamed to OS::Neutron*; the old names will still work if you install the provided default environment, which aliases the old names to the new resources.
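The packaged default environment performs this aliasing with a wildcard resource_registry mapping; a minimal standalone equivalent looks like this (the file name is hypothetical):

```shell
# Hypothetical Heat environment file aliasing old Quantum resource names
# to their new Neutron equivalents.
cat > quantum-compat.yaml <<'EOF'
resource_registry:
  "OS::Quantum*": "OS::Neutron*"
EOF
```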
OpenStack Documentation
Key New Features
- Each page now has a bug reporting link so you can easily report bugs against a doc page.
- The manuals have been completely reorganized. With the Havana release, the following Guides exist:
- Install OpenStack
- Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
- Installation Guide for Ubuntu 12.04 (LTS)
- Installation Guide for openSUSE and SUSE Linux Enterprise Server
- Configure and run an OpenStack cloud:
- Cloud Administrator Guide
- Configuration Reference
- Operations Guide
- High Availability Guide
- Security Guide
- Virtual Machine Image Guide
- Use the OpenStack dashboard and command-line clients
- API Quick Start
- End User Guide
- Admin User Guide
Known Issues
- Some of the guides are not completely updated and may be missing information