
OpenStack 2013.2 (Havana) Release Notes

In this document you'll find a description of key new features, known bugs and upgrade tips for the 2013.2 (Havana) release of OpenStack.


OpenStack Object Storage (Swift)

Key New Features

  • Global clusters support

The "region" concept introduced in Swift 1.8.0 has been augmented with support for using a separate replication network and configuring read and write affinity. These features combine to offer support for a single Swift cluster spanning wide geographic area.

  • Added config file conf.d support

Allow Swift daemons and servers to optionally accept a directory as the configuration parameter. This allows different parts of the config file to be managed separately, e.g. each middleware could use a separate file for its particular config settings.
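
For example, a proxy server could read its configuration from a conf.d directory instead of a single file; the layout below is purely illustrative (the file names and the split between files are arbitrary):

    /etc/swift/proxy-server.conf.d/
        00-base.conf         # [DEFAULT] settings and [pipeline:main]
        10-proxy-server.conf
        20-tempauth.conf     # settings for the tempauth middleware only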

  • Disk performance

The object server now can be configured to use threadpools to increase performance and smooth out latency throughout the system. Also, many disk operations were reordered to increase reliability and improve performance.

  • Added support for pooling memcache connections
  • Much faster calculation for choosing handoff nodes

Major Bug Fixes

  • Fixed bug where memcache entries would not expire
  • Fixed issue where the proxy would continue to read from a storage server even after a client had disconnected

  • Fix issue with UTF-8 handling in versioned writes

Additional operational polish

  • Set default wsgi workers to cpu_count

Change the default value of wsgi workers from 1 to auto. The new default value for workers in the proxy, container, account & object wsgi servers will spawn as many workers per process as you have cpu cores. This will not be ideal for some configurations, but it's much more likely to produce a successful out of the box deployment.
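
Deployers who prefer a fixed worker count over the new auto behaviour can still set the option explicitly, for example in proxy-server.conf (the value shown is illustrative):

    [DEFAULT]
    # "auto" (the new default) spawns one worker per CPU core;
    # an explicit number overrides it
    workers = 8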

  • Added reveal_sensitive_prefix config setting to filter the auth token logged by the proxy server.

  • Added support for replicating handoff partitions first in object replication. You can also now configure how many remote nodes a storage node must talk to before removing a local handoff partition.

  • Added crossdomain.xml middleware. See http://docs.openstack.org/developer/swift/crossdomain.html for details.

  • Numerous improvements to get Swift running under PyPy

Known Issues

  • Check http://bugs.launchpad.net/swift

Upgrade Notes

Read full change log notes at https://github.com/openstack/swift/blob/master/CHANGELOG to see any config changes that would affect upgrades.

As always, Swift can be upgraded with no downtime.

OpenStack Compute (Nova)

Key New Features

API

  • The Compute (Nova) REST API includes an experimental new version (v3). The new version of the API includes a lot of cleanup, as well as a framework for implementing and versioning API extensions. It is expected that this API will be finalized in the Icehouse release. (blueprint).
  • These extensions to the Compute (Nova) REST API have been added:
    • CellCapacities: Adds the ability to determine the amount of RAM in a Cell, and the amount of RAM that is free in a Cell (blueprint).
    • ExtendedFloatingIps: Adds optional fixed_address parameter to the add floating IP command, allowing a floating IP to be associated with a fixed IP address (blueprint).
    • ExtendedIpsMac: Adds MAC address(es) to server response (blueprint).
    • ExtendedQuotas: Adds the ability for administrators to delete a tenant's non-default quotas, reverting them to the configured default quota (blueprint).
    • ExtendedServices: Adds the ability to store and display the reason a service has been disabled (blueprint).
    • ExtendedVolumes: Adds attached volumes to instance information (blueprint).
    • Migrations: Adds the ability to list resize and migration operations in progress by Cell or Region (blueprint).
    • ServerUsage: Adds the launched_at and terminated_at values to the instance show response (blueprint).
    • UsedLimitsForAdmin: Allows for the retrieval of tenant specific quota limits via the administrative API (blueprint).
  • The Compute service's EC2 API has been updated to use error codes that are more consistent with those of the official EC2 API. (blueprint)

Cells

  • The Cells scheduler has been updated to support filtering and weighting via the new scheduler_filter_classes and scheduler_weight_classes options in the [cells] configuration group. The new ram_by_instance_type and weight_offset weighting modules have also been added, removing the random selection of Cells used in previous releases. In addition a filter class, TargetCellFilter, allows administrators to specify a scheduler hint to direct a build to a particular Cell. This makes Cell scheduling work conceptually similar to the existing host scheduling. (blueprint)
  • Live migration of virtual machine instances is now supported within a single Cell. Live migration of virtual machine instances between Cells is not supported. (blueprint)
  • Cinder is now supported by Nova when using Cells. (blueprint)

Compute

General
  • Added hypervisor support for containers created and managed using Docker (blueprint).
  • Nova has a new feature that allows you to "shelve" an instance. This allows instances that are stopped for an extended period of time to be moved off of the hypervisors to free up resources (blueprint).
  • A vendor_data section has been added to the metadata service and configuration drive facilities. This allows the extension of the metadata available to guests to include vendor or site specific data (blueprint).
Baremetal driver
  • Added a backend supporting Tilera bare-metal provisioning to the Baremetal driver (blueprint).
Hyper-V driver
  • Support for Windows Server / Hyper-V Server 2012 R2 (blueprint).
  • Compute metrics support for Ceilometer integration (blueprint).
libvirt (KVM) driver
  • Support for the QEMU guest agent (qemu-guest-agent) has been added for guests created with the hw_qemu_guest_agent property set to yes (blueprint); see the example after this list.
  • Support for passthrough of PCI devices from the physical compute node to virtualized guests has been added to Nova. Currently only the libvirt driver provides a working implementation (base blueprint, libvirt blueprint).
  • Added support for extracting QoS parameters from Cinder and rate limiting disk access based on them when using libvirt-based hypervisors (blueprint).
  • RBD is now supported as a backend for storing images (blueprint).
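
A minimal sketch of how an image can be tagged for the QEMU guest agent feature above (the image ID is a placeholder; the command uses the glance client's --property flag):

    # mark an image so guests booted from it get a qemu-guest-agent channel
    glance image-update --property hw_qemu_guest_agent=yes <IMAGE_ID>
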
PowerVM driver
  • Added hard reboot support (change).
vmwareapi (VMware) driver
  • Support for managing multiple clusters (blueprint).
  • Image clone strategy - allow images to specify if they are to be used as linked clone or full clone images (blueprint)
  • Support for using a config drive (blueprint)
XenServer driver
  • Supports showing a log of a server console (blueprint)
  • Support for splitting up large ephemeral disks into 1024GB or 2000GB chunks to work around VHD max disk size limitations (blueprint)
  • Allow images to have the setting AutoDiskConfig=disabled, meaning users are unable to set AutoDiskConfig=Manual on these images and servers (blueprint)
  • Improvements to how nova communicates with the Nova agent, so you are able to use both cloud-init and the Nova agent in the same cloud. (blueprint)
  • Ability to boot VMs into a state where they are running a linux distribution installer, to help users build their own custom images (blueprint)
  • Experimental support for XenServer core, and running nova-compute in a XenServer core Dom0. (blueprint)
  • Experimental support for using the LVHD storage SR, and support for booting from compressed raw glance images (blueprint)
  • Much work to improve stability and supportability. Examples include the ability to configure the compression ratio used for snapshots sent to glance, and auto rollback of errors during the resize down of servers due to disks being too small for the destination size.

Quotas

  • The default quota is now editable, previously the value of this quota was fixed. Use the nova quota-class-update default <key> <value> command to update the default quota (blueprint).
  • Quotas may now be defined on a per-user basis (blueprint).
  • Quotas for a given tenant or user can now be deleted, and their quota will reset to the default (blueprint).

Networking

  • Network and IP address allocation is now performed in parallel to the other operations involved in provisioning an instance, resulting in faster boot times (blueprint).
  • Nova now passes the host name of the Compute node selected to host an instance to Neutron. Neutron plug-ins may now use this information to alter the physical networking configuration of the Compute node as required (blueprint).

Notifications

  • Notifications are now generated when host aggregates are created, deleted, expanded, contracted, or otherwise updated (blueprint).
  • Notifications are now generated when building an instance fails (blueprint).

Scheduler

  • Added force_nodes to filter properties, allowing operators to explicitly specify the node for provisioning when using the baremetal driver (blueprint).
  • Added the ability to make the IsolatedHostsFilter less restrictive, allowing isolated hosts to use all images, by manipulating the value of the new restrict_isolated_hosts_to_isolated_images configuration directive in nova.conf (blueprint).
  • Added a GroupAffinityFilter as a counterpart to the existing GroupAntiAffinityFilter. The new filter allows the scheduling of an instance on a host from a specific group of hosts (blueprint).
  • Added the ability for filters to set the new run_filter_once_per_request parameter to True if their filtering decisions are expected to remain valid for all instances in a request. This prevents the filter having to be re-run for each instance in the request when it is unnecessary. This setting has been applied to a number of existing filters (blueprint).
  • Added per-aggregate filters AggregateRamFilter and AggregateCoreFilter which enforce themselves on host aggregates rather than globally. AggregateDiskFilter will be added in a future release (blueprint).
  • Scheduler performance has been improved by removing the periodic messages that were being broadcasted from all compute nodes to all schedulers (blueprint).
  • Scheduler performance has been improved by allowing filters to specify that they only need to run once for a given request for multiple instances (blueprint).

Storage

  • Attached Cinder volumes can now be encrypted. Data is decrypted as needed at read and write time while presenting instances with a normal block storage device (blueprint).
  • Added the ability to transparently swap out the Cinder volume attached to an instance. While the instance may pause briefly while the volume is swapped, no reads or writes are lost (blueprint).
  • When connecting to NFS or GlusterFS backed volumes Nova now uses the mount options set in the Cinder configuration. Previously the mount options had to be set on each Compute node that would access the volumes (blueprint).
  • Added native GlusterFS support. If qemu_allowed_storage_drivers is set to gluster in nova.conf then QEMU is configured to access the volume directly using libgfapi instead of via fuse (blueprint); see the example after this list.
  • QEMU assisted snapshotting is now used to provide the ability to create cinder volume snapshots, even when the backing storage in use does not support them natively, such as GlusterFS (blueprint).
  • The iSER transport protocol is now supported for accessing storage, providing performance improvements compared to using iSCSI over TCP (blueprint).
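
A minimal nova.conf sketch of the native GlusterFS access described above (the option name comes from the release note; placement in the [DEFAULT] group is an assumption):

    [DEFAULT]
    # let QEMU open GlusterFS-backed volumes directly via libgfapi instead of FUSE
    qemu_allowed_storage_drivers = gluster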

Conductor

  • The Conductor is now able to spawn multiple worker threads operating in parallel; the number of threads to spawn is determined by the value of workers in nova.conf (blueprint). See the example below.
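
For example, a hedged nova.conf sketch, assuming the option lives in the [conductor] group (the worker count shown is arbitrary):

    [conductor]
    # number of parallel nova-conductor worker processes to spawn
    workers = 4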

Internal Changes

  • Nova now uses the common service infrastructure provided by Oslo (blueprint).
  • Changes have been made that will allow backporting bug fixes that require a database migration (blueprint).
  • A significant amount of progress has been made toward eventually supporting live upgrades of a Nova deployment. In Havana, improvements included additional controls over versions of messages sent between services (see the [upgrade_levels] section of nova.conf) (blueprint), and a new object layer that helps decouple the code base from the details of the database schema (blueprint).

Known Issues

  • If you're using cells, there's an issue when deleting instances. Unlike in Grizzly, deleting an instance removes it immediately from the top level (API) cell before telling the child cell to delete the instance. This was not intended. A side effect of this is that delete.start and delete.end notifications will be sent. If the instance delete in the child cell succeeds, a second delete.start and delete.end notification will be sent. If it does not, the DBs end up out of sync: the instance appears gone from the API cell, but still exists in the child cell. The natural healing code built into cells will end up correcting this and restoring the instance in the API cell DB after some time, depending on the healing configuration. This bug will be corrected in havana-stable shortly after release.
  • Check http://bugs.launchpad.net/nova

Upgrade Notes

  • Note that periodic tasks will now run more often than before. The frequency of periodic task runs has always been configurable. However, the timer for when to run the task again was previously started after the last run of the task completed. The tasks now run at a constant frequency, regardless of how long a given run takes. This makes it much clearer when tasks are supposed to run. However, the side effect is that tasks will now run a bit more often by default. (https://review.openstack.org/#/c/26448/)
  • The security_groups_handler option has been removed from nova.conf. It was added for Quantum and is no longer needed. (https://review.openstack.org/#/c/28384/)
  • This change should not affect upgrades, but it is a change in behavior for all new deployments. Previous versions created the default m1.tiny flavor with a disk size of 0. The default value is now 1. 0 means not to do any disk resizing and just use whatever disk size is set up in the image. 1 means to impose a 1 GB limit. The special value of 0 is still supported if you would like to create or modify flavors to use it. (https://review.openstack.org/#/c/27991/).
  • A plugins framework has been removed since what it provided was possible via other means. (https://review.openstack.org/#/c/33595)
  • The notify_on_any_change configuration option has been removed. (https://review.openstack.org/#/c/35264/)
  • The compute_api_class option has been deprecated and will be removed in a future release. (https://review.openstack.org/#/c/28750/)
  • Nova now uses Neutron after Quantum was renamed. (https://review.openstack.org/#/c/35425/)
  • Nova will now reject requests to create a server when there are multiple Neutron networks defined, but no networks specified in the server create request. Nova would previously attach the server to *all* networks, but consensus was that behavior didn't make sense. (https://review.openstack.org/#/c/33996/)
  • The vmware configuration variable 'vnc_password' is now deprecated. A user will no longer be required to enter a password to have VNC access. This now works like all other virt drivers. (https://review.openstack.org/#/c/43268/)


OpenStack Image Service (Glance)

Key New Features

Property Protections

Specific groups of users can now be authorized to create, update, and read different properties of arbitrary entities. There are two types of image properties in the Image Service:

  • Core Properties, as specified by the image schema.
  • Meta Properties, which are arbitrary key/value pairs that can be added to an image.

Access to meta properties through the Image Service's public API calls can now be restricted to certain sets of users, using a property protections configuration file (specified in the glance-api.conf file). For example:

  • Limit all property interactions to admin only.
  • Allow both admins and users with the billing role to read and modify all properties prefixed with ``x_billing_code_``.

blueprint
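
As a sketch of what such a property protections file might look like (assuming the glance-api.conf option is property_protection_file and that rules are keyed by a property-name pattern; the exact syntax may differ):

    # /etc/glance/property-protections.conf, referenced from glance-api.conf
    # via property_protection_file
    [x_billing_code_*]
    create = admin,billing
    read = admin,billing
    update = admin,billing
    delete = admin,billing

    [.*]
    create = admin
    read = admin
    update = admin
    delete = admin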

Registry API

A new API has been created for the Registry service (db_api compliant) using RPC-over-HTTP. The API:

  • Allows Glance to continue to support legacy deployments that were using the earlier registry service. Glance v2 completely dropped the use of a registry service, which in some cases could result in a security vulnerability (all database parameters had to be present in glance-api.conf if deployed as a 'public' service).
  • Makes it easier to implement new methods to the database API without having to modify the registry's API.

blueprint

Updates include a registry database driver that talks to a remote registry service, which in turn talks directly to a database back end. The registry service implements all the database API public functions that are actually used from outside the API. The Image Service's API v2 must be enabled, and the Image Service client must point to this. blueprint

Storage Support

The Image Service now supports the following for back-end storage:

  • Sheepdog. The Image Service can now store images in a backend Sheepdog cluster. Sheepdog is an open-source project, offering a distributed storage system for QEMU.

"Sheepdog Website" blueprint

  • Cinder. OpenStack Cinder can now be used as a block-storage backend for the Image Service. blueprint
  • GridFS. The Image Service now supports the GridFS distributed filesystem. Support is enabled using the new .conf options mongodb_store_uri and mongodb_store_db. GridFS locations with the form `gridfs://<IMAGE>` are supported. "GridFS Website" blueprint (see the sketch after this list)
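
A minimal glance-api.conf sketch for the GridFS backend using the two options named above (the store name, URI, and database values are illustrative assumptions):

    default_store = gridfs
    mongodb_store_uri = mongodb://mongo.example.com:27017/
    mongodb_store_db = glance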

Multiple Image Locations

Image service images can now be stored in multiple locations. This enables the efficient consumption of image data and the use of backup images in the event of a primary image failure. blueprint

Related updates include:

  • A policy layer for locations APIs, to enable the policy checks for changing image locations. blueprint
  • Direct URL metadata. Each Image Service storage system can now store location metadata in the image location database, enabling it to return direct URL specific meta-data to the client when direct_url is enabled. For example, given a file://URL, the NFS exporting host, the mount point, and the FS type can now be returned to the client. blueprint
  • Support for multiple locations when downloading images. This allows API clients to consume images from multiple backend stores. blueprint
  • Indexed checksum image property. The checksum image property is now indexed, allowing users to search for an image by specifying the checksum. blueprint
  • Scrubber update. The scrubber is a utility that cleans up images that have been deleted. The scrubber now supports multiple locations for 'pending_delete' images. blueprint
  • Metadata checking. Checking of metadata at the image location proxy layer can now be enabled when the location is changed. blueprint

But wait, there's more!

  • Configurable container and disk formats. Glance previously only supported a specific set of container and disk formats, which were rarely the actual set of formats supported by any given deployment. The set of acceptable container and disk formats can now be configured. blueprint
  • Storage Quota. Users can now be limited to N bytes (sum total) across all storage systems (total_storage_quota configured in .conf file). blueprint
  • Membership Policy. Policy enforcement has been added to membership APIs (similar to image/location policy enforcement). New policies include 'new_member', 'add_member', 'get_member', 'modify_member', 'get_members', and 'delete_member'. blueprint

Known Issues


OpenStack Dashboard (Horizon)

Release Overview

The Havana release cycle brings support for *three* new projects, plus significant new features for several existing projects. On top of that, many aspects of user experience have been improved for both end users and administrators. The community continues to grow and expand. The Havana release is solidly the best release of the OpenStack Dashboard project yet!

Highlights

New Features

Heat

The OpenStack Orchestration project (Heat) debuted in Havana, and Horizon delivers full support for managing your Heat stacks. Highlights include support for dynamic form generation from supported Heat template formats, stack topology visualizations, and full stack resource inspection.

Ceilometer

Also debuting in Havana is the OpenStack Metering project (Ceilometer). Initial support for Ceilometer is included in Horizon so that it is possible for an administrator to query the usage of the cloud through the OpenStack Dashboard and better understand how the system is functioning and being utilized.

Domains, Groups, and More: Identity API v3 Support

With the OpenStack Identity Service (Keystone) v3 API fully fledged in the Havana release, Horizon has added full support for all the new features such as Domains and Groups, Role management and assignment to Domains and Groups, Domain-based authentication, and Domain context switching.

Trove Databases

The OpenStack Database as a Service project (Trove) graduated from incubation in the Havana cycle, and thanks to their industriousness they delivered a set of panels for the OpenStack dashboard to allow for provisioning and managing your Trove databases and backups. Disclaimer: Given that Trove's first official release as an integrated project will not be until Icehouse this feature should still be considered experimental and may be subject to change.

Nova Features

The number of OpenStack Compute (Nova) features that are supported in Horizon continues to grow. New features in the Havana release include:

  • Editable default quotas.
  • The ability for an administrator to reset the password of a server/instance.
  • Availability zone support.
  • Improved region support.
  • Instance resizing.
  • Improved boot-from-volume support.
  • Per-project flavor support.

All of these provide a richer set of options for controlling where, when and how instances are launched, and improving how they're managed once they're up and running.

Neutron Features

A number of important new OpenStack Networking (Neutron) features are showcased in the Havana release, most notably:

  • VPN as a Service.
  • Firewall as a Service.
  • Editable and interactive network topology visualizations.
  • Full security group and quota parity between Neutron and Nova network.

These features allow for tremendous flexibility when constructing software-defined networks for your cloud using Neutron.

User Experience Improvements

Self-Service Password Change

Empowered by changes to the Identity API v2.0 (Keystone), users can now change their own passwords without the need to involve an administrator. This is more secure and takes the hassle out of things for everyone. This feature is not yet available to users of Identity API v3.

Better Admin Information Architecture

Several sections of the Admin dashboard have been rearranged to more logically group information together. Additionally, new sources of information have been added to allow Admins to better understand the state of the hosts in the cloud and their relationship to host aggregates, availability zones, etc.

Improved Messaging When Users Are Logged Out

Several new indicators have been added to inform users why they've been logged out when they land on the login screen unexpectedly. These indicators make it clear whether the user's session has expired, they timed out due to inactivity, or they are not authorized for the section of the dashboard they attempted to access.

Security Group Rule Templates

Since there are many very common security group rules which users tediously re-add each time (rules for SSH and ping, for example) the Horizon team has added pre-configured templates for common rules which a user can select and add to their security group with two clicks. These rules are configurable via the SECURITY_GROUP_RULES setting.
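
A sketch of what an entry in that setting might look like in local_settings.py (the key names follow the format shipped in the example settings file; adapt the values as needed):

    SECURITY_GROUP_RULES = {
        'ssh': {
            'name': 'SSH',
            'ip_protocol': 'tcp',
            'from_port': '22',
            'to_port': '22',
        },
    }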

Community

Translation Team

The OpenStack Translations team came fully into its own during the Havana cycle, and the quality of the translations in Horizon is the best yet by far. Congratulations to that team for their success in building the community that started primarily within the OpenStack Dashboard project.

User Experience Group

A fledgling OpenStack User Experience Group formed during the Havana cycle with the mission of improving UX throughout OpenStack. They have quickly made themselves indispensable to the process of designing and improving features in the OpenStack Dashboard. Expect significant future improvement in User Experience now that there are dedicated people actively collaborating in the open to raise the bar.

Under the Hood

Simplified LESS Compilation: No More NodeJS

Due to outcry from various parties, and made possible by improvements in the Python community's support for LESS, Horizon has removed all traces of NodeJS from the project. We now use the lesscpy module to compile our LESS into the final stylesheets. This should not affect most users in any way, but it should make life easier for downstream distributions and the like.

Role-Based Access Controls

Horizon has begun the transition to using the other OpenStack projects' policy.json files to enforce access controls in the dashboard if the files are provided. This means access controls are more configurable and can be kept in sync between the originating project and Horizon. Currently this is only supported for Keystone and parts of Nova's policy files. Full support will come in the next release. You will need to set the POLICY_FILES_PATH and POLICY_FILES settings in order to enable this feature.
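
A hedged sketch of how this might be enabled in local_settings.py; the path and file names are illustrative and must match the policy.json files copied from Keystone and Nova:

    POLICY_FILES_PATH = '/etc/openstack-dashboard'
    POLICY_FILES = {
        'identity': 'keystone_policy.json',  # copied from Keystone
        'compute': 'nova_policy.json',       # copied from Nova
    }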

Other Improvements and Fixes

  • Swift container and object metadata are now supported.
  • New visualizations for utilization and quotas.
  • The Cisco N1K Router plugin's additional features are available through a special additional dashboard when enabled and supported in Neutron.
  • Support for self-signed or other specified SSL certificate checking.
  • Glance image types are now configurable.
  • Sorting has been improved in many places through the dashboard.
  • API call efficiency optimizations.
  • Required fields in forms are now better indicated.
  • Session timeout can now be enabled to log out the user after a period of inactivity as a security feature.
  • Significant PEP8 and code quality compliance improvements.
  • Hundreds of bugfixes and minor user experience improvements.

Upgrade Information

Allowed Hosts

For production deployments of Horizon you must add the ALLOWED_HOSTS setting to your settings.py or local_settings.py file. This setting was added in Django 1.5 and is an important security feature. For more information on it please consult the local_settings.py.example file or Django's documentation.
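
For example, in local_settings.py (the host names are placeholders for whatever names your dashboard is served under):

    ALLOWED_HOSTS = ['dashboard.example.com', 'localhost']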

Enabling Keystone and Neutron Features

If you have existing configurations for the OPENSTACK_KEYSTONE_BACKEND or OPENSTACK_NEUTRON_NETWORK settings, you will want to consult the local_settings.example.py file for information on the new options that have been added. Existing configurations will continue to work, but may not have the correct keys to enable some of the new features in Havana.

Known Issues and Limitations

Session Creation and Health Checks

If you use a health monitoring service that pings the home page combined with a database-backed session backend you may experience excessive session creation. This issue is slated to be fixed soon, but in the interim the recommended solution is to write a periodic job that deletes expired sessions from your session store on a regular basis.

Deleting Large Numbers of Resources Simultaneously

Using the "select all" checkbox to delete large numbers of resources at once can cause network timeouts (depending on configuration). This is due to the underlying APIs not supporting bulk-deletion natively, and consequently Horizon has to send requests to delete each resource individually behind the scenes.

Conflicting Security Group Names with Neutron

Whereas Nova Network uses only the name of a security group when specifying security groups at instance launch time, Neutron can accept either a name or a UUID. In order to support both, Horizon passes in the name of the selected security groups. However, due to some data-isolation issues in Neutron there is an issue that can arise if an admin user tries to specify a security group with the same name as another security group in a different project which they also have access to. Neutron will find multiple matches for the security group name and will fail to launch the instance. The current workaround is to treat security group names as unique for admin users.

Backwards Compatibility

The Havana Horizon release should be fully compatible with both Havana and Grizzly versions of the rest of the OpenStack integrated projects (Nova, Swift, etc.). New features in other OpenStack projects which did not exist in Grizzly will obviously only work in Horizon if the rest of the stack supports them as well.

Overall, great effort has been made to maintain compatibility for third-party developers who have built on Horizon so far.


OpenStack Identity Service (Keystone)

Key New Features

  • Improved deployment flexibility
    • Authorization data (tenants/projects, roles, role assignments; e.g. SQL) can now be stored in a separate backend, as determined by the "assignments" driver, from authentication data (users, groups; e.g. LDAP), as determined by the "identity" driver
    • Credentials (e.g. ec2 tokens) can now be stored in a separate backend, as determined by the "credentials" driver, from authentication data
    • Ability to specify more granular RBAC policy rules (for example, based on attributes in the API request / response body)
    • Pluggable handling of external authentication using REMOTE_USER
    • Token generation, which is currently either UUID or PKI based, is now pluggable and separated from token persistence. Deployers can write a custom implementation of the keystone.token.provider.Provider interface and configure keystone to use it with [token] provider. As a result, [signing] token_format is now deprecated in favor of this new configuration option. See the configuration example at the end of this list.
    • First-class support for deployment behind Apache httpd
  • New deployment features
    • Ability to cache the results of driver calls in a key-value store (for example, memcached or redis)
    • keystone-manage token_flush command to help purge expired tokens
  • New API features
    • Delegated role-based authorization to arbitrary consumers using OAuth 1.0a
    • API clients can now opt out of the service catalog being included in a token response
    • Domain role assignments can now be inherited by that domain's projects
    • Aggregated role assignments API
    • External authentication providers can now embed a binding reference into tokens such that remote services may optionally validate the identity of the user presenting the token against a presented external authentication mechanism. Currently, only Kerberos is supported.
    • Endpoints may now be explicitly mapped to projects, effectively preventing certain endpoints from appearing in the service catalog for certain projects based on the project scope of a token. This does not prevent end users from accessing or using endpoints they are aware of through some other means.
  • Event notifications emitted for user and project/tenant create, update, and delete operations
  • General performance improvements
  • The v2 and v3 API now use the same logic for computing the list of roles assigned to a user-project pair during authentication, based on user+project, group+project, user+domain-inherited, and group+domain-inherited role assignments (where domain-inherited role assignments allow a domain-level role assignment to apply to all projects owned by that domain). The v3 API now uses a similar approach for computing user+domain role assignments for domain-scoped tokens.
  • Logs are handled using a common logging implementation from Oslo-incubator, consistent with other OpenStack projects
  • SQL migrations for extensions can now be managed independently from the primary migration repository using keystone-manage db_sync --extension=«extension-name».
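
As a sketch of the pluggable token provider configuration mentioned in the deployment flexibility items above (the PKI provider class shown ships with Havana; a custom implementation would substitute its own importable class), keystone.conf might contain:

    [token]
    # replaces the now-deprecated [signing] token_format option
    provider = keystone.token.providers.pki.Provider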

Known Issues

  • six v1.4.1 or higher is an undocumented requirement (bug 1237089). Without six, Keystone will fail on startup with either ImportError: No module named six or pkg_resources.DistributionNotFound: six.
  • An experimental implementation of domain-specific identity backends (for example, a unique LDAP configuration per domain) was started in Havana but remains incomplete and will be finished during Icehouse.

OpenStack Networking (Neutron)

Key New Features

New Name

The OpenStack Networking project has a new name in this release: Neutron. The Havana release will run using configuration files from Grizzly Quantum; however, the usage of Quantum is deprecated and deployers should update all references at their earliest opportunity. Support for the Quantum configuration files and executable names will be dropped in 2014.1 (Icehouse).

Advanced Services

Neutron added two new advanced services during the latest development cycle and revised the load balancer service.

Load Balancer (LBaaS) Previously released as experimental features in the 2013.1 (Grizzly) release, the load balancing service and API extensions are now suitable for deployment. This release ships an updated API with HAProxy driver support. Vendor drivers are expected in Icehouse, and Radware has already made an out-of-tree Havana-compatible driver available for download. The load balancing service can run on multiple network nodes.

VPN (VPNaaS) Site-to-Site IPSec VPNs are now supported via the VPN service plugin. The VPN API supports IPSec, and the L3 agent ships with an OpenSwan driver.

Firewall (FWaaS) A new edge firewall service is included in this release. The firewall service enables tenants to configure security in depth, as rules can be applied both at the edge via the firewall API and on the VIF via the security group API. The FWaaS API and drivers are considered experimental, as Neutron will continue development during the next release cycle. The team welcomes community feedback on this extension.

New Plugins

Modular Layer 2 (ML2) The Modular Layer 2 (ML2) plugin is a new open-source plugin for Neutron. This plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 plugin supports local, flat, VLAN, GRE and VXLAN network types via type drivers, and vendor-specific behaviour via mechanism drivers. There are vendor drivers available for Arista, Cisco Nexus, Hyper-V, and Tail-f NCS. ML2 is a replacement for the Linux Bridge and Open vSwitch plugins, which are now considered deprecated. More ML2 information
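
A minimal, illustrative ML2 configuration sketch (the section and option names follow the usual ml2_conf.ini layout; the driver lists and VLAN range are assumptions to be adapted to the environment):

    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:1000:2999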

Other Features

  • Support for PXE boot options when creating a port
  • Improved translation support

Known Issues

Upgrades from Grizzly

Starting the neutron-server after upgrading the code, but prior to running the migration, will result in some database models being created without the proper migrations occurring. The following upgrade steps should be taken to ensure a properly upgraded database.

  • Ensure that the database is stamped for Grizzly prior to stopping the service:
    quantum-db-manage --config-file /path/to/quantum.conf --config-file /path/to/plugin/conf.ini stamp grizzly
  • Stop quantum-server and deploy the Neutron code. Do not start neutron-server at this time.
  • Run the Havana migration:
    neutron-db-manage --config-file /path/to/quantum.conf --config-file /path/to/plugin/conf.ini upgrade havana
  • Start neutron-server

Agents May Report a Different Host Name

Neutron switched the method used to determine a host's name from the host's FQDN to the result returned by the gethostname(2) call. The change was made to be consistent with the rest of OpenStack. The hostname reported by an agent may therefore be different and will change after the agents are updated to Havana. If so, it is necessary to reschedule all networks onto the new L3 and DHCP agent names to restore service. The change will result in a brief data plane outage while the deployer reschedules the networks.

L3 Agent No Longer Defaults to Sending Gratuitous ARPs

The L3 Agent previously defaulted to sending gratuitous ARPs. The calls to send the gratuitous ARPs can cause kernel panics when deployed using network namespaces on some distributions. Deployers may enable gratuitous ARP by setting send_arp_for_ha=3 in the L3 Agent's configuration file.

Firewall as a Service

The experimental FWaaS API extension only supports one active policy per tenant. The behavior will change during the Icehouse development cycle to allow for different policies to be attached to different tenant routers.

Upgrade Notes

  • Changes to neutron-dhcp-agent require you to first upgrade your dhcp-agents. Then wait until the dhcp_lease time has expired. After waiting at least dhcp_lease time, update neutron-server. Failure to do this may lead to cases where an instance is deleted but the dnsmasq process has not released the lease, and Neutron allocates that IP to a new port. (https://review.openstack.org/#/c/37580/)
  • There is a new default policy.json file. Deployers with existing deployments should update their files as many options have changed: policy.json

Deprecation Notices

  • The usage of "quantum" or "Quantum" in configuration files and executable names is officially deprecated. This is the last release that will support those names and new deployments should use proper Neutron names for those values. Support for compatibility will only exist in this release.
  • The Linux Bridge and Open vSwitch plugins have been feature frozen and will be removed in the J (2014.2) release. New deployments should choose ML2 instead of the individual Open vSwitch or Linux Bridge plugins.

OpenStack Block Storage (Cinder)

Key New Features

  • Volume migration - admin API to migrate a volume to a different Cinder back-end
  • Scheduler hints extension added to V2 API
  • Added local block storage driver to allow use of raw disks without LVM
  • Added ability to extend the size of an existing volume
  • Added ability to transfer volume from one tenant to another
  • Added API call to enable editing default quota settings
  • Added config option to allow auto-flatten snapshots for back-ends that leave a dependency when creating volume from snapshot
  • Allow API to accept a "host-name" in the volume-attach and not just an instance uuid
  • Enable the generalized backup layer to allow backups from any iSCSI device that doesn't have internal optimizations
  • Added Ceph driver to backup service (allowing Ceph as a backup target with differential backups from Ceph to Ceph)
  • Added rate-limiting information to provider info that can be passed to Nova and used by the hypervisor
  • New Windows Storage Server driver features (blueprint)

New Vendor Drivers

  • Dell EqualLogic volume driver
  • VMware VMDK cinder driver
  • IBM General Parallel File System (GPFS)

Major Additions To Existing Drivers

  • Add Fibre Channel drivers for Huawei storage systems
  • Add an NFS volume driver to support Nexenta storage in Cinder
  • Miscellaneous updates and device-specific additions have also been made to almost every existing vendor driver
  • Optimized volume migration for IBM Storwize driver

New Backup Drivers

  • Allow Ceph as an option for volume backup
  • IBM Tivoli Storage Manager (TSM)

Known Issues

  • Bug #1237338 : Upload volume to image fails with VMWare volume driver
  • Bug #1240299 : the clear volume operation is called on all LVM volume deletes, even thin-provisioned ones. Until the fix is released, be sure to set volume_clear=None in cinder.conf

Upgrade Notes

  • The ThinLVM volume driver functionality is now part of the standard LVM ISCSI volume driver. Configuration should be updated to use volume_driver="cinder.volume.drivers.lvm.LVMISCSIDriver" and set the option lvm_type="thin". This will be done automatically for compatibility in Havana if volume_driver is set to "cinder.volume.drivers.lvm.ThinLVMVolumeDriver", but Icehouse will require these options to be updated in cinder.conf.
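
In cinder.conf the updated configuration described above looks roughly like this:

    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    lvm_type = thin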

OpenStack Metering (Ceilometer)

Key New Features

API

  • The statistics endpoint can now be used to group samples by some fields using the groupby argument
  • A new alarm API is now available (see Alarms)
  • Users can now post their own samples and meters through the API

Alarms

Alarms are a new feature allowing users and operators to trigger actions based on comparing the statistics trend against a threshold over a period of time. It is composed of the following services:

  • ceilometer-api now exposes the new /v2/alarms endpoint providing control over alarm lifecycle;
  • ceilometer-alarm-evaluator evaluates alarms periodically in order to detect when an alarm state changes;
  • ceilometer-alarm-notifier receives notifications sent by ceilometer-alarm-evaluator when an alarm is triggered and executes the associated action

The alarm API also exposes the history of alarm state transitions and rule changes.

Collector

  • New HBase driver
  • New DB2 (NoSQL) driver
  • Improved SQLAlchemy driver
  • Improved MongoDB driver
  • Added TTL functionality that allows old samples to be deleted from the database
  • Added the ability to store events
  • Added event storage support to the SQLAlchemy driver

Publisher drivers

  • Added a UDP based publisher

Transformers

  • Added new unit-scaling and rate-of-change transformers

Meters

  • Added a meter on API requests using a special Python middleware
  • Added the ability to record samples from the new Neutron bandwidth metering feature

Compute agent

  • Added support for Hyper-V

Known Issues


OpenStack Orchestration (Heat)

Key New Features

Much improved documentation

Initial integration with Tempest

Concurrent resource operations

  • Non dependent create, update and delete operations are now performed in parallel

Much improved networking/Neutron support

  • New LBaaS, FWaaS and VPNaaS resources

Initial support for native template language (HOT)

Provider and Environment abstractions

  • Abstractions to provide a way to customize resource types using nested templates; see documentation and blog post

Ceilometer integration for metrics/monitoring/alarms

  • Adds a new resource type OS::Ceilometer::Alarm which configures a ceilometer alarm
  • AutoScaling actions are now triggered by Ceilometer alarms. The previous mechanism, where metrics/alarms are processed by Heat itself, still exists but is now deprecated and will most likely be removed during Icehouse.

UpdateStack Improvements

  • Several resource types now provide better support for non-destructive updates, including resizing an Instance. Also we now create replacement resources before deleting the previous resource, allowing better upgrade continuity and roll-back capability.

Initial integration with keystone trusts functionality

  • Initial integration with keystone trusts, so when heat.conf specifies deferred_auth_method=trusts, we no longer store encrypted credentials to perform deferred operations (for example AutoScaling adjustments), but instead create a trust and store the ID of the trust.
  • (see known issues below)
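
A minimal heat.conf sketch of enabling this behaviour (assuming the option sits in the [DEFAULT] section):

    [DEFAULT]
    # create a Keystone trust instead of storing encrypted user credentials
    deferred_auth_method = trusts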

Improved resource documentation

Many more native resource types

New Rackspace resource types

  • Rackspace::Cloud::DBInstance
  • Rackspace::Cloud::LoadBalancer
  • Rackspace::Cloud::Server

Stack suspend/resume

  • Support for a new API "actions" path, which enables stack suspend/resume

Consolidated configuration to a single heat.conf and a single paste-api.ini

  • See upgrade notes below
  • Also added new config options to provide limitations on:
    • template size
    • number of stacks per tenant
    • number of events per stack
    • stack nesting depth

Heat Standalone Mode

  • Heat can now be configured to run in standalone mode, allowing it to orchestrate onto an external OpenStack

Known Issues

  • Heat does not support specifying a region name when getting API endpoints from keystone; see bug
  • Integration with keystone trusts (deferred_auth_method=trusts) will only work if you have the latest keystoneclient 0.4.1; however, this is not yet reflected in the requirements.txt of Heat
https://bugs.launchpad.net/python-keystoneclient/+bug/1231483
  • Integration with keystone trusts requires at least RC3 of keystone due to this issue
https://bugs.launchpad.net/keystone/+bug/1239303

Upgrade Notes

  • Heat now uses one config file (/etc/heat/heat.conf) instead of per-service files.
  • The API processes now use a single paste configuration file (/etc/heat/api-paste.ini) instead of per-service files.
  • There is now support for a global environment definition in /etc/heat/environment.d/ (see environment documentation)
  • All OS::Quantum* resources have been renamed to OS::Neutron*; the old names will still work if you install the provided default environment, which aliases the old names to the new resources (see the example below).
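
A sketch of how such an alias can be expressed in a Heat environment file (the specific resource pair is illustrative; the shipped default environment provides the full mapping):

    # e.g. /etc/heat/environment.d/default.yaml
    resource_registry:
      "OS::Quantum::Net": "OS::Neutron::Net"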

OpenStack Documentation

Key New Features

  • Each page now has a bug reporting link so you can easily report bugs against a doc page.
  • The manuals have been completely reorganized. With the Havana release, the following Guides exist:
    • Install OpenStack
      • Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
      • Installation Guide for Ubuntu 12.04 (LTS)
      • Installation Guide for openSUSE and SUSE Linux Enterprise Server
    • Configure and run an OpenStack cloud:
      • Cloud Administrator Guide
      • Configuration Reference
      • Operations Guide
      • High Availability Guide
      • Security Guide
      • Virtual Machine Image Guide
    • Use the OpenStack dashboard and command-line clients
      • API Quick Start
      • End User Guide
      • Admin User Guide

Known Issues

  • Some of the guides are not completely updated and might be missing information