Revision as of 00:24, 23 September 2014
OpenStack 2012.1 (Essex) Release Notes
OpenStack Object Storage (Swift)
Swift released versions 1.4.4 through 1.4.8 during the Essex release cycle. The complete changelog is on GitHub.
Several important new features have been added to swift. Swift now supports expiring objects, HTML form POSTs with temporary signed URLs, and the OpenStack auth 2.0 API in the swift CLI. Other new features include new config options, optional functionality in middleware, and more ops tools.
Expiring objects allow a swift user to set an expiry time or a TTL on an object, after which the object is no longer accessible and will be deleted from the system. This feature enables new use cases for swift. For example, this feature could be used by a document management system with data retention requirements.
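As a sketch, setting an expiry comes down to a header on the object: X-Delete-After for a TTL in seconds, or X-Delete-At for an absolute Unix timestamp. The helper below is ours, not part of swift, but these are the header names swift's expirer understands:

```python
def expiring_object_headers(ttl_seconds=None, expire_at=None):
    """Build headers for a swift object PUT/POST so the object expires.

    Pass exactly one of ttl_seconds (sent as X-Delete-After) or
    expire_at, an absolute Unix timestamp (sent as X-Delete-At).
    """
    if (ttl_seconds is None) == (expire_at is None):
        raise ValueError("pass exactly one of ttl_seconds or expire_at")
    if ttl_seconds is not None:
        return {"X-Delete-After": str(int(ttl_seconds))}
    return {"X-Delete-At": str(int(expire_at))}

# e.g. a document that must be purged 30 days after upload:
headers = expiring_object_headers(ttl_seconds=30 * 24 * 3600)
```

These headers would accompany a normal object PUT, or a later POST against an existing object.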
The new formpost and tempurl middleware modules allow a swift user to create a URL with write access and then use that URL as the target of an HTML form POST. This feature is aimed at a control panel use case. Since swift uses an auth method based on information in request headers, browsers typically can't access swift directly. With these two new middleware modules, someone building a swift control panel can have the browser directly upload content into the swift cluster. Since the requests are going directly to swift and don't have to be proxied through the control panel web servers for auth, the control panel deployer only has to scale infrastructure based on the control panel usage, not swift usage.
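A minimal sketch of generating such a signed URL (the function name is ours; the signing scheme, an HMAC-SHA1 over the method, expiry time and object path, is the one the tempurl middleware validates):

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(path, key, method="PUT", expires=None, ttl=3600):
    """Sign a swift object path for time-limited access.

    tempurl recomputes an HMAC-SHA1 over the method, expiry timestamp
    and path (newline-joined), keyed with the account's secret, and
    compares it to the temp_url_sig query parameter.
    """
    if expires is None:
        expires = int(time.time()) + ttl
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)

# A one-hour URL that a browser form could upload content through:
url = make_temp_url("/v1/AUTH_account/container/object", "account-secret")
```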
In addition to new features, many bugs have been squashed as well. Swift developers have found and fixed memory leaks, improved data corruption detection, improved replication, and improved the way rings are built.
To upgrade a running cluster, the process is generally as follows:
- Shut down background jobs such as the updater, replicator, auditor, and crond. (You can do this with swift-init rest stop and /etc/init.d/crond stop.)
- Upgrade Swift packages.
- Upgrade other packages as needed.
- Reload the servers (swift-init main reload)
- Restart the background jobs (swift-init rest start and /etc/init.d/crond start)
OpenStack Compute (Nova)
A huge amount of effort has gone into the Essex release of Nova. We have now had over 200 individual contributors to the nova project and there have been approximately 3000 commits since the Diablo release. That is an unbelievably rapid pace, especially for a project that is less than two years old. The fluidity and speed of the project is a testament to the adaptability and dedication of the developers who devote their time to it.
We spent a lot of time during this release cycle focusing on stability and bug fixing. Due to that focus, this is the most production-ready release of nova to date. Despite the focus on stability, we also managed to merge some powerful features over the last six months. A few of the most interesting ones are outlined below.
Keystone integration: Keystone is the common OpenStack identity management system. Nova no longer has to manage users, projects, and roles internally. It is all taken care of by keystone.
Role-based access control: Nova now supports configurable access control for actions and extensions. Some work needs to be done to make the configuration a bit more user-friendly, but it is currently possible to make very detailed decisions about who can do what in the system.
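As an illustration of what such a policy looks like (the file name /etc/nova/policy.json and the rule names below are illustrative, not a verbatim Essex default): an empty list means any authenticated user may perform the action, while each inner list is an alternative set of credentials that grants it.

```json
{
    "compute:create": [],
    "compute:pause": [["role:admin"], ["role:operator"]],
    "compute_extension:hosts": [["role:admin"]]
}
```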
openstack-common: Mark McLoughlin spearheaded a herculean effort to get config parsing into openstack-common. The results of his labor are outlined more fully in the config section below, but this has paved the way for shared code across the OpenStack projects. Expect to see a greater focus on openstack-common in Folsom, the next release.
Hypervisor feature parity: We spent a lot of time trying to get features as consistent as possible across hypervisors. The ultimate goal here is to be able to run an OpenStack cloud and have users not know or care which hypervisor is underneath. One of the major changes here was disk configuration parity. Flavors/Instance Types now have two fields, one for the size of the root disk and one for an 'ephemeral' second disk.
Administrative api extensions: Many of the administrative functions have been pushed into api extensions. This means more tasks can be automated via remote tools instead of dealing with the database directly.
Support for upload-bundle: A few improvements were made to uploading images through the ec2-api. The first was Keystone support for ec2 and s3 credentials. The second was allowing cert management to be centralized into its own worker so that api nodes don't have to share a common filesystem. Finally, configuration was added to allow nova-objectstore to be replaced by swift for temporary image storage.
Volume endpoint: nova-volume now has its own api endpoint. This is paving the way for the eventual separation of this functionality into its own service.
Network decoupling: The database models for network and compute related objects were decoupled so that network can be managed as its own service. Caching was added in the compute layer for network data so that api calls are still speedy.
RPC improvements: Many bugfixes went into the RPC layer. Support for timeouts was added. An additional backend was added for the message queue: Apache Qpid. The rpc code will be moving into a common library in Folsom so it can be shared amongst multiple OpenStack projects.
Metadata separation: The metadata server can now be run separately from the other api services. This allows a user to run the metadata server locally on the compute nodes while exposing the user-facing api separately.
Floating ip pools: It is now possible to have multiple pools of floating ips. This allows you to have, for example, a separate pool of NATted ips on an internal network.
Other features: A huge number of other features were included in this release. While a detailed description of ALL of the features is out of scope for this document, you can find the comprehensive list of the (over 50!) completed blueprints, as well as 750+ bugfixes, at:
We have retired python-gflags as our main configuration system. Configs should now be supplied using an ini-style syntax. Old 'flagfiles' will still work and are internally converted to the new config syntax. If you would like to manually convert a file, you can use:
nova-manage config convert <infile> <outfile>
For reference, a config with all available options is in:
One important change to be aware of is that only a few important configs are available as command-line flags. So if you are used to overriding options by specifying them on the command line, you will have to start adding them to your config file. All of the binaries look in their own directory for nova.conf and then fall back to /etc/nova.conf.
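For example, a flagfile entry and its ini-style equivalent (the option values here are hypothetical):

```ini
# Old flagfile style (still accepted and converted internally):
#   --sql_connection=mysql://nova:secret@localhost/nova
#   --verbose

# New ini style, e.g. in /etc/nova/nova.conf:
[DEFAULT]
sql_connection = mysql://nova:secret@localhost/nova
verbose = true
```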
A lot of effort has been going into improving the docs at:
We are currently working on adding documentation and examples for all of the api extensions that are shipped in the codebase. Expect to see these appearing in the next few days.
Image references to external glance servers are not working. The api spec allows you to specify a glance url when booting instead of just a uuid. This is currently broken. Track the fix here:
Using libvirt, if you make a snapshot of an instance running on an image that is no longer in Glance, the snapshot will be uploaded into Glance with neither container_format nor disk_format, and Glance will refuse it. Track the fix here:
All database queries currently block the evented workers. There were some stability issues with moving to a thread-pool for database queries, so we chose to sacrifice performance for stability for the time being. This means for production deployments, it is advisable to run more than one api worker with a load balancer in front. We will investigate removing this limitation during the Folsom time frame. Progress will be tracked in the following blueprint:
Due to a failure of our translations import chain, the translations files shipped with Nova are not up to date with the latest strings. This may result in Nova being unable to log messages in languages other than US English. Track the fix here:
Floating ips are not automatically moved in multi-host networking during live or block migration (KVM-only). While this won't cause immediate issues for the vm, as the NAT rules on the old host will still work, it is still highly recommended that you ensure floating ips move properly using the following workaround:
- disassociate any floating ips from the vm
- migrate the vm
- reassociate the floating ips to the vm
The bug for the above issue is here:
Floating ips will not show up immediately when you list instances. This is due to the compute network info cache not being updated immediately. Track the progress on this bug here:
Some resources are visible across tenants for users with the role 'admin'. There are plans to differentiate the idea of a system-wide admin and a tenant-specific admin in Folsom, but for now, be aware that giving a user the admin role on a tenant will allow them to act on all volumes and potentially list all vms, etc. Blueprint here:
Nova still refers to tenants as "projects". Mostly this is an internal naming issue, but it leaks out in a few places (in the cloudpipe api extension for example). We would like to complete the move and use tenant/tenant_id internally as well. Progress can be tracked here:
Configuring RBAC is difficult. We recognize that the policy engine that we are using is not the most user-friendly. This will be dealt with in the Folsom release. Blueprint here:
The os-hosts and os-networks extensions to the Compute API are broken in the Essex release, but we plan to backport fixes for them. Track the status of those bugs here:
OpenStack Image Service (Glance)
The majority of the Essex release cycle was spent on stabilization and usability. Our contributors fixed 185 bugs and implemented 11 blueprints, some of which are highlighted below:
Image protection: Prevent an image from being accidentally deleted through a 'protected' attribute. Either protect the image immediately upon being created, or update it once registered. A user must explicitly unprotect the image in order to delete it. blueprint
Configurable number of Glance API processes: Deploy the glance API endpoint across multiple processes. blueprint
Configurable data-buffering directory: Choose where on your local filesystem you want to buffer data as it is uploaded to Swift. blueprint
Copy image data from external locations: Refer to an external location the Glance API server should retrieve its image data from and place in its locally configured store. blueprint
Support sendfile(2) when copying image data: Efficiently stream data to Glance API servers. blueprint
Support Qpid for Glance notifications: Allow deployers to utilize Qpid as a notifier strategy rather than Kombu. blueprint
Image upload progress bar: See a graphical representation of data transfer when streaming image data through the Glance CLI. blueprint
You can find the comprehensive list of the completed Glance blueprints and bugfixes at:
OpenStack Dashboard (Horizon)
During the Essex release cycle, Horizon underwent a significant set of internal changes to allow extensibility and customization while also adding a significant number of new features and bringing much greater stability to every interaction with the underlying components.
Making Horizon extensible for third-party developers was one of the core goals for the Essex release cycle. Massive strides have been made to allow for the addition of new "plug-in" components and customization of OpenStack Dashboard deployments.
To support this extensibility, all the components used to build Horizon's interface are now modular and reusable. Horizon's own dashboards use these components, and they have all been built with third-party developers in mind. Some of the main components are listed below.
Dashboards and Panels
Horizon's structure has been divided into logical groupings called dashboards and panels. Horizon's classes representing these concepts handle all the structural concerns associated with building a complete user interface (navigation, access control, url structure, etc.).
One of the most common activities in a dashboard user interface is simply displaying a list of resources or data and allowing the user to take actions on that data. To this end, Horizon abstracted the commonalities of this task into a reusable set of classes which allow developers to programmatically create displays and interactions for their data with minimal effort and zero boilerplate.
Tabs and TabGroups
Support for Nova's features has been greatly improved in Essex:
- Support for Nova volumes, including:
- Volumes creation and management.
- Volume snapshots.
- Realtime AJAX updating for volumes in transition states.
- Improved Nova instance display and interactions, including:
- Launching instances from volumes.
- Pausing/suspending instances.
- Displaying instance power states.
- Realtime AJAX updating for instances in transition states.
- Support for managing Floating IP address pools.
- New instance and volume detail views.
A new "Settings" area was added that offers several useful functions:
- EC2 credentials download.
- OpenStack RC file download.
- User language preference customization.
User Experience Improvements
- Support for batch actions on multiple resources (e.g. terminating multiple instances at once).
- Modal interactions throughout the entire UI.
- AJAX form submission for in-place validation.
- Improved in-context help for forms (tooltips and validation messages).
- Creation and publication of a set of Human Interface Guidelines (HIG).
- Copious amounts of documentation for developers.
Under The Hood
- Internationalization fully enabled, with all strings marked for translation.
- Client library changes:
- Full migration to python-novaclient from the deprecated openstackx library.
- Migration to python-keystoneclient from the deprecated keystone portion of the python-novaclient library.
- Client-side templating capabilities for more easily creating dynamic
- Frontend overhaul to use the Bootstrap CSS/JS framework.
- Centralized error handling for vastly improved stability/reliability.
- Completely revamped test suite with comprehensive test data.
- Forward-compatibility with Django 1.4 and the option of cookie-based sessions.
You can find the comprehensive list of the completed Horizon blueprints and bugfixes at:
Known Issues and Limitations
Quantum support has been removed from Horizon for the Essex release. It will be restored in Folsom in conjunction with Quantum's first release as a core OpenStack project.
Due to the mechanisms by which Keystone determines "admin"-ness for a user, an admin user interacting with the "Project" dashboard may see some inconsistent behavior such as all resources being listed instead of only those belonging to that project, or only being able to return to the "Admin" dashboard while accessing certain projects.
Exceptions during customization
Exceptions raised while overriding built-in Horizon behavior via the "customization_module" setting may trigger a bug in the error handling which will mask the original exception.
The Essex Horizon release is only partially backwards-compatible with Diablo OpenStack components. While it is largely possible to log in and interact, many functions in Nova, Glance and Keystone changed too substantially in Essex to maintain full compatibility.
OpenStack Identity (Keystone)
The implementation of the Identity service changed completely during the Essex release. Much of the design was driven by the expectation that the auth backends for most deployments will actually be shims in front of existing user systems. Documentation has been updated to support this change and migration paths are documented at http://keystone.openstack.org.
You can find the comprehensive list of the completed Keystone blueprints and bugfixes at:
Key Highlights of the Keystone Transition
- The external API, both "admin"- and "user"-facing, has remained stable and identical to the Diablo release. In changing the underlying implementation, we were very careful to keep external components stable to allow us to progress quickly in the future.
- The middleware components used by the other OpenStack projects were substantially rewritten to simplify that code as well.
- The implementation of authorization by services was changed from a single shared secret (previously called the "admin token") to a per-service account and password credential pair.
- This implies configuration changes in nova, glance, swift, etc., specifically around the api-paste.ini files, where new values are now defined for those credentials, and they can now be set per service.
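As a sketch, the authtoken filter section of a service's api-paste.ini now carries per-service credentials instead of a shared admin token (the key names below follow the Essex-era auth_token middleware; the values are hypothetical):

```ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = secret
```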
- The Keystone service and the middleware implementations now log considerably more, allowing system administrators and OpenStack deployers to debug authentication and authorization issues.
- Keystone now supports S3 token validation and additional Swift storage features:
- Swift ACLs are now supported; you can allow or deny individual users within a tenant.
- Anonymous access via ACL, to allow public access to a container.
- Reseller account support, giving nova the ability to access swift and use it as a replacement for nova-objectstore.
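As a sketch of how those ACLs are expressed (the X-Container-Read header, the tenant:user grants and the .r:* referrer element are swift's; the helper itself is ours):

```python
def container_read_acl(*grants):
    """Build the X-Container-Read header from ACL elements.

    Typical elements: "tenant:user" grants read access to one user,
    and ".r:*" allows anonymous (referrer-based) public reads.
    """
    return {"X-Container-Read": ",".join(grants)}

public = container_read_acl(".r:*")                    # anonymous access
shared = container_read_acl("demo:alice", "demo:bob")  # two tenant users
```

The header would be set on the container via a POST, after which swift enforces the listed grants on reads.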
Known Issues and Limitations for Keystone
The following are not yet supported:
- Using SSL certs for authorization instead of userid/credentials
- Any API to drive policy definitions around role based access controls
- Mapping identity to pre-existing LDAP backends
- User facing APIs to support (when available) identity updates (i.e. a user changing their password, or "logging out")
Known packaged distributions
OpenSUSE 12.1 / SLES11 SP2
You can find all details about the repositories for OpenSUSE 12.1 and SLES11 SP2 on our packaging site in the wiki: Packaging/SUSE
Fedora 17 / Fedora 16 / EPEL 6
- Fedora 17 (May 2012) will ship with OpenStack Essex
- The Extra Packages for Enterprise Linux repository supporting RHEL >= 6.2 and derivatives will update from Diablo to Essex
- You can get Fedora/EPEL OpenStack package details at https://apps.fedoraproject.org/packages/s/openstack
- Install/Setup notes for Essex are at http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17
- An unofficial Essex repository for Fedora 16 is available at http://repos.fedorapeople.org/repos/apevec/openstack-preview/fedora-16/noarch/
Ubuntu 12.04 LTS (Precise Pangolin)
All core OpenStack Essex components are officially supported and available in the Main Precise Ubuntu archive:
Note: Horizon and Keystone are currently located in Universe while they undergo a security review before promotion to Main for the 12.04 Precise release (April 26th, 2012).
Incubated projects Quantum and Melange are available for Precise in Universe.
Essex can be deployed on Ubuntu Server using MAAS and Juju.
Martin Loschwitz has written a wonderful step-by-step guide for manually installing Essex on Ubuntu 12.04:
Debian GNU/Linux wheezy
All core OpenStack Essex components are officially supported and available in the Main wheezy archive:
Essex can be deployed with a HOWTO (http://wiki.debian.org/OpenStackHowto) and puppet modules (https://github.com/puppetlabs/puppetlabs-openstack).