

Revision as of 21:47, 12 October 2015 by Jim Rollenhagen (Add Ironic)

OpenStack Liberty Release Notes


OpenStack Object Storage (Swift)

Please see full release notes at https://github.com/openstack/swift/blob/master/CHANGELOG

New Features

  • Allow 1+ object-servers-per-disk deployment

    Enabled by a new > 0 integer config value, "servers_per_port", in the [DEFAULT] section of the object-server and/or replication server configs. The setting's integer value determines how many object-server workers handle requests for any single unique local port in the ring. In this mode, the parent swift-object-server process continues to run as the original user (i.e. root, if low-port binding is required), binds to all ports defined in the ring, and forks off the specified number of workers per listen socket. The child, per-port servers drop privileges and behave much as object-server workers always have, except that because the ring has unique ports per disk, each object-server only handles requests for a single disk.

    The parent process detects dead servers and restarts them (with the correct listen socket), starts missing servers when an updated ring file is found with a device on the server with a new port, and kills extraneous servers when their port is no longer found in the ring. The ring files are stat'ed at most every "ring_check_interval" seconds, as configured in the object-server config (same default of 15s).

    In testing, this deployment configuration (with a value of 3) lowers request latency, improves requests per second, and isolates slow disk IO as compared to the existing "workers" setting. To use this, each device must be added to the ring using a different port.
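    A minimal sketch of this mode in an object-server config (option names as described above; the value of 3 matches the tested configuration):

```ini
# /etc/swift/object-server.conf (illustrative)
[DEFAULT]
# Fork 3 object-server workers per unique local port in the ring.
# Each device must be added to the ring with its own port for
# workers to be isolated to a single disk.
servers_per_port = 3
# Re-stat ring files at most this often, in seconds (default 15).
ring_check_interval = 15
```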

  • Do container listing updates in another (green)thread

    The object server has learned the "container_update_timeout" setting (with a default of 1 second). This value is the number of seconds that the object server will wait for the container server to update the listing before returning the status of the object PUT operation.

    Previously, the object server would wait up to 3 seconds for the container server response. The new behavior dramatically lowers object PUT latency when container servers in the cluster are busy (e.g. when the container is very large). Setting the value too low may result in a client PUT'ing an object and not being able to immediately find it in listings. Setting it too high will increase latency for clients when container servers are busy.
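    The setting described above can be sketched in the object-server config:

```ini
# /etc/swift/object-server.conf (illustrative)
[DEFAULT]
# Seconds to wait for the container server to update the listing
# before returning the status of an object PUT (default: 1).
container_update_timeout = 1
```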

  • Added the ability to specify ranges for Static Large Object (SLO) segments.
  • Allow SLO PUTs to forgo per-segment integrity checks. Previously, each segment referenced in the manifest also needed the correct etag and bytes setting. These fields now accept the value "null" to skip those particular checks on the given segment.
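    Combining the two SLO changes above, a manifest entry may now carry a range and may use null to skip the etag/size checks; a minimal sketch (container and segment names are hypothetical):

```json
[
  {"path": "/segments/part-1", "etag": null, "size_bytes": null,
   "range": "0-1048575"},
  {"path": "/segments/part-2", "etag": "d41d8cd98f00b204e9800998ecf8427e",
   "size_bytes": 1048576}
]
```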
  • Replicator configs now support an "rsync_module" value to allow for per-device rsync modules. This setting gives operators the ability to fine-tune replication traffic in a Swift cluster and isolate replication disk IO to a particular device. Please see the docs and sample config files for more information and examples.
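    As a sketch, a per-device rsync module can be templated in the replicator config (the module name is illustrative; see the sample configs for the supported substitutions):

```ini
# /etc/swift/object-server.conf (illustrative)
[object-replicator]
# {replication_ip} and {device} are expanded per device, so each
# disk replicates through its own rsync module.
rsync_module = {replication_ip}::object_{device}
```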
  • Ring changes
    • Partition placement no longer uses the port number to place partitions. This improves dispersion in small clusters running one object server per drive, and it does not affect dispersion in clusters running one object server per server.
    • Added ring-builder-analyzer tool to more easily test and analyze a series of ring management operations.
    • Ring validation now warns if a placement partition gets assigned to the same device multiple times. This happens when devices in the ring are unbalanced (e.g. two servers where one server has significantly more available capacity).
  • TempURL fixes (closes CVE-2015-5223)

    Do not allow PUT tempurls to create pointers to other data. Specifically, disallow the creation of DLO object manifests via a PUT tempurl. This prevents discoverability attacks which can use any PUT tempurl to probe for private data by creating a DLO object manifest and then using the PUT tempurl to head the object.

  • Swift now emits StatsD metrics on a per-policy basis.
  • Fixed an issue with Keystone integration where a COPY request to a service account may have succeeded even if a service token was not included in the request.
  • Bulk upload now treats user xattrs on files in the given archive as object metadata on the resulting created objects.
  • Emit warning log in object replicator if "handoffs_first" or "handoff_delete" is set.
  • Enable object replicator's failure count in swift-recon.
  • Added storage policy support to dispersion tools.
  • Support keystone v3 domains in swift-dispersion.
  • Added domain_remap information to the /info endpoint.
  • Added support for a "default_reseller_prefix" in domain_remap middleware config.
  • Allow rsync to use compression via a "rsync_compress" config. If set to true, compression is only enabled for an rsync to a device in a different region. In some cases, this can speed up cross-region replication data transfer.
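    Sketched in the replicator config:

```ini
# /etc/swift/object-server.conf (illustrative)
[object-replicator]
# Compress rsync traffic, applied only when replicating to a
# device in a different region.
rsync_compress = true
```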
  • Added time synchronization check in swift-recon (the --time option).
  • The account reaper now runs faster on large accounts.
  • Various other minor bug fixes and improvements.

Upgrade Notes

  • Dependency changes
    • Added six requirement. This is part of an ongoing effort to add support for Python 3.
    • Dropped support for Python 2.6.
  • Config changes
    • Recent versions of Python restrict the number of headers allowed in a request to 100. This number may be too low for custom middleware. The new "extra_header_count" config value in swift.conf can be used to increase the number of headers allowed.
    • Renamed "run_pause" setting to "interval" (current configs with run_pause still work). Future versions of Swift may remove the run_pause setting.
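    A sketch of the new header-count override (section name per the sample swift.conf; the value is illustrative):

```ini
# /etc/swift/swift.conf (illustrative)
[swift-constraints]
# Extra request headers allowed beyond the default maximum, for
# deployments whose custom middleware needs more.
extra_header_count = 10
```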
  • The versioned writes feature has been refactored and reimplemented as middleware. You should explicitly add the versioned_writes middleware to your proxy pipeline, but do not remove or disable the existing container server config setting ("allow_versions"), if it is currently enabled. The existing container server config setting enables existing containers to continue being versioned. Please see http://swift.openstack.org/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster for further upgrade notes.
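    A sketch of a proxy pipeline with versioned_writes added (the pipeline contents are illustrative; keep your existing middleware and ordering):

```ini
# /etc/swift/proxy-server.conf (illustrative)
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck cache versioned_writes proxy-logging proxy-server

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
```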

OpenStack Networking (Neutron)

New Features

  • Neutron now supports IPv6 Prefix Delegation for the automatic assignment of CIDRs to IPv6 subnets. For more information on the usage and configuration of this feature, see the OpenStack Networking Guide.
  • Neutron now exposes a QoS API, initially offering bandwidth limitation on the port level. The API, CLI, configuration and additional information may be found here [1].
  • Router high availability (L3 HA / VRRP) now works when layer 2 population (l2pop) is enabled [2].
  • VPNaaS reference drivers now work with HA routers.
  • Networks used for VRRP traffic for HA routers may now be configured to use a specific segmentation type or physical network tag [3].
  • The OVS agent may now be restarted without affecting data plane connectivity.
  • Neutron now offers role-based access control for networks [4].
  • The LBaaS V2 reference driver is now based on Octavia, an operator-grade, scalable, and reliable load balancer platform.
  • LBaaS V2 API is no longer experimental. It is now stable.
  • Neutron now provides a way for admins to manually schedule agents, allowing host resources to be tested before they are enabled for tenant use [5].
  • Neutron now has a pluggable IP address management framework, enabling the use of alternate or third-party IPAM. The original, non-pluggable version of IPAM is enabled by default.

Deprecated and Removed Plugins and Drivers

  • The metaplugin is removed in the Liberty release.
  • The IBM SDN-VE monolithic plugin is removed in the Liberty release.
  • The Cisco N1kV monolithic plugin is removed in the Liberty release (replaced by the ML2 mechanism driver).
  • The Embrane plugin is deprecated and will be removed in the Mitaka release.

Deprecated Features

  • The FWaaS API is marked as experimental for Liberty. Further, the current API will be removed in Mitaka and replaced with a new FWaaS API, which the team is in the process of developing.
  • The LBaaS V1 API is marked as deprecated and is planned to be removed in a future release. Going forward, the LBaaS V2 API should be used.

Performance Considerations

  • The stock Trusty Tahr kernel (3.13) shows linear performance degradation when running "ip netns exec" as the number of namespaces increases; most updated 3.13 kernels already include a fix. In cases where scale is important, a later kernel (e.g. 3.19) should be used.
  • Creating Neutron-LBaaS load balancers in environments without hardware virtualization may be slow when using the Octavia driver. This is due to QEMU using the TCG accelerator instead of the KVM accelerator in environments without hardware virtualization available. We recommend enabling hardware virtualization on your compute nodes, or enabling nested virtualization when using the Octavia driver inside a virtual environment. See the following link for details on setting up nested virtualization for DevStack running inside KVM: http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html.

OpenStack Compute (Nova)

New Features



Architectural evolution of the scheduler has continued, along with key bug fixes.

Cells v2

Cells v2 is not currently in a usable state, but some more supporting infrastructure has been added.

Compute Driver Features


Other Features

Upgrade Notes

  • If you are coming from Kilo stable, please make sure you have fully upgraded to the latest release of that lineage before deploying Liberty. Due to bug https://bugs.launchpad.net/nova/+bug/1474074 versions of Kilo from before the fix will be problematic when talking to Liberty nodes.
  • Allocation ratios for RAM and CPU are now defined within the nova-compute service (i.e. per compute node), though they must also be provided for the scheduler service. The behavior depends on the compute node's release: if the compute node runs Kilo, the CPU and RAM allocation ratios for that node are the defaults from the controller's nova.conf file; if the compute node runs Liberty, a per-compute allocation ratio can be set for both CPU and RAM. To let operators provide the allocation ratios for all compute nodes, the default allocation ratio in nova.conf is now set to 0.0 (even for the controller). That doesn't mean the effective ratios are 0.0, only that the operator needs to provide explicit values before the next release (i.e. Mitaka). To be clear, the effective default ratios are still 16.0 for cpu_allocation_ratio and 1.5 for ram_allocation_ratio.
  • nova-compute should be upgraded to Liberty code before upgrading Neutron services per the new "network-vif-deleted" event: https://review.openstack.org/#/c/187871/
  • Rootwrap filters must be updated after release to add the 'touch' command.
    • There is a race condition between imagebackend and imagecache, described in Launchpad Bug 1256838.
    • If the base image is deleted by ImageCacheManager while imagebackend is copying the image to the instance path, the instance goes into an error state.
    • To resolve this issue, the 'touch' command must be added to compute.filters along with the change https://review.openstack.org/#/c/217579/.
    • In this race condition, libvirt may have changed the base file ownership to libvirt-qemu while imagebackend is copying the image, producing a permission-denied error when updating the file access time via os.utime. To resolve this, the base file access time is updated with root privileges using the 'touch' command.
  • The DiskFilter is now part of the scheduler_default_filters in Liberty per https://review.openstack.org/#/c/207942/ .
  • Per https://review.openstack.org/#/c/103916/, each nova-compute node may now be mapped to only a single vCenter cluster.
  • The Libvirt driver's "parallels" virt type has been renamed to "virtuozzo".
  • Orphaned tables - iscsi_targets, volumes - have been removed.
  • The default paste.ini has been updated to use the new v2.1 API for all endpoints, and the v3 endpoint has been removed. A compatibility-mode middleware is used to relax the v2.1 validation for the /v2 and /v1.1 endpoints.
  • The code for DB schema downgrades has now been removed: https://blueprints.launchpad.net/nova/+spec/nova-no-downward-sql-migration


Deprecations

  • The ability to disable in-tree API extensions has been deprecated (https://blueprints.launchpad.net/nova/+spec/nova-api-deprecate-extensions)
  • The novaclient.v1_1 module has been deprecated [6][7] since 2.21.0 and will be removed in the first python-novaclient release in Mitaka.
  • The method novaclient.client.get_client_class has been deprecated [8] since 2.29.0 and will be removed in Mitaka.
  • The mute_weight_value option on weighers has been deprecated, including for use with Cells.
  • The remove_unused_kernels configuration option for the Libvirt driver is now deprecated.
  • The minimum recommended version of vCenter for use with the vcenter driver is now 5.1.0. In Liberty this is logged as a warning, in Mitaka support for versions lower than 5.1.0 will be removed.
  • API v3 specific components have all been deprecated and removed from the default paste.ini

OpenStack Telemetry (Ceilometer)

Key New Features

  • creation of Aodh to handle alarming service
  • metadata caching - reduced load of nova api polling
  • declarative meters
    • ability to generate meters by defining meter definition template
    • ability to define specific SNMP meters to poll
  • ceilometer+gnocchi integration - support for data publishing from Ceilometer to Gnocchi
  • mandatory limit - limit-restricted querying is enforced: a limit must be explicitly provided on queries, else the result set is restricted to a default limit
  • distributed, coordinated notification agents - support for workload partitioning across multiple notification agents
  • Events RBAC support
  • PowerVM hypervisor support
  • improved MongoDB query support - performance improvement to statistic calculations

Gnocchi Features

  • initial influxdb driver implemented

Aodh Features

  • event alarms - ability to trigger action when event is received
  • trust support in alarms

Upgrade Notes

  • The name of some middleware used by ceilometer changed in a backwards-incompatible way. Before upgrading, edit the paste.ini file for ceilometer to change "oslo.middleware" to "oslo_middleware". For example using sed -ri 's/oslo\.middleware/oslo_middleware/' api_paste.ini
  • The notification agent is a core service to collecting data in Ceilometer -- it now handles all transformations and publishing. Polling agents now defer all processing to notification agents, and must be deployed in tandem.
  • A mandatory limit is applied to each request. If no limit is given, the result set is restricted to a default limit.


Deprecations

  • Ceilometer alarming is deprecated in favour of Aodh
  • The RPC publisher and collector are deprecated in favour of the topic-based notifier publisher
  • Non-metric meters remain deprecated and are planned for removal

OpenStack Identity (Keystone)

Key New Features

  • Experimental: Domain-specific configuration options can now be stored in SQL instead of configuration files, using the new REST APIs.
  • Experimental: Keystone now supports tokenless authorization with X.509 SSL client certificate.
  • Configuring per-Identity Provider WebSSO is now supported.
  • openstack_user_domain and openstack_project_domain attributes were added to SAML assertion in order to map user and project domains, respectively.
  • Credentials list call can now have its results filtered by credential type.
  • Support was improved for out-of-tree drivers by defining stable Driver Interfaces.
  • Several features were hardened, including Fernet tokens, Federation, Domain specific configurations from database and Role Assignments.
  • Certain options in keystone.conf now have choices, which determine if the user's setting is valid.

Upgrade Notes

  • The EC2 token middleware, deprecated in Juno, is no longer available in keystone. It's been moved to the keystonemiddleware package.
  • The compute_port configuration option, deprecated in Juno, is no longer available.
  • The XML middleware stub has been removed, so references to it must be removed from the keystone-paste.ini configuration file.
  • stats_monitoring and stats_reporting paste filters have been removed, so references to it must be removed from the keystone-paste.ini configuration file
  • The external authentication plugins ExternalDefault, ExternalDomain, LegacyDefaultDomain, and LegacyDomain, deprecated in Icehouse, are all no longer available.
  • keystone.conf now references entrypoint names for drivers, as such the drivers are now specified like "sql", "ldap", "uuid", etc., rather than the full module path. See the sample configuration file for examples.
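    As an illustration of the change, a driver setting moves from a full module path to an entrypoint name:

```ini
# /etc/keystone/keystone.conf (illustrative)
[identity]
# Pre-Liberty style (full module path):
#driver = keystone.identity.backends.sql.Identity
# Liberty style (entrypoint name):
driver = sql
```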
  • Similarly to the above, we now expose entrypoints for the keystone-manage command instead of a file.
  • Schema downgrades via keystone-manage db_sync are no longer supported, only upgrades are supported.
  • Features that were "extensions" in previous releases (OAuth delegation, Federated Identity support, Endpoint Policy, etc) are now enabled by default.
  • If you're running keystone behind a proxy, check out the new secure_proxy_ssl_header config option
  • Several configuration options have been deprecated, renamed, or moved to new sections. Review your keystone.conf file against the current sample configuration file.
  • Domain name information is now available to be used in policy rules with the attribute domain_name.


Deprecations

  • Running Keystone in Eventlet remains deprecated and will be removed in the Mitaka release
  • Using LDAP as the resource backend, i.e for projects and domains, is now deprecated and will be removed in the Mitaka release
  • Using the full path to the driver class is deprecated in favor of using the entrypoint. In the Mitaka release, the entrypoint must be used.
  • In the [resource] and [role] sections of the keystone.conf file, not specifying the driver and using the assignment driver is deprecated. In the Mitaka release, the resource and role drivers will default to the sql driver.
  • In keystone-paste.ini, using paste.filter_factory is deprecated in favor of the "use" directive, specifying an entrypoint.
  • Not specifying a domain during a create user, group or project call, which relied on falling back to the default domain, is now deprecated and will be removed in the N release.
  • Certain deprecated methods from the assignment manager were removed in favor of the same methods in the [resource] and [role] manager.

OpenStack Block Storage (Cinder)

Key New Features

Upgrade Notes


OpenStack Orchestration (Heat)

New Features


Convergence is a new orchestration engine maturing in the heat tree. In Liberty, the benefits of using the convergence engine are:

  • Greater parallelization of resource actions (for better scaling of large templates)
  • The ability to do a stack-update while there is already an update in-progress
  • Better handling of heat-engine failures (still WIP)

The convergence engine can be enabled by setting convergence_engine=true in the [DEFAULT] section of /etc/heat/heat.conf, then restarting heat-engine. Once this has been done, any subsequently created stack will use the convergence engine, while operations on existing stacks will continue to use the traditional engine.
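The setting described above, sketched:

```ini
# /etc/heat/heat.conf (illustrative)
[DEFAULT]
# Stacks created after restart use the convergence engine.
convergence_engine = true
```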

Convergence has not been production tested and thus should be considered beta quality - use with caution. For the Liberty release, we recommend enabling convergence for the purposes of evaluation and scale testing. We will be considering making convergence the default engine in the Mitaka cycle. Convergence specific bugs are tracked in launchpad with the convergence-bugs tag.

Conditional resource exposure

Only resources actually installed in the cloud services are made available to users. Operators can further control resources available to users with standard policy rules in policy.json on per-resource type basis.

heat_template_version: 2015-10-15

2015-10-15 indicates that the YAML document is a HOT template and it may contain features added and/or removed up until the Liberty release.

  • Removes the Fn::Select function (path based get_attr/get_param references should be used instead).
  • If no <attribute name> is specified for calls to get_attr, a dict of all attributes is returned, e.g. { get_attr: [<resource name>]}.
  • Adds new str_split intrinsic function
  • Adds support for passing multiple lists to the existing list_join function.
  • Adds support for parsing map/list data to str_replace and list_join (they will be json serialized automatically)
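An illustrative template exercising some of the additions above (parameter names and values are hypothetical):

```yaml
heat_template_version: 2015-10-15

parameters:
  hosts:
    type: string
    default: "alpha,beta,gamma"

outputs:
  host_list:
    # str_split turns the comma-separated string into a list
    value: { str_split: [',', { get_param: hosts }] }
  joined:
    # list_join now accepts multiple lists
    value: { list_join: [', ', ['a', 'b'], ['c', 'd']] }
```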

REST API/heatclient additions

  • Stacks can now be assigned with a set of tags, and stack-list can filter and sort through those tags
  • "heat stack-preview ..." will return a preview of changes for a proposed stack-update
  • "heat template-validate --show-nested ..." will also validate all template resources and return nested data useful for building user interfaces
  • "heat resource-type-template --template-type hot ..." generates a template in HOT format
  • "heat resource-type-list" only shows types available to the user, and can filter results by name, version and support_status
  • "heat template-version-list" lists available template versions
  • "heat template-function-list ..." lists available functions for a template version

Enhancements to existing resources

New resources

The following new resources are now distributed with the Heat release:

[1] These existed in Kilo as contrib resources, as they were for non-integrated projects. These resources are now distributed with Heat as Big Tent projects.

[2] These existed in Kilo as contrib resources, as they require a user with an admin role. They are now distributed with Heat. Operators now have the ability to hide them from under-privileged users by modifying policy.json (for reference, OS::Nova::Flavor is hidden from non-admin users in the default policy file supplied).

[3] These existed in Kilo as contrib resources as they used an approach not endorsed by the Heat project. They are now distributed with heat and documented as UNSUPPORTED.

[4] These resources are for projects which are not yet OpenStack Big Tent projects, so are documented as UNSUPPORTED

With the new OS::Keystone::* resources it is now possible for cloud operators to use heat templates to manage Keystone service catalog entries and users.

Deprecated Resource Properties

Many resource properties have previously been documented as DEPRECATED. 15 of these properties are now flagged as HIDDEN, which means they will no longer be documented, but existing stacks and templates will continue to work after a heat upgrade. The Resource Type Reference (http://docs.openstack.org/developer/heat/template_guide/openstack.html) should be consulted to determine available resource properties and attributes.

Upgrade notes

Configuration Changes

Notable changes to the /etc/heat/heat.conf [DEFAULT] section:

  • hidden_stack_tags has been added, and stacks containing these tag names will be hidden from stack-list results (defaults to data-processing-cluster, which hides sahara-created stacks)
  • instance_user was deprecated, and is now removed entirely. Nova servers created with OS::Nova::Server resource will now boot configured with the default user set up with the cloud image. AWS::EC2::Instance still creates "ec2-user"
  • max_resources_per_stack can now be set to -1 to disable enforcement
  • enable_cloud_watch_lite is now false by default as this REST API is deprecated
  • default_software_config_transport has gained the option ZAQAR_MESSAGE
  • default_deployment_signal_transport has gained the option ZAQAR_SIGNAL
  • auth_encryption_key is now documented as requiring exactly 32 characters
  • list_notifier_drivers was deprecated and is now removed
  • policy options have moved to the [oslo_policy] section
  • use_syslog_rfc_format is deprecated and now defaults to true

Notable changes to other sections of heat.conf:

  • [clients_keystone] auth_uri has been added to specify the unversioned keystone url
  • [heat_api] workers now defaults to 4 (was previously 0, which created a worker per host CPU)

The policy file /etc/heat/policy.json can now be configured with per-resource-type access policies, for example:

   "resource_types:OS::Nova::Flavor": "rule:context_is_admin"

Upgrading from Kilo to Liberty

Progress has been made on supporting live sql migrations, however it is still recommended to bring down the heat service for the duration of the upgrade. Downward SQL schema migrations are no longer supported. A rollback to Kilo will require restoring a snapshot of the pre-upgrade database.

OpenStack Data Processing (Sahara)

Key New Features

  • New plugins and versions:
    • Ambari plugin with support for HDP 2.2 / 2.3
    • Apache Hadoop 2.7.1 was added, Apache Hadoop 2.6.0 was deprecated
    • CDH 5.4.0 was added with HA support for NameNode and ResourceManager
    • MapR 5.0.0 was added
    • Spark 1.3.1 was added, Spark 1.0.0 was deprecated
    • HDP 1.3.2 and Apache Hadoop 1.2.1 were removed
  • Added support for using Swift with Spark EDP jobs
  • Added support for Spark EDP jobs in CDH and Ambari plugins
  • Added support for public and protected resources
  • Started integration with OpenStack client
  • Added support for editing all Sahara resources
  • Added automatic Hadoop configuration for clusters
  • The direct engine is now deprecated and will be removed in the Mitaka release
  • Added OpenStack manila nfs shares as a storage backend option for job binaries and data sources.
  • Added support for definition and use of configuration interfaces for EDP job templates

Upgrade Notes

  • None


Deprecations

  • Direct provisioning engine
  • Apache Hadoop 2.6.0
  • Spark 1.0.0
  • All Hadoop 1.X support removed

OpenStack Search (Searchlight)

This is the first release for Searchlight. Searchlight is intended to dramatically improve the search capabilities and performance of various OpenStack cloud services by offloading user search queries. It provides Keystone RBAC based searches across OpenStack services by indexing their data into ElasticSearch and providing a security layer on top of incoming search queries. ElasticSearch is a search server based on Lucene. It provides a distributed, scalable, near real-time, faceted, multitenant-capable, and full-text search engine with a RESTful web interface.

Key New Features

New Resource Types Indexed

Upgrade Notes




OpenStack DNS (Designate)

Key New Features

  • Experimental: Hook Point API
  • Horizon Plugin moved out of tree
  • Purging deleted domains
  • Ceilometer "exists" periodic event per domain
  • Async actions
    • Import
    • Export
  • Active / Passive Failover for designate-pool-manager periodic tasks
  • OpenStack client integration

Additional DNS Server Backends

  • InfoBlox
  • Designate

Upgrade Notes

  • New service designate-zone-manager
    • Recommend using a supported tooz backend
    • We recommend zookeeper, but anything supported by tooz should be OK.
    • If no tooz backend is used, all zone-managers will assume ownership of all zones, and there will be 'n' "exists" messages per hour, where 'n' is the number of zone-manager processes.
  • designate-pool-manager can do active / passive failover for periodic tasks
    • Recommend using a supported tooz backend
    • If no tooz backend is used, all pool-managers will assume ownership of the pool, and multiple periodic tasks will run. This could have unforeseen consequences.


Deprecations

  • V1 API
    • This is an initial notice of intent, as there are still operations that require the designate CLI (which talks to V1) and Horizon panels that only talk to V1

OpenStack Messaging Service (Zaqar)

Key New Features

  • Pre-signed URL - A new REST API endpoint to support pre-signed URL, which provides enough control over the resource being shared, without compromising security.
  • Email Notification - A new task driver for the notification service that can take an email address as the subscriber of a Zaqar subscription. When a new message is posted to the queue, it is delivered to the subscriber's email address.
  • Policy Support - Support fine-grained permission control with the policy.json like most of the other OpenStack components.
  • Persistent Transport - Support websocket to get a persistent transport for Zaqar.

Upgrade Notes


OpenStack Dashboard (Horizon)

Key New Features

  • Plugin improvements -- Horizon autodiscovers JavaScript files for inclusion and now has mechanisms for pluggable SCSS and Django template overrides

Upgrade Notes

OpenStack Trove (DBaaS)

Key New Features

  • Redis
    • Configuration Groups for Redis
    • Cluster support
  • Mongodb
    • Backup and Restore for single instance
    • User and Database management
    • Configuration Groups
  • Percona XtraDB Cluster Server
    • Cluster support
  • Allow deployer to associate instance flavors with specific datastores
  • Horizon support for database clusters
  • Management API for datastore and versions
  • Ability to deploy Trove instances in a single admin tenant so that the nova instances are hidden from users.

Upgrade Notes


OpenStack Bare metal (Ironic)

Please see full release notes here: http://docs.openstack.org/developer/ironic/releasenotes/index.html

New Features

  • Added "ENROLL" hardware state, which is the default state for newly created Nodes.
  • Added "abort" verb, which allows a user to interrupt certain operations while they are in progress.
  • Improved query and filtering support in the REST API.
  • Added support for CORS middleware.

Hardware Drivers

  • Added a new BootInterface for hardware drivers, which splits functionality out of the DeployInterface.
  • iLO virtual media drivers can work without Swift
  • Added Cisco IMC driver
  • Add OCS Driver
  • Add UCS Driver
  • Add Wake-On-Lan Power Driver
  • ipmitool driver supports IPMI v1.5
  • Add support to the SNMP driver for "APC MasterSwitchPlus" series PDUs
  • pxe_ilo driver now supports UEFI Secure Boot (previous releases of the iLO driver only supported this for agent_ilo and iscsi_ilo)
  • Add Virtual Media support to iRMC Driver
  • Add BIOS config to DRAC Driver
  • PXE drivers now support GRUB2


Deprecations

  • The "vendor_passthru" and "driver_vendor_passthru" methods of the DriverInterface have been removed. These were deprecated in Kilo and replaced with the @passthru decorator.
  • The migration tools to import data from a Nova "baremetal" deployment have been removed.
  • Deprecated the 'parallel' option to the periodic task decorator
  • Removed the deprecated 'admin_api' policy rule
  • Support for the original "bash" deploy ramdisk is deprecated and will be removed in two cycles. The ironic-python-agent project should be used for all deploy drivers.

Upgrade Notes

  • Newly created nodes default to the new ENROLL state. Previously, nodes defaulted to AVAILABLE, which could lead to hardware being exposed prematurely to Nova.
  • The addition of API version headers in Kilo means that any client wishing to interact with the Liberty API must pass the appropriate version string in each HTTP request. Current API version is 1.14.