Stack lifecycle plugpoint blueprint
Scope: [Short overview and high level description of what the blueprint is trying to achieve.]
A heat provider may need custom code to examine stack requests before heat performs the operations to create or update a stack. Some applications may also need code to run after operations on a stack complete. This blueprint describes a mechanism whereby providers may easily add pre-operation calls, made from heat to their own code before the stack work is performed, and post-operation calls, made after a stack operation completes or fails.
Use Cases: [Short overview and high level description of what the blueprint is trying to achieve.]
There are at least two primary use cases.
(1) Enabling holistic (whole-pattern) scheduling of the virtual resources in a template instance (stack) prior to creating or deleting them. This would usually include making decisions about where to host virtual resources in the physical infrastructure to satisfy policy requirements. It would also cover failing a stack create or update if the policies included with the request, or other cloud provider policies being checked, could not be satisfied.
As an example, an application owner may require that VMs and the volumes attached to them be deployed on the same rack. As another example, a cloud provider may want to enforce consultation with a license server before deploying an application. As another example, an application owner may require that their VMs be spread across a given number of availability zones.
(2) Enabling checking of policies not related to virtual resource scheduling, with stack create or update failure if the policies would not be satisfied.
As an example, a cloud provider may want to verify that compute resources for certain types of applications are deployed with certain security groups. As another example, a cloud provider may want to be warned when patterns with > 100 VMs are deployed.
Implementation Overview: [Provide an overview of the implementation and any algorithms that will be used]
An ordered registry of python classes which implement pre-operation and/or post-operation methods is required. This could be done (1) through the existing heat plugin loading mechanism that uses plugin_manager.py, with some addition to force a full (or partial) ordering on the classes, or eventually (2) through stevedore, when/if heat switches to stevedore. Pre-operation and post-operation methods should not modify the parameter stack(s); any modification would be considered a bug. A possible exception would be to allow status changes to the stack, to facilitate error handling. [The no-modifications rule could be enforced by passing deep copies to the plug point implementations, but this might be costly.] Both pre-operation and post-operation methods can indicate failure, which would be treated like any other stack failure. On failure, when more than one plug point implementation is registered, the post-op methods would be called for all the classes already processed, to indicate to each plug point implementation that any decisions it made with respect to the stack can be safely undone.
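As an illustrative sketch only (the class name `LifecyclePlugin` and the method names `get_ordinal`, `do_pre_op`, and `do_post_op` are assumptions here, not a settled interface), a base class for lifecycle plugins might look like:

```python
class LifecyclePlugin(object):
    """Sketch of a base class for stack lifecycle plug points.

    All names are illustrative; the final interface would be settled
    during implementation review.
    """

    def get_ordinal(self):
        # Position of this plugin in the ordered registry; plugins with
        # lower ordinals would have their pre-op methods called first.
        return 100

    def do_pre_op(self, context, stack, current_stack=None, action=None):
        # Called before the stack operation is performed. Must not
        # modify `stack`; raising an exception fails the operation.
        pass

    def do_post_op(self, context, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        # Called after the stack operation completes or fails, so the
        # plugin can undo any decisions it made in do_pre_op.
        pass
```

A provider's plugin would subclass this and override the methods it needs; the default implementations do nothing, so a plugin can supply only a pre-op or only a post-op.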
All stack actions would need calls to the pre-operation methods, the post-operation methods, or both. This includes at least create, update, delete, abandon, and adopt. In a basic design, modifications to the Stack class in parser.py are sufficient to add the calls to the pre-operation and post-operation methods found via the lifecycle plugin registry. The post-operation calls would need to be made on both the normal path and all error paths.
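The ordering and failure-unwinding behavior described above could be centralized in a lifecycle plugin executor. The following is a sketch under assumed names (`do_pre_ops`, `get_ordinal`, `do_pre_op`, `do_post_op` are all hypothetical), showing pre-ops run in registry order and post-ops called on the already-processed plugins when one fails:

```python
def do_pre_ops(context, stack, plugins, current_stack=None, action=None):
    """Run each lifecycle plugin's pre-op in registry order (sketch).

    If any pre-op raises, call do_post_op (with is_stack_failure=True)
    on the plugins that already ran, in reverse order, so they can
    safely undo any decisions they made, then re-raise so the stack
    operation fails like any other stack failure.
    """
    done = []
    try:
        for plugin in sorted(plugins, key=lambda p: p.get_ordinal()):
            plugin.do_pre_op(context, stack, current_stack, action)
            done.append(plugin)
    except Exception:
        for plugin in reversed(done):
            plugin.do_post_op(context, stack, current_stack, action,
                              is_stack_failure=True)
        raise
```

The Stack class would call such an executor before starting an action, and a matching post-op pass after the action completes or fails.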
Data Model Changes: [Are you introducing new model classes, or extending existing ones?]
Configuration variables: [List and explanation of the new configuration variables (if they exist)]
Depends on implementation details. The proposed design uses no new configuration variables.
API's: [List and explanation of the new API's (if they exist)]
Plugin Interface: [Does this feature introduce any change?]
Required Plugin support: [What should the plugins do to support this new feature? (If applicable)]
Resource plugins would not be changed. Changes would be limited to the plugin loading process and to the Stack class; additions would include a lifecycle plugin executor, a base class for lifecycle plugins, samples, and tests.
Dependencies: [List of python packages and/or OpenStack components? (If applicable)]
[N/A], unless the cloud provider has a lifecycle plug-point implementation with additional requirements
CLI Requirements: [List of CLI requirements (If applicable)]
Horizon Requirements: [List of Horizon requirements (If applicable)] [N/A]
Usage Example: [How to run/use/interface with the new feature. (If applicable)]
Samples would be provided.
Test Cases: [Description of various test cases. (If applicable)]
The existing plugin manager tests are generic and would serve to test lifecycle plugin loading for an implementation that uses the plugin manager. [TODO] Other tests are needed, and perhaps tempest tests.
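One such additional test could verify that pre-op methods run in ordinal order. This sketch uses hypothetical plugin method names (`get_ordinal`, `do_pre_op`) and a minimal recording stub, since the real interface is not yet fixed:

```python
import unittest


class FakePlugin(object):
    # Minimal recording plugin stub; names are illustrative.
    def __init__(self, ordinal, log):
        self.ordinal, self.log = ordinal, log

    def get_ordinal(self):
        return self.ordinal

    def do_pre_op(self, context, stack):
        self.log.append(('pre', self.ordinal))


class LifecycleOrderingTest(unittest.TestCase):
    def test_pre_ops_run_in_ordinal_order(self):
        log = []
        plugins = [FakePlugin(20, log), FakePlugin(10, log)]
        # Pre-ops should fire lowest ordinal first, regardless of
        # the order in which the plugins were registered.
        for p in sorted(plugins, key=lambda p: p.get_ordinal()):
            p.do_pre_op(None, None)
        self.assertEqual([('pre', 10), ('pre', 20)], log)
```

A companion test would check that post-ops are called, in reverse order, for already-processed plugins when a later pre-op fails.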