TripleO/TuskarIcehouseRequirements
Overview
For Icehouse, we will create a UI that allows users to deploy an overcloud. This UI will build upon the Horizon framework but be functional without the existing OpenStack Dashboard tabs (Project, Admin). This separation is meant to reduce user confusion, as much of the functionality in those tabs simply does not apply.
Nodes
The UI allows users to manually register nodes with Ironic. Once registered, the UI maintains a listing of nodes, classified as either 'Free' or 'Deployed' (depending on whether a Nova instance is deployed on that node).
The UI will allow free nodes to be unregistered.
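As a sketch, the manual registration the UI performs on the user's behalf corresponds roughly to the Icehouse-era Ironic CLI below. The driver, IPMI credentials, hardware properties, and MAC address are all placeholders; exact driver_info keys depend on the chosen driver.

```shell
# Register a node using the pxe_ipmitool driver (all values are placeholders)
ironic node-create -d pxe_ipmitool \
    -i ipmi_address=192.0.2.10 \
    -i ipmi_username=admin \
    -i ipmi_password=secret \
    -p cpus=2 -p memory_mb=4096 -p local_gb=40 -p cpu_arch=x86_64

# Associate the node's NIC so the deployment network can PXE-boot it
ironic port-create -n <node-uuid> -a 52:54:00:aa:bb:cc

# List registered nodes; a node with no instance attached is 'Free'
ironic node-list
```

Unregistering a free node maps to `ironic node-delete <node-uuid>`.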
Overcloud Roles
Tuskar recognizes four node roles, which are ultimately used within the overcloud Heat template.
- compute
- controller
- object storage
- block storage
These four roles are created when Tuskar is installed; additional roles cannot be created, nor can existing roles be removed.
Image
A role is associated with an image name; the overcloud Heat template will expect an image with that name to exist. Users are expected to create the appropriate images through the Glance CLI.
The image name cannot be modified.
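The expected workflow, sketched below, assumes the user has already built a suitable image (for example with diskimage-builder) and uploads it under the name the overcloud template expects. The image name `overcloud-compute` is an illustrative assumption, not a name mandated by the document.

```shell
# Upload a deployment image under the name the overcloud Heat template
# will look up ('overcloud-compute' is an assumed role image name)
glance image-create --name overcloud-compute \
    --disk-format qcow2 --container-format bare \
    --file overcloud-compute.qcow2
```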
Flavor
A role is associated with a single Nova flavor. When deploying an overcloud, the Nova scheduler uses exact matching with this flavor to determine which nodes to use. This means that the nodes deployed for a given role must be homogeneous.
Tuskar provides its own interface for CRUD operations on flavors.
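Tuskar's flavor interface corresponds to the usual Nova flavor operations. A rough CLI equivalent is shown below; the flavor name and sizes are illustrative, and because scheduling is exact-match, the values must mirror the target hardware precisely.

```shell
# Create a flavor whose specs exactly match the target hardware
# (arguments: name, id, ram_mb, disk_gb, vcpus; values are illustrative)
nova flavor-create baremetal-compute auto 4096 40 2

# Record the CPU architecture so it can participate in matching
nova flavor-key baremetal-compute set cpu_arch=x86_64

# List and delete round out the CRUD operations
nova flavor-list
nova flavor-delete baremetal-compute
```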
Overcloud Deployment
Only a single overcloud is allowed. Before deploying an overcloud, a user must specify:
- the number of nodes per role
- various deployment configuration parameters
Once these are specified, the user can trigger creation of the overcloud. Creation uses the deployment specification to generate an overcloud Heat template, which is then used to create an overcloud Heat stack. During this process, the UI displays a status indicator that monitors the overall state of the deployment.
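Under the hood, this amounts to something like the following. The template filename and parameter names are assumptions for illustration; in practice Tuskar generates the template and supplies the parameters itself.

```shell
# Create the overcloud stack from the generated template
# (template name and parameter names are illustrative)
heat stack-create overcloud -f overcloud.yaml \
    -P "compute_count=2;controller_count=1"

# Poll the overall stack state, which is what the UI status
# indicator reflects (e.g. CREATE_IN_PROGRESS -> CREATE_COMPLETE)
heat stack-list
```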
Post-deployment, the overcloud can be scaled up by increasing the number of nodes in a role. The overcloud cannot be scaled down, nor can the deployment configuration parameters be modified.
Once an overcloud is deployed, users can also view information about the Nova servers deployed for that stack, classified by role. Users can also view a log corresponding to the Heat events relevant to the overcloud stack.
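Scaling up and inspecting the deployment map to Heat and Nova calls along these lines; the parameter name is again an illustrative assumption.

```shell
# Scale up by raising a role's node count (parameter name is
# illustrative); per the constraint above, counts may only increase
heat stack-update overcloud -f overcloud.yaml -P "compute_count=3"

# The Nova servers backing the stack, and the Heat event log
# that the UI surfaces for the overcloud stack
nova list
heat event-list overcloud
```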
Post-Icehouse Requirements
- heterogeneous nodes
During deployment
- status indicator for nodes as well
- status includes 'time left' (**)
- allow multiple deployments (**)
- hardware specs from Ironic based on IPMI address and MAC address (*)
- IPMI address auto populated from Neutron (**)
- Node auto-discovery (*)
Management node
- where undercloud is installed
- created as part of undercloud install process
- allow creation of additional management nodes through UI (**)
Monitoring
- Service monitoring
  - assignment, availability, status
  - capacity, historical statistics (*)
- Node monitoring
  - assignment, availability, status
  - capacity, historical statistics (*)
- Networks (**)
- Images (**)
- Logs (**)
- Archived nodes (**)
- Manual node allocation (**)
Other
- review distribution map (**)
- notification when a deployment is ready, or when its state changes