TripleO/TuskarIcehouseRequirements

Overview
The goal for Icehouse is to allow users to deploy an Overcloud through Tuskar. The Tuskar UI is delivered separately from Horizon.
Nodes
The UI allows users to manually register nodes with Ironic. Once registered, the UI maintains a listing of nodes, classified as either 'Free' or 'Deployed' depending on whether or not the node is allocated to a Nova instance.
The UI will allow free nodes to be unregistered.
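For illustration only, a minimal sketch of this registration and classification using the Icehouse-era python-ironicclient; the credentials, endpoint, and IPMI details are hypothetical placeholders, and the 'Free'/'Deployed' split simply checks whether Ironic reports an instance_uuid for the node:

  # Sketch: register a node with Ironic and classify registered nodes.
  # All credentials and hardware details below are placeholders.
  from ironicclient import client

  ironic = client.get_client(1,
                             os_username='admin',
                             os_password='secret',
                             os_tenant_name='admin',
                             os_auth_url='http://undercloud:5000/v2.0')

  # Manual registration, as the UI would do on the user's behalf.
  ironic.node.create(driver='pxe_ipmitool',
                     driver_info={'ipmi_address': '10.0.0.10',
                                  'ipmi_username': 'root',
                                  'ipmi_password': 'calvin'},
                     properties={'cpus': 8,
                                 'memory_mb': 16384,
                                 'local_gb': 500})

  # A node is 'Deployed' when it backs a Nova instance, i.e. when
  # Ironic reports an instance_uuid for it; otherwise it is 'Free'.
  for n in ironic.node.list():
      state = 'Deployed' if n.instance_uuid else 'Free'
      print('%s %s' % (n.uuid, state))

  # Only free nodes may be unregistered.
  free = [n for n in ironic.node.list() if not n.instance_uuid]
  if free:
      ironic.node.delete(free[0].uuid)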
Overcloud Roles
Tuskar recognizes four roles for nodes:
- compute
- controller
- object storage
- block storage
Each role is associated with an image name and a single flavor. Roles can be edited to modify the flavor association by selecting from Nova flavors.
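As a rough data-model sketch (the names below are hypothetical examples, not Tuskar's actual schema), the association amounts to a role-to-(image, flavor) mapping:

  # Sketch: each role carries an image name and exactly one flavor.
  ROLES = {
      'compute':        {'image': 'overcloud-compute', 'flavor': 'baremetal-compute'},
      'controller':     {'image': 'overcloud-control', 'flavor': 'baremetal-control'},
      'object storage': {'image': 'overcloud-swift',   'flavor': 'baremetal-storage'},
      'block storage':  {'image': 'overcloud-cinder',  'flavor': 'baremetal-storage'},
  }

  def set_role_flavor(role, flavor_name):
      """Edit a role by pointing it at a different Nova flavor."""
      ROLES[role]['flavor'] = flavor_name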
Flavors
Tuskar provides its own interface for CRUD operations on flavors.
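These map onto ordinary Nova flavors, so a sketch of the underlying CRUD with the Icehouse-era python-novaclient looks like the following; the credentials and sizes are placeholders:

  # Sketch: flavor CRUD via the Nova API, which Tuskar's flavor
  # interface fronts. Credentials and values are placeholders.
  from novaclient.v1_1 import client

  nova = client.Client('admin', 'secret', 'admin',
                       'http://undercloud:5000/v2.0')

  # Create: with exact-match scheduling, these numbers should mirror
  # the hardware properties registered with Ironic.
  flavor = nova.flavors.create('baremetal-compute',
                               ram=16384, vcpus=8, disk=500)

  # Read; note Nova flavors are immutable, so an 'update' is
  # effectively delete-and-recreate.
  print([f.name for f in nova.flavors.list()])

  # Delete.
  nova.flavors.delete(flavor)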
Images
Users are expected to create images for specific roles through the CLI.
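As one possible workflow (a sketch, not a prescribed procedure): role images are typically built with diskimage-builder and then uploaded so a role can reference them by name. A minimal upload via the Glance client, with a placeholder endpoint, token, and file name:

  # Sketch: upload a pre-built role image to Glance so it can be
  # referenced by name from a role. Endpoint/token/file are placeholders.
  from glanceclient import Client

  glance = Client('1', endpoint='http://undercloud:9292', token='TOKEN')

  with open('overcloud-compute.qcow2', 'rb') as image_data:
      glance.images.create(name='overcloud-compute',
                           disk_format='qcow2',
                           container_format='bare',
                           data=image_data)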
Overcloud Deployment
Only a single deployment is allowed.
Planning
- nodes per role
- deployment configuration
Deploying
- Heat template generated on the fly
- Nova scheduler allocates nodes using exact flavor matching
- status indicator to determine the overall state of the deployment
- Deployment action can create a new deployment or update an existing one (see the sketch after this list)
- Can scale upwards, but not downwards
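A sketch of the create-or-update flow with the Icehouse-era python-heatclient, assuming a hypothetical generate_template() helper that stands in for the on-the-fly template generation from the plan (nodes per role plus deployment configuration); the endpoint, token, and parameter names are placeholders, and scaling down is rejected up front:

  # Sketch: create the overcloud stack if absent, otherwise update it.
  # generate_template() is a hypothetical helper; endpoint/token and
  # the '<role>_count' parameter naming are placeholders.
  from heatclient.client import Client
  from heatclient import exc

  heat = Client('1', 'http://undercloud:8004/v1/TENANT_ID', token='TOKEN')

  STACK = 'overcloud'
  counts = {'compute': 4, 'controller': 1}      # nodes per role

  template, params = generate_template(counts)  # hypothetical helper

  try:
      stack = heat.stacks.get(STACK)
  except exc.HTTPNotFound:
      heat.stacks.create(stack_name=STACK, template=template,
                         parameters=params)
  else:
      # Only scaling upwards is supported: reject any shrink.
      old = stack.parameters  # previous role counts (sketch)
      for role, n in counts.items():
          if n < int(old.get('%s_count' % role, 0)):
              raise ValueError('scaling down is not supported')
      heat.stacks.update(stack.id, template=template, parameters=params)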
Maintenance
- Logs
- View nodes by role
Post-Icehouse Requirements
Support for heterogeneous nodes.
During deployment:
- status indicator for nodes as well
- status includes 'time left' (**)
- allow multiple deployments (**)
- hardware specs from Ironic based on IPMI address and MAC address (*)
- IPMI address auto-populated from Neutron (**)
- Node auto-discovery (*)
Management node
- where the undercloud is installed
- created as part of the undercloud install process
- allow creation of additional management nodes through the UI (**)
Monitoring
- Service monitoring
  - assignment, availability, status
  - capacity, historical statistics (*)
- Node monitoring
  - assignment, availability, status
  - capacity, historical statistics (*)
- Networks (**)
- Images (**)
- Logs (**)
- Archived nodes (**)
- Manual node allocation (**)
Other
- review distribution map (**)
- notification when a deployment is ready or whenever its state changes