TripleO/TuskarIcehouseRequirements

Revision as of 18:42, 4 February 2014

Overview

The goal for Icehouse is to allow users to deploy an Overcloud through Tuskar. The UI is separate from Horizon.

Nodes

The UI allows users to manually register nodes with Ironic. Once registered, the UI maintains a listing of nodes, classified as either 'Free' or 'Deployed', depending on whether or not a Nova instance has been allocated to the node.

The UI will allow free nodes to be unregistered.
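
Node registration and the Free/Deployed listing map onto the Ironic API. The sketch below, using python-ironicclient, only illustrates the kind of calls the UI would make on the user's behalf; the credentials, IPMI details, MAC address, and driver name are placeholders, not values taken from this document.

  from ironicclient import client

  # Authenticate against the undercloud Keystone (placeholder credentials).
  ironic = client.get_client(1,
                             os_username='admin',
                             os_password='secret',
                             os_tenant_name='admin',
                             os_auth_url='http://undercloud.example.com:5000/v2.0')

  # Register a node with its power-management details and hardware specs.
  node = ironic.node.create(driver='pxe_ipmitool',
                            driver_info={'ipmi_address': '10.0.0.10',
                                         'ipmi_username': 'root',
                                         'ipmi_password': 'secret'},
                            properties={'cpus': 8, 'memory_mb': 16384, 'local_gb': 200})

  # Record the node's MAC address so it can be matched at PXE boot time.
  ironic.port.create(node_uuid=node.uuid, address='52:54:00:12:34:56')

  # A node without an instance is 'Free'; one with an instance is 'Deployed'.
  for n in ironic.node.list():
      print(n.uuid, 'Deployed' if n.instance_uuid else 'Free')

  # Free nodes may be unregistered again:
  # ironic.node.delete(node.uuid)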

Overcloud Roles

Tuskar recognizes four roles for nodes:

  • compute
  • controller
  • object storage
  • block storage

Each role is associated with an image name and a single flavor. Roles can be edited to modify the flavor association by selecting from Nova flavors.
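
Purely as an illustration of that association (the names below are hypothetical, not part of Tuskar's actual data model), a role can be thought of as a pairing of an image name with a single flavor:

  # Hypothetical sketch of the role -> image name / flavor association.
  overcloud_roles = {
      'compute':        {'image_name': 'overcloud-compute', 'flavor': 'baremetal'},
      'controller':     {'image_name': 'overcloud-control', 'flavor': 'baremetal'},
      'object storage': {'image_name': 'overcloud-swift-storage', 'flavor': 'baremetal'},
      'block storage':  {'image_name': 'overcloud-cinder-volume', 'flavor': 'baremetal'},
  }

  # Editing a role only swaps its flavor for another existing Nova flavor.
  overcloud_roles['compute']['flavor'] = 'baremetal-compute'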

Flavors

Tuskar provides its own interface for CRUD operations on flavors.
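
Since roles pick their flavor from Nova flavors and deployment relies on exact flavor matching (see Deploying below), flavor CRUD ultimately translates into Nova flavor operations. A minimal sketch with python-novaclient, using made-up credentials and hardware values:

  from novaclient import client

  # Placeholder undercloud credentials.
  nova = client.Client('2', 'admin', 'secret', 'admin',
                       auth_url='http://undercloud.example.com:5000/v2.0')

  # Create a flavor whose specs exactly match a class of registered nodes.
  flavor = nova.flavors.create(name='baremetal', ram=16384, vcpus=8, disk=200)

  # List and delete round out the CRUD operations.
  print([f.name for f in nova.flavors.list()])
  # nova.flavors.delete(flavor.id)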

Images

Users are expected to create images for specific roles through the CLI.
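
The images themselves are built and uploaded outside the UI, for example with diskimage-builder and the Glance CLI. As a rough illustration of the upload step only, here is the equivalent call with python-glanceclient; the endpoint, token, file, and image name are placeholders:

  from glanceclient import Client

  # Placeholder endpoint and token; normally obtained from Keystone.
  glance = Client('1', 'http://undercloud.example.com:9292', token='TOKEN')

  # Upload a pre-built compute image; the image name is what a role refers to.
  with open('overcloud-compute.qcow2', 'rb') as image_data:
      glance.images.create(name='overcloud-compute',
                           disk_format='qcow2',
                           container_format='bare',
                           data=image_data)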

Overcloud Deployment

Only a single deployment is allowed.

Planning

  • nodes per role
  • deployment configuration

Deploying

  • Heat template generated on the fly
  • nova scheduler allocates nodes, using exact flavor matching
  • status indicator to determine overall state of deployment
  • Deployment action can create or update
  • Can scale upwards, but not downwards
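
The create/update behaviour maps onto Heat stack operations. A minimal sketch with python-heatclient, assuming a placeholder endpoint and token, and standing in a dummy string for the template that Tuskar generates on the fly:

  from heatclient.client import Client

  # Placeholder endpoint and token; obtained from the undercloud Keystone in practice.
  heat = Client('1', endpoint='http://undercloud.example.com:8004/v1/TENANT_ID',
                token='TOKEN')

  # Stand-in for the template Tuskar generates on the fly.
  generated_template = '...'

  # Initial deployment: a single Heat stack for the whole overcloud.
  heat.stacks.create(stack_name='overcloud', template=generated_template)

  # Scaling a role up (never down) regenerates the template and updates the stack.
  heat.stacks.update('overcloud', template=generated_template)

  # The overall deployment status is read from the stack status.
  print(heat.stacks.get('overcloud').stack_status)  # e.g. CREATE_COMPLETE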

Maintenance

  • Logs
  • View nodes by role

Post-Icehouse Requirements

Support for heterogeneous nodes.

During deployment

  • status indicator for nodes as well
  • status includes 'time left' (**)
  • allow multiple deployments (**)
  • hardware specs from Ironic based on IPMI address and MAC address (*)
  • IPMI address auto-populated from Neutron (**)
  • Node auto-discovery (*)

Management node

  • where undercloud is installed
  • created as part of undercloud install process
    • allow creation of additional management nodes through UI (**)

Monitoring

  • Service monitoring
    • assignment, availability, status
    • capacity, historical statistics (*)
  • Node monitoring
    • assignment, availability, status
    • capacity, historical statistics (*)
  • Networks (**)
  • Images (**)
  • Logs (**)
  • Archived nodes (**)
  • Manual node allocation (**)

Other

  • review distribution map (**)
  • notification when a deployment is ready to go or whenever something changes