TripleO/TuskarIcehouseRequirements

Latest revision as of 14:24, 5 February 2014

Overview

For Icehouse, we will create a UI to allow users to deploy an overcloud. This UI will build upon the Horizon framework, but be functional without the existing OpenStack Dashboard tabs (Project, Admin). This separation reduces user confusion, as much of the functionality in those tabs simply does not apply to overcloud deployment.

Nodes

The UI allows users to manually register nodes with Ironic. Once done, the UI maintains a listing of nodes, classified as either 'Free' or 'Deployed' (depending on whether a Nova instance is deployed upon that node).
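The Free/Deployed classification above can be sketched in a few lines. The dict-based node records below are illustrative stand-ins for what Ironic actually returns; the `instance_uuid` field does exist on Ironic nodes and is set once a Nova instance is deployed on the node.

```python
# Minimal sketch of the Free/Deployed node listing described above.
# Node records are plain dicts here; in practice they would come from
# Ironic, where a node's instance_uuid is populated once a Nova
# instance is placed on it.

def classify_nodes(nodes):
    """Split registered nodes into 'Free' and 'Deployed' buckets."""
    listing = {'Free': [], 'Deployed': []}
    for node in nodes:
        state = 'Deployed' if node.get('instance_uuid') else 'Free'
        listing[state].append(node['uuid'])
    return listing

nodes = [
    {'uuid': 'node-1', 'instance_uuid': None},
    {'uuid': 'node-2', 'instance_uuid': 'abc-123'},
]
print(classify_nodes(nodes))
# {'Free': ['node-1'], 'Deployed': ['node-2']}
```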

To preserve historical data, the UI will not allow nodes to be unregistered.

Overcloud Roles

Tuskar recognizes four roles for nodes, ultimately used within the overcloud Heat template.

  • compute
  • controller
  • object storage
  • block storage


These roles are created when Tuskar is installed; additional roles cannot be created afterwards, nor can existing roles be removed.
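Since the role set is fixed at install time, a frozen collection is a natural way to model it. This is an illustrative sketch, not Tuskar's actual data model:

```python
# Sketch of the fixed, immutable role set described above.
# Tuskar's real representation may differ; this only illustrates
# that roles can be neither added nor removed.

OVERCLOUD_ROLES = frozenset({
    'compute',
    'controller',
    'object storage',
    'block storage',
})

def is_valid_role(name):
    """Return True only for one of the four installed roles."""
    return name in OVERCLOUD_ROLES

print(is_valid_role('compute'))      # True
print(is_valid_role('networking'))   # False
```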

Image

A role is associated with an image name; the overcloud Heat template will expect an image with that name to exist. These images are not managed through the Tuskar UI; instead, users are expected to have the appropriate images in Glance.

The image name cannot be modified through the Tuskar UI, as it must match the image name in the Heat template.
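Because the UI does not manage images, a pre-deployment sanity check along these lines may be useful. This is a hedged sketch: the set of Glance image names is passed in directly, whereas in practice it would come from a Glance image-list call.

```python
# Illustrative check that each role's expected image actually exists
# in Glance. The glance_image_names set stands in for the result of a
# real Glance image listing.

def missing_images(role_images, glance_image_names):
    """Return the role -> image pairs whose image is absent from Glance."""
    return {role: image
            for role, image in role_images.items()
            if image not in glance_image_names}

role_images = {'compute': 'overcloud-compute',
               'controller': 'overcloud-control'}
print(missing_images(role_images, {'overcloud-compute'}))
# {'controller': 'overcloud-control'}
```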

Flavor

A role is associated with a single Nova flavor. When deploying an overcloud, the Nova scheduler uses exact matching with this flavor to determine which nodes to use. This means that the nodes deployed for a given role must be homogeneous.
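The exact-match constraint can be illustrated as follows. The spec fields (vcpus, ram in MB, disk in GB) mirror the usual Nova flavor attributes, but the comparison itself is a simplified stand-in for what the scheduler's exact-match filters do:

```python
# Sketch of exact-match scheduling: a node is eligible for a role only
# if its hardware specs equal the role's flavor exactly, which is why
# nodes backing a role must be homogeneous.

def matches_flavor(node, flavor):
    """True if the node's specs exactly equal the flavor's specs."""
    return all(node.get(key) == flavor[key]
               for key in ('vcpus', 'ram', 'disk'))

flavor = {'vcpus': 8, 'ram': 16384, 'disk': 500}
node_a = {'vcpus': 8, 'ram': 16384, 'disk': 500}
node_b = {'vcpus': 8, 'ram': 32768, 'disk': 500}
print(matches_flavor(node_a, flavor))  # True
print(matches_flavor(node_b, flavor))  # False: more RAM is still a mismatch
```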

The Tuskar UI provides its own interface for CRUD operations on flavors.

Overcloud Deployment

Only a single overcloud is allowed. Before deploying an overcloud, a user must specify:

  • the number of nodes per role
  • various deployment configuration parameters


Once done, the user can trigger the creation of the overcloud. Creation is done by using the deployment specifications to generate an overcloud Heat template that is then used to create an overcloud Heat stack. During this process, users will see a status indicator in the UI reflecting the overall state of the deployment.
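The template-generation step might be sketched like this. Real Tuskar generation is more involved; the resource names, the use of `OS::Heat::ResourceGroup` (a real Heat resource type), and the HOT skeleton here are assumptions for illustration only.

```python
# Hedged sketch: turn per-role node counts into a minimal Heat (HOT)
# template skeleton. Resource naming and structure are illustrative,
# not Tuskar's actual generated template.

def build_overcloud_template(role_counts):
    """Build a HOT-style dict with one scaled group per role."""
    resources = {}
    for role, count in role_counts.items():
        name = '%s_group' % role.replace(' ', '_')
        resources[name] = {
            'type': 'OS::Heat::ResourceGroup',
            'properties': {'count': count},
        }
    return {'heat_template_version': '2013-05-23',
            'resources': resources}

template = build_overcloud_template({'compute': 3, 'controller': 1})
print(sorted(template['resources']))
# ['compute_group', 'controller_group']
```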

Post-deployment, the overcloud can be scaled upwards by increasing the number of nodes in a role. The overcloud cannot be scaled downwards, nor can the deployment configuration parameters be modified.
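The scale-up-only rule amounts to a simple validation on an update request; a minimal sketch, assuming counts are tracked per role:

```python
# Sketch of the scale-up-only constraint: an update may only increase
# per-role node counts, never decrease them (and, per the requirements,
# may not modify deployment configuration at all).

def validate_scale(current_counts, new_counts):
    """Raise ValueError if any role's node count would shrink."""
    for role, count in new_counts.items():
        if count < current_counts.get(role, 0):
            raise ValueError('cannot scale %s down' % role)

validate_scale({'compute': 2}, {'compute': 3})   # ok: scaling up
try:
    validate_scale({'compute': 2}, {'compute': 1})
except ValueError as exc:
    print(exc)  # cannot scale compute down
```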

Once an overcloud is deployed, users can view information about the Nova servers deployed for that stack, classified by role. They can also view a log of the Heat events relevant to the overcloud stack.
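The by-role server view reduces to a grouping operation. The `role` key on each server record is an assumption for illustration; in practice the role would be derived from the Heat stack's resource metadata:

```python
# Sketch of classifying the overcloud's Nova servers by role for
# display. Server records are illustrative dicts, not real Nova data.
from collections import defaultdict

def servers_by_role(servers):
    """Group server names under the role each server belongs to."""
    grouped = defaultdict(list)
    for server in servers:
        grouped[server['role']].append(server['name'])
    return dict(grouped)

servers = [{'name': 'oc-ctl-0', 'role': 'controller'},
           {'name': 'oc-cmp-0', 'role': 'compute'},
           {'name': 'oc-cmp-1', 'role': 'compute'}]
print(servers_by_role(servers))
# {'controller': ['oc-ctl-0'], 'compute': ['oc-cmp-0', 'oc-cmp-1']}
```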

Next Steps

Here is a list of items we would like to address once the above is complete:

  • multiple flavors per role ("heterogeneous nodes")
  • auto-discovery of nodes through Ironic
  • image management
  • monitoring capabilities
  • user notifications