TripleO/TuskarIcehouseRequirements

Overview

For Icehouse, we will create a UI to allow users to deploy an Overcloud. This UI will build upon the Horizon framework, but will be functional without the existing OpenStack Dashboard tabs (Project, Admin). This separation is meant to reduce user confusion, as much of the functionality in those tabs simply does not apply.
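
As a rough sketch of how that separation might be configured, assuming the Tuskar UI ships as a 'tuskar_ui' Django app and uses Horizon's pluggable "enabled" configuration files; the file names and dashboard slugs below are illustrative assumptions, not settled decisions:

  # openstack_dashboard/local/enabled/_50_tuskar.py -- register the Tuskar dashboard
  DASHBOARD = 'infrastructure'
  ADD_INSTALLED_APPS = ['tuskar_ui.infrastructure']
  DEFAULT = True

  # openstack_dashboard/local/enabled/_60_disable_project.py -- hide the Project tab
  DASHBOARD = 'project'
  DISABLED = True

  # openstack_dashboard/local/enabled/_70_disable_admin.py -- hide the Admin tab
  DASHBOARD = 'admin'
  DISABLED = True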

Nodes

The UI allows users to manually register nodes with Ironic. Once nodes are registered, the UI maintains a listing of them, classified as either 'Free' or 'Deployed' depending on whether a Nova instance is deployed on the node.

The UI will allow free nodes to be unregistered.
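
As an illustration of the registration and Free/Deployed classification outside the UI, here is a minimal python-ironicclient sketch; the driver name, credentials, addresses, and hardware properties are placeholder assumptions:

  # Sketch: register a node with Ironic, then classify nodes as Free or Deployed.
  from ironicclient import client as ironic_client

  ironic = ironic_client.get_client(
      1,                                    # Ironic API version
      os_username='admin',
      os_password='secret',
      os_tenant_name='admin',
      os_auth_url='http://undercloud.example.com:5000/v2.0',
  )

  # Register the node with its power-management (IPMI) details.
  node = ironic.node.create(
      driver='pxe_ipmitool',
      driver_info={
          'ipmi_address': '192.0.2.10',
          'ipmi_username': 'root',
          'ipmi_password': 'secret',
      },
      properties={'cpus': 8, 'memory_mb': 16384, 'local_gb': 500},
  )

  # Register the node's NIC so it can be provisioned over the network.
  ironic.port.create(node_uuid=node.uuid, address='52:54:00:aa:bb:cc')

  # A node counts as 'Deployed' when a Nova instance sits on it, otherwise 'Free'.
  for n in ironic.node.list():
      print(n.uuid, 'Deployed' if n.instance_uuid else 'Free')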

Overcloud Roles

Tuskar recognizes four roles for nodes, ultimately used within the overcloud Heat template.

  • compute
  • controller
  • object storage
  • block storage

These roles are created when Tuskar is installed; additional roles cannot be created, and existing roles cannot be removed.

Each role is associated with an image name and a single flavor. This image name cannot be modified; however, the flavor association can be.

Flavors

Tuskar provides its own interface for CRUD operations on flavors.
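
Since deployment relies on exact flavor matching by the Nova scheduler, these flavors presumably correspond to Nova flavors on the undercloud. As a rough sketch of the CRUD operations behind Tuskar's interface (an assumption; the endpoint, credentials, and values are placeholders):

  # Sketch: flavor CRUD against the undercloud Nova API.
  from novaclient.v1_1 import client as nova_client

  nova = nova_client.Client('admin', 'secret', 'admin',
                            'http://undercloud.example.com:5000/v2.0')

  # Create: one flavor describing a hardware class of overcloud nodes.
  flavor = nova.flavors.create(name='baremetal-compute',
                               ram=16384, vcpus=8, disk=500)

  # Read: list existing flavors.
  for f in nova.flavors.list():
      print(f.name, f.ram, f.vcpus, f.disk)

  # Delete.
  nova.flavors.delete(flavor)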

Images

Users are expected to create images for specific roles through the CLI.
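
A minimal sketch of the equivalent upload using python-glanceclient (roughly what the glance image-create CLI command does); the endpoint, token, file name, and image name are assumptions, with the image name presumed to match the role's associated image name:

  # Sketch: upload a pre-built overcloud image for a role.
  from glanceclient import Client

  glance = Client('1', 'http://undercloud.example.com:9292', token='AUTH_TOKEN')

  with open('overcloud-compute.qcow2', 'rb') as image_file:
      glance.images.create(
          name='overcloud-compute',      # assumed to match the role's image name
          disk_format='qcow2',
          container_format='bare',
          data=image_file,
      )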

Overcloud Deployment

Only a single deployment is allowed.

Planning

  • number of nodes per role
  • deployment configuration

Deploying

  • Heat template generated on the fly (see the sketch after this list)
  • the Nova scheduler allocates nodes, using exact flavor matching
  • status indicator shows the overall state of the deployment
  • the deployment action can either create or update the overcloud
  • can scale upwards, but not downwards
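
A minimal sketch of how a generated template might be handed to Heat with python-heatclient; the endpoint, token, template file, and count parameter names are illustrative assumptions, not the actual template Tuskar generates:

  # Sketch: create the overcloud stack, then scale it up via a stack update.
  from heatclient.client import Client

  heat = Client('1', endpoint='http://undercloud.example.com:8004/v1/TENANT_ID',
                token='AUTH_TOKEN')

  template = open('overcloud.yaml').read()   # stand-in for the generated template

  # Initial deployment: one controller, two compute nodes.
  heat.stacks.create(stack_name='overcloud',
                     template=template,
                     parameters={'ControllerCount': 1, 'ComputeCount': 2})

  # Scaling upwards is a stack update with a larger count; scaling down is not supported.
  heat.stacks.update('overcloud',
                     template=template,
                     parameters={'ControllerCount': 1, 'ComputeCount': 4})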

Maintenance

  • Logs
  • View nodes by role

Post-Icehouse Requirements

  • heterogeneous nodes

During deployment

  • status indicator for individual nodes as well
  • status includes 'time left' (**)
  • allow multiple deployments (**)
  • hardware specs from Ironic based on IPMI address and MAC address (*)
  • IPMI address auto-populated from Neutron (**)
  • node auto-discovery (*)

Management node

  • where the undercloud is installed
  • created as part of the undercloud install process
    • allow creation of additional management nodes through the UI (**)

Monitoring

  • Service monitoring
    • assignment, availability, status
    • capacity, historical statistics (*)
  • Node monitoring
    • assignment, availability, status
    • capacity, historical statistics (*)
  • Networks (**)
  • Images (**)
  • Logs (**)
  • Archived nodes (**)
  • Manual node allocation (**)

Other

  • review distribution map (**)
  • notification when a deployment is ready to go or whenever something changes