TripleO/TuskarJunoPlanning
Overview
This planning document is born from discussions at the TripleO mid-cycle Icehouse meetup in Sunnyvale (TripleO/TuskarJunoInitialDiscussion). The ideas raised there were then individually fleshed out and detailed, with much input coming from conversations with other OpenStack projects.
Our principal concerns during the Juno cycle are:
- integrating further with other OpenStack services, using their capabilities to enhance our TripleO management experience
- ensuring that Tuskar does not try to implement functionality that is better located in other projects
This document details our high-level goals for Juno. It does so at multiple levels; for each we provide:
- a description of the goal
- an explanation of the various OpenStack interactions needed
- a list of project requirements and/or blueprints needed to achieve those interactions
Overcloud Planning Storage
In Icehouse, the planning stage of overcloud deployment is represented by data stored in Tuskar database tables. For Juno, we would like to remove the database from Tuskar. Instead, the planning stage of a deployment will be represented by the full Heat template that would be used to deploy it. Since Heat does not intend to be a template store, this template will be stored in Swift instead.
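As a sketch of this flow, the helpers below save and retrieve a plan's master template through a Swift connection (e.g. a `swiftclient.client.Connection` with its `put_container`/`put_object`/`get_object` calls); the container name and object layout are illustrative assumptions, not settled Tuskar conventions:

```python
# Sketch: persist the deployment plan's Heat template in Swift rather than
# the Tuskar database.  PLAN_CONTAINER and the object naming scheme are
# assumptions for illustration.

PLAN_CONTAINER = "tuskar-plans"  # assumed container name


def plan_object_name(plan_name):
    """Object name under which a plan's master template is stored."""
    return "%s/plan.yaml" % plan_name


def save_plan(conn, plan_name, template_body):
    """Persist the generated Heat template for a plan."""
    conn.put_container(PLAN_CONTAINER)  # idempotent create
    conn.put_object(PLAN_CONTAINER, plan_object_name(plan_name),
                    contents=template_body,
                    content_type="application/x-yaml")


def load_plan(conn, plan_name):
    """Retrieve the stored template; Heat only sees it at deploy time."""
    _headers, body = conn.get_object(PLAN_CONTAINER,
                                     plan_object_name(plan_name))
    return body

# Usage (requires real credentials):
# conn = swiftclient.client.Connection(authurl=..., user=..., key=...)
# save_plan(conn, "overcloud", template_yaml)
```

Keeping the connection object as a parameter keeps the sketch testable and leaves authentication details to the caller.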
Requirements
Tuskar
- rebuild Tuskar to save and retrieve Heat templates from Swift
- update CLI as necessary
Cloud Service Representation
Requirements
Tuskar-UI
- update deployment workflow to accommodate cloud services
Heat Template Generation
Requirements
TripleO
- update Heat templates in TripleO for: HOT, provider resources, software config
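As a rough illustration of the generation step, the sketch below assembles a HOT template in which each role is deployed as an `OS::Heat::ResourceGroup` of provider resources mapped through the environment's `resource_registry`; the `Tuskar::*` type aliases and per-role template filenames are assumptions for illustration:

```python
# Sketch: generate a HOT template plus environment for a set of roles, using
# provider resources.  Role names, type aliases, and filenames are invented.
import json


def build_plan_template(roles):
    """Return (template, environment) dicts for the given {role: count} map."""
    resources = {}
    registry = {}
    for role, count in roles.items():
        type_name = "Tuskar::%s" % role                   # assumed alias
        registry[type_name] = "%s.yaml" % role.lower()    # provider template
        resources[role] = {
            "type": "OS::Heat::ResourceGroup",
            "properties": {
                "count": count,
                "resource_def": {"type": type_name},
            },
        }
    template = {
        "heat_template_version": "2013-05-23",
        "description": "Generated overcloud deployment plan",
        "resources": resources,
    }
    environment = {"resource_registry": registry}
    return template, environment


tmpl, env = build_plan_template({"Controller": 1, "Compute": 3})
print(json.dumps(tmpl, indent=2))  # JSON is a YAML subset, so Heat accepts it
```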
High Availability
Requirements
TripleO
- Deploy HA Overcloud
- glusterfs
- pacemaker, corosync
- neutron (?)
- heat-engine A/A
- qpid proton (assuming AMQP 1.0 support has merged into oslo.messaging and oslo.messaging has merged into each core project; if not, we will use rabbitmq)
- etc.
Tuskar-UI
- deployment workflow support for HA architecture
Node Management (Ironic)
Requirements
Ironic
- Ironic graduation
- CI jobs
- Nova driver
- Serial console
- Migration path
- User documentation
- Autodiscovery of nodes
- Ceilometer
- Tagging
- tag nodes for nova scheduler
- scalability
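To sketch how tagging could feed the Nova scheduler, the snippet below matches a flavor's `capabilities:*` extra specs against a node's capabilities string, in the spirit of Nova's ComputeCapabilitiesFilter; this is a simplified stand-in for illustration, not Nova's or Ironic's actual code:

```python
# Sketch: an Ironic node carries a capabilities string in its properties, and
# a Nova flavor's extra_specs must match for the scheduler to place on it.
# The matching below is a simplified stand-in, not Nova's implementation.

def parse_capabilities(caps):
    """'profile:control,boot_option:local' -> {'profile': 'control', ...}"""
    pairs = (item.split(":", 1) for item in caps.split(",") if ":" in item)
    return {k.strip(): v.strip() for k, v in pairs}


def flavor_matches_node(extra_specs, node_properties):
    """True if every capabilities:* extra spec is satisfied by the node."""
    caps = parse_capabilities(node_properties.get("capabilities", ""))
    for key, wanted in extra_specs.items():
        if not key.startswith("capabilities:"):
            continue  # ignore non-capability specs in this sketch
        if caps.get(key[len("capabilities:"):]) != wanted:
            return False
    return True


node = {"capabilities": "profile:control,boot_option:local"}
flavor = {"capabilities:profile": "control"}
print(flavor_matches_node(flavor, node))  # True
```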
Metric Graphs
Requirements
Ceilometer
- Combine samples for different meters in a transformer to produce a single derived meter
- Rollup of coarse-grained statistics for UI queries
- Configurable data retention based on rollups
- Overarching "health" metric for nodes
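As an illustration of such a derived meter, the sketch below combines normalized samples from several meters into a single weighted "health" score; the formula, meter names, and weights are invented for illustration and this is not a Ceilometer API:

```python
# Sketch: combine samples from different meters into one derived "health"
# meter, in the spirit of a Ceilometer transformer.  Weights and the formula
# are invented assumptions.

def derive_health(samples, weights):
    """samples: {meter_name: value normalized to 0..1}; returns 0..1 score."""
    total = sum(weights.values())
    return sum(weights[m] * samples.get(m, 0.0) for m in weights) / total


samples = {"cpu_util": 0.4, "disk.usage": 0.7, "memory.usage": 0.5}
weights = {"cpu_util": 2.0, "disk.usage": 1.0, "memory.usage": 1.0}
print(round(derive_health(samples, weights), 3))  # 0.5
```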
To ensure scalability:
- Eliminate central agent SPoF
- SNMP batch mode, one bulk command per node per polling cycle
For additional data:
- Acquire hardware-oriented metrics via IPMI (e.g., voltage, fan speeds, etc.)
- Using Keystone v3 would avoid storing IPMI credentials, allowing pollster-style interaction
- Look into consistent hashing and whether it can be reused in Ceilometer, though it requires a stateful DB
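A minimal consistent hash ring, sketched below, shows how polling targets could be spread across several central agents to remove the SPoF; the replica count and hashing scheme are illustrative assumptions, not the design Ceilometer would necessarily adopt:

```python
# Sketch: consistent hash ring assigning polling targets to central agents.
# Virtual replicas smooth the distribution; adding or removing an agent only
# remaps the targets adjacent to it on the ring.
import bisect
import hashlib


class HashRing:
    def __init__(self, agents, replicas=100):
        self._ring = []  # sorted list of (hash, agent)
        for agent in agents:
            for i in range(replicas):
                self._ring.append((self._hash("%s-%d" % (agent, i)), agent))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def agent_for(self, resource_id):
        """Agent responsible for polling this resource."""
        idx = bisect.bisect(self._keys, self._hash(resource_id))
        return self._ring[idx % len(self._ring)][1]


ring = HashRing(["agent-1", "agent-2", "agent-3"])
print(ring.agent_for("node-42"))  # deterministic assignment
```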
Tuskar-UI
- Add Ceilometer-based graphs
User Interfaces
Requirements
Tuskar-CLI
- create a tuskar-cli plugin for OpenStackClient
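OpenStackClient discovers plugins through setuptools entry points, so a Tuskar plugin would register along the lines of the setup.cfg fragment below; the module paths, API name, and command names are hypothetical, since the command set is not yet decided:

```ini
# Hypothetical setup.cfg fragment for a Tuskar OpenStackClient plugin.
[entry_points]
openstack.cli.extension =
    management = tuskarclient.osc.plugin

openstack.management.v1 =
    plan_create = tuskarclient.osc.v1.plan:CreatePlan
    plan_list = tuskarclient.osc.v1.plan:ListPlan
```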
Tuskar-UI
- increase the modularity of views
- create a mechanism for asynchronous communication