= Nova Cells v2 =

=== Bugs ===

** Original regression is reverted: https://review.openstack.org/#/c/457097/
 
** TODO(dansmith) to provide a proper fix

=== Blueprints ===
These are all currently targeted for the Pike release.

* https://blueprints.launchpad.net/nova/+spec/discover-hosts-faster
* https://blueprints.launchpad.net/nova/+spec/cells-aware-api
* https://blueprints.launchpad.net/nova/+spec/cells-count-resources-to-check-quota-in-api
* https://blueprints.launchpad.net/nova/+spec/list-instances-using-searchlight
* https://blueprints.launchpad.net/nova/+spec/service-hyper-uuid-in-api
* https://blueprints.launchpad.net/nova/+spec/convert-consoles-to-objects
  
 
=== TODOs ===
  
* The deployment/upgrade process needs to be documented in more than just the release notes.
 
** dansmith has a start on the docs here: https://review.openstack.org/#/c/420198/ (merged)
 
** (diana will take this) On a side note, we should also have man pages for the cell_v2 commands because there is confusion around the inputs and outputs and how return codes should be treated, i.e. is 1 an error or not? Put the CLI docs here: http://docs.openstack.org/developer/nova/man/nova-manage.html
 
*** Reviews for those docs: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:man
 
*** Let me (diana_clarke) know if you want them changed in any way.
 
** alaski's older docs patch (which is probably out of date now but might be useful) is here: https://review.openstack.org/#/c/267153/
 
** Summary of current commands:
 
*** map_cell0: creates a cell mapping for cell0
 
*** simple_cell_setup: creates a cell mapping for cell0, creates a cell mapping for a new cell, and associates hosts with it (requires unmapped compute hosts to be registered already). Intended as a lightweight way for non-cells-v1 users to set up cells v2 during an upgrade.
 
*** map_cell_and_hosts: creates a cell mapping and associates hosts with it (requires unmapped compute hosts registered already)
 
*** discover_hosts: associates unmapped hosts with an existing cell mapping (or all cell mappings if a specific cell isn't specified)
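To make the relationship between these commands concrete, here is a toy in-memory sketch (purely illustrative: `CellState` and its methods are invented names for this page, not nova's actual API; the real implementation works against the API database via nova-manage):

```python
# Toy in-memory model of the cell_v2 mapping workflow summarized above.
# All names are illustrative, not nova's real code.

CELL0_NAME = "cell0"

class CellState:
    def __init__(self):
        # cell name -> list of compute hosts mapped into that cell
        self.cell_mappings = {}
        # compute hosts that have registered (created compute_nodes records)
        self.registered_hosts = set()

    def map_cell0(self):
        # cell0 holds instances that failed scheduling; it never has hosts.
        self.cell_mappings.setdefault(CELL0_NAME, [])

    def create_cell(self, name):
        # An empty cell mapping is valid; hosts can be discovered later.
        self.cell_mappings.setdefault(name, [])

    def discover_hosts(self, cell=None):
        # Associate unmapped-but-registered hosts with a cell mapping,
        # defaulting to the first non-cell0 cell if none is specified.
        mapped = {h for hosts in self.cell_mappings.values() for h in hosts}
        unmapped = sorted(self.registered_hosts - mapped)
        target = cell or next(n for n in self.cell_mappings if n != CELL0_NAME)
        self.cell_mappings[target].extend(unmapped)
        return unmapped

# Rough equivalent of: map_cell0, then create a cell, then discover_hosts
# once computes have registered.
state = CellState()
state.map_cell0()
state.create_cell("cell1")
state.registered_hosts.update({"compute1", "compute2"})
newly_mapped = state.discover_hosts()
```

Note that in this model, as with the real commands, running discover_hosts a second time finds nothing new: hosts already associated with a cell mapping are skipped.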
 
* Integrate the 'nova-status upgrade check' CLI into the CI/QA system (grenade).
 
* mriedem is working on setting up multiple cells in the multinode job: https://review.openstack.org/#/c/420976/
 
** That's probably going to cause issues for the live migration tests, since we can't migrate between cells and we only have two computes in that job, and with that change each compute is in its own cell.
 
** sdague also pointed out some design issues in the dependent change, which mean we need to call back into devstack from devstack-gate at the end of the deploy to run discover_hosts.
 
* Release notes for Ocata
 
** We'll need a release note about whether or not multiple cells are supported and if they are, the limitation with a lack of instance sorting with multiple cells: https://review.openstack.org/#/c/396775/42/nova/compute/api.py
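The sorting limitation arises because each cell database can only sort its own instances. As a hedged sketch of the generic technique (illustrative only, not what nova actually does; `merge_cell_listings` is an invented name), already-sorted per-cell results could be combined with a heap merge:

```python
import heapq

# Illustrative only: merge already-sorted per-cell instance lists into one
# globally sorted listing. Function and variable names are invented here.
def merge_cell_listings(per_cell_results, key):
    return list(heapq.merge(*per_cell_results, key=key))

# Each cell returns its instances sorted by the same key.
cell1 = [{"name": "a"}, {"name": "c"}]
cell2 = [{"name": "b"}, {"name": "d"}]
merged = merge_cell_listings([cell1, cell2], key=lambda inst: inst["name"])
```

This only works when every cell sorts by the same key, which is part of why cross-cell sorting is a limitation worth calling out in the release note.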
 
 
* Older tracking etherpads (these may be out of date):
** https://etherpad.openstack.org/p/cellsV2-remaining-work-items
  
 
=== Open Questions ===
* How will we handle multiple cells where each cell has its own independent ceph cluster? (brought up in #openstack-nova by mnaser)
** If glance has its own ceph cluster where it stores images and each cell has its own ceph cluster, then each instance create will require a download of the image from glance, since the glance ceph cluster can't be reached by any cell. How can we handle the inefficiency?
*** Idea from mnaser: could we cache images in the imagebackend (instead of on the hypervisor disk) so that each cell gets a copy of the image and can re-use it instead of re-downloading from glance every time?
*** Workaround from mnaser: store images multiple times in glance (glance supports multiple image locations), once per cell ceph cluster, and nova could try locations until it finds an image whose ceph cluster it can access.
**** (melwitt): I'm not sure how that would work in glance: how would it access multiple ceph clusters and track separate credentials per ceph cluster?
**** (mnaser): Glance exposes a 'locations' attribute in the API, which is a list of locations. In the [https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L922-L925 clone] function for the RBD image driver, Nova checks whether it can clone from a location using `is_cloneable()`. You can see from the [https://github.com/openstack/nova/blob/master/nova/virt/libvirt/storage/rbd_utils.py#L199-L225 is_cloneable] code that one of the checks is whether the fsid of the Ceph cluster at that location matches the one Nova connects to. FSIDs are supposed to be globally unique, so it would return false for a foreign cluster. I am assuming it keeps looping until it hits the location whose fsid matches the ceph cluster inside the cell!
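A hedged sketch of the location-selection idea described above (the helper names here are invented for illustration; nova's real checks live in `is_cloneable()` in rbd_utils.py, linked above, and also verify snapshot state and image format):

```python
from urllib.parse import urlparse

# Illustrative sketch, not nova's actual code. A glance RBD location URL
# looks like: rbd://<fsid>/<pool>/<image>/<snapshot>

def parse_rbd_url(url):
    parsed = urlparse(url)
    fsid = parsed.netloc
    pool, image, snap = parsed.path.lstrip("/").split("/")
    return fsid, pool, image, snap

def pick_cloneable_location(locations, local_fsid):
    # Mimics looping over glance locations until one is cloneable: only a
    # location in the same ceph cluster (matching fsid) can be cloned.
    for url in locations:
        fsid, _pool, _image, _snap = parse_rbd_url(url)
        if fsid == local_fsid:
            return url
    return None  # no match: fall back to downloading the image from glance

locations = [
    "rbd://cluster-a-fsid/images/img-uuid/snap",
    "rbd://cluster-b-fsid/images/img-uuid/snap",
]
match = pick_cloneable_location(locations, "cluster-b-fsid")
```

Under this scheme, a cell whose ceph fsid matches none of the stored locations would transparently degrade to the slow path of downloading from glance.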
  
 
* Should the computes self-register with a cell when the compute_nodes record is created from the ResourceTracker? https://review.openstack.org/#/c/369634/
** How would the computes know which cell to map to? We could add something to the model to flag a 'default' or 'staging' cell mapping, or put something into nova.conf on the compute node.
** If we auto-register into a default/staging cell, how do we move hosts to other cells? nova-manage CLI?
* Why can't we create an empty cell, i.e. a cell mapping with no computes? This is a fresh-install scenario.
** We have an option to auto-map hosts from the scheduler since Ocata, with improvements being made in Pike: https://blueprints.launchpad.net/nova/+spec/discover-hosts-faster
** Note that the nova-status upgrade check command does not consider it a failure if there are cell mappings but no compute nodes yet, but simple_cell_setup does consider that a failure; see bug 1656276.
 
** There has been a review up for this for awhile: https://review.openstack.org/#/c/332713/
 
*** This way, a fresh install would do something like 'nova-manage cell_v2 map_cell0' and 'nova-manage cell_v2 create_cell', and then once compute hosts are available, the operator runs 'nova-manage cell_v2 discover_hosts'.
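The asymmetry noted above between simple_cell_setup and the nova-status upgrade check can be sketched as two toy predicates (illustrative only, invented names, not nova's actual code):

```python
# Illustrative sketch of the fresh-install asymmetry (bug 1656276 context).
# Neither function exists in nova; they just model the two behaviors.

def simple_cell_setup_ok(cell_mappings, computes):
    # simple_cell_setup fails on a fresh install before any compute
    # host has registered, even if cell mappings already exist.
    return bool(cell_mappings) and bool(computes)

def upgrade_check_ok(cell_mappings, computes):
    # nova-status upgrade check tolerates cell mappings with no
    # compute nodes yet; only missing mappings are a failure.
    return bool(cell_mappings)
```

In the fresh-install scenario (mappings created, no computes registered yet), the first predicate fails while the second passes, which is exactly the inconsistency the bug describes.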
 
  
 
=== Manifesto ===

http://docs.openstack.org/developer/nova/cells.html#manifesto

=== Testing ===

https://etherpad.openstack.org/p/nova-cells-testing

=== DB Table Analysis ===

https://etherpad.openstack.org/p/nova-cells-table-analysis

=== Scheduling requirements ===

https://etherpad.openstack.org/p/nova-cells-scheduling-requirements

=== Code Review ===

* https://review.openstack.org/#/q/topic:bp/cells-scheduling-interaction
* See the cells v2 section in the Pike review priorities etherpad: https://etherpad.openstack.org/p/pike-nova-priorities-tracking
* Otherwise see the cells v2 section in the Ocata review priorities etherpad: https://etherpad.openstack.org/p/ocata-nova-priorities-tracking
* https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:man

=== References ===

''Latest revision as of 22:41, 12 December 2017''