HeterogeneousArchitectureScheduler
Revision as of 14:28, 13 April 2011

Summary

Nova should have support for cpu architectures, accelerator architectures, and network interfaces, and be able to route run_instances() requests to a compute node capable of running that architecture. This blueprint depends on the schema changes described in the HeterogeneousInstanceTypes blueprint. The target release for this is Diablo; however, the USC-ISI team intends to have a stable test branch and deployment at the Cactus release.

The USC-ISI team has a functional prototype here:

This blueprint is related to the HeterogeneousInstanceTypes blueprint here:

We are also drafting blueprints for three machine types:

An etherpad for discussion of this blueprint is available at http://etherpad.openstack.org/heterogeneousarchitecturescheduler

Release Note

Nova has been extended to allow deployments to advertise and users to request specific processor, accelerator, and network interface options using instance_types (or flavors) as the primary mechanism. This blueprint is for a scheduler plugin that supports routing run_instance requests to the appropriate physical compute node.

Rationale

See HeterogeneousInstanceTypes. The short answer is that real deployments will have heterogeneous resources.

There are several related blueprints:

User stories

See HeterogeneousInstanceTypes.

George has two different processing clusters, one x86_64, the other Power7. These two run_instances commands need to go to the appropriate compute nodes. In addition, nova should prevent a user from inadvertently specifying an x86_64 machine image to run on a Power7 compute node, or vice versa. The scheduler should check for such inconsistencies.

euca-run-instances -t p7f.grande -k fred-keypair emi-12345678
euca-run-instances -t m1.xlarge -k fred-keypair emi-87654321


Assumptions

The assumption is that OpenStack runs on the target hardware architecture. See related blueprints above for what our team is doing.

Design

We propose to add cpu_arch, cpu_info, xpu_arch, xpu_info, xpus, net_arch, net_info, and net_mbps as attributes to instance_types, instances, and compute_nodes tables. See HeterogeneousInstanceTypes.

The architecture-aware scheduler will compare these additional fields when selecting target compute_nodes for the run_instances request.

  • cpu_arch, xpu_arch, and net_arch are intended as high-level labels for fast row filtering (e.g. "i386", "fermi", or "infiniband").
  • xpus and net_mbps are treated as quantity fields, exactly as vcpus is used by existing schedulers.
  • cpu_info, xpu_info, and net_info follow the instance_migration branch example, using a JSON-formatted string to capture arbitrary configurations.
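As a sketch of what one of these JSON-formatted strings might hold, a compute node could register an xpu_info value like the following (the specific keys here are illustrative assumptions, not fixed by this blueprint):

```python
import json

# Hypothetical accelerator description; the keys are illustrative only.
xpu_info = json.dumps({
    "model": "Tesla 2050",
    "memory_mb": 3072,
    "count": 2,
})

# The scheduler can decode the string back into a dict when matching hosts.
details = json.loads(xpu_info)
```

Because the field is an opaque string at the schema level, new accelerator attributes can be added without further schema changes.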

The basic scheduler flow through nova is as follows:

  1. nova-compute starts on a host and registers architecture, accelerator, and networking capabilities in the compute node table.
  2. nova-api receives a run-instances request with instance_type string "m1.small" or "p7g.grande". No change here.
  3. nova-api passes instance_type to compute/api.py create() from api/ec2/cloud.py run_instances() or api/openstack/servers.py create().
  4. nova-api compute/api.py create() reads from instance_types table and adds rows to instances table.
  5. nova-api does an rpc.cast() to scheduler num_instances times, passing instance_id. No change here.
  6. nova-scheduler, as the architecture scheduler, selects a compute_service host matching the options recorded in the instance table: it filters the available compute_nodes by comparing cpu_arch, cpu_info, xpu_arch, xpu_info, xpus, net_arch, net_info, and net_mbps against the corresponding instance fields.
  7. nova-scheduler rpc.cast() to each selected compute service.
  8. nova-compute receives rpc.cast() with instance_id, launches the virtual machine, etc.
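The filtering in step 6 can be sketched as a simple match over candidate hosts. This is a minimal illustration, not the prototype's actual code; the function and field names (e.g. xpus_free, net_mbps_free) are assumptions:

```python
def filter_hosts_by_arch(hosts, instance):
    """Keep only compute nodes whose architecture labels match the
    instance request and which have enough accelerator and network
    capacity. Both arguments are plain dicts in this sketch."""
    matches = []
    for host in hosts:
        # The cpu architecture label must always match.
        if host["cpu_arch"] != instance["cpu_arch"]:
            continue
        # Accelerator and network labels must match when requested.
        if instance.get("xpu_arch") and host["xpu_arch"] != instance["xpu_arch"]:
            continue
        if instance.get("net_arch") and host["net_arch"] != instance["net_arch"]:
            continue
        # Treat xpus and net_mbps as quantities, like vcpus.
        if host.get("xpus_free", 0) < instance.get("xpus", 0):
            continue
        if host.get("net_mbps_free", 0) < instance.get("net_mbps", 0):
            continue
        matches.append(host)
    return matches
```

A host that advertises no accelerator simply never matches a request that names an xpu_arch, which is how the scheduler prevents the mismatches described in the user story above.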

Schema Changes

See HeterogeneousInstanceTypes.

Implementation

The USC-ISI team has a functional prototype: https://code.launchpad.net/~usc-isi/nova/hpc-trunk

UI Changes

Functionality is accessed by selecting the scheduler in nova.conf:

scheduler_driver = nova.scheduler.arch.ArchitectureScheduler


Code Changes

Summary of changes:

  • nova/scheduler/arch.py
   - Implements the architecture aware scheduler.
   - def hosts_up_with_arch(self, context, topic, instance_id):
   - def schedule(self, context, topic, *_args, **_kwargs):
  • nova/db/api.py
   - def compute_node_get_by_arch(context, cpu_arch, xpu_arch, session=None):
   - def compute_node_get_by_cpu_arch(context, cpu_arch, session=None):
   - def compute_node_get_by_xpu_arch(context, xpu_arch, session=None):
   - def instance_get_all_by_cpu_arch(context, cpu_arch):
   - def instance_get_all_by_xpu_arch(context, xpu_arch):
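A minimal skeleton of how schedule() might use hosts_up_with_arch() and the new db api calls is sketched below. The method bodies are assumptions inferred from the names listed above, not the prototype's code; the random host choice is a placeholder policy:

```python
import random


class ArchitectureScheduler:
    """Illustrative skeleton of the architecture-aware scheduler."""

    def __init__(self, db_api):
        self.db = db_api  # hypothetical handle to nova.db.api

    def hosts_up_with_arch(self, context, topic, instance_id):
        # Look up the architectures requested for this instance, then
        # return the compute nodes advertising the same labels.
        instance = self.db.instance_get(context, instance_id)
        nodes = self.db.compute_node_get_by_arch(
            context, instance["cpu_arch"], instance["xpu_arch"])
        return [node["host"] for node in nodes]

    def schedule(self, context, topic, *_args, **_kwargs):
        instance_id = _kwargs.get("instance_id")
        hosts = self.hosts_up_with_arch(context, topic, instance_id)
        if not hosts:
            raise RuntimeError("No hosts match the requested architecture")
        # Placeholder policy: pick any matching host at random.
        return random.choice(hosts)
```

The real driver would raise nova's scheduler exception type and could weigh hosts by free xpus or net_mbps instead of choosing at random.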

Migration

Very little needs to change in the way deployments use this if we set sane defaults such as "x86_64", as is assumed today.

Test/Demo Plan

This need not be added or completed until the specification is nearing beta.

Unresolved issues

This should highlight any issues that should be addressed in further specifications, not problems with the specification itself, since any specification with problems cannot be approved.

BoF agenda and discussion

Use this section to take notes during the BoF; if you keep it in the approved spec, use it for summarising what was discussed and note any options that were rejected.