- Launchpad Entry: NovaSpec:schedule-instances-on-heterogeneous-architectures
- Created: Brian Schott
- Maintained: Jinwoo "Joseph" Suh
- Contributors: USC Information Sciences Institute
Nova should support multiple CPU architectures, accelerator architectures, and network interfaces, and be able to route run_instances() requests to a compute node capable of running the requested architecture. This blueprint depends on the schema changes described in the HeterogeneousInstanceTypes blueprint. The target release for this is Diablo. A stable test branch and deployment are available now.
The USC-ISI team has a functional prototype here:
- https://code.launchpad.net/~usc-isi/nova/hpc-trunk (more up-to-date version)
- https://code.launchpad.net/~usc-isi/nova/hpc-testing (more stable version)
- https://code.launchpad.net/~usc-isi/nova/hetero (merge candidate)
This blueprint is related to the HeterogeneousInstanceTypes blueprint.
We are also drafting blueprints for three machine types.
An etherpad for discussion of this blueprint is available at http://etherpad.openstack.org/heterogeneousarchitecturescheduler
Nova has been extended to allow deployments to advertise and users to request specific processor, accelerator, and network interface options using instance_types (or flavors) as the primary mechanism. This blueprint is for a scheduler plugin that supports routing run_instance requests to the appropriate physical compute node.
See HeterogeneousInstanceTypes. The short answer is that real deployments will have heterogeneous resources.
There are several related blueprints:
George has two different processing clusters, one x86_64, the other Power7. These two run_instances commands need to go to the appropriate compute nodes. In addition, nova should prevent a user from inadvertently specifying an x86_64 machine image to run on a Power7 compute node or vice-versa. The scheduler should check for inconsistencies.
euca-run-instances -t p7f.grande -k fred-keypair emi-12345678
euca-run-instances -t m1.xlarge -k fred-keypair emi-87654321
The assumption is that OpenStack runs on the target hardware architecture or on a proxy running on behalf of the target hardware architecture. See related blueprints above for what our team is doing.
We also assume that the instance_type_extra_specs table is created. cpu_info, xpu_arch, xpu_info, xpus, net_arch, net_info, and net_mbps are example keys inserted into the instance_type_extra_specs and instance_metadata tables.
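As an illustration, the extra specs for a GPU-backed instance type might look like the following. The keys come from this blueprint; the specific values are hypothetical.

```python
# Hypothetical extra specs for a GPU-backed flavor. Keys are the ones
# named in this blueprint; values are illustrative only.
extra_specs = {
    "cpu_info": "x86_64",     # host CPU details
    "xpu_arch": "fermi",      # accelerator architecture
    "xpus": "2",              # number of accelerators
    "net_arch": "ethernet",   # network interface architecture
    "net_mbps": "10000",      # network bandwidth in Mbps
}
```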
The architecture-aware scheduler compares each of these key/value pairs against the capabilities reported through the zone_manager; all of them must match before an instance is started on a host.
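A minimal sketch of this exact-match test (the function and variable names below are ours, not the prototype's):

```python
def host_matches(extra_specs, capabilities):
    """Return True only if every extra-spec key/value pair is matched
    exactly by the host's advertised capabilities."""
    return all(capabilities.get(key) == value
               for key, value in extra_specs.items())

# Example: a host advertising a Fermi accelerator matches a request
# for one, but not a request for a different accelerator architecture.
caps = {"xpu_arch": "fermi", "xpus": "2", "net_arch": "ethernet"}
```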
The basic scheduler flow through nova is as follows:
- nova-compute starts on a host and registers its architecture, accelerator, and networking capabilities with the zone_manager (scheduler/zone_manager.py). The data is stored in memory (not in the database). The capability information is refreshed periodically (default: every minute). No change here.
- nova-api receives a run-instances request with instance_type string "m1.small" or "m1.small;xpu_arch=fermi;xpus=2". No change here.
- api/ec2/cloud.py run_instances() gets the instance type string and retrieves detailed information from the instance_type table. No change here.
- The detailed information about the instance type is passed to compute/api.py create() in dictionary form. No change here.
- nova-api compute/api.py create() adds rows to instances table. No change here.
- nova-api does an rpc.cast() to scheduler num_instances times, passing instance_id. No change here.
- nova-scheduler, acting as the architecture scheduler, selects a compute service host that matches the options specified in the instance_type_extra_specs table, the instances table, and the instance_metadata fields. The architecture scheduler filters the available compute nodes by the fields in the instance_type table and by all criteria in the instance_type_extra_specs table.
- nova-scheduler does an rpc.cast() to each selected compute service.
- nova-compute receives rpc.cast() with instance_id, launches the virtual machine, etc.
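The selection step above can be sketched as follows. This is a simplified illustration under the assumptions of this blueprint, not the prototype's actual code; real Nova scheduler drivers inherit from a Scheduler base class and receive capabilities via the service layer.

```python
import random

class ArchitectureScheduler:
    """Simplified sketch of an architecture-aware host selector.

    `hosts` maps a host name to the capability dictionary that
    nova-compute periodically reports to the zone_manager.
    """

    def __init__(self, hosts):
        self.hosts = hosts

    def schedule_run_instance(self, extra_specs):
        """Return a host whose capabilities match every extra spec."""
        candidates = [
            name for name, caps in self.hosts.items()
            if all(caps.get(k) == v for k, v in extra_specs.items())
        ]
        if not candidates:
            raise RuntimeError("no compute host satisfies the request")
        return random.choice(candidates)

# Hypothetical capability reports from two compute nodes.
hosts = {
    "node1": {"cpu_arch": "x86_64", "xpu_arch": "fermi", "xpus": "2"},
    "node2": {"cpu_arch": "ppc64"},
}
scheduler = ArchitectureScheduler(hosts)
# A request for a Fermi accelerator can only land on node1.
```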
A new table, instance_type_extra_specs, is added to hold the extra fields needed. The table has id, key, value, instance_type_id, and instance_type fields.
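For illustration, the table could be created and populated as below. The SQLite syntax, column types, and sample rows are our assumptions, and the ORM-side instance_type field is omitted; the prototype's actual migration may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE instance_type_extra_specs (
        id INTEGER PRIMARY KEY,
        key TEXT NOT NULL,
        value TEXT NOT NULL,
        instance_type_id INTEGER NOT NULL
    )
""")

# Hypothetical rows for a GPU-backed flavor with instance_type_id 42.
rows = [("xpu_arch", "fermi", 42), ("xpus", "2", 42)]
conn.executemany(
    "INSERT INTO instance_type_extra_specs (key, value, instance_type_id) "
    "VALUES (?, ?, ?)", rows)

# Fetch the extra specs back as a dictionary, as the scheduler would.
specs = dict(conn.execute(
    "SELECT key, value FROM instance_type_extra_specs "
    "WHERE instance_type_id = ?", (42,)))
```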
The USC-ISI team has a functional prototype (see the branches listed above).
Functionality is accessed through selecting the scheduler in nova.conf:
scheduler_driver = nova.scheduler.arch.ArchitectureScheduler
Additional constraints can be specified in the instance_type_extra_specs table. Users do not need to be aware of this table; they simply request an instance type as usual, e.g., "cg1.small", since the cloud provider fills in the table and the cloud software uses it automatically.
All new constraints in instance_type_extra_specs are currently exact-match based; inequality comparisons (e.g., "at least 2 accelerators") are not yet supported. We plan to add that capability shortly.
Summary of changes:
- Implements the architecture aware scheduler.
Very little needs to change in the way deployments will use this.
BoF agenda and discussion