Qonos-scheduling-service
Latest revision as of 23:09, 3 May 2013


  • Launchpad Entry: QonoS scheduling service
  • Created: 3 May 2013
  • Contributors: Alex Meade, Eddie Sheffield, Andrew Melton, Iccha Sethi, Nikhil Komawar, Brian Rosmaita

Summary

This document describes the design and API of QonoS, a distributed high-availability scheduling service that has been implemented for the cloud. QonoS was first described in Scheduled-images-service (29 October 2012). QonoS is currently used as the scheduling component of a scheduled images service that is invoked by a Nova extension, so many of the examples in this document discuss that use case.

Service responsibilities include:

  • Create scheduled tasks
  • Perform scheduled tasks
  • Reschedule failed jobs
  • Maintain persistent schedules


QonoS was designed to work with OpenStack and uses OpenStack common components.

Conceptual Overview

The system consists of:

  • a REST API
  • a database
  • one or more schedulers, and
  • one or more workers.

The API handles both external requests and internal communication. It creates the schedule for a request and stores it in the database.

The scheduler examines schedules and creates jobs.

A job describes a task that must be performed.

A worker performs a task. It obtains a task by polling the API and picking up the first task it is capable of handling.
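The worker's polling behavior described above can be sketched as follows. This is a minimal illustration with hypothetical helper and field names; the real QonoS worker obtains jobs over the REST API rather than from an in-memory list.

```python
# Sketch: a worker picks up the first queued job it is capable of handling.
# (Hypothetical helper; the actual worker polls the QonoS REST API.)

def pick_job(jobs, supported_actions):
    """Return the first queued job whose action this worker can perform."""
    for job in jobs:
        if job["status"] == "queued" and job["action"] in supported_actions:
            return job
    return None

jobs = [
    {"id": "j1", "action": "backup", "status": "queued"},
    {"id": "j2", "action": "snapshot", "status": "queued"},
]

# A snapshot-only worker skips j1 and picks up j2.
job = pick_job(jobs, {"snapshot"})
```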

Job Lifecycle

Jobs may have the following status values:

Status      Definition
queued      The job is ready to be processed by a worker
processing  The job has been picked up by a worker
done        The worker processing this job has decided that the job has been successfully completed
timeout     The worker processing this job has decided the job is taking too long and has stopped processing it. A job in this state can be picked up by another worker.
error       The worker notes that something went wrong, but the job could be retried
canceled    The worker decides that the job can't be done and should not be retried
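One plausible way to read the table above is as a set of allowed status transitions. The encoding below is an assumption for illustration; the actual transition rules QonoS enforces may differ.

```python
# Hypothetical encoding of the job lifecycle table as allowed transitions.

TRANSITIONS = {
    "queued": {"processing"},
    "processing": {"done", "timeout", "error", "canceled"},
    "timeout": {"processing"},   # a timed-out job may be picked up by another worker
    "error": {"processing"},     # an errored job may be retried
    "done": set(),               # terminal
    "canceled": set(),           # terminal
}

def can_transition(old, new):
    """Return True if a job may move from status `old` to status `new`."""
    return new in TRANSITIONS.get(old, set())
```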

Job Timeouts

There are two kinds of timeouts:

  • hard timeout: once reached, the job is no longer available for retries
  • soft timeout: renewed periodically by the worker to indicate that it is still working on the task (similar to a heartbeat)
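The interaction between the two timeouts can be sketched as below. The field names follow the job representation shown later in this document; the renewal interval is an assumption for illustration.

```python
from datetime import datetime, timedelta

# Sketch: how a worker (or monitor) might interpret the two timeouts.

def job_state(job, now):
    if now >= job["hard_timeout"]:
        return "hard-timed-out"   # no longer available for retries
    if now >= job["timeout"]:
        return "soft-timed-out"   # another worker may pick it up
    return "alive"

def renew_soft_timeout(job, now, interval=timedelta(minutes=5)):
    # Heartbeat: push the soft timeout forward to show the worker is alive.
    job["timeout"] = now + interval
```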

Job Failures

Job failures are reported as job faults and stored in the database.

Scalability and Reliability

Workers and schedulers are independently scalable and reliable, provided the infrastructure supports running multiple instances of each.

Overall System Diagram

[Image: Qonos Diagram.png]

Design

API

Schedules

Create Schedule
POST <version>/schedules
    {"schedule":
        {
            "tenant": "tenant_username",
            "action": "snapshot",
            "minute": 30,
            "hour": 2,
            "day": 3,
            "day_of_week": 5,
            "day_of_month": 23,
            "metadata":
            {
                "instance_id": "some_uuid",
                "retention": "3"
            }
        }
    }
List schedules
GET <version>/schedules
{
    "schedules":
    [
        {
            # schedule as above
        },
        {
            # schedule as above
        },
        ...
    ]
}
Query filters
  • next_run_after - only list schedules with next_run value >= this value
  • next_run_before - only list schedules with next_run value <= this value
Example

List schedules which start in the next five minutes

GET <version>/schedules?next_run_after={Current_DateTime}&next_run_before={Current_DateTime+5_Minutes}
GET <version>/schedules?next_run_after=2012-05-16T15:27:36Z&next_run_before=2012-05-16T15:32:36Z
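The five-minute window in the example above can be built like this (a sketch; the helper name is hypothetical, and the timestamp format is taken from the example URLs):

```python
from datetime import datetime, timedelta

# Build the query string for "schedules starting in the next five minutes".

def next_five_minutes(now):
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    after = now.strftime(fmt)
    before = (now + timedelta(minutes=5)).strftime(fmt)
    return "next_run_after={0}&next_run_before={1}".format(after, before)

qs = next_five_minutes(datetime(2012, 5, 16, 15, 27, 36))
# -> "next_run_after=2012-05-16T15:27:36Z&next_run_before=2012-05-16T15:32:36Z"
```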
Get a specific schedule
GET <version>/schedules/{id}
Update a schedule
PUT <version>/schedules/{id}
    {"schedule":
        {
            "minute": 45,
            "hour": 3
        }
    }
Delete a schedule
DELETE <version>/schedules/{id}


Jobs

Create job from schedule
POST <version>/jobs
    {"job": {"schedule_id": "some_uuid"}}

The action, tenant_id, and metadata gets copied from the schedule to the job.

Get a specific job
GET <version>/jobs/{id}
{
    "job":
    {
        "id": "{some_uuid}",
        "created_at": "{DateTime}",
        "updated_at": "{DateTime}",
        "schedule_id": "{some_uuid}",
        "worker_id": "{some_uuid}",
        "tenant": "tenant_username",
        "action": "snapshot",
        "status": "queued",
        "retry_count": 0,
        "hard_timeout": "{DateTime}",
        "timeout": "{DateTime}",

        "metadata":
        {
            "key1": "value1",
            "key2": "value2"
        }
    }
}
List current jobs
GET <version>/jobs
{
    "jobs":
    [
        {
            # job as above
        },
        {
            # job as above
        },
        ...
    ]
}
Update status of a job
PUT <version>/jobs/{id}/status
{
    "status":
    {
        "status": "some_status",
        "timeout": "{datetime of next timeout}" (optional)
        "error_message":"some message" (optional)
    }
}

NOTE: The error_message field is only looked for if the status is ERROR. In the event of an ERROR status, an entry is created in the job_faults table capturing as much info as possible from the job. If an error_message is provided, it is included in the job fault entry.
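Building the status-update body per the note above can be sketched as follows (a hypothetical helper, not part of QonoS itself):

```python
# Sketch: construct the PUT <version>/jobs/{id}/status body.
# error_message is only meaningful when the status is "error".

def status_body(status, timeout=None, error_message=None):
    body = {"status": {"status": status}}
    if timeout is not None:
        body["status"]["timeout"] = timeout
    if status == "error" and error_message is not None:
        body["status"]["error_message"] = error_message
    return body
```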

Delete (finish) a specific job
DELETE <version>/jobs/{id}


Metadata

Set schedule/job metadata
PUT <version>/schedules/{id}/metadata
or
PUT <version>/jobs/{id}/metadata

Note: The resulting metadata for a schedule/job will exactly match what is provided.

{
    "metadata":
    {
        "each": "someval",
        "meta": "someval",
        "key": "someval",
    }
}
List all metadata for a schedule/job
GET <version>/schedules/{id}/metadata
or
GET <version>/jobs/{id}/metadata
{
    "metadata":
    {
        "instance_id": "some_uuid",
        "retention": "3"
    }
}

Workers

Register worker with API
POST <version>/workers
    {"worker":
        { "host": "a.host.name"}
    }
List workers registered with API
GET <version>/workers

# Not shown - id, created_at, and updated_at fields for each worker

{
    "workers":
    [
        {
            "host": "a.host.name"
        },
        {
            "host": "a.host.name2"
        },
        ...
    ]
}
Get a specific worker registered with API
GET <version>/workers/{id}
# Not shown - id, created_at, and updated_at fields for each worker
{
    "worker":
    {
        "host": "a.host.name"
    }
}
Unregister worker with API
DELETE <version>/workers/{id}
Grab next job for worker

This can also be interpreted as "Assign a new job to the worker". Note: this call doesn't map cleanly to normal RESTful practices, since it is a POST that returns the assigned job.

POST <version>/workers/{id}/jobs
Request Body
 {"job":{"action":"snapshot"}}
Response Body

If an appropriate job is found:

{
    "job":
    {
        # job as returned above
    }
}

If no job is found:

{
    "job": None
}
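A worker handling the two response shapes above might do something like this (a sketch; a JSON null deserializes to Python None):

```python
import json

# Parse the response to POST <version>/workers/{id}/jobs:
# either a job object, or no job available.

def parse_job_response(body):
    """Return the job dict from the response body, or None if no job was found."""
    return json.loads(body).get("job")
```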

Example Usage

As an example, consider a Nova API extension that allows users to request that daily snapshots automatically be taken of a server.

  1. User makes request to Nova extension
  2. Nova extension picks a random time for the snapshot of this server to be taken (or it could use a more sophisticated algorithm; the key point is that the requests should be uniformly distributed) and makes a create schedule request to the QonoS API.
  3. The QonoS API adds a schedule entry to the database
  4. Scheduler polls API for schedules needing action
  5. Scheduler creates job entry through API
  6. Worker polls the API for the next available job
  7. Worker executes the job (i.e., requests that a snapshot be taken of the specified server)
  8. Worker waits for completion while updating the job (this indicates the Worker has not died)
  9. Worker deletes the job (indicating that the job has been completed)
  10. Worker polls the API for the next available job
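Step 2 above, picking a uniformly random time of day so that snapshot schedules spread evenly rather than piling up at midnight, could be sketched as below. The helper name and payload shape follow the create-schedule request shown earlier; this is an illustration, not the actual Nova extension code.

```python
import random

# Sketch: build a create-schedule body with a uniformly random daily time.

def random_daily_schedule(instance_id, rng=random):
    return {
        "schedule": {
            "action": "snapshot",
            "minute": rng.randrange(60),   # uniform over 0..59
            "hour": rng.randrange(24),     # uniform over 0..23
            "metadata": {"instance_id": instance_id},
        }
    }
```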

Code Repository

  • https://github.com/rackspace-titan/qonos