Zaqar/specs/havana

Latest revision as of 18:42, 7 August 2014

NOTE: This page is OUT OF DATE. Please see the latest info on the Marconi project here: Marconi

Marconi: Cloud Message Queuing for OpenStack

This specification formalizes the requirements and design considerations captured during one of the Grizzly Summit working sessions to initiate a message bus project for OpenStack. As the project evolves, so too will its requirements, so this specification is only meant as a starting point.

Here's a brief summary of how Marconi works:

  1. Clients post messages via HTTP to Marconi. The URL contains a tenant ID.
  2. Marconi persists messages according to either a default TTL, or one specified by the client.
  3. Clients poll Marconi for messages.
  4. Clients may optionally claim a batch of messages, hiding them from other clients. Once the client has processed each message, it can delete it from the server. In this way, Marconi provides a mechanism for ensuring each message is processed once and only once.
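The lifecycle above can be sketched as a toy in-memory model. Everything here (the class, method names, and the claim flag) is a hypothetical illustration of the described semantics, not Marconi's actual implementation or API:

```python
import time
import uuid


class ToyQueue:
    """Illustrative in-memory model of the post/poll/claim/delete flow."""

    def __init__(self, default_ttl=300):
        self.default_ttl = default_ttl
        self.messages = {}  # message ID -> {body, expires, claimed}

    def post(self, body, ttl=None):
        # The server generates the message ID; TTL falls back to a default.
        msg_id = uuid.uuid4().hex
        ttl = ttl if ttl is not None else self.default_ttl
        self.messages[msg_id] = {
            'body': body,
            'expires': time.time() + ttl,
            'claimed': False,
        }
        return msg_id

    def poll(self):
        # Only unclaimed, unexpired messages are visible to clients.
        now = time.time()
        return [mid for mid, m in self.messages.items()
                if not m['claimed'] and m['expires'] > now]

    def claim(self, limit=10):
        # Claiming hides a batch of messages from other clients.
        batch = self.poll()[:limit]
        for mid in batch:
            self.messages[mid]['claimed'] = True
        return batch

    def delete(self, msg_id):
        # Deleting after processing yields once-and-only-once semantics.
        self.messages.pop(msg_id, None)
```

A real deployment would expose these operations over HTTP with the tenant ID in the URL; the sketch only models the visibility rules.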

Rationale

The lack of an integrated cloud message bus service is a major inhibitor to OpenStack adoption. While Amazon has SQS and SNS, OpenStack currently provides no alternatives.

OpenStack needs a multi-tenant message bus that is fast, efficient, durable, horizontally-scalable and reliable.

The Marconi project will address these needs, acting as a complement to the existing RPC infrastructure within OpenStack, while providing multi-tenant services that can be exposed to applications running on public and private clouds.

Use Cases

NOTE: This data is OUT OF DATE. Please see the latest use case info here: Use Cases (Marconi)

1. Distribute tasks among multiple workers (transactional job queues)

2. Forward events to data collectors (transactional event queues)

3. Publish events to any number of subscribers (pub-sub)

4. Send commands to one or more agents (RPC via point-to-point or pub-sub)

5. Request information from an agent (RPC via point-to-point)

6. Monitor a Marconi deployment (DevOps)

Design Goals

Marconi's design philosophy is derived from Donald A. Norman's work regarding The Design of Everyday Things:

 The value of a well-designed object is when it has such a rich set of affordances that the people who use it can do things with it that the designer never imagined.

Goals related to the above:

  1. Emergent functionality, utility
  2. Modular, pluggable code base
  3. REST architectural style

Principles to live by:

  1. DRY
  2. YAGNI
  3. KISS

Major Features


Non-Functional

  • Versioned API
  • Multi-tenant
  • Implemented in Python, following PEP 8 and pythonic idioms
  • Modular, driver-based architecture
  • Async I/O
  • Client-agnostic
  • Low response time, turning around requests in 20-50ms (or better), even under load
  • High throughput, serving millions of reqs/min with a small cluster
  • Thousands of req/sec per queue (?)
  • Hundreds of thousands of queues per tenant
  • Horizontal scaling of both reads and writes
  • Support for HA deployments
  • Guaranteed delivery
  • Best-effort message ordering
  • Server generates all IDs
  • Gzip'd HTTP bodies
  • Secure (audited code, end-to-end HTTPS support, penetration testing, etc.)
  • Schema validation
  • Auth caching
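Several items above are standard HTTP mechanics. For instance, "Gzip'd HTTP bodies" amounts to ordinary content encoding; a minimal round trip in Python (illustrative only, not Marconi code):

```python
import gzip
import json


def compress_body(payload):
    """Serialize a payload to JSON and gzip it, as a client might
    before sending a request with a Content-Encoding: gzip header."""
    return gzip.compress(json.dumps(payload).encode('utf-8'))


def decompress_body(data):
    """Reverse the transform on the receiving side."""
    return json.loads(gzip.decompress(data).decode('utf-8'))
```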

Functional

  • Eventing and work queuing semantics
  • JSON
  • Opaque payload (arbitrary JSON, or base64-encoded binary)
  • Max payload size of 4K
  • Batch message posting and querying
  • Keystone auth driver (service catalog may return endpoints for different regions and/or different characteristics)
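To illustrate the payload rules above: a client could carry binary data as base64 text inside the JSON body, and the server would enforce the 4K cap on the serialized message. A hypothetical sketch (the field names and helper functions are invented for illustration):

```python
import base64
import json

MAX_PAYLOAD_BYTES = 4096  # the spec's 4K cap


def encode_binary_payload(raw):
    """Wrap arbitrary bytes as base64 text so they fit in a JSON body."""
    return {'body': base64.b64encode(raw).decode('ascii'),
            'encoding': 'base64'}


def validate_payload(message):
    """Reject messages whose serialized form exceeds the cap."""
    size = len(json.dumps(message).encode('utf-8'))
    if size > MAX_PAYLOAD_BYTES:
        raise ValueError('payload exceeds %d bytes' % MAX_PAYLOAD_BYTES)
    return size
```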

Future Features (Brainstorming)

TODO: Create blueprints for these, prioritize

Brainstormed features, listed in no particular order:

  • LZ4 compression for messages at rest
  • Multi-transport (HTTP, ZMQ)
  • SQLAlchemy driver
  • REPL for debugging, testing, diagnostics
  • Client libraries for Python, PHP, Java, and C#
  • Auto-generated audit river (read-only queue) for actions and state changes
  • Delayed delivery
  • Hot-reconfigure
  • PATCH support for updating queue metadata
  • Set/get arbitrary queue metadata
  • Kombu Integration
  • API tokens tied to a specific app and a specific queue, OAuth?
  • Message signing
  • Standalone control panel or at least a simple admin/dashboard app for ops
  • JSON-P or CORS support (may need to use the while(1); prefix trick to prevent JSON hijacking)
  • Multi-get (specify a list of queues to query in a single request)
  • Tag-based filtering
    • Includes a way to return, in one call, everything with or without a given tag (OR semantics) to afford fanout
  • XML support
  • LZ4 or snappy body compression (at rest, and in WSGI server as well as client libs)
  • Response caching
  • Authorization (based on tags and/or queues)
  • Cross-tenant sharing (need to define business case)
  • Temporal queries
  • JavaScript client library (browser and Node.js)
  • Ruby client library
  • PHP client library
  • Cross-regional replication
  • Horizon plug-in
  • Ceilometer data provider
  • PyPy support
  • HTTP 2.0 support
  • Long-polling
  • Web Socket transport driver
  • Web hooks
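Of the brainstormed items, tag-based filtering with OR semantics is concrete enough to sketch. This toy filter (hypothetical, not a proposed API) shows how "match any tag" affords fanout while "match all tags" narrows instead:

```python
def filter_by_tags(messages, tags, mode='any'):
    """Toy tag filter: 'any' gives OR semantics (fanout),
    'all' gives AND semantics. Each message is a dict with a
    'tags' set; this mirrors the brainstormed feature only."""
    tags = set(tags)
    if mode == 'any':
        # OR: keep messages sharing at least one tag with the query.
        return [m for m in messages if m['tags'] & tags]
    # AND: keep messages carrying every queried tag.
    return [m for m in messages if tags <= m['tags']]
```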

Non-Features

Marconi may be used to support other services that provide the following functionality, but will not embed these abilities directly within its code base.

  1. Any kind of push notifications over persistent connections (leads to complicated state management and poor hardware utilization)
  2. Forwarding notifications to email, SMS, Twitter, etc. (à la SNS)
  3. Forwarding notifications to web hooks
  4. Forwarding notifications to APNS, GCM, etc.
  5. Scheduling-as-a-service (à la IronWorker)
  6. Metering and monitoring solutions

Architecture


Marconi will use a micro-kernel architecture. Auth, transport, storage, cache, logging, monitoring, etc. will all be implemented as drivers or exposed with standard protocols, allowing vendors to customize Marconi to suit.

Endpoint controllers define the interface between storage and transport. More info: https://wiki.openstack.org/wiki/Marconi/specs/endpoint
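The storage/transport split might look like the following interface sketch. All names here are hypothetical; the real controller definitions live in the separate endpoint spec.

```python
import abc


class StorageDriver(abc.ABC):
    """Hypothetical storage interface: transports call these
    controller methods without knowing the backing store."""

    @abc.abstractmethod
    def post(self, queue, tenant_id, message):
        """Persist a message; return a server-generated handle."""

    @abc.abstractmethod
    def list(self, queue, tenant_id):
        """Return pending messages for a tenant's queue."""


class InMemoryDriver(StorageDriver):
    """Trivial reference implementation for illustration."""

    def __init__(self):
        self._data = {}

    def post(self, queue, tenant_id, message):
        key = (tenant_id, queue)
        self._data.setdefault(key, []).append(message)
        return len(self._data[key]) - 1

    def list(self, queue, tenant_id):
        return list(self._data.get((tenant_id, queue), []))
```

A MongoDB driver would implement the same methods against its own collections, leaving the transport layer unchanged.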

Possible frameworks that can help realize a highly modular design:

  • pkg_resources
  • stevedore (https://github.com/dreamhost/stevedore)

Reference drivers

  • Transport: HTTP(S) via WSGI using Falcon (http://falconframework.org)
  • Auth: Keystone middleware
  • Storage: MongoDB
  • Logging: Standard library logging
  • Monitoring: TBD - Statsd, as well as HTTP stats page?
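Reference drivers would be selected by name from configuration. With stevedore or pkg_resources the registry comes from setuptools entry points; this self-contained sketch fakes the registry with a plain dict (all names hypothetical):

```python
class MemoryStorage:
    """Stand-in for a real storage driver class."""
    name = 'memory'


class MongoStorage:
    """Stand-in for the MongoDB reference driver."""
    name = 'mongodb'


# In a real deployment this mapping would be discovered from
# entry points rather than hard-coded.
DRIVERS = {cls.name: cls for cls in (MemoryStorage, MongoStorage)}


def load_driver(name, **conf):
    """Instantiate a driver by its configured name; fail fast on unknowns."""
    try:
        cls = DRIVERS[name]
    except KeyError:
        raise RuntimeError('unknown driver: %r' % name)
    return cls(**conf)
```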

Deployment Options

  • Self-host via gevent.http or ZMQ
  • Host with a WSGI server
    • Requires writing a small bootstrap script to load the kernel and export the app callable
    • The bootstrap script also allows full programmatic customization of logging
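The bootstrap script mentioned above could be as small as this. The `app` callable is a placeholder standing in for the real kernel's application object (hypothetical; a real script would import and configure the kernel):

```python
def app(environ, start_response):
    """Placeholder WSGI callable; any WSGI server can host it."""
    body = b'{"status": "ok"}'
    start_response('200 OK', [
        ('Content-Type', 'application/json'),
        ('Content-Length', str(len(body))),
    ])
    return [body]


if __name__ == '__main__':
    # wsgiref ships with Python; gunicorn, uWSGI, etc. work the same way.
    from wsgiref.simple_server import make_server
    make_server('127.0.0.1', 8000, app).serve_forever()
```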

API

See the Marconi API spec. [ROUGH DRAFT]

Test Plan

All development will be done TDD-style using nose and testtools. Pair programming may happen on accident (or even on purpose). Eventually we'll add integration, performance, and security tests, and get everything automated in a nice and tidy CI pipeline.