Frequently asked questions (Zaqar)
- 1 How mature is the project?
- 2 Is Marconi an under-cloud or an over-cloud service?
- 3 Is Marconi a provisioning service or a data API?
- 4 Will Marconi only support HTTP as a transport, or will it add other protocols as well?
- 5 Many "traditional message brokers" have offered HTTP. What makes Marconi different?
- 6 How does Marconi scale?
- 7 Why does Marconi use the Store-and-Forward Design?
- 8 What is the purpose of the SQLAlchemy driver?
- 9 Why did you start with MongoDB?
- 10 What messaging patterns does Marconi support?
- 11 If queues have no guarantee of ordering, how is the marker guaranteed to still deliver all messages to all targets?
- 12 Will Marconi work with AMQP?
- 13 Will Marconi work with Kafka?
- 14 How does Marconi compare to AWS (SQS/SNS)?
- 15 How does Marconi compare to oslo.messaging?
- 16 What's next for Marconi?
How mature is the project?
Marconi is an incubated OpenStack project. With the close of the Icehouse cycle, the program has achieved a number of important milestones:
- Marconi's first official, production-ready "1.0" release is now available for download. This first release includes a battle-tested MongoDB driver, and production-ready drivers for additional backends are in the works.
- Marconi's v1.0 API is stable and ready to code against.
- Basic user and operator docs are now available, and we will be adding tons of new content during Juno.
- A reference client library, written in Python, is now available on PyPI; it supports the entire v1.0 API.
- A standalone C# client library is now available on GitHub.
- Support for Python, C# and other languages is also now available through Rackspace-supported SDKs.
The program has attracted a growing community of contributors from all over the world. A few highlights:
- 10+ organizations represented
- 5 core reviewers
- 6 interns for Juno (representing GSoC, GNOME OPW, Rackspace and Red Hat)
Is Marconi an under-cloud or an over-cloud service?
Marconi's primary mission is to provide a multi-tenant, web-friendly messaging and notifications service to over-cloud web and mobile application developers. In addition, several projects have asked us to help with some of their under-cloud use cases, particularly in terms of aggregating and filtering events as a complement to OpenStack's RPC layer. Marconi is not intended to replace oslo.messaging.
Is Marconi a provisioning service or a data API?
Marconi's API is data-oriented. That is, it does not provision message brokers. Instead, it acts as a bridge between the client and one or more backends. In fact, Marconi's API *is* the product; it provides common messaging semantics on top of existing messaging and storage systems. Marconi's data API aims to be simpler, more flexible, and more oriented to the needs of web and mobile application developers than the APIs of the underlying systems.
Regardless of the backend in use, Marconi provides a web-friendly API that is service-oriented and multi-tenant, and is as lightweight, pragmatic and useful as possible. Marconi can run standalone, but works best when leveraging other OpenStack programs such as Keystone and Barbican. In turn, other programs can use Marconi to surface events to end users.
Marconi's API provides:
- Support for multi-tenancy, which amortizes the cost of a production deployment across multiple accounts.
- A first-class, idiomatic HTTP transport that is familiar to web developers and works well across firewalls and poor network connections.
- A design that scales to hundreds of thousands or even millions of message topics (a.k.a. queues), allowing web developers to communicate with an "Internet of things".
- A JSON-based message schema that is easy to understand, efficient to transmit, and quick to parse.
- An architecture that allows for future transports such as WebSocket, raw TCP, etc.
- An architecture that allows multiple messaging and storage backends to run side-by-side under the same API.
- A (relatively) easy-to-scale, HA, multi-tenant messaging service
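As a concrete illustration of the data-oriented API, the sketch below builds the HTTP request a client would send to post a message under the v1.0 API. The host name and queue name are hypothetical, and the request is only constructed, not sent:

```python
import json

BASE = "http://marconi.example.com/v1"  # hypothetical endpoint

def build_post_messages(queue, messages):
    """Return (url, headers, body) for a v1 'post messages' request."""
    url = "%s/queues/%s/messages" % (BASE, queue)
    headers = {
        # v1 requests carry an arbitrary, client-chosen UUID
        "Client-ID": "3381af92-2b9e-11e3-b191-71861300734c",
        "Content-Type": "application/json",
    }
    # Each message is just a TTL (seconds) plus an arbitrary JSON body
    body = json.dumps(
        [{"ttl": m.get("ttl", 300), "body": m["body"]} for m in messages]
    )
    return url, headers, body

url, headers, body = build_post_messages(
    "fizbit", [{"body": {"event": "BackupStarted"}, "ttl": 300}]
)
```

Listing and claiming messages follow the same resource-oriented pattern under `/v1/queues/{name}`.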
A provisioning service for message brokers, however useful, serves a somewhat different market from what Marconi is targeting today. If users are interested in a queue provisioning service, the community should consider starting a new program to address that need.
Will Marconi only support HTTP as a transport, or will it add other protocols as well?
We are focusing on HTTP for Juno, but are considering adding a lower-level, persistent transport (perhaps based on WebSocket) in the K cycle.
Many "traditional message brokers" have offered HTTP. What makes Marconi different?
Marconi is multi-tenant, and also offers a more RESTful API (in our opinion) than the alternatives. That being said, there's a good chance we missed something, so please share what you have in mind with the team and we will be happy to discuss it.
How does Marconi scale?
First of all, since Marconi uses HTTP and follows the REST architectural style, you get all of the scaling benefits associated with that style, such as stateless servers that are easy to load-balance horizontally.
Secondly, regarding the backend, Marconi has a notion of "pools", across which queues can be sharded. Messages for an individual queue may not be sharded across multiple pools, but a single queue may be sharded within a given pool, depending on whether the driver supports it. In any case, you can imagine each pool as encapsulating a single DB or broker cluster. Once you reach the limits of scalability within your initial pool (due to networking, hard limitations in the given backend, etc.), you can provision other pools as needed.
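The key invariant is that a queue maps to exactly one pool, so all of its messages live in the same backend cluster. The sketch below is conceptual only (not Marconi's actual pool-selection code), with hypothetical pool names:

```python
import hashlib

# Hypothetical pools; each one encapsulates a single DB or broker cluster
POOLS = ["pool-mongo-1", "pool-mongo-2", "pool-mongo-3"]

def pool_for_queue(project_id, queue_name):
    """Deterministically map a (project, queue) pair to a single pool.

    Stable hashing means every request for the same queue lands on the
    same backend cluster, while queues as a whole spread across pools.
    """
    key = ("%s/%s" % (project_id, queue_name)).encode("utf-8")
    digest = int(hashlib.sha1(key).hexdigest(), 16)
    return POOLS[digest % len(POOLS)]
```

Adding capacity then amounts to registering a new pool and directing new queues to it.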
Why does Marconi use the Store-and-Forward Design?
“Store and forward” refers to the pattern of buffering or otherwise storing a message in some intermediary before forwarding the message to its destination. This technique is a common solution in messaging systems for dealing with intermittent connectivity. Marconi's store-and-forward architecture makes the service resilient to network partitioning, firewalls, congestion, server node failures, etc. It is also a natural complement to the REST architectural style employed in Marconi's API design and implementation.
What is the purpose of the SQLAlchemy driver?
It is well known that a classic RDBMS, such as MySQL, performs poorly at scale as a messaging backend. Therefore, bundling a SQLAlchemy driver with Marconi may seem like an odd thing to do. Nevertheless, the driver does have its uses:
- It provides support for SQLite, which is very useful for development. Among other benefits, supporting SQLite means you can go from zero to a working local Marconi server in just 3 steps.
- It provides a reference implementation of the storage driver interface. The same functional tests that pass when running against the SQLAlchemy driver must also pass for all other drivers, helping ensure consistent behavior.
- It helps the team be more pragmatic in the design of Marconi's API and in the service's architecture. The SQLAlchemy driver is part of a broader effort to create drivers to represent each of the most common data store families, including NoSQL, SQL, and AMQP.
Why did you start with MongoDB?
Some NoSQL databases work better for implementing messaging systems than do others. For example, Redis and MongoDB are known to work well in this role, while Cassandra performs rather poorly.
The Marconi team decided to start with MongoDB because:
- MongoDB's flexible query system was able to support all of the operations required by the 1.0 API
- It was able to manage a large number of short-lived records without hitting a wall
- The system offered a well-rounded mixture of durability, performance, HA, and scalability
- The core team was already very familiar with MongoDB and had experience running it in production
- MongoDB's schemaless design allowed the team to iterate rapidly
That said, although MongoDB was the first production-ready storage driver implemented for Marconi, the team does not intend for it to be the one-and-only driver for all use cases. In fact, more drivers are already in the works. Stay tuned!
What messaging patterns does Marconi support?
Marconi's API provides a basic set of semantics that, when combined, afford a variety of messaging patterns, such as pub-sub, task distribution, and point-to-point. Messages can be consumed as feeds, queues, or a combination of the two. This can be a little confusing at first, since the API uses the term "queues" to represent a hybrid feeds-queues resource.
When interacting with the API, a client can choose to read a queue in a similar manner to an Atom feed, where any client can read any message, and is responsible for keeping track of its own marker (its position in the feed). This provides for messaging patterns such as pub-sub and point-to-point.
Alternatively, an application can implement task distribution by creating a pool of workers that simply claim messages off the front of the queue. Once a message is claimed, it becomes invisible to other workers in the pool to prevent messages from being processed more than once.
Finally, a client can use both feed and claim semantics simultaneously to create hybrid messaging patterns. For example, while workers are busy claiming messages from a task queue, an auditor can passively sample those same messages as they flow by.
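The feed and claim semantics described above can be modeled with a minimal in-memory sketch (illustrative only; the real service is backed by a datastore and exposes these operations over HTTP):

```python
import itertools

class HybridQueue:
    """Toy model of Marconi's hybrid feed/claim semantics."""

    def __init__(self):
        self._next_marker = itertools.count(1)  # monotonic marker
        self._messages = []

    def post(self, body):
        msg = {"marker": next(self._next_marker), "body": body,
               "claimed": False}
        self._messages.append(msg)
        return msg["marker"]

    def list(self, marker=0):
        """Feed-style read: any client sees every message after `marker`,
        claimed or not, so an auditor can sample the whole stream."""
        return [m for m in self._messages if m["marker"] > marker]

    def claim(self, limit=10):
        """Worker-style read: claimed messages become invisible to other
        workers, so each task is handed out at most once."""
        batch = [m for m in self._messages if not m["claimed"]][:limit]
        for m in batch:
            m["claimed"] = True
        return batch
```

Here, workers calling claim() divide the backlog among themselves, while a client calling list() still observes every message in marker order.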
See also: Use Cases (Marconi)
If queues have no guarantee of ordering, how is the marker guaranteed to still deliver all messages to all targets?
The ordering of messages is actually stable, meaning that given a marker (or no marker), you will always get the same set of messages, in the same order, when listing them. When we say that FIFO is only guaranteed for a single producer, that is because we can’t guarantee which producer’s messages will arrive in our system first. However, as they arrive, they are ordered using a monotonic marker value. This is in contrast to, say, an Atom feed that uses timestamps to order entries, making it possible to get two entries with the same timestamp, which places a burden on the client to detect collisions and then throw out messages it has already received.
Nothing special needs to be done in client code to detect duplicates or to coordinate parallel readers.
The only thing a client has to do is keep track of the last marker it received (i.e., the “next” href) and submit that in subsequent requests. This helps the server scale efficiently, since it can remain stateless (i.e., the server doesn't have to keep track of which messages every client has or has not already seen).
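The client-side marker loop described above can be sketched as follows, where `fetch_page` is a hypothetical stand-in for an HTTP GET of the given href that returns the parsed messages plus the response's "next" href:

```python
def drain(fetch_page, first_href):
    """Yield messages in marker order, following each page's "next" href.

    The only state the client carries between requests is `href`; the
    server never has to remember what this client has already seen.
    """
    href = first_href
    while True:
        messages, href = fetch_page(href)
        if not messages:
            return  # caught up; resume later from the same href
        for message in messages:
            yield message
```

Because listing with a given marker is stable, restarting this loop from the last saved href never skips or reorders messages.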
Will Marconi work with AMQP?
- During Juno, we are experimenting with AMQP as a storage backend; results are TBD.
- Whether AMQP will also be offered as a transport is TBD.
- Talk to us about your use cases.
Will Marconi work with Kafka?
- During Juno we are experimenting with using Kafka as a backend.
- Talk to us about use cases
- We need contributors!
How does Marconi compare to AWS (SQS/SNS)?
- Marconi targets similar workloads.
- Marconi will provide a unified API for handling both notifications and queuing.
- Marconi is highly customizable.
- FIFO and once-and-only-once delivery can be guaranteed (depending on the storage backend).
How does Marconi compare to oslo.messaging?
oslo.messaging is an RPC library used throughout OpenStack to manage distributed commands by sending messages through different messaging layers. It was originally developed as an abstraction over AMQP, but has since added support for ZeroMQ.
In contrast to oslo.messaging, Marconi is a messaging service for the over-cloud and the under-cloud. As a service, it is meant to be consumed through client libraries for different languages. Marconi currently supports one protocol (HTTP) and sits on top of other existing technologies (MongoDB, as of version 1.0).
What's next for Marconi?
- API v1.1
- Additional backend drivers
- Queue Flavors
- Message signing
- Automated performance testing
- Automated security testing
- Additional Dev/Ops features
- Expanded documentation
See also: Roadmap (Marconi)