State of Oslo notifications
Currently, Oslo provides code to send notifications via the module openstack.common.notifier. Its submodule openstack.common.notifier.api allows an application to use a Python API to send notifications. Notifications are sent using a set of drivers included in openstack.common.notifier, such as log or rpc.
The prototype of the main notification function is openstack.common.notifier.api.notify(context, publisher_id, event_type, priority, payload).
Currently the notify() arguments carry very little structured information. The payload argument can be anything, and is usually a Python dict containing an arbitrary amount of data whose fields cannot be known in advance.
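To illustrate the problem, here is a hypothetical payload as it might be built today (the field names are illustrative, not an authoritative schema):

```python
# A hypothetical example of today's unstructured usage: the payload is a
# free-form dict, and consumers cannot know its fields in advance.
payload = {
    'instance_id': 'a-uuid',
    'display_name': 'vm1',
    'memory_mb': 512,
    # ...any other keys a given component decides to include
}

# The call then looks roughly like:
#   notify(context, 'compute.host1', 'compute.instance.create',
#          'INFO', payload)
# Nothing constrains which keys the payload carries.
```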
Some of these fields are documented on a wiki page, but that obviously has a lot of drawbacks:
- the list is easily outdated;
- the list is not even complete;
- the list doesn't help any sort of code generation, introspection, or automatic testing around event content.
In Ceilometer, the main consumer of events in OpenStack, we've been dealing with all of this for a long time. Our unit tests try to mimic what other OpenStack components send by including copies of payloads we've seen at some point. As soon as the payload format changes in an OpenStack component, our tests still pass, but the code is no longer able to cope with the events sent in a real deployment.
Having unstructured events also implies a lot of problems in the storage area. Storing unstructured events amounts to storing EAV-based data. While this isn't a problem for some database storage systems (NoSQL), it is a terrible issue for others (SQL), and can be a significant one in both cases in terms of performance (e.g. indexing, querying…).
The proposed solution is to build a new Python API with a different prototype that would rely on Python objects rather than dicts. This would solve all of the problems mentioned above. It could live side by side with the current API to allow an easy transition to the new notification mechanism.
The first-class object would be something like openstack.common.notifier.Notifier and would be able to send events. An event would be a well-known, structured Python object. It could later be translated to a simple dict before being handed to a driver, which would then publish it over the wire.
# This is pseudo-code written in the wiki, I didn't run it
import datetime
import re
import socket


class Event(object):
    user_id = str
    project_id = str

    def __init__(self, **kwargs):
        # Collect the declared fields: public class attributes that are types
        fields = set(attr for attr in dir(self.__class__)
                     if not attr.startswith('_')
                     and isinstance(getattr(self.__class__, attr), type))
        if set(kwargs.keys()) != fields:
            # should also indicate which fields
            raise Exception("Too many or missing fields given in this "
                            "notification")
        for attr in fields:
            # Coerce each value to its declared type
            setattr(self, attr, getattr(self.__class__, attr)(kwargs[attr]))

    def as_dict(self):
        return self.__dict__  # + filter out private fields, or something like that

    @property
    def event_type(self):
        """Return the event type by converting the class name
        from CamelCase to dotted notation."""
        return re.sub('([A-Z]+)', r'.\1',
                      self.__class__.__name__).lower().strip('.')


class ResourceCreate(Event):
    resource_type = str


class InstanceCreate(ResourceCreate):
    instance_id = str
    instance_type_id = str
    display_name = str
    created_at = datetime.datetime
    launched_at = datetime.datetime
    image_ref_url = URL  # I guess we could build a URL class :)
    state = str
    memory_mb = int

    def __init__(self, **kwargs):
        kwargs['resource_type'] = 'instance'
        super(InstanceCreate, self).__init__(**kwargs)


class Notifier(object):
    def __init__(self, application, host=socket.gethostname(),
                 driver=get_the_default_driver()):
        self.application = application
        self.host = host
        self.driver = driver

    def __call__(self, context, priority, event):
        self.driver.notify(context, priority, self.application, self.host,
                           event.event_type, event.as_dict())
With such a design, an application can easily introspect the event list and their fields by introspecting the module itself. This could be leveraged to automatically construct an SQL schema, for example, or validation code (unit tests). The driver would be in charge of the serialization and deserialization of notifications. (Note that there is currently no deserialization API, as there is no high-level API for consuming notifications; this is something that could also be solved in this blueprint, even if it is not directly related.)
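As a runnable sketch of the introspection idea (using a deliberately minimal, hypothetical Event hierarchy, not the Oslo code), enumerating the event classes and their typed fields is enough to derive a schema:

```python
# Minimal sketch: enumerate Event subclasses and their declared fields
# to derive a schema, e.g. for SQL table generation or test validation.
import datetime


class Event(object):
    user_id = str
    project_id = str


class InstanceCreate(Event):
    instance_id = str
    memory_mb = int
    created_at = datetime.datetime


def declared_fields(event_cls):
    """Return {field_name: type} for the public class attributes that
    are types, walking the inheritance chain via dir()."""
    return dict((attr, getattr(event_cls, attr))
                for attr in dir(event_cls)
                if not attr.startswith('_')
                and isinstance(getattr(event_cls, attr), type))


def all_events(base=Event):
    """Yield every Event subclass; a schema generator iterates this."""
    for sub in base.__subclasses__():
        yield sub
        for nested in all_events(sub):
            yield nested


schema = dict((cls.__name__, declared_fields(cls)) for cls in all_events())
# schema['InstanceCreate'] maps each field name to its Python type
```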
Using Python classes will also make sure that if any part of the code (e.g. an event definition) is updated in Oslo, updating Oslo in e.g. Nova will break the code in Nova, and unit testing will surface that easily enough for it to be fixed. The same goes for consumers like Ceilometer, which will have to adapt to deal with the new events.
Specifying a type for each attribute is useful to:
- Validate the data being sent
- Build a correct storage schema through introspection
This is heavily inspired by the approach WSME has taken to provide automatically generated REST APIs. A library like voluptuous could probably be leveraged heavily for schema definition and validation, so there would be no need to reinvent the wheel.
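The spirit of that kind of library is that a schema is just a declarative mapping of field names to expected types. Here is a stdlib-only sketch of the idea (this is a hypothetical illustration, not the voluptuous API itself):

```python
# Hypothetical sketch of schema-based validation: a schema is a dict
# mapping field names to expected types, checked against incoming data.
instance_create_schema = {
    'instance_id': str,
    'memory_mb': int,
}


def validate(schema, data):
    """Raise ValueError if data has missing/extra keys or wrong types."""
    missing = set(schema) - set(data)
    extra = set(data) - set(schema)
    if missing or extra:
        raise ValueError('missing=%s extra=%s' % (sorted(missing),
                                                  sorted(extra)))
    for key, expected in schema.items():
        if not isinstance(data[key], expected):
            raise ValueError('%s: expected %s' % (key, expected.__name__))
    return data


validated = validate(instance_create_schema,
                     {'instance_id': 'a-uuid', 'memory_mb': 512})
```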
The Notifier object gains more first-class parameters, such as the application emitting the event. This is really needed for good filtering possibilities.
Where to define the events is still open for discussion. They could live in Oslo, to avoid the import nightmare that could ensue from having the definitions in Nova and all the other projects.
- I love the idea of making events into objects.
- What I really like about this proposal is that it doesn't need to break the existing notification structure. Instead it's just a better way of capturing it.
- Some questions:
- We need versioning on notifications. How would the CreateInstance object differentiate between changes over time? CreateInstance1, CreateInstance2, etc?
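One hypothetical answer to the versioning question, rather than multiplying class names: carry a class-level version number on each event class and include it in the serialized payload, so consumers branch on the version field instead of on new class names.

```python
# Hypothetical versioning sketch: a class-level version number carried
# into the serialized dict, instead of CreateInstance1, CreateInstance2...
class Event(object):
    version = 1

    def as_dict(self):
        d = dict((k, v) for k, v in self.__dict__.items()
                 if not k.startswith('_'))
        d['event_version'] = self.version
        return d


class InstanceCreate(Event):
    version = 2  # bumped when the field set changed

    def __init__(self, instance_id):
        self.instance_id = instance_id


payload = InstanceCreate('a-uuid').as_dict()
# payload carries both the data and its schema version
```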
- It would be good to have these notification objects have a context handler so it could also take care of the .start/.end generation. Should be easy enough.
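The .start/.end suggestion above could be sketched as a context manager wrapping the operation (this uses a hypothetical stand-in driver with a simplified notify() signature, purely for illustration):

```python
# Hypothetical sketch of the suggested context handler: emit a
# "<event>.start" notification on entry and "<event>.end" on exit.
import contextlib


class FakeDriver(object):
    """Stand-in driver that records what would be published."""
    def __init__(self):
        self.sent = []

    def notify(self, event_type, payload):
        self.sent.append((event_type, payload))


@contextlib.contextmanager
def notified(driver, event_type, payload):
    driver.notify(event_type + '.start', payload)
    try:
        yield
    finally:
        driver.notify(event_type + '.end', payload)


driver = FakeDriver()
with notified(driver, 'compute.instance.create', {'instance_id': 'a-uuid'}):
    pass  # ...do the actual work here...
# driver.sent now holds the .start and .end notifications, in order
```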
- Notifications currently publish on the RPC --control_exchange (which is bound to a queue named after the routing key, "notifications.info", "notifications.error", etc). The notifications API might want to be expanded to allow for a separate routing key and exchange name. (I know this is kind of out of scope for this page)
- It would be nice to mark attributes as user-facing or operator-facing (as hinted at in to_dict()). Suggestions on supporting that?
- Minor nit: overriding __call__ makes for confusing code, imho :)
- this looks cool, especially if we can get it to serialise and deserialise to complete the cycle.
- when we build the Event object, do we have specific attributes we expect it to contain? From an audit point of view, the minimal data requirement is to answer the 7 W's.
- is it possible to have an option/pluggable way to serialise/deserialise the Event model in a specific model format? (i.e. if someone wanted to use an open standard and have events represented in a certain model, rather than a semi-structured dict, they could choose to do so, and it could be deserialised by the consumer using the same model.)
- here's a pdf file on the open standard DMTF CADF specification that highlights some of the typical requirements for auditing and shows how we're currently using it. Of special interest is slide 28, showing how an OpenStack API request gets mapped to CADF. File:Introduction to Cloud Auditing using CADF Event Model and Taxonomy 2013-10-22.pdf