From a vendor / consumer perspective, there are likely cases where consumers want to leverage the existing trove SQLAlchemy DB models and framework abstractions for their own extensions / features within the trove framework. For example, a consumer may want to develop custom in-house (proprietary) add-ons for trove which use persistence, and which they either do not wish to contribute upstream, or want to build out as an in-house PoC before upstreaming. In such cases, a more rapid time to value and a lower-risk investment can be achieved by leveraging the existing trove DB framework and having the ability to plug in their own schema / migrations / etc.
The current trove SQLAlchemy implementation contains the plumbing and support necessary to allow consumers to "plug in" their own database mappers, which in turn can define their own ORM mappings, schema, etc. The hook point for such extensions can be found in trove.db.sqlalchemy.api in the configure_db() method, which looks like this:
    def configure_db(options, *plugins):
        session.configure_db(options)
        configure_db_for_plugins(options, *plugins)
Here any number of "plugins" can be passed to the configure_db() function, allowing consumers to plug into trove's SQLAlchemy ORM engine. In the current implementation, a plugin is just a Python object which contains a 'mapper' attribute; the mapper defines a map(self, engine) method, which in turn defines ORM mappings atop trove's SQLAlchemy engine.
For example, you could define this simple custom DB plugin / mapper (fictional):

    class Mapper(object):
        def map(self, engine):
            meta = MetaData()
            meta.bind = engine
            if mappers.mapping_exists(my_models.Person):
                return
            orm.mapper(my_models.Person, Table('person', meta, autoload=True))

    class DBPlugins(object):
        def __init__(self):
            self.mapper = Mapper()
which can then be added to trove's ORM using:
    from trove.db import get_db_api
    get_db_api().configure_db(CONF, DBPlugins())
This is all fine, and everything shown above already exists in trove today. However, what's missing is the ability for consumers to pass their plugins in via the main entry points. Currently the entry points do not permit passing any plugins to configure_db(options, *plugins). See: https://github.com/openstack/trove/blob/master/trove/cmd/common.py#L52
What's being proposed here is:

(a) Support a comma-separated list property in the [DEFAULT] section of the trove conf files, e.g.:

    [DEFAULT]
    db_plugins = org.foo.bar.sqlalchemy.BarPlugins,org.yadda.sqlalchemy.MyPlugins
(b) Update the common.py entry point (see link above) to load each of the classes named in CONF.db_plugins as an object and pass them to configure_db().
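As a sketch of (b), the entry point could instantiate each configured class before calling configure_db(). The helper name load_db_plugins and the use of stdlib importlib below are illustrative assumptions, not existing trove code:

```python
import importlib


def load_db_plugins(class_paths):
    """Instantiate each plugin class named by a dotted path.

    Each entry is expected to look like 'package.module.ClassName',
    matching the proposed db_plugins conf values.
    """
    plugins = []
    for class_path in class_paths:
        module_name, _, class_name = class_path.rpartition('.')
        plugin_cls = getattr(importlib.import_module(module_name), class_name)
        plugins.append(plugin_cls())
    return plugins
```

The entry point would then call something along the lines of get_db_api().configure_db(CONF, *load_db_plugins(CONF.db_plugins)).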
Together, (a) and (b) would permit consumers to plug into trove's ORM.
- What is the driving force behind this change?
- Does it allow for greater flexibility? Stability? Security?
- Does this impact any configuration files? If so, which ones?
- Does this impact any existing tables? If so, which ones?
- Are the changes forward and backward compatible?
- Be sure to include the expected migration process
- Does this change any API that an end-user has access to?
- Are there any exceptions in terms of consistency with other APIs?
- What will the command look like?
- Does it extend the existing command interfaces?
- Which HTTP methods were added?
- Which routes were added/modified/extended?
- What does the request body look like?
- What does the response object look like?
- Does this change any internal messages between the API and Task Manager, or between the Task Manager and Guest?
RPC API description
- Method name.
- Method parameters.
- Message type (cast/call).
- Does this change behavior on the Guest Agent? If so, is it backwards compatible with API and Task Manager?