MagnetoDB/specs/async-schema-operations
Revision as of 14:44, 18 September 2014
Problem Description
A large number of concurrent create/delete table operations puts a huge load on Cassandra. In fact, the schema agreement process for some of these calls may take so long that it results in timeout errors, leaving the corresponding tables stuck in the CREATING/DELETING state forever.
Proposed Change
QueuedStorageManager
Implement a QueuedStorageManager that, rather than executing create/delete table calls directly, enqueues them to the MQ shipped with OpenStack via oslo.messaging.rpc. It should use non-blocking calls. RPC calls should include only the request context (as a dictionary) and the table name. All other information about the create/delete table parameters (table schema, etc.) should be retrieved from table_info_repo. If an error occurs while creating/deleting a table on the RPC server side, the status of the corresponding table should be set to ERROR.
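The enqueue-instead-of-execute pattern can be sketched as follows. This is illustrative only: the class and method names are assumptions, and an in-process queue stands in for the real non-blocking oslo.messaging cast.

```python
import queue

# Illustrative stand-in for the message queue; the real manager would do a
# non-blocking oslo.messaging cast to the "schema" topic on the "magnetodb"
# exchange instead of using an in-process queue.
schema_queue = queue.Queue()

class QueuedStorageManager:
    """Enqueues create/delete table requests instead of executing them."""

    def create_table(self, context, table_name):
        # Only the request context and the table name travel with the call;
        # the table schema is retrieved from table_info_repo on the server side.
        schema_queue.put(("create", context, table_name))

    def delete_table(self, context, table_name):
        schema_queue.put(("delete", context, table_name))

mgr = QueuedStorageManager()
mgr.create_table({"tenant": "demo"}, "users")
op, ctx, name = schema_queue.get()
print(op, name)  # prints: create users
```

Because the call returns as soon as the request is enqueued, the API node never blocks on schema agreement; the table simply stays in CREATING/DELETING until a processor picks the request up.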
Magnetodb Schema Processor
Introduce a separate executable, "magnetodb-schema-processor", that will run a blocking RPC server executing create/delete table requests strictly one by one. The number of simultaneously running processes will effectively define the maximum allowed number of concurrent create/delete table requests.
Additionally, a table status update time should be introduced. Each table status change should update that attribute as well. During a describe table call this attribute should be analyzed to determine whether the table has been in CREATING or DELETING status for a long time. If so, its status should be changed to ERROR.
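The staleness check described above can be sketched like this. The function name and the 30-minute threshold are assumptions; in practice the threshold would presumably be configurable.

```python
from datetime import datetime, timedelta

# Threshold after which a stuck CREATING/DELETING table is demoted to ERROR;
# the value here is an assumption and would be configurable in practice.
STALE_THRESHOLD = timedelta(minutes=30)

def effective_status(status, status_updated_at, now=None):
    """Analyze the status update time during a describe table call."""
    now = now or datetime.utcnow()
    if status in ("CREATING", "DELETING") and now - status_updated_at > STALE_THRESHOLD:
        return "ERROR"
    return status

# A table stuck in CREATING for an hour is reported as ERROR;
# a freshly updated one keeps its status.
stuck = effective_status("CREATING", datetime.utcnow() - timedelta(hours=1))
fresh = effective_status("CREATING", datetime.utcnow())
print(stuck, fresh)  # prints: ERROR CREATING
```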
RPC settings
- control_exchange: magnetodb
- amqp_durable_queues: True
- topic: schema
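As a sketch, the first two settings map onto oslo.messaging configuration options, while the topic is usually attached to the RPC target in code rather than set in the config file (exact section placement may vary by oslo.messaging version):

```ini
# magnetodb-api.conf (illustrative sketch)
[DEFAULT]
control_exchange = magnetodb
amqp_durable_queues = True
# the "schema" topic is passed to the oslo.messaging Target in code
```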
RPC calls
- create(context, table_name)
- delete(context, table_name)
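The server side of these calls could look roughly like the sketch below. The class name, repo shape, and backend hook are assumptions, not the actual MagnetoDB code; the point is that the schema comes from the repo rather than the RPC payload, and any failure ends in ERROR status.

```python
class TableInfo:
    """Minimal stand-in for a table_info_repo record."""
    def __init__(self, name, schema):
        self.name, self.schema, self.status = name, schema, "CREATING"

class SchemaEndpoint:
    """Sketch of the RPC endpoint run by magnetodb-schema-processor.
    A blocking RPC server would invoke these methods strictly one by one."""

    def __init__(self, table_info_repo, backend_create):
        self._repo = table_info_repo            # here: a plain dict name -> TableInfo
        self._backend_create = backend_create   # stand-in for the Cassandra call

    def create(self, context, table_name):
        # The schema is fetched from the repo, not passed over RPC.
        info = self._repo[table_name]
        try:
            self._backend_create(info.schema)
            info.status = "ACTIVE"
        except Exception:
            info.status = "ERROR"  # per the spec: failures end in ERROR

repo = {"users": TableInfo("users", {"id": "string"})}
SchemaEndpoint(repo, lambda schema: None).create({}, "users")
ok_status = repo["users"].status

def boom(schema):
    raise RuntimeError("schema agreement timeout")

repo["bad"] = TableInfo("bad", {})
SchemaEndpoint(repo, boom).create({}, "bad")
err_status = repo["bad"].status
print(ok_status, err_status)  # prints: ACTIVE ERROR
```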
Notifications Impact
The RPC server should send notifications on table creation/deletion start, end, and error.
Other End User Impact
TBD
Performance Impact
Create/delete table operations are expected to be slower but much more reliable.
Deployment Impact
- Magnetodb Schema Processor should be deployed to a separate node or to one of the MagnetoDB API nodes.
- QueuedStorageManager should be used as the storage manager in "magnetodb-api.conf" for each MagnetoDB API instance.
- Oslo.Messaging should be configured in "magnetodb-api.conf" for each MagnetoDB API instance.
Developer Impact
None
Assignee(s)
Primary assignee:
<ikhudoshyn>
Other contributors:
<None>
Work Items
- implement QueuedStorageManager
- implement Magnetodb Schema Processor
- update devstack integration scripts
Dependencies
Oslo.Messaging (https://wiki.openstack.org/wiki/Oslo/Messaging)
Documentation Impact
Magnetodb Schema Processor deployment should be covered in the corresponding documentation (TBD).
References
Blueprint on Launchpad (https://blueprints.launchpad.net/magnetodb/+spec/async-schema-operations)