MagnetoDB/specs/streamingbulkload

[Draft] MagnetoDB Streaming Bulk Load workflow and API

Workflow

This page describes the process of loading large amounts of data into MagnetoDB.

Before uploading data, one should first make sure that the destination table exists.

Data is uploaded in one streaming HTTP request.

URL

POST v1/{project_id}/data/tables/{table_name}/bulk_load

Headers

  • User-Agent
  • Content-Type: application/json
  • Accept: application/json
  • X-Auth-Token: Keystone auth token
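
As an illustration, the request could be issued from Python with the requests library. The endpoint, project ID, table name, token, and file name below are placeholders, and bulk_load() is a hypothetical helper, not part of any MagnetoDB client.

import requests

MAGNETODB_ENDPOINT = "http://magnetodb.example.com:8480"  # placeholder endpoint

def bulk_load(items_file, project_id, table_name, token):
    # Stream a file of '\n'-separated JSON items to the bulk_load URL.
    url = "%s/v1/%s/data/tables/%s/bulk_load" % (
        MAGNETODB_ENDPOINT, project_id, table_name)
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "X-Auth-Token": token,
    }
    with open(items_file, "rb") as stream:
        # Passing a file object makes requests stream the request body
        # instead of loading the whole file into memory.
        response = requests.post(url, data=stream, headers=headers)
    return response.json()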

Request Syntax

The data stream is plain text containing a '\n'-separated sequence of JSON representations of the items to be inserted.

{ "attribute_name": { "attribute_type": "attribute_value"}, "attribute_name2": { "attribute_type": "attribute_value"}...}
{ "attribute_name": { "attribute_type": "attribute_value"}, "attribute_name2": { "attribute_type": "attribute_value"}...}

Response Syntax

{
    "read": <number>,
    "processed": <number>,
    "unprocessed": <number>,
    "failed": <number>,
    "last_item": <string>,
    "failed_items": {
            "item": <string>,
            "item": <string>,
            ...
    }
}
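
An illustrative response for a partially failed upload might look like this; all values are invented, and reading each failed_items entry as an item paired with its error message is an interpretation of the syntax above.

{
    "read": 1000,
    "processed": 996,
    "unprocessed": 2,
    "failed": 2,
    "last_item": "{ \"user_id\": { \"S\": \"user996\" }, ... }",
    "failed_items": {
            "{ \"user_id\": { \"S\": \"user997\" }, ... }": "error message",
            "{ \"user_id\": { \"S\": \"user998\" }, ... }": "error message"
    }
}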

In case of error, the incoming data stream will continue to be read, but the received items won't be processed. The response will contain the counts of received, processed (successfully inserted), unprocessed, and failed items, the last processed item, and error messages for the failed items. Because received items are processed asynchronously, 'PutItem' operations for several items may already be enqueued when an error is found. In that case the server waits for the results of all enqueued operations. Some of those results may be errors as well, so the response may contain more than one error.
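
As a closing sketch, a client could inspect those counters and surface the per-item errors, reusing the hypothetical bulk_load() helper above; treating failed_items as an item-to-error-message mapping is an assumption based on the syntax shown.

result = bulk_load("items.json", "my-project", "my_table", "my-token")

print("read=%(read)d processed=%(processed)d "
      "unprocessed=%(unprocessed)d failed=%(failed)d" % result)

if result["failed"] > 0:
    # Assumption: failed_items maps each failed item's JSON
    # representation to its error message.
    for item, error in result["failed_items"].items():
        print("failed: %s -> %s" % (item, error))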