
TroveFlavorsPerDatastore

Trove: flavors per datastore.

What is a “flavor”?

Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on.

How does Trove manage flavors?

It doesn’t. Trove passes flavor identifiers received from python-troveclient directly to nova/heat, without any manipulation.

Why does Trove require filtering flavors by datastore?

SQL-based databases hardware requirements

MySQL. Minimum System Requirements:
  1. 2 or more CPU cores
  2. 2 or more GB of RAM
  3. Disk I/O subsystem suitable for a write-intensive database
Recommended System Requirements (if monitoring 100 or more MySQL servers):
  1. 4 or more CPU cores
  2. 8 or more GB of RAM
  3. Disk I/O subsystem suitable for a write-intensive database
PostgreSQL. Minimum Production Requirements:
  1. 64bit CPU
  2. 64bit Operating System
  3. 2 Gigabytes of memory
  4. Dual CPU/Core
  5. RAID 1

NoSQL databases hardware requirements

Cassandra
Choosing appropriate hardware depends on selecting the right balance of the following resources: memory, CPU, disks, number of nodes, and network.
Memory. The more memory a Cassandra node has, the better its read performance. More RAM allows for larger cache sizes and reduces disk I/O for reads. More RAM also allows memory tables (memtables) to hold more recently written data. Larger memtables lead to fewer SSTables being flushed to disk and fewer files to scan during a read. The ideal amount of RAM depends on the anticipated size of your hot data.
  • For dedicated hardware, the optimal price-performance sweet spot is 16GB to 64GB; the minimum is 8GB.
  • For virtual environments, the optimal range may be 8GB to 16GB; the minimum is 4GB.
  • For testing light workloads, Cassandra can run on a virtual machine as small as 256MB.
  • For setting Java heap space.
CPU. Insert-heavy workloads are CPU-bound in Cassandra before becoming memory-bound. (All writes go to the commit log, but Cassandra is so efficient in writing that the CPU is the limiting factor.) Cassandra is highly concurrent and uses as many CPU cores as available:
  • For dedicated hardware, 8-core processors are the current price-performance sweet spot.
  • For virtual environments, consider using a provider that allows CPU bursting, such as Rackspace Cloud Servers.
Disk. Disk space depends a lot on usage, so it's important to understand the mechanism. Cassandra writes data to disk when appending data to the commit log for durability and when flushing memtable to SSTable data files for persistent storage. SSTables are periodically compacted. Compaction improves performance by merging and rewriting data and discarding old data. However, depending on the compaction strategy and size of the compactions, compaction can substantially increase disk utilization and data directory volume. For this reason, you should leave an adequate amount of free disk space available on a node: 50% (worst case) for SizeTieredCompactionStrategy and large compactions, and 10% for LeveledCompactionStrategy.
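The free-space rule above translates into simple capacity arithmetic; a small illustrative sketch (the function name and the headroom table are mine, derived from the 50%/10% guidance, not from Cassandra itself):

```python
# Headroom required by compaction, per the guidance above:
# SizeTieredCompactionStrategy can need up to 50% free disk in the
# worst case; LeveledCompactionStrategy about 10%.
HEADROOM = {
    "SizeTieredCompactionStrategy": 0.50,
    "LeveledCompactionStrategy": 0.10,
}

def usable_capacity_gb(total_disk_gb, strategy):
    """Data a node can safely hold after reserving compaction headroom."""
    return total_disk_gb * (1 - HEADROOM[strategy])
```

So a node with a 1 TB data volume should plan for roughly 500 GB of data under size-tiered compaction, but about 900 GB under leveled compaction.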
MongoDB
Hardware Considerations. MongoDB is designed specifically with commodity hardware in mind and has few hardware requirements or limitations. MongoDB’s core components run on little-endian hardware, primarily x86/x86_64 processors. Client libraries (i.e. drivers) can run on big or little endian systems.
Hardware Requirements and Limitations. The hardware for the most effective MongoDB deployments has the following properties.
Allocate Sufficient RAM and CPU. As with all software, more RAM and a faster CPU clock speed are important for performance. In general, databases are not CPU bound; as such, increasing the number of cores can help, but does not provide a significant marginal return.

Conclusion

As you can see, each database has its own hardware requirements, both minimum and recommended. From the Trove developer’s side I don’t see any problems with limiting provisioning through flavors, but from a user/administrator perspective I’d like, at a minimum, to be notified, and ideally to be blocked from provisioning with an inappropriate flavor that doesn’t fit the minimum requirements.

Datastore base model extension.

Datastore base model should be extended with new column:
   name: FLAVORS 
   type: TEXT
It should contain:
   list of flavors allowed for provisioning
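A minimal sketch of how the new TEXT column could be interpreted in the model layer. Trove’s real models are SQLAlchemy-backed; the class and helper names here are hypothetical stand-ins for the column logic only, assuming the list is stored as a comma-separated string:

```python
# Hypothetical sketch of the FLAVORS TEXT column on the datastore model.
# An empty/NULL value means "all flavors allowed" (see workflow below).
class Datastore:
    def __init__(self, name, flavors_text=None):
        self.name = name
        # FLAVORS column: TEXT, e.g. "1,3,7"
        self.flavors = flavors_text

    def allowed_flavor_ids(self):
        """Return the list of allowed flavor ids, or None if unrestricted."""
        if not self.flavors:
            return None
        return [f.strip() for f in self.flavors.split(",")]

    def allows(self, flavor_id):
        """True if the given nova flavor id may be used with this datastore."""
        allowed = self.allowed_flavor_ids()
        return allowed is None or str(flavor_id) in allowed
```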


Trove core REST API extension

Description

HTTP method: GET
URL: /{tenant_id}/flavors/datastore/{id}

Request body

       EMPTY

Response object

   "flavors": {
       "flavor_1": {
           "id": "INT",
           "links": "links",
           "name": "xlarge",
           "ram": "8Gb"
       },
       "flavor_2": {
           "id": "INT",
           "links": "links",
           "name": "x-super-large",
           "ram": "32Gb"
       }
   }
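Server-side, this call amounts to filtering the nova flavor list against the datastore’s allowed ids. A hedged sketch of that filtering step (function name and dict shapes are assumptions, not actual Trove code):

```python
def filter_flavors_for_datastore(all_flavors, allowed_ids):
    """Filter nova flavors down to those allowed for a datastore.

    all_flavors -- list of flavor dicts as nova might return them,
        e.g. {"id": "7", "name": "xlarge", "ram": 8192, "links": []}
    allowed_ids -- flavor ids from the datastore's FLAVORS column;
        None or empty means every flavor is allowed.
    """
    if not allowed_ids:
        return {"flavors": list(all_flavors)}
    wanted = set(allowed_ids)
    return {"flavors": [f for f in all_flavors if f["id"] in wanted]}
```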


Trove-manage util extension

Suggestion:
trove-manage datastore-flavor-add datastore_id_or_name flavor_id_or_name
trove-manage datastore-flavor-delete datastore_id_or_name flavor_id_or_name

or

trove-manage datastore-flavor-assign datastore_id_or_name flavor_id_or_name
trove-manage datastore-flavor-unassign datastore_id_or_name flavor_id_or_name
Where:
  • datastore_id_or_name - the actual id/name of the datastore to which the flavor reference will be attached
  • flavor_id_or_name - the actual id/name of the nova flavor that will be attached to the datastore


Python-troveclient extension

Suggestion:

trove flavor-list --datastore UUID_or_name

Workflow elaboration

If the “flavors” field is empty in a given datastore model, all flavors are allowed for provisioning.
If a wrong flavor is passed, Trove should raise an exception with an appropriate message, something like: “Flavor is not allowed for this datastore.”
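The two workflow rules above could look roughly like this; the exception class and function names are assumptions for illustration, not existing Trove symbols:

```python
class FlavorNotAllowed(Exception):
    """Raised when a flavor is not permitted for the chosen datastore."""

def validate_flavor(allowed_ids, requested_flavor_id):
    """Check a requested flavor against the datastore's allowed list.

    allowed_ids -- ids from the datastore's "flavors" field;
        None or empty means every flavor is allowed.
    Raises FlavorNotAllowed if the flavor is not permitted.
    """
    if allowed_ids and str(requested_flavor_id) not in allowed_ids:
        raise FlavorNotAllowed(
            "Flavor %s is not allowed for this datastore."
            % requested_flavor_id)
```

The same check would run for all three iterations below: listing, provisioning, and resize.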

Iterations

Iteration 1: filtering flavors per datastore.
Iteration 2: flavor check when provisioning a datastore.
Iteration 3: flavor check on the instance resize action.