Zaqar/Performance/Pilot

Overview
These performance tests were conducted as a pilot to get a rough idea of where the Juno drivers stood and to kick the tires on the new zaqar-bench tool. As such, the test period was fairly short (10 seconds) and increasing load was not graphed against the various metrics.

Benchmark Environment

 * 1x Load Generator
   * Hardware
     * 1x Intel Xeon E5-2680 v2 2.8GHz
     * 32 GB RAM
     * 10Gbps NIC
     * 32GB SATADOM
   * Software
     * Debian Wheezy
     * Python 2.7.3
     * zaqar-bench
 * 1x Web Head
   * Hardware
     * 1x Intel Xeon E5-2680 v2 2.8GHz
     * 32 GB RAM
     * 10Gbps NIC
     * 32GB SATADOM
   * Software
     * Debian Wheezy
     * Python 2.7.3
     * zaqar server
       * storage=mongodb
       * partitions=4
       * MongoDB URI configured with w=majority (see the config sketch below)
     * uWSGI + gevent
       * config: http://paste.openstack.org/show/100592/
       * app.py: http://paste.openstack.org/show/100593/
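
For reference, the zaqar server's storage settings above map to something like the zaqar.conf snippet below. This is a hedged sketch: the section and option names follow the Juno-era mongodb driver, and the hostnames are placeholders rather than the rig's actual values.

  [drivers]
  storage = mongodb

  [drivers:storage:mongodb]
  # Placeholder hostnames; w=majority makes each write wait for
  # acknowledgement from a majority of the replica set
  uri = mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?w=majority
  partitions = 4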

MongoDB

 * 3x MongoDB Nodes
   * Hardware
     * 2x Intel Xeon E5-2680 v2 2.8GHz
     * 128 GB RAM
     * 10Gbps NIC
     * 2x LSI Nytro WarpDrive BLP4-1600
   * Software
     * Debian Wheezy
     * mongod 2.6.4
       * Default config, except setting replSet and enabling periodic logging of CPU and I/O
       * Journaling enabled
       * Profiling on message DBs enabled for requests over 10ms (see the sketch below)
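
The profiling setting can be illustrated with a short pymongo sketch. This assumes a pymongo 2.x-era API, and the host and database names are placeholders, not the actual names used in the rig; on a replica set, profiling is enabled per member.

  import pymongo

  # Placeholder host and database names; pymongo 2.x-era API
  client = pymongo.MongoClient('mongodb://mongo1:27017/')
  db = client['zaqar_messages']  # hypothetical message database name

  # Log a profile entry for any operation slower than 10 ms
  db.set_profiling_level(pymongo.SLOW_ONLY, slow_ms=10)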

Redis

 * 1x Redis Node
   * Hardware
     * 2x Intel Xeon E5-2680 v2 2.8GHz
     * 128 GB RAM
     * 10Gbps NIC
     * 2x LSI Nytro WarpDrive BLP4-1600
   * Software
     * Debian Wheezy
     * Redis 2.4.14
       * Default config (snapshotting and AOF enabled); see the snippet below
       * One process
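
The relevant redis.conf lines would look roughly like the following. The save entries are Redis's stock snapshot schedule; appendonly is off by default, so turning it on is the one change implied by having AOF enabled.

  # Stock snapshot schedule (these are the defaults)
  save 900 1
  save 300 10
  save 60 10000

  # AOF is off by default; enabling it is the one change implied above
  appendonly yes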

Event Broadcasting (Read-Heavy)
OK, so let's say you have a somewhat low-volume source, but tons of event observers. In this case, the observers easily outpace the producers, making this a read-heavy workload.
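
To make the observer role concrete, here is a minimal Python sketch of what each listing request looks like against Zaqar's v1 HTTP API. The endpoint, queue name, and project ID are placeholders, and the actual load was generated with zaqar-bench, not this code.

  import uuid
  import requests

  # Placeholder endpoint, queue, and project ID; Zaqar v1 API, auth disabled
  BASE = 'http://localhost:8888/v1'
  HEADERS = {'Client-ID': str(uuid.uuid4()), 'X-Project-Id': 'bench'}

  # Observer: list up to 5 messages per request without claiming them.
  # echo=true includes messages this client posted itself.
  resp = requests.get(BASE + '/queues/events/messages',
                      params={'limit': 5, 'echo': 'true'},
                      headers=HEADERS)
  if resp.status_code == 200:  # 204 means the queue is currently empty
      for msg in resp.json()['messages']:
          print(msg['body'])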

Benchmark Config

 * 2 producer processes with 25 gevent workers each
   * 1 message posted per request
 * 2 observer processes with 25 gevent workers each
   * 5 messages listed per request
 * Load distributed across 4 queues
 * 10-second duration

Results
 * Redis
   * Producer: 1.7 ms/req, 585 req/sec
   * Observer: 1.5 ms/req, 1254 req/sec
 * Mongo
   * Producer: 2.2 ms/req, 454 req/sec
   * Observer: 1.5 ms/req, 1224 req/sec

Event Broadcasting (Balanced)
This test uses the same number of producers and observers, but note that the observers are still listing (up to) 5 messages at a time, so they still outpace the producers, though not as quickly as before.

Benchmark Config

 * 2 producer processes with 25 gevent workers each
   * 1 message posted per request
 * 2 observer processes with 25 gevent workers each
   * 5 messages listed per request
 * Load distributed across 4 queues
 * 10-second duration

Results
 * Redis
   * Producer: 1.7 ms/req, 585 req/sec
   * Observer: 1.5 ms/req, 1254 req/sec
 * Mongo
   * Producer: 2.2 ms/req, 454 req/sec
   * Observer: 1.5 ms/req, 1224 req/sec

Point-to-Point Messaging
This scenario simulates one client sending messages directly to a different client. Only one queue is required in this case.
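
For context, each producer request here is a single message post. A minimal sketch against the v1 API, again with placeholder endpoint, queue, and project names:

  import uuid
  import requests

  # Placeholder endpoint, queue, and project ID; Zaqar v1 API, auth disabled
  BASE = 'http://localhost:8888/v1'
  HEADERS = {'Client-ID': str(uuid.uuid4()), 'X-Project-Id': 'bench'}

  # Producer: post a single message per request with a 5-minute TTL
  resp = requests.post(BASE + '/queues/p2p/messages',
                       json=[{'ttl': 300, 'body': {'cmd': 'ping'}}],
                       headers=HEADERS)
  resp.raise_for_status()  # expect 201 Created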

Benchmark Config

 * 1 producer process with 1 gevent worker
   * 1 message posted per request
 * 1 observer process with 1 gevent worker
   * 1 message listed per request
 * All load sent to a single queue
 * 10-second duration

Results
 * Redis
   * Producer: 2.9 ms/req, 345 req/sec
   * Observer: 2.9 ms/req, 339 req/sec
 * Mongo
   * Producer: 5.5 ms/req, 179 req/sec
   * Observer: 3.5 ms/req, 278 req/sec

Task Distribution
This test uses several producers and consumers in order to simulate distributing tasks to a worker pool. In contrast to the observer worker type, consumers claim and delete messages in such a way that each message is processed once and only once.
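
The claim-then-delete cycle each consumer follows maps onto the v1 API roughly as below. As before, this is an illustrative sketch with placeholder names, not the benchmark code itself.

  import uuid
  import requests

  # Placeholder host, queue, and project ID; Zaqar v1 API, auth disabled
  HOST = 'http://localhost:8888'
  HEADERS = {'Client-ID': str(uuid.uuid4()), 'X-Project-Id': 'bench'}

  # Consumer: claim up to 5 messages at once. ttl/grace bound how long
  # the claim (and its messages) survive if this worker stalls.
  resp = requests.post(HOST + '/v1/queues/tasks/claims',
                       params={'limit': 5},
                       json={'ttl': 300, 'grace': 60},
                       headers=HEADERS)
  if resp.status_code == 201:  # 204 means there was nothing to claim
      for msg in resp.json():
          # ... process msg['body'] here ...
          # Each href embeds the claim_id, so this delete releases exactly
          # the message this worker claimed.
          requests.delete(HOST + msg['href'], headers=HEADERS)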

Benchmark Config

 * 2 producer processes with 25 gevent workers each
   * 1 message posted per request
 * 2 consumer processes with 25 gevent workers each
   * 5 messages claimed per request, then deleted one by one before claiming the next batch of messages
 * Load distributed across 4 queues
 * 10-second duration

Results
 * Redis
   * Producer: 1.5 ms/req, 1280 req/sec
   * Consumer
     * Claim: 6.9 ms/req
     * Delete: 1.5 ms/req
     * 1257 req/sec (overall)
 * Mongo
   * Producer: 2.5 ms/req, 798 req/sec
   * Consumer
     * Claim: 8.4 ms/req
     * Delete: 2.5 ms/req
     * 813 req/sec (overall)

Auditing / Diagnostics
This test is the same as performed in Task Distribution, but also adds a few observers to the mix.

When testing the Redis driver, the impact of HTTP keep-alive was measured by enabling or disabling it in the uWSGI configuration. In this case, the difference in latency turned out to be negligible, perhaps due to the speed of the test network and the fact that TLS was not used in these tests.
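
Assuming the published uWSGI config uses the built-in HTTP router, the toggle amounts to one line in the ini file. The snippet below is illustrative; only the http-keepalive line is the setting under discussion, and the other values are placeholders.

  [uwsgi]
  ; illustrative values, not the published config
  http = 0.0.0.0:8888
  gevent = 100

  ; the line toggled between runs for this comparison
  http-keepalive = true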

Benchmark Config

 * 2 producer processes with 25 gevent workers each
   * 1 message posted per request
 * 2 consumer processes with 25 gevent workers each
   * 5 messages claimed per request, then deleted one by one before claiming the next batch of messages
 * 1 observer process with 5 gevent workers
   * 5 messages listed per request
 * Load distributed across 4 queues
 * 10-second duration

Results
 * Redis (Keep-Alive)
   * Producer: 1.6 ms/req, 1275 req/sec
   * Consumer
     * Claim: 7.0 ms/req
     * Delete: 1.5 ms/req
     * 1217 req/sec (overall)
   * Observer: 3.5 ms/req, 282 req/sec
 * Redis (No Keep-Alive)
   * Producer: 1.6 ms/req, 1255 req/sec
   * Consumer
     * Claim: 7.0 ms/req
     * Delete: 1.6 ms/req
     * 1202 req/sec (overall)
   * Observer: 3.4 ms/req, 281 req/sec
 * Mongo (Keep-Alive)
   * Producer: 2.2 ms/req, 878 req/sec
   * Consumer
     * Claim: 8.2 ms/req
     * Delete: 2.3 ms/req
     * 876 req/sec (overall)
   * Observer: 7.4 ms/req, 133 req/sec