Zaqar/pecan-evaluation

(Page moved from Marconi/pecan-evaluation to Zaqar/pecan-evaluation as part of the project rename. Latest revision as of 18:42, 7 August 2014.)


Disclaimer: I work for Rackspace, but have no vested interest in either of these frameworks. I'm new to the OpenStack ecosystem and had little to no exposure to these frameworks prior to this evaluation. Both Ryan Petrello (Pecan maintainer) and Kurt Griffiths (Falcon maintainer) have been very helpful in making me understand some of the key functional elements of these frameworks and in helping me write code following best practices; without them this evaluation wouldn't have been possible.

My evaluation of these frameworks was inspired by this quote -

"When we compare tools we should be careful. Compare for understanding, not for judgement." (Henrik Kniberg, Crisp)

My experience and observations

Below are my experiences and observations from working with Pecan and Falcon; they are fairly subjective. Your mileage may vary.

Pecan is an excellent micro-framework for writing a full-fledged web application. There is plenty of support for templating, error pages, serialization, extensions, etc. However, when I looked at how I might use Pecan for Marconi, I realized I would be using Pecan to set up a ReST API and would not be using many of the features that Pecan offers out of the box.

Getting started with Pecan was fairly easy. Pecan's scaffolding feature enabled me to set up a project very easily, set up initial routes using the object-routing technique, and got me going straight away with a standalone app. However, I had to spend a lot of time trying to fit Pecan into the Marconi architecture.

Pecan also has elegant ways of creating routes and endpoints, but routing can be accomplished in more than one way - object-based routing, _lookup, subcontrollers - and, if one is not careful, these can step on each other and override one another.

On the other hand, Falcon appears to have been written from the ground up with only ReSTful APIs in mind, and it feels 'natural' to create routes and endpoints and bind them to HTTP verbs. I got started with Falcon fairly quickly, with minimal help, because it felt intuitive to write a ReST API in it. Falcon has nice little utilities that helped me parse query strings into the required datatypes and specify required parameters, which was really handy.

Pecan and Falcon both have fairly good documentation (on readthedocs), though I found Pecan's to be more mature. Reading the docs and the source code helped me wade through and understand these frameworks' features easily.

Pecan pulls in quite a few dependencies (WebOb, Mako, WebTest, singledispatch, MarkupSafe, waitress, beautifulsoup4), some of them second-level dependencies, in contrast to Falcon's (six, python-mimeparse). Because of the dependencies Pecan brings in, I believe Pecan has a larger attack surface. Relatively speaking, Falcon's request and response objects are a bit more predictable and secure, since the framework presents a smaller attack surface.

Since Pecan is a WebOb wrapper, some of the error messages I encountered came from the underlying WebOb objects and were a little difficult to debug. While inspecting the Pecan source code to troubleshoot those issues, I decided to look at the test coverage and code complexity of both frameworks. Pecan has fairly extensive test coverage of its framework; the coverage report I pulled showed 94%. On code complexity (McCabe score), Pecan had at least 7 instances where a segment of code had more than 8 linearly independent paths. Falcon's test coverage clocked in at 100%, and there was only one instance of a McCabe score going over 7.
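For readers who want to reproduce numbers like these: the McCabe score of a piece of code is essentially one plus the number of decision points in it. A rough approximation can be computed with the standard library alone (dedicated tools such as flake8's mccabe plugin count a slightly richer set of node types; this is only a sketch):

```python
# Rough approximation of a McCabe (cyclomatic complexity) score:
# 1 + the number of decision points found in the parsed source.
import ast

# Node types treated as decision points in this sketch; real
# complexity checkers use a slightly richer set.
_BRANCHES = (ast.If, ast.IfExp, ast.For, ast.While,
             ast.ExceptHandler, ast.BoolOp)


def mccabe_score(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCHES)
                   for node in ast.walk(tree))
```

Straight-line code scores 1; each additional if, loop, except handler, or boolean operator adds one linearly independent path.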

Pecan's extensibility is impressive -- it has extensible templating support, security support, JSON extensions, etc. It took a bit of work to fit Pecan into the existing Marconi architecture, but I found it very adaptable. Since Marconi is already written in Falcon, I instead tried fitting Falcon into a ReST API that was written in Bottle, and I was able to do that with minimal fuss.

Pecan seems to be very popular in the OpenStack community and has been battle-tested well. Several OpenStack projects use Pecan, and it has a good community behind it. Getting help with Pecan was fairly easy. Falcon is becoming popular, has a fair number of downloads from PyPI, and is gaining interest in the larger Python community; Falcon's GitHub forks and stars are a good indication.

When it came to performance, I found Falcon to be faster than Pecan on multiple occasions. In particular, Falcon performed better at high concurrency rates and when the application had data to serialize. Performance reports are available in the annexure.


I believe that major decisions need to be made based on a well-defined set of criteria, and the evaluation should be transparent and documented for future reference. It must also be recognized that different criteria will carry different weights depending on the requirements of a specific project.

To that end, I created a decision matrix that would help choose a framework for Marconi.

Criteria | Weight (out of 10) | What it means | Pecan score (out of 10) | Falcon score (out of 10) | Pecan weighted score | Falcon weighted score
Performance | 8 | More is better | 6 | 9 | 48 | 72
Dependencies | 7 | How many dependencies the framework brings in; fewer is better | 5 | 8 | 35 | 56
Documentation | 7 | More documentation covering possible use cases is better | 9 | 7 | 63 | 49
Developer Joy | 5 | Learnability, productivity, and general ease of use; easier is better | 7 | 8 | 35 | 40
Attack Surface | 6 | How much framework code, how many interfaces and dependencies; a smaller attack surface is better | 5 | 8 | 30 | 48
Extensibility & Adaptability | 5 | How easy it is to plug the framework into an existing architecture, and how easily it can be extended with new features | 5 | 5 | 25 | 25
Framework Maturity | 7 | Is the framework battle-tested: test coverage, outstanding bug fixes, etc.; higher is better | 8 | 7 | 56 | 49
ReST Friendliness | 6 | Creating routes, HTTP correctness, customizing request/response headers, serialization, etc.; easier (higher) is better | 6 | 8 | 36 | 48
Community | 5 | OpenStack and larger Python community support around the framework, how easy it is to get answers, etc.; higher is better | 7 | 5 | 35 | 25
Total Score | | | | | 363 | 412
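As a sanity check on the table's arithmetic, the weighted totals can be recomputed directly from the weights and raw scores (all numbers below are copied from the matrix above; only the helper code is new):

```python
# Recompute the decision-matrix totals: weighted score = weight * raw score.
criteria = [
    # (criterion, weight, pecan_score, falcon_score)
    ("Performance",                  8, 6, 9),
    ("Dependencies",                 7, 5, 8),
    ("Documentation",                7, 9, 7),
    ("Developer Joy",                5, 7, 8),
    ("Attack Surface",               6, 5, 8),
    ("Extensibility & Adaptability", 5, 5, 5),
    ("Framework Maturity",           7, 8, 7),
    ("ReST Friendliness",            6, 6, 8),
    ("Community",                    5, 7, 5),
]

pecan_total = sum(w * p for _, w, p, _ in criteria)
falcon_total = sum(w * f for _, w, _, f in criteria)
print(pecan_total, falcon_total)  # 363 412
```

Adjusting a weight in the list shows immediately how sensitive the final ranking is to that criterion.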


For an application like Marconi, where throughput and latency are of paramount importance, I recommend Falcon over Pecan. Falcon also offers a cleaner way to implement ReST APIs. Pecan definitely offers lots of features and flexibility, but looking at how few of Pecan's features would be used in Marconi, Pecan is probably overkill.


Source Code

I implemented the Queues controller using Pecan for my evaluation; the source code can be found here.

Benchmarks using Autobench


  • Rackspace Cloud Servers
  • Pecan and Falcon running under gunicorn, 8 workers
  • Test involved issuing GET requests to the /queues endpoint
  • Autobench master - collects results from workers; no direct impact on performance, as the workers are the ones who talk to the API
      - 8 GB standard flavor - shouldn't affect performance, since results are determined by the workers
  • Autobench configuration file
  • Autobench workers (6 workers)
      - pulled latest httperf code from svn
      - pulled latest autobench code from git
      - 8 GB Performance flavor
      - 8 vCPUs
      - 8 GB memory
      - 1.6 Gb/s network
  • MongoDB
      - 15 GB Performance flavor
      - 4 vCPUs
      - 15 GB memory
      - 1.3 Gb/s network
  • Marconi (separate users for Falcon and Pecan - virtual envs created for each implementation)
      - 30 GB Performance flavor
      - 8 vCPUs
      - 30 GB memory
      - 2.5 Gb/s network

Benchmark Results

Latency: Empty Dataset

Latency-empty.png Latency-empty-zoomed.png 

Latency: Small Dataset returned(~200 Bytes)

Latency-small.png Latency-small-zoomed.png

Latency: Large Dataset returned (~ 2 KB)

Latency-large.png Latency-large-zoomed.png

Throughput: Empty Dataset


Throughput: Small Dataset (~200 Bytes)


Throughput: Large Dataset (~2 KB)


Benchmarks using Apache Bench


  • Marconi, MongoDB, and Apache Bench running on a single Rackspace Cloud Server; 50 iterations, using the shortest time for latency and the highest values for throughput
  • Marconi hosted via gunicorn, 1 worker. Test involved issuing GET requests to the /queues endpoint
      - 1 GB Next Gen instance
      - Ubuntu 13.04
      - Bandwidth 120 Mbps

Benchmark Results

Throughput (reqs/sec), empty dataset, no serialization

Pecan Vs. Falcon (Concurrency Level 5): No-serialization-throughput.png
Pecan Vs. Falcon (Concurrency Level 10): No-serialization-conlevel10-throughput.png

Throughput (reqs/sec), small dataset (~1 KB)

Pecan Vs. Falcon (Concurrency Level 5): Serialization-conlevel5-throughput.png
Pecan Vs. Falcon (Concurrency Level 10): Serialization-conlevel10-throughput.png

Latency (ms/req), empty dataset

Pecan Vs. Falcon (Concurrency Level 5): Noserialization-conlevel5-responsetime.png
Pecan Vs. Falcon (Concurrency Level 10): Noserialization-con10-responsetime.png

Latency (ms/req), small dataset (~1 KB)

Pecan Vs. Falcon (Concurrency Level 5): Serialization-con5-responsetime.png
Pecan Vs. Falcon (Concurrency Level 10): Serialization-con10-responsetime.png

Benchmarks using Tsung


  • Marconi and MongoDB hosted on a Rackspace Cloud Server
      - 4 GB Performance instance
      - 2 vCPUs
      - Ubuntu 13.10
      - Bandwidth 800 Mbps
  • Tsung running on a separate Rackspace Cloud Server
      - 8 GB Performance instance
      - 4 vCPUs
      - Ubuntu 12.04 LTS
      - Bandwidth 1600 Mbps
  • Test involves creating and deleting queues at different concurrency rates (user arrival rate, defined as users per second). 5 iterations; the lowest latency and highest throughput were chosen.
  • Tsung configuration
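The Tsung configuration file itself is not reproduced on this page; the sketch below shows the general shape such a create/delete-queues scenario could take. The hostname, port, arrival rate, phase duration, and queue name are all illustrative assumptions, not the values used in the benchmark:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <!-- hostname and port are placeholders -->
    <server host="marconi.example.org" port="8888" type="tcp"/>
  </servers>
  <load>
    <!-- arrival rate: 10 new users per second for 60 seconds (illustrative) -->
    <arrivalphase phase="1" duration="60" unit="second">
      <users arrivalrate="10" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="create-delete-queue" probability="100" type="ts_http">
      <request><http url="/v1/queues/demo" method="PUT" version="1.1"/></request>
      <request><http url="/v1/queues/demo" method="DELETE" version="1.1"/></request>
    </session>
  </sessions>
</tsung>
```

Varying the arrivalrate across runs is what produces the different concurrency rates mentioned above.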

Benchmark Results

Throughput (reqs/sec), create queues


Throughput (reqs/sec), delete queues


Latency (ms/req), create queues


Latency (ms/req), delete queues


McCabe Score Results

Pecan: Pecan-mccabescore.png
Falcon: Falcon-mcacbescore.png