Sahara/SparkPlugin

Introduction
Spark is a fast and general engine for large-scale data processing. This blueprint proposes a Sahara provisioning plugin for Spark that can launch and resize Spark clusters and run EDP jobs.

Currently, Spark is used in "standalone" deployment mode: as such, the Spark cluster is suitable for EDP jobs and for individual Spark applications (the cluster is not intended for a multi-tenant setup). There is currently no support for Mesos- or YARN-based deployments.
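As a rough illustration of what provisioning a standalone Spark cluster involves, the sketch below builds the kind of cluster definition a Sahara provisioning plugin consumes: one master node group (Spark master plus HDFS namenode) and one worker group (Spark slaves plus HDFS datanodes). All field values, process names, and IDs here are illustrative assumptions for this plugin, not confirmed API values.

```python
import json

# Hypothetical cluster-create payload for the Spark plugin, following the
# general shape of Sahara's provisioning API. Process names, version
# strings, and IDs are illustrative assumptions, not confirmed values.
cluster_request = {
    "name": "spark-cluster",
    "plugin_name": "spark",
    "hadoop_version": "1.0.0",          # assumed plugin version string
    "default_image_id": "IMAGE-UUID",   # image produced by the DIB element
    "node_groups": [
        {
            "name": "master",
            "flavor_id": "FLAVOR-UUID",
            "count": 1,
            # Standalone mode: Spark master co-located with the HDFS namenode.
            "node_processes": ["namenode", "master"],
        },
        {
            "name": "workers",
            "flavor_id": "FLAVOR-UUID",
            "count": 3,
            # Each worker runs a Spark slave next to an HDFS datanode.
            "node_processes": ["datanode", "slave"],
        },
    ],
}

print(json.dumps(cluster_request, indent=2))
```

Resizing a cluster then amounts to changing a node group's `count` and asking Sahara to scale, which is why the worker processes are kept in their own group, separate from the master.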

Supported releases
This plugin only supports a Cloudera-based HDFS (CDH4, CDH5) data layer, but this limitation will be addressed by future releases.

The companion Disk Image Builder element, provided with this plugin, generates by default disk images containing Spark and Hadoop versions known to work with the corresponding release of the Spark plugin. The following table shows the supported versions for each OpenStack release:

Documentation

 * How to use the Spark plugin: Sahara/SparkPluginNotes
 * Notes about the changes to sahara-image-elements: Sahara/SparkImageBuilder

Status
Bleeding-edge development is done on the Bigfoot project Sahara page on GitHub. Please check that version for support for more recent Spark versions, bug fixes, and optimizations.

Development is done by Daniele Venzano (Research Engineer at Eurecom) and Pietro Michiardi (Prof. at Eurecom). A preliminary version of the plugin was developed with the additional help of two Master students at Eurecom, Do Huy-Hoang and Vo Thanh Phuc. This work is partially supported by the BigFoot project, an EC-funded research project with grant agreement no. 317858.

Related Resources

 * Sahara/PluggableProvisioning/PluginAPI
 * Blueprint