
Sahara/SparkPlugin

Revision as of 07:18, 17 July 2015 by Daniele Venzano (talk | contribs)

Introduction

Spark is a fast and general engine for large-scale data processing.
This blueprint proposes a Sahara provisioning plugin for Spark that can launch and resize Spark clusters and run EDP jobs.

Currently, Spark is used in "standalone" deployment mode: as such, the Spark cluster is suitable for EDP jobs and for individual Spark applications, but it is not intended for a multi-tenant setup. There is currently no support for Mesos- or YARN-based deployments.
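As a sketch of what running an EDP job against such a cluster involves, the snippet below builds the kind of job-execution payload the Sahara EDP API accepts for a Spark job. The `edp.java.main_class` key is the one Sahara uses to name the driver class; the remaining field names are illustrative assumptions rather than the exact REST schema, so check the Sahara API reference before relying on them.

```python
import json

def make_spark_job_execution(cluster_id, main_class, args):
    """Build a job-execution request for a Spark EDP job (illustrative)."""
    return {
        "cluster_id": cluster_id,
        "job_configs": {
            # In standalone mode the driver is submitted to the cluster's
            # Spark master, so only the main class and its arguments are
            # needed here.
            "configs": {"edp.java.main_class": main_class},
            "args": list(args),
        },
    }

payload = make_spark_job_execution(
    "CLUSTER-ID",  # placeholder: the UUID of a running Spark cluster
    "org.apache.spark.examples.SparkPi",
    ["10"],
)
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to the Sahara job-executions endpoint (or built for you by python-saharaclient).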

Supported releases

This plugin only supports a Cloudera-based HDFS (CDH4, CDH5) data layer; this limitation will be addressed in future releases.

The companion Disk Image Builder element, provided with this plugin, by default generates disk images containing Spark and Hadoop versions known to work with the corresponding release of the Spark plugin. The following table shows the supported versions for each OpenStack release:

OpenStack release   Spark version   Hadoop version   Notes
Kilo and previous   1.0.2           CDH4             EDP mostly working; the Swift data source may not work out of the box.
Liberty (planned)   1.3.1 (1.4.0)   CDH 5.3          1.3.1 has been merged, 1.4 is under test, 1.0 has been deprecated.
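For operators, building one of these images typically means invoking the image-creation script from the sahara-image-elements project. The script name and the `-p`/`-s` flags below are assumptions based on that project's conventions, not a verified interface; the command is echoed rather than executed, since the script only exists inside a sahara-image-elements checkout.

```shell
# Versions matching the "Kilo and previous" row of the table above.
SPARK_VERSION="1.0.2"
HADOOP_DISTRO="cloudera"   # this plugin requires a CDH-based HDFS layer

# Hypothetical invocation (check the sahara-image-elements README for
# the real flags before running):
echo "./diskimage-create.sh -p spark -s ${SPARK_VERSION}"
```

The resulting image is then registered with Glance and tagged so that Sahara's Spark plugin can find it.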

Documentation

Status

Bleeding-edge development is done on the Bigfoot project's Sahara page on GitHub. Please check that version for support for more recent Spark releases, bug fixes, and optimizations.

Development is done by Daniele Venzano (Research Engineer at Eurecom) and Pietro Michiardi (Professor at Eurecom). A preliminary version of the plugin was developed with the additional help of two Master's students at Eurecom, Do Huy-Hoang and Vo Thanh Phuc. This work is partially supported by the BigFoot project, an EC-funded research project with grant agreement no. 317858.

Related Resources