Sahara/SparkPlugin

Revision as of 08:42, 22 October 2013

== Introduction ==

[http://spark.incubator.apache.org/ Spark] is an in-memory implementation of MapReduce written in Scala.

[https://blueprints.launchpad.net/savanna/+spec/spark-plugin This blueprint] proposes a Savanna provisioning plugin for Spark that can launch and resize Spark clusters and run EDP jobs.

== Requirements ==

Support is planned for Spark version 0.8.0 and later, since those releases relax the dependencies on Hadoop and HDFS library versions.
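As a rough illustration of how a plugin could enforce this version requirement, here is a minimal, purely hypothetical sketch. It is not the actual Savanna plugin SPI; the class and method names (`SparkProvisioningPlugin`, `get_versions`, `validate`) are assumptions for illustration only.

```python
# Hypothetical sketch -- NOT the real Savanna SPI. It shows a plugin
# advertising its supported Spark versions so that unsupported ones
# can be rejected up front.

class SparkProvisioningPlugin(object):
    """Illustrative plugin skeleton; all names are assumptions."""

    def get_title(self):
        return "Apache Spark"

    def get_versions(self):
        # Only 0.8.0 and later is planned, per the requirement above.
        return ["0.8.0"]

    def validate(self, version):
        # Reject cluster requests for Spark versions the plugin
        # does not support.
        if version not in self.get_versions():
            raise ValueError("Unsupported Spark version: %s" % version)


plugin = SparkProvisioningPlugin()
plugin.validate("0.8.0")   # accepted
try:
    plugin.validate("0.7.3")
except ValueError:
    pass                   # older releases are rejected
```

The real plugin would of course hook into Savanna's provisioning interface rather than a standalone class; the sketch only conveys the version-gating idea.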

== Implementation Notes ==

TBD

== Related Resources ==