Sahara/SparkPlugin
Revision as of 08:46, 4 December 2013

Introduction

Spark is an in-memory implementation of MapReduce written in Scala.

This blueprint proposes a Savanna provisioning plugin for Spark that can launch and resize Spark clusters and run EDP jobs.

Requirements

Support is planned for Spark version 0.8.0 and later, since those versions have relaxed dependencies on Hadoop and HDFS library versions. Spark in standalone mode is targeted; there will be no support for the Mesos or YARN modes for now.

Status

Development of the plugin itself has started and is making good progress. In January the code should be ready to be shared on GitHub and sent through the review process.

Development is done by: Do Huy-Hoang and Vo Thanh Phuc

Related Resources

* [[Savanna/PluggableProvisioning/PluginAPI]]
* [https://blueprints.launchpad.net/savanna/+spec/spark-plugin Blueprint]
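As a rough illustration of what such a plugin would implement, here is a minimal Python sketch of a Spark provisioning plugin class. The method names loosely follow the pluggable-provisioning Plugin API linked above, but the base class, signatures, and node-process names shown here are simplified stand-ins for illustration, not Savanna's actual interface.

```python
class ProvisioningPluginBase:
    # Simplified stand-in for Savanna's plugin SPI; the real base class
    # and method signatures live in the savanna codebase (see the
    # PluginAPI wiki page linked under Related Resources).
    def get_versions(self):
        raise NotImplementedError

    def get_node_processes(self, version):
        raise NotImplementedError

    def start_cluster(self, cluster):
        raise NotImplementedError


class SparkProvider(ProvisioningPluginBase):
    """Hypothetical Spark plugin targeting standalone mode only."""

    def get_title(self):
        return "Apache Spark"

    def get_versions(self):
        # Only Spark 0.8.0 and later is planned, because of the relaxed
        # Hadoop/HDFS library dependencies mentioned in Requirements.
        return ["0.8.0"]

    def get_node_processes(self, version):
        # Standalone mode: one Spark master, many workers ("slaves" in
        # Spark 0.8 terminology), with HDFS daemons for storage.
        return {
            "Spark": ["master", "slave"],
            "HDFS": ["namenode", "datanode"],
        }

    def start_cluster(self, cluster):
        # A real implementation would start the HDFS daemons, then the
        # Spark master, then point each worker at the master's URL.
        raise NotImplementedError("provisioning logic not sketched here")
```

A caller (here, Savanna's plugin manager) would query `get_versions` and `get_node_processes` to drive cluster-template validation before invoking `start_cluster` on provisioned VMs.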