Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
A name for your application, to display on the cluster web UI
The SPARK_HOME directory on the slave nodes
Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
Environment variables to set on worker nodes
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
A name for your application, to display on the cluster web UI
The SPARK_HOME directory on the slave nodes
Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
A name for your application, to display on the cluster web UI
The SPARK_HOME directory on the slave nodes
JAR file to send to the cluster. This can be a path on the local file system or an HDFS, HTTP, HTTPS, or FTP URL.
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
A name for your application, to display on the cluster web UI
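The parameter lists above correspond to the JavaSparkContext constructors. A minimal sketch of creating a context (the master URL, application name, Spark home, and JAR path are placeholder values):

    import org.apache.spark.api.java.JavaSparkContext;

    public class Example {
      public static void main(String[] args) {
        // Connect to a standalone cluster; the listed JAR is shipped to the executors.
        JavaSparkContext sc = new JavaSparkContext(
            "spark://host:7077",                  // cluster URL (placeholder)
            "MyApp",                              // name shown on the cluster web UI
            "/opt/spark",                         // SPARK_HOME on the worker nodes (placeholder)
            new String[] {"target/my-app.jar"});  // JARs to send to the cluster (placeholder)
        sc.stop();                                // shut down the context when done
      }
    }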
Create an Accumulable shared variable of the given type, to which tasks can "add" values with the add method. Only the master can access the accumulable's value.
Create an Accumulator variable of a given type, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.
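A short sketch of the accumulator pattern described above, assuming an existing JavaSparkContext named sc; tasks may only add to the accumulator, and only the master (driver) reads its value:

    import org.apache.spark.Accumulator;
    import org.apache.spark.api.java.JavaRDD;
    import java.util.Arrays;

    // Integer accumulator starting at 0.
    Accumulator<Integer> count = sc.intAccumulator(0);

    JavaRDD<Integer> data = sc.parallelize(Arrays.asList(1, 2, 3, 4));
    data.foreach(x -> count.add(1));    // each task "adds" to the accumulator

    // Only the master can read the accumulated value.
    System.out.println(count.value()); // 4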
Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(path) to find its download location.
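A sketch of pairing addFile with SparkFiles.get, assuming an existing context sc; the file path and name are placeholders:

    import org.apache.spark.SparkFiles;

    // Ship a small data file to every node.
    sc.addFile("hdfs://namenode:8020/data/lookup.txt");

    // Inside a task, resolve where the file was downloaded locally.
    String localPath = SparkFiles.get("lookup.txt");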
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.
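For example, assuming an existing context sc (the JAR location is a placeholder):

    // Make the classes in this JAR available to all future tasks on this context.
    sc.addJar("hdfs://namenode:8020/libs/my-udfs.jar");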
Broadcast a read-only variable to the cluster, returning an org.apache.spark.broadcast.Broadcast object for reading it in distributed functions. The variable will be sent to each node only once.
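A sketch of the broadcast pattern, assuming an existing context sc; the lookup map is an illustrative value:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.broadcast.Broadcast;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    // Build a read-only lookup table on the driver and broadcast it once.
    Map<String, Integer> lookup = new HashMap<>();
    lookup.put("a", 1);
    lookup.put("b", 2);
    Broadcast<Map<String, Integer>> bc = sc.broadcast(lookup);

    // Tasks read the broadcast value instead of capturing the map in every closure.
    JavaRDD<Integer> ids = sc.parallelize(Arrays.asList("a", "b", "a"))
        .map(key -> bc.value().get(key));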
Clear the job's list of files added by addFile so that they do not get downloaded to any new nodes.
Clear the job's list of JARs added by addJar so that they do not get downloaded to any new nodes.
Create an Accumulator double variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference). If none of these is set, return None.
Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.
Get an RDD for a Hadoop file with an arbitrary InputFormat
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
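A sketch of reading a file through the old Hadoop API (org.apache.hadoop.mapred), assuming an existing context sc and a placeholder path:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.spark.api.java.JavaPairRDD;

    // Read a text file with an explicit InputFormat; keys are byte offsets into the file.
    JavaPairRDD<LongWritable, Text> lines = sc.hadoopFile(
        "hdfs://namenode:8020/data/input.txt",
        TextInputFormat.class, LongWritable.class, Text.class);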
Create an Accumulator integer variable, which tasks can "add" values to using the add method. Only the master can access the accumulator's value.
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
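The new-API variant takes an InputFormat from org.apache.hadoop.mapreduce plus a Hadoop Configuration carrying any extra options; a sketch, assuming an existing context sc and a placeholder path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaPairRDD;

    Configuration conf = new Configuration();
    JavaPairRDD<LongWritable, Text> lines = sc.newAPIHadoopFile(
        "hdfs://namenode:8020/data/input.txt",
        TextInputFormat.class, LongWritable.class, Text.class, conf);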
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
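A sketch of the round trip, assuming an existing context sc and a placeholder path; saveAsObjectFile on an RDD writes the format that objectFile reads back:

    import org.apache.spark.api.java.JavaRDD;
    import java.util.Arrays;

    // Write an RDD of serialized Java objects, then load it again.
    JavaRDD<String> words = sc.parallelize(Arrays.asList("alpha", "beta"));
    words.saveAsObjectFile("hdfs://namenode:8020/tmp/words_obj");

    JavaRDD<String> restored = sc.objectFile("hdfs://namenode:8020/tmp/words_obj");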
Distribute a local Scala collection to form an RDD.
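A sketch of turning local Java collections into RDDs, assuming an existing context sc:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import java.util.Arrays;
    import scala.Tuple2;

    // Simple collection, with an explicit number of partitions (slices).
    JavaRDD<Integer> nums = sc.parallelize(Arrays.asList(1, 2, 3, 4), 2);

    // Key/value pairs go through parallelizePairs and yield a JavaPairRDD.
    JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(
        Arrays.asList(new Tuple2<>("a", 1), new Tuple2<>("b", 2)));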
Get an RDD for a Hadoop SequenceFile.
Get an RDD for a Hadoop SequenceFile with given key and value types.
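A sketch for a SequenceFile of Text keys and IntWritable values, assuming an existing context sc and a placeholder path:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.spark.api.java.JavaPairRDD;

    // The key and value classes must match what was written into the SequenceFile.
    JavaPairRDD<Text, IntWritable> counts = sc.sequenceFile(
        "hdfs://namenode:8020/data/counts.seq", Text.class, IntWritable.class);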
Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster. If the directory does not exist, it will be created. If the directory exists, an exception will be thrown to prevent accidental overriding of checkpoint files.
Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster. If the directory does not exist, it will be created. If the directory exists and useExisting is set to true, then the existing directory will be used. Otherwise an exception will be thrown to prevent accidental overriding of checkpoint files in the existing directory.
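For example, assuming an existing context sc and placeholder paths, the checkpoint directory is set once before any RDD is checkpointed:

    import org.apache.spark.api.java.JavaRDD;

    // Must be an HDFS path when running on a cluster.
    sc.setCheckpointDir("hdfs://namenode:8020/checkpoints/my-app");

    JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/data/input.txt");
    lines.checkpoint();   // written out after a job computes this RDD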
Shut down the SparkContext.
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
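A sketch, assuming an existing context sc and a placeholder path:

    import org.apache.spark.api.java.JavaRDD;

    // Each element of the resulting RDD is one line of the file.
    JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/data/input.txt");
    System.out.println(lines.count());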
Build the union of two or more RDDs.
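The JavaSparkContext union overloads combine several RDDs of the same element type at once; the RDD-level union shown in this sketch is the two-argument equivalent (assuming an existing context sc):

    import org.apache.spark.api.java.JavaRDD;
    import java.util.Arrays;

    JavaRDD<Integer> a = sc.parallelize(Arrays.asList(1, 2));
    JavaRDD<Integer> b = sc.parallelize(Arrays.asList(3, 4));

    // The result contains all elements of both inputs.
    JavaRDD<Integer> both = a.union(b);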
A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.