pyspark.ml.evaluation.ClusteringEvaluator
Evaluator for clustering results, which expects two input columns: prediction and features. The metric computes the Silhouette measure using the squared Euclidean distance.
The Silhouette is a measure of the consistency within clusters. It ranges from -1 to 1, where a value close to 1 means that the points in a cluster are close to the other points in the same cluster and far from the points of the other clusters.
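For intuition, here is a minimal NumPy sketch of the classical per-point silhouette definition (an illustration only, not Spark's implementation; Spark computes an equivalent measure distributively using the squared Euclidean distance):

import numpy as np

def silhouette_point(i, points, labels):
    # Classical silhouette s(i) = (b - a) / max(a, b), with the squared
    # Euclidean distance used by ClusteringEvaluator's default measure.
    d = lambda p, q: float(np.sum((np.asarray(p) - np.asarray(q)) ** 2))
    # a: mean distance from point i to the other members of its own cluster
    a = np.mean([d(points[i], points[j])
                 for j in range(len(points)) if j != i and labels[j] == labels[i]])
    # b: smallest mean distance from point i to the members of any other cluster
    b = min(np.mean([d(points[i], points[j])
                     for j in range(len(points)) if labels[j] == c])
            for c in set(labels) - {labels[i]})
    return (b - a) / max(a, b)

The score reported by evaluate() is the mean of these per-point values (weighted, when weightCol is set).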
New in version 2.3.0.
Examples
>>> from pyspark.ml.linalg import Vectors
>>> featureAndPredictions = map(lambda x: (Vectors.dense(x[0]), x[1]),
...     [([0.0, 0.5], 0.0), ([0.5, 0.0], 0.0), ([10.0, 11.0], 1.0),
...     ([10.5, 11.5], 1.0), ([1.0, 1.0], 0.0), ([8.0, 6.0], 1.0)])
>>> dataset = spark.createDataFrame(featureAndPredictions, ["features", "prediction"])
>>> evaluator = ClusteringEvaluator()
>>> evaluator.setPredictionCol("prediction")
ClusteringEvaluator...
>>> evaluator.evaluate(dataset)
0.9079...
>>> featureAndPredictionsWithWeight = map(lambda x: (Vectors.dense(x[0]), x[1], x[2]),
...     [([0.0, 0.5], 0.0, 2.5), ([0.5, 0.0], 0.0, 2.5), ([10.0, 11.0], 1.0, 2.5),
...     ([10.5, 11.5], 1.0, 2.5), ([1.0, 1.0], 0.0, 2.5), ([8.0, 6.0], 1.0, 2.5)])
>>> dataset = spark.createDataFrame(
...     featureAndPredictionsWithWeight, ["features", "prediction", "weight"])
>>> evaluator = ClusteringEvaluator()
>>> evaluator.setPredictionCol("prediction")
ClusteringEvaluator...
>>> evaluator.setWeightCol("weight")
ClusteringEvaluator...
>>> evaluator.evaluate(dataset)
0.9079...
>>> ce_path = temp_path + "/ce"
>>> evaluator.save(ce_path)
>>> evaluator2 = ClusteringEvaluator.load(ce_path)
>>> str(evaluator2.getPredictionCol())
'prediction'
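In practice the evaluator usually scores the output of a clustering estimator. A sketch of that workflow (df is an assumed DataFrame with a "features" vector column; all column names below are the defaults):

from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)                    # df must provide the "features" column
predictions = model.transform(df)         # adds the "prediction" column
silhouette = ClusteringEvaluator().evaluate(predictions)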
Methods
clear(param)
Clears a param from the param map if it has been explicitly set.
copy([extra])
Creates a copy of this instance with the same uid and some extra params.
evaluate(dataset[, params])
Evaluates the output with optional parameters.
explainParam(param)
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra])
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getDistanceMeasure()
Gets the value of distanceMeasure or its default value.
getFeaturesCol()
Gets the value of featuresCol or its default value.
getMetricName()
Gets the value of metricName or its default value.
getOrDefault(param)
Gets the value of a param in the user-supplied param map or its default value.
getParam(paramName)
Gets a param by its name.
getPredictionCol()
Gets the value of predictionCol or its default value.
getWeightCol()
Gets the value of weightCol or its default value.
hasDefault(param)
Checks whether a param has a default value.
hasParam(paramName)
Tests whether this instance contains a param with a given (string) name.
isDefined(param)
Checks whether a param is explicitly set by user or has a default value.
isLargerBetter()
Indicates whether the metric returned by evaluate() should be maximized (True, default) or minimized (False).
isSet(param)
Checks whether a param is explicitly set by user.
load(path)
Reads an ML instance from the input path, a shortcut of read().load(path).
read()
Returns an MLReader instance for this class.
save(path)
Saves this ML instance to the given path, a shortcut of write().save(path).
set(param, value)
Sets a parameter in the embedded param map.
setDistanceMeasure(value)
Sets the value of distanceMeasure.
setFeaturesCol(value)
Sets the value of featuresCol.
setMetricName(value)
Sets the value of metricName.
setParams(self, *[, predictionCol, …])
Sets params for clustering evaluator.
setPredictionCol(value)
Sets the value of predictionCol.
setWeightCol(value)
Sets the value of weightCol.
write()
Returns an MLWriter instance for this ML instance.
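The set* methods return the evaluator itself (as the ClusteringEvaluator... doctest output above shows), so configuration can be chained; equivalently, the same parameters can be passed as constructor keyword arguments. A short sketch:

from pyspark.ml.evaluation import ClusteringEvaluator

# chained setters ...
evaluator = (ClusteringEvaluator()
             .setFeaturesCol("features")
             .setPredictionCol("prediction")
             .setDistanceMeasure("cosine"))

# ... or the equivalent keyword-argument form
evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="prediction",
                                distanceMeasure="cosine")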
Attributes
params
Returns all params ordered by name.
Methods Documentation
copy([extra])
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
Parameters: extra (dict, optional) - Extra parameters to copy to the new instance.
Returns: JavaParams - Copy of this instance.
evaluate(dataset[, params])
Evaluates the output with optional parameters.
New in version 1.4.0.
Parameters: dataset (pyspark.sql.DataFrame) - a dataset that contains labels/observations and predictions; params (dict, optional) - an optional param map that overrides embedded params.
Returns: float - metric
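The params argument allows a one-off override without mutating the evaluator. A sketch, reusing the predictions DataFrame assumed in the examples above:

evaluator = ClusteringEvaluator()
# evaluate once with cosine distance; the evaluator itself keeps its defaults
score = evaluator.evaluate(predictions, {evaluator.distanceMeasure: "cosine"})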
extractParamMap([extra])
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
Parameters: extra (dict, optional) - extra param values.
Returns: dict - merged param map.
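A small illustration of that ordering, using ClusteringEvaluator's own distanceMeasure param:

from pyspark.ml.evaluation import ClusteringEvaluator

evaluator = ClusteringEvaluator()            # default distanceMeasure: "squaredEuclidean"
evaluator.setDistanceMeasure("cosine")       # user-supplied value beats the default
pm = evaluator.extractParamMap({evaluator.distanceMeasure: "squaredEuclidean"})
print(pm[evaluator.distanceMeasure])         # 'squaredEuclidean': the extra value wins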
getDistanceMeasure()
Gets the value of distanceMeasure or its default value.
New in version 2.4.0.
getOrDefault(param)
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
isLargerBetter()
Indicates whether the metric returned by evaluate() should be maximized (True, default) or minimized (False). A given evaluator may support multiple metrics which may be maximized or minimized.
New in version 1.5.0.
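This lets generic model-selection code orient its comparisons. A hedged sketch of choosing k by silhouette (df and the use of KMeans are assumptions, not part of this API):

from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

evaluator = ClusteringEvaluator()
best_k, best_score = None, None
for k in [2, 3, 4]:
    predictions = KMeans(k=k, seed=1).fit(df).transform(df)
    score = evaluator.evaluate(predictions)
    # isLargerBetter() is True for silhouette, so higher scores win
    if best_score is None or (score > best_score) == evaluator.isLargerBetter():
        best_k, best_score = k, score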
setWeightCol(value)
Sets the value of weightCol.
New in version 3.1.0.
Attributes Documentation
params
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
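For instance, the following sketch lists ClusteringEvaluator's params in order:

from pyspark.ml.evaluation import ClusteringEvaluator

evaluator = ClusteringEvaluator()
for p in evaluator.params:    # Param objects, ordered by name
    print(p.name)
# distanceMeasure, featuresCol, metricName, predictionCol, weightCol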