PCAModel

class spark_rapids_ml.feature.PCAModel(mean_: List[float], components_: List[List[float]], explained_variance_ratio_: List[float], singular_values_: List[float], n_cols: int, dtype: str)

Applies dimensionality reduction on an input DataFrame.

Note: Input vectors must be zero-centered for PCA to work properly. Spark PCA does not automatically remove the mean of the input data, so use pyspark.ml.feature.StandardScaler to center the input data before invoking transform.
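For data that is not already zero-centered, the mean can be removed first. A minimal sketch, assuming a DataFrame df with a vector column "features":

>>> from pyspark.ml.feature import StandardScaler
>>> scaler = StandardScaler(withMean=True, withStd=False,
...                         inputCol="features", outputCol="centered")
>>> centered_df = scaler.fit(df).transform(df)  # "centered" is zero-mean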

The input vectors can be stored in three different formats: a column of vectors, a column of arrays, or multiple scalar columns (see the multi-column sketch after the example below).

Examples

>>> from spark_rapids_ml.feature import PCA
>>> data = [([-1.0, -1.0],),
...         ([0.0, 0.0],),
...         ([1.0, 1.0],),]
>>> df = spark.createDataFrame(data, ["features"])
>>> gpu_pca = PCA(k=1).setInputCol("features").setOutputCol("pca_features")
>>> gpu_model = gpu_pca.fit(df)
>>> reduced_df = gpu_model.transform(df)
>>> reduced_df.show()
+---------------------+
|         pca_features|
+---------------------+
| [-1.414213562373095]|
|                [0.0]|
|  [1.414213562373095]|
+---------------------+
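When the features are spread across multiple scalar columns, setInputCols can be used instead of setInputCol. A minimal sketch with hypothetical columns f1 and f2:

>>> multi_df = spark.createDataFrame(
...     [(-1.0, -1.0), (0.0, 0.0), (1.0, 1.0)], ["f1", "f2"])
>>> gpu_pca = PCA(k=1).setInputCols(["f1", "f2"]).setOutputCol("pca_features")
>>> gpu_model = gpu_pca.fit(multi_df)
>>> reduced_df = gpu_model.transform(multi_df)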

Methods

clear(param)

Reset a Spark ML Param to its default value and set the matching cuML parameter, if one exists.

copy([extra])

cpu()

Return the PySpark ML PCAModel.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value takes precedence when there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getInputCol()

Gets the value of inputCol or its default value.

getInputCols()

Gets the value of inputCols or its default value.

getK()

Gets the value of k or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets the value of outputCol or its default value.

getParam(paramName)

Gets a param by its name.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setInputCol(value)

Sets the value of inputCol or inputCols.

setInputCols(value)

Sets the value of inputCols.

setOutputCol(value)

Sets the value of outputCol.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Attributes

cuml_params

Returns the dictionary of parameters intended for the underlying cuML class.

explainedVariance

Returns a vector of proportions of variance explained by each principal component.

inputCol

inputCols

k

mean

Returns the mean of the input vectors.

num_workers

Number of cuML workers, where each cuML worker corresponds to one Spark task running on one GPU.

outputCol

params

Returns all params ordered by name.

pc

Returns a principal components Matrix.

Methods Documentation

clear(param: Param) → None

Reset a Spark ML Param to its default value and set the matching cuML parameter, if one exists.

copy(extra: Optional[ParamMap] = None) → P
cpu() → PCAModel

Return the PySpark ML PCAModel.
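This lets a GPU-trained model run in a CPU-only environment. A short sketch, continuing the example above (assuming the column settings carry over to the converted model):

>>> cpu_model = gpu_model.cpu()         # a pyspark.ml.feature.PCAModel
>>> cpu_model.transform(df).show()      # standard CPU-based Spark ML transform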

explainParam(param: Union[str, Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value takes precedence when there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:
extra : dict, optional

extra param values

Returns:
dict

merged param map
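A minimal sketch of the precedence ordering, continuing the example above where gpu_model was fit with k=1:

>>> pm = gpu_model.extractParamMap({gpu_model.k: 3})
>>> pm[gpu_model.k]   # the extra value overrides the user-supplied k=1
3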

getInputCol() → str

Gets the value of inputCol or its default value.

getInputCols() → List[str]

Gets the value of inputCols or its default value.

getK() → int

Gets the value of k or its default value.

New in version 1.5.0.

getOrDefault(param: Union[str, Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol() → str

Gets the value of outputCol or its default value.

getParam(paramName: str) → Param

Gets a param by its name.

hasDefault(param: Union[str, Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, Param[Any]]) → bool

Checks whether a param is explicitly set by user.

classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read() → MLReader
save(path: str) → None

Save this ML instance to the given path, a shortcut of 'write().save(path)'.
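Together with load, this gives a simple persistence round trip. A sketch using an illustrative path:

>>> from spark_rapids_ml.feature import PCAModel
>>> gpu_model.save("/tmp/pca_model")    # path is illustrative
>>> loaded_model = PCAModel.load("/tmp/pca_model")
>>> loaded_model.transform(df).show()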

set(param: Param, value: Any) → None

Sets a parameter in the embedded param map.

setInputCol(value: Union[str, List[str]]) → P

Sets the value of inputCol or inputCols.

setInputCols(value: List[str]) → P

Sets the value of inputCols. Used when input vectors are stored as multiple feature columns.

setOutputCol(value: str) → P

Sets the value of outputCol.

transform(dataset: DataFrame, params: Optional[ParamMap] = None) → DataFrame

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
dataset : pyspark.sql.DataFrame

input dataset

params : dict, optional

an optional param map that overrides embedded params.

Returns:
pyspark.sql.DataFrame

transformed dataset

write() → MLWriter

Attributes Documentation

cuml_params

Returns the dictionary of parameters intended for the underlying cuML class.

explainedVariance

Returns a vector of proportions of variance explained by each principal component.

inputCol: Param[str] = Param(parent='undefined', name='inputCol', doc='input column name.')
inputCols: Param[List[str]] = Param(parent='undefined', name='inputCols', doc='input column names.')
k: Param[int] = Param(parent='undefined', name='k', doc='the number of principal components')
mean

Returns the mean of the input vectors.

num_workers

Number of cuML workers, where each cuML worker corresponds to one Spark task running on one GPU.

outputCol: Param[str] = Param(parent='undefined', name='outputCol', doc='output column name.')
params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

pc

Returns a principal components Matrix. Each column is one principal component.
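The fitted quantities can be inspected directly. A short sketch, continuing the example above (exact values depend on the data):

>>> components = gpu_model.pc              # Matrix, one column per component
>>> ratios = gpu_model.explainedVariance   # proportion of variance per component
>>> center = gpu_model.mean                # mean of the input vectors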