ApproximateNearestNeighborsModel#

class spark_rapids_ml.knn.ApproximateNearestNeighborsModel(item_df_withid: DataFrame)#

Methods

approxSimilarityJoin(query_df[, distCol])

This function returns the k approximate nearest neighbors (k-ANNs) in item_df of each query vector in query_df.

clear(param)

Reset a Spark ML Param to its default value, setting the matching cuML parameter if one exists.

copy([extra])

cpu()

Return the equivalent PySpark CPU model.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

getAlgoParams()

Gets the value of algoParams.

getAlgorithm()

Gets the value of algorithm.

getIdCol()

Gets the value of idCol.

getInputCol()

Gets the value of inputCol or its default value.

getInputCols()

Gets the value of inputCols or its default value.

getK()

Gets the value of k.

getLabelCol()

Gets the value of labelCol or its default value.

getMetric()

Gets the value of metric.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getParam(paramName)

Gets a param by its name.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

kneighbors(query_df[, sort_knn_df_by_query_id])

Return the approximate nearest neighbors for each query in query_df.

set(param, value)

Sets a parameter in the embedded param map.

setAlgoParams(value)

Sets the value of algoParams.

setAlgorithm(value)

Sets the value of algorithm.

setIdCol(value)

Sets the value of idCol.

setInputCol(value)

Sets the value of inputCol or inputCols.

setInputCols(value)

Sets the value of inputCols.

setK(value)

Sets the value of k.

setMetric(value)

Sets the value of metric.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

Attributes

algoParams

algorithm

cuml_params

Returns the dictionary of parameters intended for the underlying cuML class.

idCol

inputCol

inputCols

k

labelCol

metric

num_workers

Number of cuML workers, where each cuML worker corresponds to one Spark task running on one GPU.

params

Returns all params ordered by name.

Methods Documentation

approxSimilarityJoin(query_df: DataFrame, distCol: str = 'distCol') DataFrame#

This function returns the k approximate nearest neighbors (k-ANNs) in item_df of each query vector in query_df. item_df is the dataframe passed to the fit function of the ApproximateNearestNeighbors estimator. Note that the kNN relationship is asymmetric with respect to the input datasets (e.g., if x is an ANN of y, y is not necessarily an ANN of x).

Parameters:
query_df: pyspark.sql.DataFrame

the query_df dataframe. Each row represents a query vector.

distCol: str

the name of the output distance column

Returns:
knnjoin_df: pyspark.sql.DataFrame

the result dataframe with three columns (item_df, query_df, distCol). The item_df column is of struct type and includes as fields all the columns of the input item dataframe; similarly, the query_df column is of struct type and includes as fields all the columns of the input query dataframe. distCol is the distance column. A row in knnjoin_df has the form (v1, v2, dist(v1, v2)), where item vector v1 is one of the k nearest neighbors of query vector v2 and dist(v1, v2) is their distance.
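A minimal usage sketch (not taken from the library's own examples): the toy data, column names, and local Spark session below are illustrative assumptions, and a GPU-enabled Spark environment is required to actually run it.

>>> from pyspark.sql import SparkSession
>>> from spark_rapids_ml.knn import ApproximateNearestNeighbors
>>> spark = SparkSession.builder.getOrCreate()
>>> # toy item and query vectors stored in a single array column "features"
>>> item_df = spark.createDataFrame(
...     [(0, [1.0, 1.0]), (1, [2.0, 2.0]), (2, [10.0, 10.0])], ["id", "features"])
>>> query_df = spark.createDataFrame(
...     [(100, [1.5, 1.5]), (101, [9.0, 9.0])], ["id", "features"])
>>> # fitting the estimator on item_df produces this model
>>> model = (ApproximateNearestNeighbors(k=2)
...          .setInputCol("features")
...          .setIdCol("id")
...          .fit(item_df))
>>> # each query row is paired with its k approximate nearest item rows
>>> knnjoin_df = model.approxSimilarityJoin(query_df, distCol="distCol")
>>> # knnjoin_df has the three columns (item_df, query_df, distCol) described above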

clear(param: Param) None#

Reset a Spark ML Param to its default value, setting the matching cuML parameter if one exists.

copy(extra: Optional[ParamMap] = None) P#
cpu() Model#

Return the equivalent PySpark CPU model.

explainParam(param: Union[str, Param]) str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() str#

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:
extra : dict, optional

extra param values

Returns:
dict

merged param map
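A small illustration of the merge ordering, reusing the fitted model from the approxSimilarityJoin sketch above; the override value is hypothetical.

>>> pm = model.extractParamMap()                # defaults merged with user-supplied values
>>> pm2 = model.extractParamMap({model.k: 10})  # values in extra win on conflicts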

getAlgoParams() Dict[str, Any]#

Gets the value of algoParams.

getAlgorithm() str#

Gets the value of algorithm.

getIdCol() str#

Gets the value of idCol.

getInputCol() str#

Gets the value of inputCol or its default value.

getInputCols() List[str]#

Gets the value of inputCols or its default value.

getK() int#

Gets the value of k.

getLabelCol() str#

Gets the value of labelCol or its default value.

getMetric() str#

Gets the value of metric.

getOrDefault(param: Union[str, Param[T]]) Union[Any, T]#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName: str) Param#

Gets a param by its name.

hasDefault(param: Union[str, Param[Any]]) bool#

Checks whether a param has a default value.

hasParam(paramName: str) bool#

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, Param[Any]]) bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, Param[Any]]) bool#

Checks whether a param is explicitly set by user.

kneighbors(query_df: DataFrame, sort_knn_df_by_query_id: bool = True) Tuple[DataFrame, DataFrame, DataFrame]#

Return the approximate nearest neighbors for each query in query_df. The data vectors (or equivalently item vectors) should be provided through the fit function (see the Examples section of spark_rapids_ml.knn.ApproximateNearestNeighbors). The distance measure here is euclidean distance and the number of target approximate nearest neighbors can be set through setK(). The function currently only supports the float32 type and will convert other data types into float32.

Parameters:
query_df: pyspark.sql.DataFrame

query vectors where each row corresponds to one query. The query_df can be in the format of a single array column, a single vector column, or multiple float columns.

sort_knn_df_by_query_id: bool (default=True)

whether to sort the returned dataframe knn_df by query_id

Returns:
query_df: pyspark.sql.DataFrame

the query_df itself if it has an id column set through setIdCol(). If not, a monotonically increasing id column will be added.

item_df: pyspark.sql.DataFrame

the item_df (or equivalently data_df) itself if it has an id column set through setIdCol(). If not, a monotonically increasing id column will be added.

knn_df: pyspark.sql.DataFrame

the result k approximate nearest neighbors (k-ANNs) dataframe, which has three columns (id, indices, distances). Each row of knn_df corresponds to the k-ANNs result of one query vector, identified by the id column. The indices and distances columns store the ids of, and distances to, the k nearest item_df vectors.
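A self-contained sketch using the same illustrative toy data as the approxSimilarityJoin example; a GPU-enabled Spark environment is assumed.

>>> from pyspark.sql import SparkSession
>>> from spark_rapids_ml.knn import ApproximateNearestNeighbors
>>> spark = SparkSession.builder.getOrCreate()
>>> item_df = spark.createDataFrame(
...     [(0, [1.0, 1.0]), (1, [2.0, 2.0]), (2, [10.0, 10.0])], ["id", "features"])
>>> query_df = spark.createDataFrame(
...     [(100, [1.5, 1.5]), (101, [9.0, 9.0])], ["id", "features"])
>>> model = (ApproximateNearestNeighbors(k=2)
...          .setInputCol("features")
...          .setIdCol("id")
...          .fit(item_df))
>>> # three dataframes come back: the id-augmented query and item dataframes,
>>> # plus the knn results keyed by query id
>>> query_out, item_out, knn_df = model.kneighbors(query_df)
>>> # knn_df holds, per query id, the ids of and distances to the k nearest item vectors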

set(param: Param, value: Any) None#

Sets a parameter in the embedded param map.

setAlgoParams(value: Dict[str, Any]) P#

Sets the value of algoParams.

setAlgorithm(value: str) P#

Sets the value of algorithm.

setIdCol(value: str) P#

Sets the value of idCol. If not set, an id column named unique_id will be added. The id column is used to identify nearest neighbor vectors by their associated id values.

setInputCol(value: Union[str, List[str]]) P#

Sets the value of inputCol or inputCols.

setInputCols(value: List[str]) P#

Sets the value of inputCols. Used when input vectors are stored as multiple feature columns.
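A brief sketch of the multi-column form; the column names are illustrative, and it is assumed the companion ApproximateNearestNeighbors estimator exposes the same setter.

>>> from spark_rapids_ml.knn import ApproximateNearestNeighbors
>>> # each feature dimension stored in its own float column
>>> ann = ApproximateNearestNeighbors(k=4).setInputCols(["f1", "f2", "f3"])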

setK(value: int) P#

Sets the value of k.

setMetric(value: str) P#

Sets the value of metric.

transform(dataset: DataFrame, params: Optional[ParamMap] = None) DataFrame#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
dataset : pyspark.sql.DataFrame

input dataset

params : dict, optional

an optional param map that overrides embedded params.

Returns:
pyspark.sql.DataFrame

transformed dataset

Attributes Documentation

algoParams = Param(parent='undefined', name='algoParams', doc='The parameters to use to set up a neighbor algorithm.')#
algorithm = Param(parent='undefined', name='algorithm', doc='The algorithm to use for approximate nearest neighbors search.')#
cuml_params#

Returns the dictionary of parameters intended for the underlying cuML class.

idCol = Param(parent='undefined', name='idCol', doc='id column name.')#
inputCol: Param[str] = Param(parent='undefined', name='inputCol', doc='input column name.')#
inputCols: Param[List[str]] = Param(parent='undefined', name='inputCols', doc='input column names.')#
k = Param(parent='undefined', name='k', doc='The number of nearest neighbors to retrieve. Must be >= 1.')#
labelCol: Param[str] = Param(parent='undefined', name='labelCol', doc='label column name.')#
metric = Param(parent='undefined', name='metric', doc='The distance metric to use.')#
num_workers#

Number of cuML workers, where each cuML worker corresponds to one Spark task running on one GPU.

params#

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.