sparknlp.annotator.SentenceEmbeddings#
- class sparknlp.annotator.SentenceEmbeddings[source]#
Bases: sparknlp.common.AnnotatorModel, sparknlp.common.HasEmbeddingsProperties, sparknlp.common.HasStorageRef
Converts the results from WordEmbeddings, BertEmbeddings, or other word embeddings into sentence or document embeddings by either summing up or averaging all the word embeddings in a sentence or a document (depending on the inputCols).
This can be configured with setPoolingStrategy(), which can be either "AVERAGE" or "SUM".
For more extended examples see the Spark NLP Workshop.
Input Annotation types: DOCUMENT, WORD_EMBEDDINGS
Output Annotation type: SENTENCE_EMBEDDINGS
- Parameters
- dimension
Number of embedding dimensions
- poolingStrategy
Choose how you would like to aggregate Word Embeddings to Sentence Embeddings: AVERAGE or SUM, by default AVERAGE
Notes
If you choose document as the input for Tokenizer, WordEmbeddings/BertEmbeddings, and SentenceEmbeddings, then all the embeddings are averaged/summed into one array of embeddings. However, if you choose sentences as the inputCols, SentenceEmbeddings generates one array of embeddings per sentence; a sketch of this sentence-level setup follows below.
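A minimal sketch of the sentence-level setup, assuming an active SparkSession named spark and the standard SentenceDetector annotator; the column names here are illustrative, not mandated by the API:

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import (
    SentenceDetector, Tokenizer, WordEmbeddingsModel, SentenceEmbeddings
)
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Splitting the document into sentences makes SentenceEmbeddings emit
# one embeddings array per sentence instead of one per document.
sentenceDetector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = WordEmbeddingsModel.pretrained() \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

sentenceEmbeddings = SentenceEmbeddings() \
    .setInputCols(["sentence", "embeddings"]) \
    .setOutputCol("sentence_embeddings") \
    .setPoolingStrategy("AVERAGE")

pipeline = Pipeline().setStages([
    documentAssembler, sentenceDetector, tokenizer, embeddings, sentenceEmbeddings
])

data = spark.createDataFrame([["This is one sentence. This is another."]]).toDF("text")
# Two detected sentences -> two SENTENCE_EMBEDDINGS annotations per row.
result = pipeline.fit(data).transform(data)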
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings")
>>> embeddingsSentence = SentenceEmbeddings() \
...     .setInputCols(["document", "embeddings"]) \
...     .setOutputCol("sentence_embeddings") \
...     .setPoolingStrategy("AVERAGE")
>>> embeddingsFinisher = EmbeddingsFinisher() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCols("finished_embeddings") \
...     .setOutputAsVector(True) \
...     .setCleanAnnotations(False)
>>> pipeline = Pipeline() \
...     .setStages([
...         documentAssembler,
...         tokenizer,
...         embeddings,
...         embeddingsSentence,
...         embeddingsFinisher
...     ])
>>> data = spark.createDataFrame([["This is a sentence."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+--------------------------------------------------------------------------------+
|                                                                          result|
+--------------------------------------------------------------------------------+
|[-0.22093398869037628,0.25130119919776917,0.41810303926467896,-0.380883991718...|
+--------------------------------------------------------------------------------+
Methods
__init__()  Initialize this instance with a Java model object.
clear(param)  Clears a param from the param map if it has been explicitly set.
copy([extra])  Creates a copy of this instance with the same uid and some extra params.
explainParam(param)  Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()  Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra])  Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getDimension()  Gets embeddings dimension.
getInputCols()  Gets current column names of input annotations.
getLazyAnnotator()  Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param)  Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()  Gets output column name of annotations.
getParam(paramName)  Gets a param by its name.
getParamValue(paramName)  Gets the value of a parameter.
getStorageRef()  Gets unique reference name for identification.
hasDefault(param)  Checks whether a param has a default value.
hasParam(paramName)  Tests whether this instance contains a param with a given (string) name.
isDefined(param)  Checks whether a param is explicitly set by user or has a default value.
isSet(param)  Checks whether a param is explicitly set by user.
load(path)  Reads an ML instance from the input path, a shortcut of read().load(path).
read()  Returns an MLReader instance for this class.
save(path)  Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value)  Sets a parameter in the embedded param map.
setDimension(value)  Sets embeddings dimension.
setInputCols(*value)  Sets column names of input annotations.
setLazyAnnotator(value)  Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setOutputCol(value)  Sets output column name of annotations.
setParamValue(paramName)  Sets the value of a parameter.
setParams()
setPoolingStrategy(strategy)  Sets how to aggregate the word embeddings to sentence embeddings, by default AVERAGE.
setStorageRef(value)  Sets unique reference name for identification.
transform(dataset[, params])  Transforms the input dataset with optional parameters.
write()  Returns an MLWriter instance for this ML instance.
Attributes
dimension
getter_attrs
inputCols
lazyAnnotator
name
outputCol
params  Returns all params ordered by name.
poolingStrategy
storageRef
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters
extra – Extra parameters to copy to the new instance
- Returns
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
extra – extra param values
- Returns
merged param map
- getDimension()#
Gets embeddings dimension.
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- getStorageRef()#
Gets unique reference name for identification.
- Returns
- str
Unique reference name for identification
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
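As a sketch, a configured (untrained) stage can be written out and read back via these persistence shortcuts; the path below is illustrative:

from sparknlp.annotator import SentenceEmbeddings

# Configure a stage and persist it (illustrative path).
stage = SentenceEmbeddings() \
    .setInputCols(["document", "embeddings"]) \
    .setOutputCol("sentence_embeddings")
stage.save("/tmp/sentence_embeddings_stage")

# load() is the classmethod shortcut for read().load(path);
# the restored stage keeps the same configuration.
restored = SentenceEmbeddings.load("/tmp/sentence_embeddings_stage")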
- set(param, value)#
Sets a parameter in the embedded param map.
- setDimension(value)#
Sets embeddings dimension.
- Parameters
- value : int
Embeddings dimension
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters
- *value : str
Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- setPoolingStrategy(strategy)[source]#
Sets how to aggregate the word embeddings into sentence embeddings, by default AVERAGE.
Can either be AVERAGE or SUM.
- Parameters
- strategy : str
Pooling strategy, either AVERAGE or SUM
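For example, to sum rather than average the word vectors (column names here are illustrative):

sentenceEmbeddings = SentenceEmbeddings() \
    .setInputCols(["document", "embeddings"]) \
    .setOutputCol("sentence_embeddings") \
    .setPoolingStrategy("SUM")  # aggregate by summation instead of averaging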
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters
- value : str
Unique reference name for identification
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns
transformed dataset
New in version 1.3.0.
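A usage sketch, where annotated_df is a hypothetical DataFrame that already contains the configured input columns:

# annotated_df must already hold the input columns set via setInputCols(),
# e.g. "document" and "embeddings" produced by upstream pipeline stages.
with_embeddings = sentenceEmbeddings.transform(annotated_df)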
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.