sparknlp.base.TokenAssembler#
- class sparknlp.base.TokenAssembler[source]#
Bases: sparknlp.internal.AnnotatorTransformer, sparknlp.common.AnnotatorProperties
This transformer reconstructs a DOCUMENT type annotation from tokens, usually after these have been normalized, lemmatized, spell checked, etc., so that the resulting document annotation can be used in further annotators. Requires DOCUMENT and TOKEN type annotations as input.
For more extended examples on document pre-processing see the Spark NLP Workshop.
Input Annotation types: DOCUMENT, TOKEN
Output Annotation type: DOCUMENT
- Parameters
- preservePosition
Whether to preserve the actual position of the tokens or reduce them to one space
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
First, the text is tokenized and cleaned
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentenceDetector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentences")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentences"]) \
...     .setOutputCol("token")
>>> normalizer = Normalizer() \
...     .setInputCols(["token"]) \
...     .setOutputCol("normalized") \
...     .setLowercase(False)
>>> stopwordsCleaner = StopWordsCleaner() \
...     .setInputCols(["normalized"]) \
...     .setOutputCol("cleanTokens") \
...     .setCaseSensitive(False)
Then the TokenAssembler turns the cleaned tokens into a DOCUMENT type structure.
>>> tokenAssembler = TokenAssembler() \
...     .setInputCols(["sentences", "cleanTokens"]) \
...     .setOutputCol("cleanText")
>>> data = spark.createDataFrame([["Spark NLP is an open-source text processing library for advanced natural language processing."]]) \
...     .toDF("text")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentenceDetector,
...     tokenizer,
...     normalizer,
...     stopwordsCleaner,
...     tokenAssembler
... ]).fit(data)
>>> result = pipeline.transform(data)
>>> result.select("cleanText").show(truncate=False)
+---------------------------------------------------------------------------------------------------------------------------+
|cleanText                                                                                                                    |
+---------------------------------------------------------------------------------------------------------------------------+
|[[document, 0, 80, Spark NLP opensource text processing library advanced natural language processing, [sentence -> 0], []]]|
+---------------------------------------------------------------------------------------------------------------------------+
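If the original character positions of the tokens should be kept in the reconstructed document rather than collapsed to single spaces, preservePosition can be enabled on the assembler. A minimal sketch reusing the stages above (the variable name is illustrative):
>>> tokenAssemblerPreserved = TokenAssembler() \
...     .setInputCols(["sentences", "cleanTokens"]) \
...     .setOutputCol("cleanText") \
...     .setPreservePosition(True)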
Methods
__init__()
clear(param) - Clears a param from the param map if it has been explicitly set.
copy([extra]) - Creates a copy of this instance with the same uid and some extra params.
explainParam(param) - Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams() - Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra]) - Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getInputCols() - Gets current column names of input annotations.
getLazyAnnotator() - Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param) - Gets the value of a param in the user-supplied param map or its default value.
getOutputCol() - Gets output column name of annotations.
getParam(paramName) - Gets a param by its name.
getParamValue(paramName) - Gets the value of a parameter.
hasDefault(param) - Checks whether a param has a default value.
hasParam(paramName) - Tests whether this instance contains a param with a given (string) name.
isDefined(param) - Checks whether a param is explicitly set by user or has a default value.
isSet(param) - Checks whether a param is explicitly set by user.
load(path) - Reads an ML instance from the input path, a shortcut of read().load(path).
read() - Returns an MLReader instance for this class.
save(path) - Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value) - Sets a parameter in the embedded param map.
setInputCols(*value) - Sets column names of input annotations.
setLazyAnnotator(value) - Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setOutputCol(value) - Sets output column name of annotations.
setParamValue(paramName) - Sets the value of a parameter.
setParams()
setPreservePosition(value) - Sets whether to preserve the actual position of the tokens or reduce them to one space.
transform(dataset[, params]) - Transforms the input dataset with optional parameters.
write() - Returns an MLWriter instance for this ML instance.
Attributes
getter_attrs
inputCols
lazyAnnotator
name
outputCol
params - Returns all params ordered by name.
preservePosition
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters
extra – Extra parameters to copy to the new instance
- Returns
Copy of this instance
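A minimal sketch of the behaviour described above; the variable names are illustrative:
>>> tokenAssembler = TokenAssembler().setOutputCol("cleanText")
>>> tokenAssemblerCopy = tokenAssembler.copy()
>>> tokenAssembler.uid == tokenAssemblerCopy.uid
True
>>> tokenAssemblerCopy.getOutputCol()
'cleanText'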
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
extra – extra param values
- Returns
merged param map
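For instance, a sketch of reading a user-supplied value back out of the merged map (names are illustrative):
>>> tokenAssembler = TokenAssembler().setOutputCol("cleanText")
>>> paramMap = tokenAssembler.extractParamMap()
>>> paramMap[tokenAssembler.outputCol]
'cleanText'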
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
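A small illustrative sketch, assuming the parameter has been set beforehand:
>>> tokenAssembler = TokenAssembler().setPreservePosition(True)
>>> tokenAssembler.getParamValue("preservePosition")
True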
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
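A sketch of a save/load round trip; the path and variable names are examples only:
>>> tokenAssembler = TokenAssembler() \
...     .setInputCols(["sentences", "cleanTokens"]) \
...     .setOutputCol("cleanText")
>>> tokenAssembler.save("/tmp/token_assembler")
>>> loadedAssembler = TokenAssembler.load("/tmp/token_assembler")
>>> loadedAssembler.getOutputCol()
'cleanText'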
- set(param, value)#
Sets a parameter in the embedded param map.
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters
- *value : str
Input columns for the annotator
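Because of the *value signature, the columns can typically be passed either as a single list or as separate string arguments; a minimal sketch:
>>> tokenAssembler = TokenAssembler().setInputCols(["sentences", "cleanTokens"])
>>> tokenAssembler.getInputCols()
['sentences', 'cleanTokens']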
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- setPreservePosition(value)[source]#
Sets whether to preserve the actual position of the tokens or reduce them to one space.
- Parameters
- value : bool
Whether to preserve the actual position of the tokens or reduce them to one space
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns
transformed dataset
New in version 1.3.0.
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.