sparknlp.annotator.BigTextMatcher

class sparknlp.annotator.BigTextMatcher[source]

Bases: sparknlp.common.AnnotatorApproach, sparknlp.common.HasStorage

Annotator to match exact phrases (by token) provided in a file against a Document.

A text file of predefined phrases must be provided with setStoragePath.

In contrast to the normal TextMatcher, the BigTextMatcher is designed for large corpora.

Input Annotation types: DOCUMENT, TOKEN

Output Annotation type: CHUNK

Parameters
entities

ExternalResource for entities

caseSensitive

whether to ignore case in index lookups, by default True

mergeOverlapping

whether to merge overlapping matched chunks, by default False

tokenizer

TokenizerModel to use to tokenize input file for building a Trie

Examples

In this example, the entities file is of the form:

...
dolore magna aliqua
lorem ipsum dolor. sit
laborum
...

where each line represents an entity phrase to be extracted.

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols("document") \
...     .setOutputCol("token")
>>> data = spark.createDataFrame([["Hello dolore magna aliqua. Lorem ipsum dolor. sit in laborum"]]).toDF("text")
>>> entityExtractor = BigTextMatcher() \
...     .setInputCols("document", "token") \
...     .setStoragePath("src/test/resources/entity-extractor/test-phrases.txt", ReadAs.TEXT) \
...     .setOutputCol("entity") \
...     .setCaseSensitive(False)
>>> pipeline = Pipeline().setStages([documentAssembler, tokenizer, entityExtractor])
>>> results = pipeline.fit(data).transform(data)
>>> results.selectExpr("explode(entity)").show(truncate=False)
+--------------------------------------------------------------------+
|col                                                                 |
+--------------------------------------------------------------------+
|[chunk, 6, 24, dolore magna aliqua, [sentence -> 0, chunk -> 0], []]|
|[chunk, 53, 59, laborum, [sentence -> 0, chunk -> 1], []]           |
+--------------------------------------------------------------------+

Methods

__init__()

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

fit(dataset[, params])

Fits a model to the input dataset with optional parameters.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

getCaseSensitive()

Gets whether to ignore case in index lookups.

getIncludeStorage()

Gets whether to include indexed storage in trained model.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

getStoragePath()

Gets path to file.

getStorageRef()

Gets unique reference name for identification.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setCaseSensitive(b)

Sets whether to ignore case in index lookups, by default True.

setEntities(path[, read_as, options])

Sets ExternalResource for entities.

setIncludeStorage(value)

Sets whether to include indexed storage in trained model.

setInputCols(*value)

Sets column names of input annotations.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setMergeOverlapping(b)

Sets whether to merge overlapping matched chunks, by default False.

setOutputCol(value)

Sets output column name of annotations.

setParamValue(paramName)

Sets the value of a parameter.

setStoragePath(path, read_as)

Sets path to file.

setStorageRef(value)

Sets unique reference name for identification.

setTokenizer(tokenizer_model)

Sets TokenizerModel to use to tokenize input file for building a Trie.

write()

Returns an MLWriter instance for this ML instance.

Attributes

caseSensitive

entities

getter_attrs

includeStorage

inputCols

lazyAnnotator

mergeOverlapping

outputCol

params

Returns all params ordered by name.

storagePath

storageRef

tokenizer

clear(param)

Clears a param from the param map if it has been explicitly set.

copy(extra=None)

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters

extra – Extra parameters to copy to the new instance

Returns

Copy of this instance

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra=None)

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters

extra – extra param values

Returns

merged param map

fit(dataset, params=None)

Fits a model to the input dataset with optional parameters.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns

fitted model(s)

New in version 1.3.0.
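
For example, the matcher can also be fitted and applied directly, without a Pipeline (a minimal sketch, reusing data and entityExtractor from the example above):

>>> entityExtractorModel = entityExtractor.fit(data)
>>> results = entityExtractorModel.transform(data)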

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

Parameters
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.
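
A minimal sketch of fitting one model per param map (the param maps are illustrative, reusing entityExtractor and data from the example above):

>>> paramMaps = [
...     {entityExtractor.caseSensitive: True},
...     {entityExtractor.caseSensitive: False},
... ]
>>> for index, model in entityExtractor.fitMultiple(data, paramMaps):
...     print(index)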

getCaseSensitive()

Gets whether to ignore case in index lookups.

Returns
bool

Whether to ignore case in index lookups

getIncludeStorage()

Gets whether to include indexed storage in trained model.

Returns
bool

Whether to include indexed storage in trained model

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

Parameters
paramName : str

Name of the parameter

getStoragePath()

Gets path to file.

Returns
str

path to file

getStorageRef()

Gets unique reference name for identification.

Returns
str

Unique reference name for identification

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

classmethod load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

property params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

classmethod read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setCaseSensitive(b)[source]

Sets whether to ignore case in index lookups, by default True.

Parameters
b : bool

Whether to ignore case in index lookups

setEntities(path, read_as='TEXT', options={'format': 'text'})[source]

Sets ExternalResource for entities.

Parameters
path : str

Path to the resource

read_as : str, optional

How to read the resource, by default ReadAs.TEXT

options : dict, optional

Options for reading the resource, by default {"format": "text"}
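
A minimal usage sketch (the file path here is hypothetical):

>>> entityExtractor = BigTextMatcher() \
...     .setInputCols("document", "token") \
...     .setOutputCol("entity") \
...     .setEntities("entities.txt", read_as=ReadAs.TEXT, options={"format": "text"})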

setIncludeStorage(value)

Sets whether to include indexed storage in trained model.

Parameters
value : bool

Whether to include indexed storage in trained model
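
A minimal sketch of keeping the indexed storage with the trained model (the entities path is hypothetical):

>>> entityExtractor = BigTextMatcher() \
...     .setInputCols("document", "token") \
...     .setOutputCol("entity") \
...     .setStoragePath("entities.txt", ReadAs.TEXT) \
...     .setIncludeStorage(True)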

setInputCols(*value)

Sets column names of input annotations.

Parameters
*value : str

Input columns for the annotator

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setMergeOverlapping(b)[source]

Sets whether to merge overlapping matched chunks, by default False.

Parameters
b : bool

Whether to merge overlapping matched chunks
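
For instance, if the phrase file contained both "dolore magna aliqua" and a hypothetical "magna aliqua", enabling this would merge the two overlapping matches into a single chunk (a minimal sketch):

>>> entityExtractor = BigTextMatcher() \
...     .setInputCols("document", "token") \
...     .setOutputCol("entity") \
...     .setMergeOverlapping(True)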

setOutputCol(value)

Sets output column name of annotations.

Parameters
value : str

Name of output column

setParamValue(paramName)

Sets the value of a parameter.

Parameters
paramName : str

Name of the parameter

setStoragePath(path, read_as)

Sets path to file.

Parameters
path : str

Path to file

read_as : str

How to interpret the file

Notes

See ReadAs for reading options.
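
For example (the path is hypothetical):

>>> entityExtractor = BigTextMatcher() \
...     .setStoragePath("/path/to/entities.txt", ReadAs.TEXT)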

setStorageRef(value)

Sets unique reference name for identification.

Parameters
value : str

Unique reference name for identification

setTokenizer(tokenizer_model)[source]

Sets TokenizerModel to use to tokenize input file for building a Trie.

Parameters
tokenizer_model : TokenizerModel

TokenizerModel to use to tokenize input file
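
A minimal sketch, assuming docData is a hypothetical DataFrame that already has a "document" column (e.g., produced by a DocumentAssembler); fitting a Tokenizer yields the required TokenizerModel:

>>> tokenizerModel = Tokenizer() \
...     .setInputCols("document") \
...     .setOutputCol("token") \
...     .fit(docData)
>>> entityExtractor = BigTextMatcher() \
...     .setInputCols("document", "token") \
...     .setOutputCol("entity") \
...     .setTokenizer(tokenizerModel)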

uid

A unique id for the object.

write()

Returns an MLWriter instance for this ML instance.