sparknlp_jsl.annotator.disambiguation.ner_disambiguator#

Module Contents#

Classes#

NerDisambiguator

Links words of interest, such as names of persons, locations and companies, from an input text document to

NerDisambiguatorModel

Links words of interest, such as names of persons, locations and companies, from an input text document to

class NerDisambiguator#

Bases: sparknlp_jsl.common.AnnotatorApproachInternal

Links words of interest, such as names of persons, locations and companies, from an input text document to a corresponding unique entity in a target Knowledge Base (KB). Words of interest are called Named Entities (NEs), mentions, or surface forms.

Input Annotation types: CHUNK, SENTENCE_EMBEDDINGS

Output Annotation type: DISAMBIGUATION

Parameters:
  • embeddingTypeParam – Either ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings

  • numFirstChars – How many characters to consider for the initial prefix search in the knowledge base

  • tokenSearch – Whether to search the knowledge base by token or by chunk (token is recommended)

  • narrowWithApproximateMatching – Whether to narrow prefix search results with Levenshtein-distance-based matching (true is recommended)

  • levenshteinDistanceThresholdParam – Levenshtein distance threshold to narrow results from prefix search (Default: 0.1)

  • nearMatchingGapParam – Puts a limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing: candidates with len(candidate) - len(entity chunk) > nearMatchingGap are trimmed (Default: 4); see the sketch after this list

  • predictionsLimit – Limit on the number of predictions N for top-N predictions

  • s3KnowledgeBaseName – Knowledge base name in S3
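To make the interplay of numFirstChars, levenshteinDistanceThresholdParam and nearMatchingGapParam concrete, here is a minimal plain-Python sketch of the candidate-narrowing logic. It is an illustration only: the function names and the knowledge-base list are hypothetical, and the actual narrowing is performed inside the annotator.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def narrow_candidates(entity_chunk, kb_entries, num_first_chars=5,
                      levenshtein_threshold=0.1, near_matching_gap=4):
    # 1. Prefix search: keep KB entries sharing the first numFirstChars characters.
    prefix = entity_chunk[:num_first_chars].lower()
    candidates = [e for e in kb_entries if e.lower().startswith(prefix)]
    narrowed = []
    for cand in candidates:
        # 2. Near-matching gap: trim candidates much longer than the chunk,
        #    i.e. when len(candidate) - len(entity chunk) > nearMatchingGap.
        if len(cand) - len(entity_chunk) > near_matching_gap:
            cand = cand[:len(entity_chunk) + near_matching_gap]
        # 3. Approximate matching: keep candidates whose normalized Levenshtein
        #    distance to the chunk is at most the threshold (Default: 0.1).
        distance = levenshtein(cand.lower(), entity_chunk.lower())
        if distance / max(len(cand), len(entity_chunk), 1) <= levenshtein_threshold:
            narrowed.append(cand)
    return narrowed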

Examples

>>> data = spark.createDataFrame([["The show also had a contestant named Donald Trump who later defeated Christina Aguilera ..."]]) \
...   .toDF("text")
>>> documentAssembler = DocumentAssembler() \
...   .setInputCol("text") \
...   .setOutputCol("document")
>>> sentenceDetector = SentenceDetector() \
...   .setInputCols(["document"]) \
...   .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...   .setInputCols(["sentence"]) \
...   .setOutputCol("token")
>>> word_embeddings = WordEmbeddingsModel.pretrained() \
...   .setInputCols(["sentence", "token"]) \
...   .setOutputCol("embeddings")
>>> sentence_embeddings = SentenceEmbeddings() \
...   .setInputCols(["sentence", "embeddings"]) \
...   .setOutputCol("sentence_embeddings")
>>> ner_model = NerDLModel.pretrained() \
...   .setInputCols(["sentence", "token", "embeddings"]) \
...   .setOutputCol("ner")
>>> ner_converter = NerConverter() \
...   .setInputCols(["sentence", "token", "ner"]) \
...   .setOutputCol("ner_chunk") \
...   .setWhiteList(["PER"])

Then the extracted entities can be disambiguated.

>>> disambiguator = NerDisambiguator() \
...   .setS3KnowledgeBaseName("i-per") \
...   .setInputCols(["ner_chunk", "sentence_embeddings"]) \
...   .setOutputCol("disambiguation") \
...   .setNumFirstChars(5)
>>> nlpPipeline = Pipeline(stages=[
...   documentAssembler,
...   sentenceDetector,
...   tokenizer,
...   word_embeddings,
...   sentence_embeddings,
...   ner_model,
...   ner_converter,
...   disambiguator])
>>> model = nlpPipeline.fit(data)
>>> result = model.transform(data)
>>> result.selectExpr("explode(disambiguation)") \
...   .selectExpr("col.metadata.chunk as chunk", "col.result as result").show(5, False)

+------------------+------------------------------------------------------------------------------------------------------------------------+
|chunk             |result                                                                                                                  |
+------------------+------------------------------------------------------------------------------------------------------------------------+
|Donald Trump      |http://en.wikipedia.org/?curid=4848272, http://en.wikipedia.org/?curid=31698421, http://en.wikipedia.org/?curid=55907961|
|Christina Aguilera|http://en.wikipedia.org/?curid=144171, http://en.wikipedia.org/?curid=6636454                                           |
+------------------+------------------------------------------------------------------------------------------------------------------------+
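A hedged post-processing sketch: each entry in the result column above is one comma-separated string of knowledge-base links, so splitting it recovers the top-N candidate links per chunk (column names follow the example above; pyspark.sql.functions is imported as F):

>>> from pyspark.sql import functions as F
>>> result.selectExpr("explode(disambiguation) as col") \
...   .select(F.expr("col.metadata.chunk").alias("chunk"),
...           F.split(F.col("col.result"), ",\\s*").alias("candidate_links")) \
...   .show(truncate=False)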

embeddingTypeParam#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
lazyAnnotator#
levenshteinDistanceThresholdParam#
narrowWithApproximateMatching#
nearMatchingGapParam#
numFirstChars#
optionalInputAnnotatorTypes = []#
outputAnnotatorType = 'disambiguation'#
outputCol#
predictionsLimit#
s3KnowledgeBaseName#
skipLPInputColsValidation = True#
tokenSearch#
uid = ''#
clear(param: pyspark.ml.param.Param) None#

Clears a param from the param map if it has been explicitly set.

copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param: str | Param) str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() str#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict

fit(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = ...) M#
fit(dataset: pyspark.sql.dataframe.DataFrame, params: List[pyspark.ml._typing.ParamMap] | Tuple[pyspark.ml._typing.ParamMap]) List[M]

Fits a model to the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset.

  • params (dict or list or tuple, optional) – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns:

fitted model(s)

Return type:

Transformer or a list of Transformer
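A generic PySpark sketch (train_df is a hypothetical training DataFrame; disambiguator is the annotator defined in the example above): embedded params can be overridden at fit time, and passing a list of param maps fits one model per map.

>>> model = disambiguator.fit(train_df, {disambiguator.numFirstChars: 4})
>>> models = disambiguator.fit(train_df, [
...     {disambiguator.numFirstChars: 4},
...     {disambiguator.numFirstChars: 6}])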

fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[pyspark.ml._typing.ParamMap]) Iterator[Tuple[int, M]]#

Fits a model to the input dataset for each param map in paramMaps.

New in version 2.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset.

  • paramMaps (collections.abc.Sequence) – A Sequence of param maps.

Returns:

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

Return type:

_FitMultipleIterator

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param: str) Any#
getOrDefault(param: Param[T]) T

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName: str) Param#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param: str | Param[Any]) bool#

Checks whether a param has a default value.

hasParam(paramName: str) bool#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user.

classmethod load(path: str) RL#

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read()#

Returns an MLReader instance for this class.

save(path: str) None#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param: Param, value: Any) None#

Sets a parameter in the embedded param map.

setEmbeddingType(value)#

Sets whether to use ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings.

Parameters:

value (str) – Either ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings (Default: sentence)

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setLevenshteinDistanceThresholdParam(value)#

Sets the Levenshtein distance threshold to narrow results from prefix search (Default: 0.1).

Parameters:

value (float) – Levenshtein distance threshold to narrow results from prefix search (Default: 0.1)

setNarrowWithApproximateMatching(value)#

Sets whether to narrow prefix search results with Levenshtein-distance-based matching (Default: true).

Parameters:

value (bool) – Whether to narrow prefix search results with Levenshtein-distance-based matching (Default: true)

setNearMatchingGapParam(value)#

Sets a limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing.

Parameters:

value (int) – Limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing

setNumFirstChars(value)#

Sets how many characters to consider for the initial prefix search in the knowledge base.

Parameters:

value (int) – How many characters to consider for the initial prefix search in the knowledge base

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setPredictionLimit(value)#

Sets the limit on the number of predictions N for top-N predictions.

Parameters:

value (int) – Limit on the number of predictions N for top-N predictions

setS3KnowledgeBaseName(value)#

Sets the knowledge base name in S3.

Parameters:

value (str) – Knowledge base name in S3, e.g. “i-per”

setTokenSearch(value)#

Sets whether to search by token or by chunk in knowledge base (Default: true)

Parameters:

value (bool) – Whether to search by token or by chunk in knowledge base (Default: true)

write() JavaMLWriter#

Returns an MLWriter instance for this ML instance.

class NerDisambiguatorModel(classname='com.johnsnowlabs.nlp.annotators.disambiguation.NerDisambiguatorModel', java_model=None)#

Bases: sparknlp_jsl.common.AnnotatorModelInternal

Instantiated / pretrained model of the NerDisambiguator. Links words of interest, such as names of persons, locations and companies, from an input text document to a corresponding unique entity in a target Knowledge Base (KB). Words of interest are called Named Entities (NEs), mentions, or surface forms.

Input Annotation types: CHUNK, SENTENCE_EMBEDDINGS

Output Annotation type: DISAMBIGUATION

Parameters:
  • embeddingTypeParam – Either ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings

  • numFirstChars – How many characters to consider for the initial prefix search in the knowledge base

  • tokenSearch – Whether to search the knowledge base by token or by chunk (token is recommended)

  • narrowWithApproximateMatching – Whether to narrow prefix search results with Levenshtein-distance-based matching (true is recommended)

  • levenshteinDistanceThresholdParam – Levenshtein distance threshold to narrow results from prefix search (Default: 0.1)

  • nearMatchingGapParam – Puts a limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing: candidates with len(candidate) - len(entity chunk) > nearMatchingGap are trimmed (Default: 4)

  • predictionsLimit – Limit on the number of predictions N for top-N predictions

  • s3KnowledgeBaseName – Knowledge base name in S3

Examples

>>> data = spark.createDataFrame([["The show also had a contestant named Donald Trump who later defeated Christina Aguilera ..."]]) \
...   .toDF("text")
>>> documentAssembler = DocumentAssembler() \
...   .setInputCol("text") \
...   .setOutputCol("document")
>>> sentenceDetector = SentenceDetector() \
...   .setInputCols(["document"]) \
...   .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...   .setInputCols(["sentence"]) \
...   .setOutputCol("token")
>>> word_embeddings = WordEmbeddingsModel.pretrained() \
...   .setInputCols(["sentence", "token"]) \
...   .setOutputCol("embeddings")
>>> sentence_embeddings = SentenceEmbeddings() \
...   .setInputCols(["sentence", "embeddings"]) \
...   .setOutputCol("sentence_embeddings")
>>> ner_model = NerDLModel.pretrained() \
...   .setInputCols(["sentence", "token", "embeddings"]) \
...   .setOutputCol("ner")
>>> ner_converter = NerConverter() \
...   .setInputCols(["sentence", "token", "ner"]) \
...   .setOutputCol("ner_chunk") \
...   .setWhiteList(["PER"])

Then the extracted entities can be disambiguated.

>>> disambiguator = NerDisambiguatorModel.pretrained() \
...   .setInputCols(["ner_chunk", "sentence_embeddings"]) \
...   .setOutputCol("disambiguation") \
...   .setNumFirstChars(5)
>>> nlpPipeline = Pipeline(stages=[
...   documentAssembler,
...   sentenceDetector,
...   tokenizer,
...   word_embeddings,
...   sentence_embeddings,
...   ner_model,
...   ner_converter,
...   disambiguator])
>>> model = nlpPipeline.fit(data)
>>> result = model.transform(data)
>>> result.selectExpr("explode(disambiguation)") \
...   .selectExpr("col.metadata.chunk as chunk", "col.result as result").show(5, False)

+------------------+------------------------------------------------------------------------------------------------------------------------+
|chunk             |result                                                                                                                  |
+------------------+------------------------------------------------------------------------------------------------------------------------+
|Donald Trump      |http://en.wikipedia.org/?curid=4848272, http://en.wikipedia.org/?curid=31698421, http://en.wikipedia.org/?curid=55907961|
|Christina Aguilera|http://en.wikipedia.org/?curid=144171, http://en.wikipedia.org/?curid=6636454                                           |
+------------------+------------------------------------------------------------------------------------------------------------------------+
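As in the approach example above, the result column holds comma-separated links; a hedged one-liner to keep only the top candidate per chunk (column names follow the example):

>>> result.selectExpr("explode(disambiguation) as col") \
...   .selectExpr("col.metadata.chunk as chunk", "split(col.result, ', ')[0] as top_link") \
...   .show(truncate=False)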

embeddingTypeParam#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
lazyAnnotator#
levenshteinDistanceThresholdParam#
name = 'NerDisambiguatorModel'#
narrowWithApproximateMatching#
nearMatchingGapParam#
numFirstChars#
optionalInputAnnotatorTypes = []#
outputAnnotatorType = 'disambiguation'#
outputCol#
predictionsLimit#
skipLPInputColsValidation = True#
tokenSearch#
uid = ''#
clear(param: pyspark.ml.param.Param) None#

Clears a param from the param map if it has been explicitly set.

copy(extra: pyspark.ml._typing.ParamMap | None = None) JP#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param: str | Param) str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() str#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param: str) Any#
getOrDefault(param: Param[T]) T

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName: str) Param#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param: str | Param[Any]) bool#

Checks whether a param has a default value.

hasParam(paramName: str) bool#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: str | Param[Any]) bool#

Checks whether a param is explicitly set by user.

classmethod load(path: str) RL#

Reads an ML instance from the input path, a shortcut of read().load(path).

static pretrained(name='disambiguator_per', lang='en', remote_loc='clinical/models')#

Downloads and loads a pretrained model.

Parameters:
  • name (str, optional) – Name of the pretrained model, by default “disambiguator_per”

  • lang (str, optional) – Language of the pretrained model, by default “en”

  • remote_loc (str, optional) – Optional remote address of the resource, by default “clinical/models”. Will use Spark NLP’s repositories otherwise.

Returns:

The restored model

Return type:

NerDisambiguatorModel
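A minimal loading sketch, using the defaults shown above and assuming access to the licensed clinical models repository:

>>> disambiguator = NerDisambiguatorModel.pretrained("disambiguator_per", "en", "clinical/models") \
...   .setInputCols(["ner_chunk", "sentence_embeddings"]) \
...   .setOutputCol("disambiguation")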

classmethod read()#

Returns an MLReader instance for this class.

save(path: str) None#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param: Param, value: Any) None#

Sets a parameter in the embedded param map.

setEmbeddingType(value)#

Sets whether to use ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings.

Parameters:

value (str) – Either ‘bow’ for word embeddings or ‘sentence’ for sentence embeddings (Default: sentence)

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setLevenshteinDistanceThresholdParam(value)#

Sets the Levenshtein distance threshold to narrow results from prefix search (Default: 0.1).

Parameters:

value (float) – Levenshtein distance threshold to narrow results from prefix search (Default: 0.1)

setNarrowWithApproximateMatching(value)#

Sets whether to narrow prefix search results with Levenshtein-distance-based matching (Default: true).

Parameters:

value (bool) – Whether to narrow prefix search results with Levenshtein-distance-based matching (Default: true)

setNearMatchingGapParam(value)#

Sets a limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing.

Parameters:

value (int) – Limit on string length (by trimming candidate chunks) during Levenshtein-distance-based narrowing

setNumFirstChars(value)#

Sets how many characters to consider for the initial prefix search in the knowledge base.

Parameters:

value (int) – How many characters to consider for the initial prefix search in the knowledge base

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#
setPredictionLimit(value)#

Sets the limit on the number of predictions N for top-N predictions.

Parameters:

value (int) – Limit on the number of predictions N for top-N predictions

setTokenSearch(value)#

Sets whether to search by token or by chunk in knowledge base (Default: true)

Parameters:

value (bool) – Whether to search by token or by chunk in knowledge base (Default: true)

transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame

write() JavaMLWriter#

Returns an MLWriter instance for this ML instance.