sparknlp_jsl.annotator.multi_chunk2_doc#

Contains classes for MultiChunk2Doc.

Module Contents#

Classes#

MultiChunk2Doc

MultiChunk2Doc annotator merges the given chunks to create a single document.

class MultiChunk2Doc(classname='com.johnsnowlabs.nlp.annotators.MultiChunk2Doc', java_model=None)#

Bases: sparknlp_jsl.common.AnnotatorModelInternal, sparknlp_jsl.annotator.white_black_list_params.WhiteBlackListParams

MultiChunk2Doc annotator merges the given chunks to create a single document. During document creation, whitelist and blacklist filters can be applied, and case sensitivity can be adjusted. See Also: sparknlp_jsl.annotator.WhiteBlackListParams for filtering options.

Additionally, a prefix and a suffix can be placed before and after the merged chunks in the resulting document, and a separator can be placed between the chunks.
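For instance, with prefix "<", separator "><", and suffix ">", the resulting document text takes the form <chunk1><chunk2>. A minimal pure-Python sketch of this formatting behavior (an illustration only, not the annotator's actual implementation):

>>> chunks = ["gestational diabetes mellitus", "HTG-induced pancreatitis"]
>>> prefix, separator, suffix = "<", "><", ">"
>>> prefix + separator.join(chunks) + suffix
'<gestational diabetes mellitus><HTG-induced pancreatitis>'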

Converts CHUNK type annotations into DOCUMENT type annotations.

Input Annotation types: CHUNK

Output Annotation type: DOCUMENT

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare", "en", "clinical/models") \
...     .setInputCols("document").setOutputCol("sentence")
>>> tokenizer = Tokenizer().setInputCols("sentence").setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols("sentence", "token").setOutputCol("embeddings")
>>> clinical_ner = MedicalNerModel.pretrained("ner_clinical_large_langtest", "en", "clinical/models") \
...     .setInputCols(["sentence", "token", "embeddings"]).setOutputCol("ner")
>>> ner_converter = NerConverterInternal().setInputCols(["sentence", "token", "ner"]).setOutputCol("ner_chunk")
>>> multi_chunk2_doc = MultiChunk2Doc() \
...     .setInputCols(["ner_chunk"]) \
...     .setOutputCol("new_document") \
...     .setWhiteList(["test"]) \
...     .setCaseSensitive(False) \
...     .setPrefix("<") \
...     .setSeparator("><") \
...     .setSuffix(">")
>>> data = spark.createDataFrame([["A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation " + \
...     "and subsequent type two diabetes mellitus (T2DM), " + \
...     "one prior episode of HTG-induced pancreatitis three years prior to presentation, " + \
...     "and associated with an acute hepatitis, presented with a one-week history of polyuria, poor appetite, and vomiting. " + \
...     "She was on metformin, glipizide, and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG. " + \
...     "She had been on dapagliflozin for six months at the time of presentation. " + \
...     "Physical examination on presentation was significant for dry oral mucosa ; " + \
...     "significantly , her abdominal examination was benign with no tenderness, guarding, or rigidity."]]) \
...     .toDF("text")
>>> pipeline = Pipeline() \
...     .setStages([documentAssembler, sentenceDetector, tokenizer, embeddings, clinical_ner, ner_converter, multi_chunk2_doc]) \
...     .fit(data)
>>> result = pipeline.transform(data)
>>> result.selectExpr("new_document").show(truncate=False)
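To inspect just the merged text rather than the full annotation struct, the result column can be unpacked with explode (a usage sketch; the standard Spark NLP annotation schema exposes the text under the result field):

>>> result.selectExpr("explode(new_document.result) as merged_text").show(truncate=False)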
blackList#
caseSensitive#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
lazyAnnotator#
name = MultiChunk2Doc#
optionalInputAnnotatorTypes = []#
outputAnnotatorType#
outputCol#
prefix#
separator#
skipLPInputColsValidation = True#
suffix#
whiteList#
clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#
isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of write().save(path).
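A minimal persistence round trip using save and load (the path below is illustrative):

>>> multi_chunk2_doc.save("/tmp/multi_chunk2_doc")
>>> restored = MultiChunk2Doc.load("/tmp/multi_chunk2_doc")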

set(param, value)#

Sets a parameter in the embedded param map.

setBlackList(value)#

Sets the list of entities to ignore (if defined); the rest will be processed. Do not include the IOB prefix on labels.

Parameters:

value (List[str]) – List of entities to ignore; the rest will be processed. Do not include the IOB prefix on labels.

setCaseSensitive(value)#

Sets whether the definitions of the whitelisted and blacklisted entities are case sensitive.

Parameters:

value (bool) – Whether whitelisted and blacklisted entities are case sensitive. Default: True.

setDenyList(value)#

Sets the list of entities to ignore (if defined); the rest will be processed. Do not include the IOB prefix on labels.

Parameters:

value (List[str]) – List of entities to ignore; the rest will be processed. Do not include the IOB prefix on labels.

setForceInputTypeValidation(etfm)#
setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#
setPrefix(value: str)#

Sets the prefix to add to the result. Default: “”.

Parameters:

value (str) – the prefix to add to the result. Default: “”.

setSeparator(value: str)#

Sets the separator to add between the chunks. Default: “,”.

Parameters:

value (str) – the separator to add between the chunks. Default: “,”.

setSuffix(value: str)#

Sets the suffix to add to the result. Default: “”.

Parameters:

value (str) – the suffix to add to the result. Default: “”.

setWhiteList(value)#

Sets the list of entities to process (if defined); the rest will be ignored. Do not include the IOB prefix on labels.

Parameters:

value (List[str]) – List of entities to process; the rest will be ignored. Do not include the IOB prefix on labels.
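For example, to keep only chunks labeled TEST or TREATMENT, matching labels case-insensitively (the label names are illustrative and depend on the NER model used):

>>> multi_chunk2_doc = MultiChunk2Doc() \
...     .setInputCols(["ner_chunk"]) \
...     .setOutputCol("new_document") \
...     .setWhiteList(["TEST", "TREATMENT"]) \
...     .setCaseSensitive(False)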

transform(dataset, params=None)#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame

write()#

Returns an MLWriter instance for this ML instance.