sparknlp_jsl.annotator.MedicalNerApproach#

class sparknlp_jsl.annotator.MedicalNerApproach[source]#

Bases: AnnotatorApproach, NerApproach

This Named Entity Recognition annotator allows training a generic NER model based on neural networks.

The architecture of the neural network is a Char CNN - BiLSTM - CRF, which achieves state-of-the-art results on most datasets.

For instantiated/pretrained models, see MedicalNerModel.

The training data should be a labeled Spark Dataset, in the format of CoNLL 2003 IOB with Annotation type columns. The data should have columns of type DOCUMENT, TOKEN, WORD_EMBEDDINGS and an additional label column of annotator type NAMED_ENTITY.

Excluding the label, these annotation columns can be produced with, for example:

  • a SentenceDetector,

  • a Tokenizer and

  • a WordEmbeddingsModel (any embeddings can be chosen, e.g. BertEmbeddings for BERT-based embeddings).

For extended examples of usage, see the Spark NLP Workshop.

Input Annotation types: DOCUMENT, TOKEN, WORD_EMBEDDINGS

Output Annotation type: NAMED_ENTITY

Parameters:
labelColumn

Column with label per each token

entities

Entities to recognize

minEpochs

Minimum number of epochs to train, by default 0

maxEpochs

Maximum number of epochs to train, by default 50

verbose

Level of verbosity during training, by default 2

randomSeed

Random seed

lr

Learning Rate, by default 0.001

po

Learning rate decay coefficient. Real Learning Rate = lr / (1 + po * epoch), by default 0.005 (a short worked example follows this parameter list).

batchSize

Batch size, by default 8

dropout

Dropout coefficient, by default 0.5

graphFolder

Folder path that contains external graph files

configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

useContrib

Whether to use contrib LSTM cells. Not compatible with Windows. Might slightly improve accuracy.

validationSplit

Proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0. By default it is 0.0, which disables validation.

evaluationLogExtended

Whether validation logs should be extended, by default False.

testDataset

Path to the test dataset. If set, it is used to calculate statistics during training.

includeConfidence

Whether to include confidence scores in annotation metadata, by default False

includeAllConfidenceScores

Whether to include all confidence scores in annotation metadata or just the score of the predicted tag, by default False

enableOutputLogs

Whether to use stdout in addition to Spark logs, by default False

outputLogsPath

Folder path to save training logs

enableMemoryOptimizer

Whether to optimize for large datasets or not. Enabling this option can slow down training, by default False
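
For illustration, the learning rate decay referenced in the po parameter above can be evaluated directly. A plain Python sketch (not part of the annotator API), using the defaults lr = 0.001 and po = 0.005:

>>> lr, po = 0.001, 0.005
>>> # effective learning rate for the first three epochs: lr / (1 + po * epoch)
>>> [round(lr / (1 + po * epoch), 6) for epoch in range(3)]
[0.001, 0.000995, 0.00099]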

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline

First extract the prerequisites for the MedicalNerApproach

>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> embeddings = BertEmbeddings.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")

Then the training can start

>>> nerTagger = MedicalNerApproach() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setLabelColumn("label") \
...     .setOutputCol("ner") \
...     .setMaxEpochs(1) \
...     .setRandomSeed(0) \
...     .setVerbose(0)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentence,
...     tokenizer,
...     embeddings,
...     nerTagger
... ])
>>> conll = CoNLL()
>>> trainingData = conll.readDataset(spark, "src/test/resources/conll2003/eng.train")
>>> pipelineModel = pipeline.fit(trainingData)
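
Once fitted, the pipeline model can be used for prediction like any Spark ML model. A minimal sketch (the input sentence is made up):

>>> data = spark.createDataFrame([["John Smith was admitted to the clinic on Monday."]]).toDF("text")
>>> result = pipelineModel.transform(data)
>>> result.selectExpr("explode(ner.result)").show(truncate=False)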

Methods

__init__()

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

fit(dataset[, params])

Fits a model to the input dataset with optional parameters.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

getInputCols()

Gets current column names of input annotations.

getLabelColumn()

Gets column for label per each token.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(v)

Sets batch size, by default 8.

setConfigProtoBytes(b)

Sets configProto from tensorflow, serialized into byte array.

setDropout(v)

Sets dropout coefficient, by default 0.5.

setEarlyStoppingCriterion(criterion)

Sets early stopping criterion.

setEarlyStoppingPatience(patience)

Sets the number of epochs with no performance improvement before training is terminated.

setEnableMemoryOptimizer(value)

Sets whether to optimize for large datasets, by default False.

setEnableOutputLogs(value)

Sets whether to use stdout in addition to Spark logs, by default False.

setEntities(tags)

Sets entities to recognize.

setEvaluationLogExtended(v)

Sets whether validation logs should be extended, by default False.

setGraphFile(ff)

Sets path that contains the external graph file.

setGraphFolder(p)

Sets folder path that contains external graph files.

setIncludeAllConfidenceScores(value)

Sets whether to include all confidence scores in annotation metadata or just the score of the predicted tag, by default False.

setIncludeConfidence(value)

Sets whether to include confidence scores in annotation metadata, by default False.

setInputCols(*value)

Sets column names of input annotations.

setLabelColumn(value)

Sets name of column for data labels.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setLogPrefix(s)

Sets a prefix to be used in the training logs.

setLr(v)

Sets Learning Rate, by default 0.001.

setMaxEpochs(epochs)

Sets maximum number of epochs to train.

setMinEpochs(epochs)

Sets minimum number of epochs to train.

setOutputCol(value)

Sets output column name of annotations.

setOutputLogsPath(p)

Sets folder path to save training logs.

setOverrideExistingTags(value)

Sets whether to override already learned tags when using a pretrained model to initialize the new model.

setParamValue(paramName)

Sets the value of a parameter.

setPo(v)

Sets Learning rate decay coefficient, by default 0.005.

setPretrainedModelPath(value)

Sets the path to an already trained MedicalNerModel, which is used as a starting point for training the new model.

setRandomSeed(seed)

Sets random seed for shuffling.

setTagsMapping(value)

Sets a map specifying how old tags are mapped to new ones.

setTestDataset(path[, read_as, options])

Sets Path to test dataset.

setUseBestModel(value)

Sets whether to restore and use the model that has achieved the best performance at the end of the training.

setUseContrib(v)

Sets whether to use contrib LSTM Cells.

setValidationSplit(v)

Sets the proportion of the training dataset to be validated against the model on each epoch, by default 0.0 (off).

setVerbose(verboseValue)

Sets level of verbosity during training.

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

configProtoBytes

dropout

earlyStoppingCriterion

earlyStoppingPatience

enableMemoryOptimizer

enableOutputLogs

entities

evaluationLogExtended

getter_attrs

graphFile

graphFolder

includeAllConfidenceScores

includeConfidence

inputCols

labelColumn

lazyAnnotator

logPrefix

lr

maxEpochs

minEpochs

outputCol

outputLogsPath

overrideExistingTags

params

Returns all params ordered by name.

po

pretrainedModelPath

randomSeed

tagsMapping

testDataset

useBestModel

useContrib

validationSplit

verbose

clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra – Extra parameters to copy to the new instance

Returns:

Copy of this instance

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra – extra param values

Returns:

merged param map

fit(dataset, params=None)#

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns:

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)#

Fits a model to the input dataset for each param map in paramMaps.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns:

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.

getInputCols()#

Gets current column names of input annotations.

getLabelColumn()#

Gets column for label per each token.

Returns:
str

Column with label per each token

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

property params#

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)#

Sets a parameter in the embedded param map.

setBatchSize(v)[source]#

Sets batch size, by default 8.

Parameters:
v : int

Batch size

setConfigProtoBytes(b)[source]#

Sets configProto from tensorflow, serialized into byte array.

Parameters:
b : List[str]

ConfigProto from tensorflow, serialized into byte array

setDropout(v)[source]#

Sets dropout coefficient, by default 0.5.

Parameters:
v : float

Dropout coefficient

setEarlyStoppingCriterion(criterion)[source]#

Sets early stopping criterion. A value 0 means no early stopping.

Parameters:
criterion : float

Early stopping criterion.

setEarlyStoppingPatience(patience)[source]#

Sets the number of epochs with no performance improvement before training is terminated.

Parameters:
patience : int

Early stopping patience.
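
A minimal sketch combining both early stopping parameters (the values are illustrative), assuming the nerTagger instance from the Examples section and a configured validation split or test dataset so there is a metric to monitor:

>>> nerTagger = nerTagger \
...     .setEarlyStoppingCriterion(0.01) \
...     .setEarlyStoppingPatience(3)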

setEnableMemoryOptimizer(value)[source]#

Sets whether to optimize for large datasets, by default False. Enabling this option can slow down training.

Parameters:
value : bool

Whether to optimize for large datasets

setEnableOutputLogs(value)[source]#

Sets whether to use stdout in addition to Spark logs, by default False.

Parameters:
value : bool

Whether to use stdout in addition to Spark logs

setEntities(tags)#

Sets entities to recognize.

Parameters:
tags : List[str]

List of entities

setEvaluationLogExtended(v)[source]#

Sets whether validation logs should be extended, by default False. When enabled, the time and the evaluation for each label are displayed.

Parameters:
v : bool

Whether validation logs should be extended

setGraphFile(ff)[source]#

Sets path that contains the external graph file. When specified, the provided file will be used, and no graph search will happen.

Parameters:
ff : str

Path that contains the external graph file. When specified, the provided file will be used, and no graph search will happen.

setGraphFolder(p)[source]#

Sets folder path that contains external graph files.

Parameters:
p : str

Folder path that contains external graph files
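
A minimal sketch (the folder path is hypothetical), assuming the nerTagger instance from the Examples section; to pin a single graph file instead, use setGraphFile:

>>> nerTagger = nerTagger.setGraphFolder("ner_graphs/")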

setIncludeAllConfidenceScores(value)[source]#

Sets whether to include all confidence scores in annotation metadata or just the score of the predicted tag, by default False.

Parameters:
value : bool

Whether to include all confidence scores in annotation metadata or just the score of the predicted tag

setIncludeConfidence(value)[source]#

Sets whether to include confidence scores in annotation metadata, by default False.

Parameters:
value : bool

Whether to include the confidence value in the output.
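
A minimal sketch, assuming the nerTagger instance from the Examples section; the scores are then written into the annotation metadata of the trained model's output (exact metadata keys may vary between versions):

>>> nerTagger = nerTagger.setIncludeConfidence(True)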

setInputCols(*value)#

Sets column names of input annotations.

Parameters:
*value : str

Input columns for the annotator

setLabelColumn(value)#

Sets name of column for data labels.

Parameters:
value : str

Column for data labels

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setLogPrefix(s)[source]#

Sets a prefix to be used in the training logs.

Parameters:
s : str

Prefix to be used in the training logs

setLr(v)[source]#

Sets Learning Rate, by default 0.001.

Parameters:
v : float

Learning Rate

setMaxEpochs(epochs)#

Sets maximum number of epochs to train.

Parameters:
epochs : int

Maximum number of epochs to train

setMinEpochs(epochs)#

Sets minimum number of epochs to train.

Parameters:
epochs : int

Minimum number of epochs to train

setOutputCol(value)#

Sets output column name of annotations.

Parameters:
value : str

Name of output column

setOutputLogsPath(p)[source]#

Sets folder path to save training logs.

Parameters:
p : str

Folder path to save training logs
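
A minimal sketch (the folder path is hypothetical), typically combined with setEnableOutputLogs:

>>> nerTagger = nerTagger \
...     .setEnableOutputLogs(True) \
...     .setOutputLogsPath("ner_logs/")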

setOverrideExistingTags(value)[source]#

Sets whether to override already learned tags when using a pretrained model to initialize the new model, by default True.

Parameters:
value : bool

Whether to override already learned tags when using a pretrained model to initialize the new model

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

setPo(v)[source]#

Sets Learning rate decay coefficient, by default 0.005.

Real Learning Rate is lr / (1 + po * epoch).

Parameters:
v : float

Learning rate decay coefficient

setPretrainedModelPath(value)[source]#

Sets the path to an already trained MedicalNerModel, which is used as a starting point for training the new model.

Parameters:
value : str

Path to an already trained MedicalNerModel, which is used as a starting point for training the new model.

setRandomSeed(seed)#

Sets random seed for shuffling.

Parameters:
seed : int

Random seed for shuffling

setTagsMapping(value)[source]#

Sets a map specifying how old tags are mapped to new ones. It only takes effect in combination with setOverrideExistingTags.

Parameters:
value : list

A map specifying how old tags are mapped to new ones. It only takes effect in combination with setOverrideExistingTags.
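
A minimal transfer-learning sketch (the model path is hypothetical); setOverrideExistingTags and setTagsMapping control how the tag set of the existing model is reused:

>>> nerTagger = MedicalNerApproach() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setLabelColumn("label") \
...     .setOutputCol("ner") \
...     .setPretrainedModelPath("models/existing_medical_ner_model")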

setTestDataset(path, read_as='SPARK', options=None)[source]#

Sets path to the test dataset. If set, it is used to calculate statistics during training.

Parameters:
path : str

Path to the test dataset

read_as : str, optional

How to read the resource, by default ReadAs.SPARK

options : dict, optional

Options for reading the resource, by default {"format": "parquet"}
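
A minimal sketch (paths are hypothetical). Since the default read options expect parquet, one common pattern is to prepare the labeled test set once, run it through the embedding stage from the Examples section, and save it as parquet:

>>> conll = CoNLL()
>>> testData = conll.readDataset(spark, "path/to/test_data.conll")
>>> embeddings.transform(testData).write.mode("overwrite").parquet("test_data.parquet")
>>> nerTagger = nerTagger.setTestDataset("test_data.parquet")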

setUseBestModel(value)[source]#

Sets whether to restore and use the model that achieved the best performance at the end of training, as measured by macro F1.

Parameters:
value : bool

Whether to return the model that has achieved the best metrics across epochs.
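
A minimal sketch, assuming the nerTagger instance from the Examples section with a validation split or test dataset configured (as in the sketches above) so the metric can be tracked across epochs:

>>> nerTagger = nerTagger.setUseBestModel(True)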

setUseContrib(v)[source]#

Sets whether to use contrib LSTM Cells. Not compatible with Windows. Might slightly improve accuracy.

Parameters:
v : bool

Whether to use contrib LSTM Cells

Raises:
Exception

Contrib LSTM cells are not supported on Windows

setValidationSplit(v)[source]#

Sets the proportion of the training dataset to be validated against the model on each epoch, by default 0.0 (off). The value should be between 0.0 and 1.0.

Parameters:
v : float

Proportion of training dataset to be validated
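
A minimal sketch holding out 10% of the training data for per-epoch validation, combined here with extended evaluation logging (the values are illustrative):

>>> nerTagger = nerTagger \
...     .setValidationSplit(0.1) \
...     .setEvaluationLogExtended(True)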

setVerbose(verboseValue)#

Sets level of verbosity during training.

Parameters:
verboseValue : int

Level of verbosity

uid#

A unique id for the object.

write()#

Returns an MLWriter instance for this ML instance.