sparknlp_jsl.annotator.AssertionDLApproach#

class sparknlp_jsl.annotator.AssertionDLApproach[source]#

Bases: AnnotatorApproach

Trains an assertion status detection model using deep learning, based on extracted entities and text. AssertionDLApproach requires DOCUMENT, CHUNK and WORD_EMBEDDINGS type annotator inputs, which can be obtained by the annotators shown in the pipeline example below.

The training data should have annotation columns of type DOCUMENT, CHUNK and WORD_EMBEDDINGS, a label column (the assertion status that you want to predict), a start column (the token index where the target term starts) and an end column (the token index where the target term ends). The model uses deep learning to predict the assertion status of the entity.
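A minimal sketch of what a training row might look like, assuming an active Spark session spark; the column names (text, target, label, start, end) are illustrative and match the pipeline example below:

>>> data = spark.createDataFrame(
...     [("Patient denies any chest pain.", "chest pain", "absent", 3, 4)],
...     ["text", "target", "label", "start", "end"])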

Input Annotation types

Output Annotation type

DOCUMENT, CHUNK, WORD_EMBEDDINGS

ASSERTION

Parameters:
label

Column with one label per document. Example of possible values: “present”, “absent”, “hypothetical”, “conditional”, “associated_with_other_person”, etc.

startCol

Column that contains the token number for the start of the target

endCol

Column that contains the token number for the end of the target

batchSize

Size for each batch in the optimization process

epochs

Number of epochs for the optimization process

learningRate

Learning rate for the optimization process

dropout

Dropout at the output of each layer

maxSentLen

Max length for an input sentence.

graphFolder

Folder path that contains external graph files

graphFile

Graph file name to use

configProtoBytes

ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()

validationSplit

Proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0; the default is 0.0 (disabled).

evaluationLogExtended

Whether to log extended evaluation metrics during training.

testDataset

Path to a test dataset. If set, it is used to calculate statistics during training.

includeConfidence

Whether to include confidence scores in annotation metadata

enableOutputLogs

Whether or not to output logs

outputLogsPath

Folder path to save training logs

verbose

Level of verbosity during training

scopeWindow

The scope window of the assertion expression

Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from sparknlp.training import *
>>> from pyspark.ml import Pipeline
>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("word_embeddings") \
...     .setCaseSensitive(False)
>>> chunk = Doc2Chunk() \
...     .setInputCols(["sentence"]) \
...     .setChunkCol("target") \
...     .setOutputCol("chunk")
>>> assertion = AssertionDLApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setBatchSize(128) \
...     .setDropout(0.012) \
...     .setLearningRate(0.015) \
...     .setEpochs(1) \
...     .setStartCol("start") \
...     .setScopeWindow([3, 4]) \
...     .setEndCol("end") \
...     .setMaxSentLen(250)
>>> assertionPipeline = Pipeline(stages=[
… document_assembler,
… sentence_detector,
… tokenizer,
… embeddings,
… chunk,
… assertion])
>>> assertionModel = assertionPipeline.fit(dataset)
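A follow-up sketch for inference, assuming a held-out DataFrame test_data with the same columns as dataset; the column selection is illustrative:

>>> result = assertionModel.transform(test_data)
>>> result.selectExpr("explode(assertion) as assertion") \
...     .selectExpr("assertion.result", "assertion.metadata") \
...     .show(truncate=False)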

Methods

__init__()

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

fit(dataset[, params])

Fits a model to the input dataset with optional parameters.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(size)

Sets the batch size for the optimization process.

setConfigProtoBytes(b)

Sets ConfigProto from tensorflow, serialized into byte array.

setDropout(rate)

Sets the dropout at the output of each layer.

setEnableOutputLogs(value)

Sets whether to output logs to the annotators log folder.

setEndCol(e)

Set column that contains the token number for the end of the target.

setEpochs(number)

Sets number of epochs for the optimization process

setEvaluationLogExtended(v)

Sets whether to log extended evaluation metrics.

setGraphFile(value)

Sets path that contains the external graph file.

setGraphFolder(p)

Sets the folder path that contains external graph files.

setIncludeConfidence(value)

Sets whether to include confidence scores in annotation metadata.

setInputCols(*value)

Sets column names of input annotations.

setLabelCol(label)

Set a column with one label per document.

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setLearningRate(lamda)

Set a learning rate for the optimization process

setMaxSentLen(length)

Set the max length for an input sentence.

setOutputCol(value)

Sets output column name of annotations.

setOutputLogsPath(value)

Sets the folder path to save training logs.

setParamValue(paramName)

Sets the value of a parameter.

setScopeWindow(value)

Sets the scope window of the assertion expression.

setStartCol(s)

Set a column that contains the token number for the start of the target

setTestDataset(path[, read_as, options])

Sets path to test dataset.

setValidationSplit(v)

Sets the proportion of the training dataset to be validated against the model on each epoch.

setVerbose(value)

Sets level of verbosity during training.

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

configProtoBytes

dropout

enableOutputLogs

endCol

epochs

evaluationLogExtended

getter_attrs

graphFile

graphFolder

includeConfidence

inputCols

label

lazyAnnotator

learningRate

maxSentLen

outputCol

outputLogsPath

params

Returns all params ordered by name.

scopeWindow

startCol

testDataset

validationSplit

verbose

clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra – Extra parameters to copy to the new instance

Returns:

Copy of this instance

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra – extra param values

Returns:

merged param map

fit(dataset, params=None)#

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns:

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)#

Fits a model to the input dataset for each param map in paramMaps.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns:

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

property params#

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)#

Sets a parameter in the embedded param map.

setBatchSize(size)[source]#

Sets the batch size for the optimization process.

Parameters:
size : int

Size for each batch in the optimization process

setConfigProtoBytes(b)[source]#

Sets ConfigProto from tensorflow, serialized into a byte array. Get with config_proto.SerializeToString().

Parameters:
b : bytes

ConfigProto from tensorflow, serialized into a byte array. Get with config_proto.SerializeToString()
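A minimal sketch of building the serialized config, assuming TensorFlow is available on the driver; passing it as a list of ints is an assumption and may differ by library version:

>>> import tensorflow as tf
>>> config_proto = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> config_proto.gpu_options.allow_growth = True
>>> assertion.setConfigProtoBytes(list(config_proto.SerializeToString()))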

setDropout(rate)[source]#

Sets the dropout at the output of each layer.

Parameters:
rate : float

Dropout at the output of each layer

setEnableOutputLogs(value)[source]#

Sets whether to output logs to the annotators log folder.

Parameters:
value : bool

Whether to output logs to the annotators log folder.

setEndCol(e)[source]#

Sets the column that contains the token number for the end of the target.

Parameters:
e : str

Column that contains the token number for the end of the target

setEpochs(number)[source]#

Sets number of epochs for the optimization process

Parameters:
number : int

Number of epochs for the optimization process

setEvaluationLogExtended(v)[source]#

Sets whether to log extended evaluation metrics.

Parameters:
v : bool

Whether to log extended evaluation metrics.

setGraphFile(value)[source]#

Sets path that contains the external graph file. When specified, the provided file will be used, and no graph search will happen.

Parameters:
value : str

Path that contains the external graph file. When specified, the provided file will be used, and no graph search will happen.

setGraphFolder(p)[source]#

Sets the folder path that contains external graph files.

Parameters:
p : str

Folder path that contains external graph files.

setIncludeConfidence(value)[source]#

Sets whether to include confidence scores in annotation metadata.

Parameters:
value : bool

Whether to include confidence scores in annotation metadata

setInputCols(*value)#

Sets column names of input annotations.

Parameters:
*value : str

Input columns for the annotator

setLabelCol(label)[source]#

Set a column with one label per document. Example of possible values: “present”, “absent”, “hypothetical”, “conditional”, “associated_with_other_person”, etc.

Parameters:
label : str

Column with one label per document. Example of possible values: “present”, “absent”, “hypothetical”, “conditional”, “associated_with_other_person”, etc.

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setLearningRate(lamda)[source]#

Sets the learning rate for the optimization process.

Parameters:
lamda : float

Learning rate for the optimization process.

setMaxSentLen(length)[source]#

Set the max length for an input sentence.

Parameters:
length : int

Max length for an input sentence.

setOutputCol(value)#

Sets output column name of annotations.

Parameters:
value : str

Name of output column

setOutputLogsPath(value)[source]#

Sets the folder path to save training logs.

Parameters:
value : str

Folder path to save training logs.
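For example, enabling training logs and pointing them at a local folder (the path is illustrative):

>>> assertion.setEnableOutputLogs(True) \
...     .setOutputLogsPath("file:///tmp/assertion_training_logs")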

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

setScopeWindow(value)[source]#

Sets the scope window of the assertion expression.

Parameters:
value : [int, int]

Left and right offsets of the scope window. Offsets must be non-negative values.
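For example, re-using the value from the pipeline example above; the two entries are the left and right token offsets of the scope window around the target chunk:

>>> assertion.setScopeWindow([3, 4])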

setStartCol(s)[source]#

Set a column that contains the token number for the start of the target

Parameters:
s : str

Column that contains the token number for the start of the target

setTestDataset(path, read_as='SPARK', options=None)[source]#

Sets the path to a test dataset. If set, it is used to calculate statistics during training.

Parameters:
path : str

Path to a test dataset. If set, it is used to calculate statistics during training.
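A minimal sketch, assuming the test set was saved beforehand with the same columns as the training data; the path and the Parquet format are illustrative:

>>> assertion.setTestDataset("file:///tmp/assertion_test_data.parquet")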

setValidationSplit(v)[source]#

Sets the proportion of the training dataset to be validated against the model on each epoch.

The value should be between 0.0 and 1.0; the default is 0.0 (disabled).

Parameters:
v : float

Proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0; the default is 0.0 (disabled).

setVerbose(value)[source]#

Sets level of verbosity during training.

Parameters:
value : int

Level of verbosity during training.

uid#

A unique id for the object.

write()#

Returns an MLWriter instance for this ML instance.