sparknlp_jsl.legal.chunk_classification.assertion.assertionDL
#
Module Contents#
Classes#
AssertionDLApproach – Train an assertion model using deep learning.
AssertionDLModel – AssertionDL is a deep-learning-based approach used to extract assertion status from extracted entities and text.
- class AssertionDLApproach#
Bases:
sparknlp_jsl.annotator.assertion.assertionDL.AssertionDLApproach
Train an Assertion Model using deep learning.
AssertionDLApproach extracts the assertion status from extracted entities and text. It requires DOCUMENT, CHUNK and WORD_EMBEDDINGS type annotator inputs, which can be obtained by e.g. a DocumentAssembler, NerConverter and WordEmbeddingsModel.
The training data should have annotation columns of type DOCUMENT, CHUNK and WORD_EMBEDDINGS, a label column (the assertion status that you want to predict), a start column (the start token index of the term that has the assertion status), and an end column (the end token index of the term that has the assertion status). The model uses deep learning to predict the assertion status of each chunk.
Input Annotation types: DOCUMENT, CHUNK, WORD_EMBEDDINGS
Output Annotation type: ASSERTION
- Parameters:
label – Column with one label per document. Example of possible values: “present”, “absent”, “hypothetical”, “conditional”, “associated_with_other_person”, etc.
startCol – Column that contains the token number for the start of the target
endCol – Column that contains the token number for the end of the target
batchSize – Size for each batch in the optimization process
epochs – Number of epochs for the optimization process
learningRate – Learning rate for the optimization process
dropout – Dropout at the output of each layer
maxSentLen – Max length for an input sentence.
graphFolder – Folder path that contains external graph files. The path can be a local file path, a distributed file path (HDFS, DBFS), or a cloud storage (S3).
graphFile – Path that contains the external graph file. When specified, the provided file will be used, and no graph search will happen. The path can be a local file path, a distributed file path (HDFS, DBFS), or a cloud storage (S3).
configProtoBytes – ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()
validationSplit – Proportion of the training dataset to be validated against the model on each epoch. The value should be between 0.0 and 1.0; the default is 0.0 (disabled).
evaluationLogExtended – Whether to output extended evaluation logs with per-label metrics during validation.
testDataset – Path to the test dataset. If set, it is used to calculate statistics during training.
includeConfidence – Whether to include confidence scores in annotation metadata
enableOutputLogs – Whether or not to output logs
outputLogsPath – Folder path to save training logs. If no path is specified, the logs won’t be stored on disk. The path can be a local file path, a distributed file path (HDFS, DBFS), or a cloud storage (S3).
verbose – Level of verbosity during training
scopeWindow – The scope window of the assertion expression
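The training DataFrame typically carries one row per target chunk. A minimal sketch of such a dataset, assuming the column names label, start and end wired up via setLabelCol, setStartCol and setEndCol (the texts, targets, and labels below are purely illustrative):
>>> # Hypothetical training rows: text, the target chunk, its assertion label,
>>> # and the start/end token indices of the target within the text.
>>> training_data = spark.createDataFrame([
...     ("Customer may terminate the agreement", "terminate the agreement", "conditional", 2, 4),
...     ("The contract was signed on January 1", "signed", "present", 3, 3),
... ]).toDF("text", "target", "label", "start", "end")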
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from sparknlp.training import *
>>> from pyspark.ml import Pipeline
>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("word_embeddings") \
...     .setCaseSensitive(False)
>>> chunk = Doc2Chunk() \
...     .setInputCols(["document"]) \
...     .setChunkCol("target") \
...     .setOutputCol("chunk")
>>> assertion = AssertionDLApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setBatchSize(128) \
...     .setDropout(0.012) \
...     .setLearningRate(0.015) \
...     .setEpochs(1) \
...     .setStartCol("start") \
...     .setEndCol("end") \
...     .setScopeWindow([3, 4]) \
...     .setMaxSentLen(250)
>>> assertionPipeline = Pipeline(stages=[
...     document_assembler,
...     sentence_detector,
...     tokenizer,
...     embeddings,
...     chunk,
...     assertion])
>>> assertionModel = assertionPipeline.fit(dataset)
- batchSize#
- configProtoBytes#
- datasetInfo#
- doExceptionHandling#
- dropout#
- enableOutputLogs#
- endCol#
- engine#
- epochs#
- getter_attrs = []#
- graphFile#
- graphFolder#
- includeConfidence#
- inputAnnotatorTypes#
- inputCols#
- label#
- lazyAnnotator#
- learningRate#
- maxSentLen#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- outputLogsPath#
- scopeWindow#
- skipLPInputColsValidation = True#
- startCol#
- testDataset#
- uid#
- validationSplit#
- verbose#
- clear(param: pyspark.ml.param.Param) None #
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP #
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str #
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str #
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap #
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = ...) M #
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: List[pyspark.ml._typing.ParamMap] | Tuple[pyspark.ml._typing.ParamMap]) List[M]
Fits a model to the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict or list or tuple, optional) – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
- Returns:
fitted model(s)
- Return type:
Transformer or a list of Transformer
- fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[pyspark.ml._typing.ParamMap]) Iterator[Tuple[int, M]] #
Fits a model to the input dataset for each param map in paramMaps.
New in version 2.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
paramMaps (collections.abc.Sequence) – A Sequence of param maps.
- Returns:
A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
- Return type:
_FitMultipleIterator
- getEngine()#
- Returns:
Deep Learning engine used for this model
- Return type:
str
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any #
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param #
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param: str | Param[Any]) bool #
Checks whether a param has a default value.
- hasParam(paramName: str) bool #
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL #
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None #
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None #
Sets a parameter in the embedded param map.
- setBatchSize(size: int)#
Sets the size of each batch in the optimization process.
- Parameters:
size (int) – Size for each batch in the optimization process
- setConfigProtoBytes(conf)#
Sets the ConfigProto from tensorflow, serialized into byte array.
Get with config_proto.SerializeToString().
- Parameters:
conf (bytes) – The byte array containing the ConfigProto from tensorflow.
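For example, a serialized ConfigProto can be produced with TensorFlow’s v1 compatibility API (a sketch; depending on the version, the setter may expect the raw bytes or a list of ints):
>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto()
>>> config.gpu_options.allow_growth = True  # illustrative option
>>> assertion = AssertionDLApproach().setConfigProtoBytes(list(config.SerializeToString()))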
- setDatasetInfo(info: str)#
Sets descriptive information about the dataset being used.
- Parameters:
info (str) – Descriptive information about the dataset being used.
- setDoExceptionHandling(value: bool)#
If True, exceptions are handled. If exception-causing data is passed to the model, an error annotation is emitted which holds the exception message. Processing continues with the next record. This comes with a performance penalty.
- Parameters:
value (bool) – If True, exceptions are handled.
- setDropout(rate: float)#
Sets a dropout at the output of each layer.
- Parameters:
rate (float) – Dropout rate at the output of each layer
- setEnableOutputLogs(value: str)#
Sets whether to output logs to the annotators log folder.
- Parameters:
value (bool) – Whether to output logs during training.
- setEndCol(end_col: str)#
Set column that contains the token number for the end of the target.
- Parameters:
end_col (str) – Column that contains the token number for the end of the target.
- setEpochs(number: int)#
Sets number of epochs for the optimization process.
- Parameters:
number (int) – Number of epochs for the optimization process.
- setForceInputTypeValidation(etfm)#
- setGraphFile(value: str)#
Sets path that contains the external graph file.
When specified, the provided file will be used, and no graph search will happen.
- Parameters:
value (str) – The path to the graph file.
- setGraphFolder(folder: str)#
Sets the folder path that contains external graph files.
- Parameters:
folder (str) – Folder path that contains external graph files.
- setIncludeConfidence(value: bool)#
Sets whether to include confidence scores in annotation metadata.
- Parameters:
value (bool) – Whether to use confidence scores in annotation metadata or not.
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLabelCol(colname: str)#
Set the column containing the labels of the documents.
- Parameters:
colname (str) – The column name.
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setLearningRate(lr: float)#
Set a learning rate for the optimization process.
- Parameters:
lr (float) – Learning rate for the optimization process.
- setMaxSentLen(length: int)#
Set the max length for an input sentence.
- Parameters:
length (int) – The new value for maximum sentence length.
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setOutputLogsPath(value: str)#
Sets the folder path where training logs will be saved.
- Parameters:
value (str) – Folder path to save training logs.
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setScopeWindow(value: list)#
Sets the scope window of the assertion expression.
- Parameters:
value (list) – Left and right offsets of the scope window. Offsets must be non-negative values.
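For example, to consider three tokens to the left and four tokens to the right of the target chunk (offsets illustrative):
>>> assertion = assertion.setScopeWindow([3, 4])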
- setStartCol(start_col: str)#
Set a column that contains the token number for the start of the target.
- Parameters:
start_col (str) – Column that contains the token number for the start of the target.
- setTestDataset(path: str, read_as=ReadAs.SPARK, options=None)#
Sets path to test dataset.
If set, the dataset will be used to calculate performance metrics during training.
- Parameters:
path (str) – Path to the test dataset. If set, it is used to calculate statistics during training.
read_as (str) – Read as format. Default is “SPARK” (ReadAs.SPARK).
options (dict) – Options for reading. Default is None.
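A minimal sketch, assuming a held-out split test_data with the same schema as the training set, stored as parquet (the path, and whether a format option must be passed, may vary by version):
>>> test_data.write.mode("overwrite").parquet("/tmp/assertion_test_data")
>>> assertion = assertion.setTestDataset("/tmp/assertion_test_data")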
- setValidationSplit(value: float)#
Sets the validation size for the training dataset.
The value should be between 0.0 and 1.0.
- Parameters:
value (float) – The proportion of training dataset to be used as validation set.
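For example, to hold out 20% of the training data for per-epoch validation while writing training logs to a local folder (path illustrative):
>>> assertion = assertion \
...     .setValidationSplit(0.2) \
...     .setEnableOutputLogs(True) \
...     .setOutputLogsPath("/tmp/assertion_logs")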
- setVerbose(value: int)#
Sets level of verbosity during training.
- Parameters:
value (int) – Level of verbosity during training.
- write() JavaMLWriter #
Returns an MLWriter instance for this ML instance.
- class AssertionDLModel(classname='com.johnsnowlabs.legal.chunk_classification.assertion.AssertionDLModel', java_model=None)#
Bases:
sparknlp_jsl.annotator.assertion.assertionDL.AssertionDLModel
AssertionDL is a deep-learning-based approach used to extract assertion status from extracted entities and text.
Input Annotation types: DOCUMENT, CHUNK, WORD_EMBEDDINGS
Output Annotation type: ASSERTION
- Parameters:
maxSentLen – Max length for an input sentence.
targetNerLabels – List of NER labels to mark as target for assertion, must match NER output.
configProtoBytes – ConfigProto from tensorflow, serialized into byte array.
classes – Tags used to trained this AssertionDLModel
scopeWindow – The scope window of the assertion expression
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp_jsl.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> data = spark.createDataFrame([
...     ["Patient with severe fever and sore throat"],
...     ["Patient shows no stomach pain"],
...     ["She was maintained on an epidural and PCA for pain control."]]).toDF("text")
>>> documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
>>> sentenceDetector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
>>> tokenizer = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")
>>> nerModel = MedicalNerModel.pretrained("ner_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setOutputCol("ner")
>>> nerConverter = NerConverter().setInputCols(["sentence", "token", "ner"]).setOutputCol("ner_chunk")
>>> clinicalAssertion = AssertionDLModel.pretrained("assertion_dl", "en", "clinical/models") \
...     .setInputCols(["sentence", "ner_chunk", "embeddings"]) \
...     .setOutputCol("assertion")
>>> assertionPipeline = Pipeline(stages=[
...     documentAssembler,
...     sentenceDetector,
...     tokenizer,
...     embeddings,
...     nerModel,
...     nerConverter,
...     clinicalAssertion
... ])
>>> assertionModel = assertionPipeline.fit(data)
>>> result = assertionModel.transform(data)
>>> result.selectExpr("ner_chunk.result as ner", "assertion.result").show(3, truncate=False)
+--------------------------------+--------------------------------+
|ner                             |result                          |
+--------------------------------+--------------------------------+
|[severe fever, sore throat]     |[present, present]              |
|[stomach pain]                  |[absent]                        |
|[an epidural, PCA, pain control]|[present, present, hypothetical]|
+--------------------------------+--------------------------------+
- classes#
- configProtoBytes#
- datasetInfo#
- entityAssertionCaseSensitive#
- getter_attrs = []#
- includeConfidence#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- maxSentLen#
- name = 'AssertionDLModel'#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- scopeWindow#
- skipLPInputColsValidation = True#
- storageRef#
- targetNerLabels#
- uid#
- clear(param: pyspark.ml.param.Param) None #
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP #
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str #
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str #
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap #
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- getEntityAssertion()#
Returns the lists of assertion labels allowed for a given entity.
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any #
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param #
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- getReplaceLabels()#
Gets the assertion labels to be replaced for the specified new labels.
- getStorageRef()#
Gets unique reference name for identification.
- Returns:
Unique reference name for identification
- Return type:
str
- hasDefault(param: str | Param[Any]) bool #
Checks whether a param has a default value.
- hasParam(paramName: str) bool #
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL #
Reads an ML instance from the input path, a shortcut of read().load(path).
- static pretrained(name='legassertion_time_md', lang='en', remote_loc='legal/models')#
Download a pre-trained AssertionDLModel.
- Parameters:
name (str) – Name of the pre-trained model, by default “legassertion_time_md”
lang (str) – Language of the pre-trained model, by default “en”
remote_loc (str) – Remote location of the pre-trained model. If None, use the open-source location. Other values are “clinical/models”, “finance/models”, or “legal/models”.
- Returns:
A pre-trained AssertionDLModel.
- Return type:
AssertionDLModel
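Typical usage with the default legal model named above:
>>> assertion = AssertionDLModel.pretrained("legassertion_time_md", "en", "legal/models") \
...     .setInputCols(["sentence", "ner_chunk", "embeddings"]) \
...     .setOutputCol("assertion")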
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None #
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None #
Sets a parameter in the embedded param map.
- setConfigProtoBytes(b)#
Sets the configProtoBytes for the AssertionDLModel.
- Parameters:
b (bytearray) – ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString().
- setDatasetInfo(info: str)#
Sets descriptive information about the dataset being used.
- Parameters:
info (str) – Descriptive information about the dataset being used.
- setEntityAssertion(assertionEntities)#
Set the lists of assertion labels allowed for a given entity.
Notes
entityAssertion functionality is processed earlier than replaceLabels.
- Parameters:
assertionEntities (dict[str, list[str]]) – List of assertion labels allowed for a given entity
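For example, to constrain which assertion labels may be emitted for given entity types (the entity and label names below are illustrative, not taken from any particular model):
>>> assertion = assertion.setEntityAssertion({
...     "DATE": ["PRESENT", "PAST", "FUTURE"],
...     "OBLIGATION": ["POSSIBLE", "CONDITIONAL"],
... })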
- setEntityAssertionCaseSensitive(value: bool)#
Sets the case sensitivity of entities and assertion labels
- Parameters:
value (bool) – whether entities and assertion labels are case sensitive
- setForceInputTypeValidation(etfm)#
- setIncludeConfidence(value)#
Sets whether to include confidence scores in annotation metadata.
- Parameters:
value (bool) – Whether to include confidence scores in annotation metadata
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- setReplaceLabels(replaceLabels)#
Sets the assertion labels to be replaced for the specified new labels.
Notes
replaceLabels functionality is processed later than entityAssertion.
- Parameters:
replaceLabels (dict[str, str]) – The assertion labels to be replaced for the specified new labels.
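For example, to rename one predicted label to another at prediction time (label names illustrative):
>>> assertion = assertion.setReplaceLabels({"hypothetical": "conditional"})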
- setScopeWindow(value)#
Sets the scope window of the assertion expression.
- Parameters:
value ([int, int]) – Left and right offsets of the scope window. Offsets must be non-negative values.
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters:
value (str) – Unique reference name for identification
- transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame #
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset
params (dict, optional) – an optional param map that overrides embedded params.
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
- write() JavaMLWriter #
Returns an MLWriter instance for this ML instance.