sparknlp_jsl.annotator.assertion.assertion_dl_reg
#
Contains classes for assertion status detection using logistic regression.
Module Contents#
Classes#
AssertionLogRegApproach: Trains an assertion algorithm using a logistic regression model.
AssertionLogRegModel: Extracts the assertion status of entities using logistic regression.
- class AssertionLogRegApproach#
Bases:
sparknlp_jsl.common.AnnotatorApproachInternal
Trains an assertion algorithm using a logistic regression model.
Excluding the label, the required DOCUMENT, CHUNK and WORD_EMBEDDINGS input annotations can be obtained with, for example:
- a SentenceDetector,
- a Chunk,
- a WordEmbeddingsModel.
For pretrained models, check the documentation of AssertionLogRegModel.

Input Annotation types: DOCUMENT, CHUNK, WORD_EMBEDDINGS
Output Annotation type: ASSERTION
- Parameters:
label – Column with the label for each token
maxIter – Maximum number of iterations for the algorithm
regParam – Regularization parameter
eNetParam – Elastic net parameter
beforeParam – Length of the context before the target
afterParam – Length of the context after the target
startCol – Column that contains the token number for the start of the target
externalFeatures – Paths to additional dictionaries to use as features
endCol – Column that contains the token number for the end of the target
nerCol – Column with NER type annotation output; use either nerCol or startCol and endCol (see the sketch below)
targetNerLabels – List of NER labels to mark as target for assertion, must match NER output
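As an alternative to startCol and endCol, the target chunks can be selected from NER output through nerCol and targetNerLabels. A minimal, hypothetical sketch (the setNerCol setter name and the "ner" column are assumptions used only for illustration; of these setters, only setTargetNerLabels is documented below):

>>> # Hypothetical: setNerCol is an assumed setter name; the labels must match the NER model's output
>>> assertion_from_ner = AssertionLogRegApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setNerCol("ner") \
...     .setTargetNerLabels(["PROBLEM"])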
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from sparknlp.training import *
>>> from pyspark.ml import Pipeline
>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = SentenceDetector() \
...     .setInputCol("document") \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> glove = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("word_embeddings")
>>> chunk = Doc2Chunk() \
...     .setInputCols(["sentence"]) \
...     .setChunkCol("chunk") \
...     .setOutputCol("chunk")

Then the AssertionLogRegApproach annotator is defined. The label column is required in the training dataset.

>>> assertion = AssertionLogRegApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setReg(0.01) \
...     .setBefore(11) \
...     .setAfter(13) \
...     .setStartCol("start") \
...     .setEndCol("end")
>>> assertionPipeline = Pipeline(stages=[
...     document_assembler,
...     sentence_detector,
...     tokenizer,
...     glove,
...     chunk,
...     assertion
... ])
>>> assertionModel = assertionPipeline.fit(dataset)
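Once fitted, the pipeline can be applied to data and the predicted assertion statuses inspected. A minimal sketch, reusing assertionModel and dataset from above (the column names follow the setters used in the example):

>>> results = assertionModel.transform(dataset)
>>> results.selectExpr("explode(assertion) as assertion") \
...     .selectExpr("assertion.result", "assertion.metadata") \
...     .show(truncate=False)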
- afterParam#
- beforeParam#
- eNetParam#
- endCol#
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- label#
- lazyAnnotator#
- maxIter#
- nerCol#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- regParam#
- skipLPInputColsValidation = True#
- startCol#
- targetNerLabels#
- uid#
- clear(param: pyspark.ml.param.Param) None #
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP #
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str #
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str #
Returns the documentation of all params with their optional default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap #
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = ...) M #
- fit(dataset: pyspark.sql.dataframe.DataFrame, params: List[pyspark.ml._typing.ParamMap] | Tuple[pyspark.ml._typing.ParamMap]) List[M]
Fits a model to the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict or list or tuple, optional) – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
- Returns:
fitted model(s)
- Return type:
Transformer or a list of Transformer
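Embedded params can thus be overridden at fit time without mutating the estimator itself. A minimal sketch, reusing assertionPipeline, assertion and dataset from the Examples above (the maxIter values are illustrative only):

>>> # One param map: a single fitted PipelineModel, with maxIter overridden for this call only
>>> tuned_model = assertionPipeline.fit(dataset, {assertion.maxIter: 25})
>>> # A list of param maps: one fitted model per map
>>> models = assertionPipeline.fit(dataset, [{assertion.maxIter: 10}, {assertion.maxIter: 25}])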
- fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[pyspark.ml._typing.ParamMap]) Iterator[Tuple[int, M]] #
Fits a model to the input dataset for each param map in paramMaps.
New in version 2.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
paramMaps (collections.abc.Sequence) – A Sequence of param maps.
- Returns:
A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
- Return type:
_FitMultipleIterator
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any #
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param #
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- hasDefault(param: str | Param[Any]) bool #
Checks whether a param has a default value.
- hasParam(paramName: str) bool #
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL #
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None #
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None #
Sets a parameter in the embedded param map.
- setAfter(after)#
Sets the value of afterParam.
- setBefore(before)#
Sets the value of beforeParam.
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLabelCol(label)#
Sets the value of labelCol.
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setTargetNerLabels(v)#
Sets the value of targetNerLabels.
- write() JavaMLWriter #
Returns an MLWriter instance for this ML instance.
- class AssertionLogRegModel(classname='com.johnsnowlabs.nlp.annotators.assertion.logreg.AssertionLogRegModel', java_model=None)#
Bases:
sparknlp_jsl.common.AnnotatorModelInternal
,sparknlp_jsl.common.HasStorageRef
Model to extract the assertion status of entities using logistic regression.
To train a custom model, use
AssertionLogRegApproach
instead. Logistic regression is used to extract the assertion status from extracted entities and text. AssertionLogRegModel requires DOCUMENT, CHUNK and WORD_EMBEDDINGS type annotations as inputs. Excluding the label, the annotations can be obtained with, for example:
- a SentenceDetector,
- a Chunk,
- a WordEmbeddingsModel.
For a list of pretrained models, check the NLP Models Hub page.
Input Annotation types: DOCUMENT, CHUNK, WORD_EMBEDDINGS
Output Annotation type: ASSERTION
- Parameters:
beforeParam – Length of the context before the target
afterParam – Length of the context after the target
startCol – Column that contains the token number for the start of the target
endCol – Column that contains the token number for the end of the target
nerCol – Column with NER type annotation output; use either nerCol or startCol and endCol
targetNerLabels – List of NER labels to mark as target for assertion, must match NER output
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from sparknlp.training import *
>>> from pyspark.ml import Pipeline
>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = SentenceDetector() \
...     .setInputCol("document") \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("word_embeddings") \
...     .setCaseSensitive(False)
>>> chunk = Doc2Chunk() \
...     .setInputCols(["sentence"]) \
...     .setChunkCol("chunk") \
...     .setOutputCol("chunk")

Then the pretrained AssertionLogRegModel is loaded and used as the last stage of the pipeline.

>>> assertion = AssertionLogRegModel.pretrained() \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion")
>>> assertionPipeline = Pipeline(stages=[
...     document_assembler,
...     sentence_detector,
...     tokenizer,
...     embeddings,
...     chunk,
...     assertion
... ])
>>> assertionModel = assertionPipeline.fit(dataset)
>>> assertionPretrained = assertionModel.transform(dataset)
- afterParam#
- beforeParam#
- endCol#
- getter_attrs = []#
- inputAnnotatorTypes#
- inputCols#
- lazyAnnotator#
- name = 'AssertionLogRegModel'#
- nerCol#
- optionalInputAnnotatorTypes = []#
- outputAnnotatorType#
- outputCol#
- skipLPInputColsValidation = True#
- startCol#
- storageRef#
- targetNerLabels#
- uid#
- clear(param: pyspark.ml.param.Param) None #
Clears a param from the param map if it has been explicitly set.
- copy(extra: pyspark.ml._typing.ParamMap | None = None) JP #
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra (dict, optional) – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- Return type:
JavaParams
- explainParam(param: str | Param) str #
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams() str #
Returns the documentation of all params with their optional default values and user-supplied values.
- extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) pyspark.ml._typing.ParamMap #
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra (dict, optional) – extra param values
- Returns:
merged param map
- Return type:
dict
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param: str) Any #
- getOrDefault(param: Param[T]) T
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName: str) Param #
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- getStorageRef()#
Gets unique reference name for identification.
- Returns:
Unique reference name for identification
- Return type:
str
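Because the model stores the storage reference of the embeddings it was trained with, it can be useful to check that it matches the embeddings stage of the pipeline. A minimal sketch, assuming embeddings and assertion are the WordEmbeddingsModel and AssertionLogRegModel instances from the Examples above:

>>> # Both annotators expose getStorageRef(); a mismatch usually means incompatible embeddings were used
>>> embeddings.getStorageRef() == assertion.getStorageRef()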
- hasDefault(param: str | Param[Any]) bool #
Checks whether a param has a default value.
- hasParam(paramName: str) bool #
Tests whether this instance contains a param with a given (string) name.
- inputColsValidation(value)#
- isDefined(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user or has a default value.
- isSet(param: str | Param[Any]) bool #
Checks whether a param is explicitly set by user.
- classmethod load(path: str) RL #
Reads an ML instance from the input path, a shortcut of read().load(path).
- static pretrained(name='assertion_ml', lang='en', remote_loc='clinical/models')#
Downloads and loads a pretrained model.
- Parameters:
name (str, optional) – Name of the pretrained model, by default “assertion_ml”
lang (str, optional) – Language of the pretrained model, by default “en”
remote_loc (str, optional) – Optional remote address of the resource, by default “clinical/models”. Will use Spark NLP's repositories otherwise.
- Returns:
The restored model
- Return type:
AssertionLogRegModel
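For reference, the documented defaults above correspond to the following call; a minimal sketch (passing a different name or lang would load another model from the same repository):

>>> assertion = AssertionLogRegModel.pretrained("assertion_ml", "en", "clinical/models") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion")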
- classmethod read()#
Returns an MLReader instance for this class.
- save(path: str) None #
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param: Param, value: Any) None #
Sets a parameter in the embedded param map.
- setForceInputTypeValidation(etfm)#
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
*value (List[str]) – Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
value (str) – Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
paramName (str) – Name of the parameter
- setParams()#
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters:
value (str) – Unique reference name for identification
- transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) pyspark.sql.dataframe.DataFrame #
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters:
dataset (pyspark.sql.DataFrame) – input dataset.
params (dict, optional) – an optional param map that overrides embedded params.
- Returns:
transformed dataset
- Return type:
pyspark.sql.DataFrame
- write() JavaMLWriter #
Returns an MLWriter instance for this ML instance.