sparknlp_jsl.annotator.AssertionLogRegApproach#
- class sparknlp_jsl.annotator.AssertionLogRegApproach[source]#
Bases: AnnotatorApproach
Trains an assertion status classifier using a logistic regression model.
Excluding the label, the required inputs can be produced with, for example, a SentenceDetector, a Doc2Chunk, and a WordEmbeddingsModel.
Input Annotation types: DOCUMENT, CHUNK, WORD_EMBEDDINGS
Output Annotation type: ASSERTION
- Parameters:
- label
Column with one label per token
- maxIter
Maximum number of iterations for the algorithm
- regParam
Regularization parameter
- eNetParam
Elastic net parameter
- beforeParam
Length of the context before the target
- afterParam
Length of the context after the target
- startCol
Column that contains the token number for the start of the target
- externalFeatures
Paths to additional dictionaries to use as features
- endCol
Column that contains the token number for the end of the target
- nerCol
Column with NER type annotation output; use either nerCol or startCol and endCol (see the sketch after the training example below)
- targetNerLabels
List of NER labels to mark as target for assertion; must match the NER output
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from sparknlp.training import *
>>> from pyspark.ml import Pipeline
>>> document_assembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence_detector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> glove = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("word_embeddings")
>>> chunk = Doc2Chunk() \
...     .setInputCols(["document"]) \
...     .setChunkCol("chunk") \
...     .setOutputCol("chunk")
Then the AssertionLogRegApproach model is defined. A label column is needed in the dataset for training.
>>> assertion = AssertionLogRegApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setReg(0.01) \
...     .setBefore(11) \
...     .setAfter(13) \
...     .setStartCol("start") \
...     .setEndCol("end")
>>> assertionPipeline = Pipeline(stages=[
...     document_assembler,
...     sentence_detector,
...     tokenizer,
...     glove,
...     chunk,
...     assertion
... ])
>>> assertionModel = assertionPipeline.fit(dataset)
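The target can also be derived from NER annotations rather than explicit start and end token columns. Below is a minimal sketch of that alternative configuration; the "ner" input column name and the "PROBLEM" label are illustrative assumptions and must match the NER output available in your pipeline.
>>> # A minimal sketch: deriving the assertion target from NER output instead of
>>> # startCol/endCol. The "ner" column and the "PROBLEM" label are assumptions
>>> # and must match your own NER annotations.
>>> assertion_from_ner = AssertionLogRegApproach() \
...     .setLabelCol("label") \
...     .setInputCols(["document", "chunk", "word_embeddings"]) \
...     .setOutputCol("assertion") \
...     .setNerCol("ner") \
...     .setTargetNerLabels(["PROBLEM"])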
Methods
- __init__()
- clear(param): Clears a param from the param map if it has been explicitly set.
- copy([extra]): Creates a copy of this instance with the same uid and some extra params.
- explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams(): Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- fit(dataset[, params]): Fits a model to the input dataset with optional parameters.
- fitMultiple(dataset, paramMaps): Fits a model to the input dataset for each param map in paramMaps.
- getInputCols(): Gets current column names of input annotations.
- getLazyAnnotator(): Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
- getOutputCol(): Gets output column name of annotations.
- getParam(paramName): Gets a param by its name.
- getParamValue(paramName): Gets the value of a parameter.
- hasDefault(param): Checks whether a param has a default value.
- hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
- isDefined(param): Checks whether a param is explicitly set by user or has a default value.
- isSet(param): Checks whether a param is explicitly set by user.
- load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
- read(): Returns an MLReader instance for this class.
- save(path): Save this ML instance to the given path, a shortcut of 'write().save(path)'.
- set(param, value): Sets a parameter in the embedded param map.
- setAfter(after)
- setBefore(before)
- setEndCol(e)
- setEnet(enet)
- setInputCols(*value): Sets column names of input annotations.
- setLabelCol(label)
- setLazyAnnotator(value): Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- setMaxIter(maxiter)
- setNerCol(n)
- setOutputCol(value): Sets output column name of annotations.
- setParamValue(paramName): Sets the value of a parameter.
- setReg(lamda)
- setStartCol(s)
- setTargetNerLabels(v)
- write(): Returns an MLWriter instance for this ML instance.
Attributes
afterParam
beforeParam
eNetParam
endCol
getter_attrs
inputCols
label
lazyAnnotator
maxIter
nerCol
outputCol
params
Returns all params ordered by name.
regParam
startCol
targetNerLabels
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra – extra param values to merge in
- Returns:
merged param map
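A minimal sketch of the merge ordering (default < user-supplied < extra), assuming the values shown are purely illustrative:
>>> # user-supplied value (20) is overridden by the extra value (30)
>>> assertion = AssertionLogRegApproach().setMaxIter(20)
>>> merged = assertion.extractParamMap({assertion.maxIter: 30})
>>> merged[assertion.maxIter]
30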
- fit(dataset, params=None)#
Fits a model to the input dataset with optional parameters.
- Parameters:
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.
- Returns:
fitted model(s)
New in version 1.3.0.
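A minimal sketch, assuming `assertion` from the training example above and a `dataset` that already contains the required input columns; the maxIter values are illustrative:
>>> # A single param map overrides the embedded params for this fit call only.
>>> model = assertion.fit(dataset, {assertion.maxIter: 30})
>>> # A list of param maps fits one model per map and returns a list of models.
>>> models = assertion.fit(dataset, [{assertion.maxIter: 20}, {assertion.maxIter: 30}])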
- fitMultiple(dataset, paramMaps)#
Fits a model to the input dataset for each param map in paramMaps.
- Parameters:
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
paramMaps – A Sequence of param maps.
- Returns:
A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
New in version 2.3.0.
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
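A minimal save/load round trip; the path below is an illustrative local directory, not a documented location:
>>> # Persist the (unfitted) approach and read it back with the class-level load().
>>> assertion.save("/tmp/assertion_logreg_approach")
>>> restored = AssertionLogRegApproach.load("/tmp/assertion_logreg_approach")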
- set(param, value)#
Sets a parameter in the embedded param map.
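A minimal sketch of the generic param accessors, using the beforeParam attribute with an illustrative value:
>>> # set() writes into the embedded param map; getOrDefault() reads the
>>> # user-supplied value, falling back to the default when none was set.
>>> assertion = AssertionLogRegApproach()
>>> assertion.set(assertion.beforeParam, 10)
>>> assertion.getOrDefault(assertion.beforeParam)
10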
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
- *value : str
Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.