sparknlp_jsl.annotator.RelationExtractionApproach#

class sparknlp_jsl.annotator.RelationExtractionApproach(classname='com.johnsnowlabs.nlp.annotators.re.RelationExtractionApproach')[source]#

Bases: GenericClassifierApproach

Trains a TensorFlow model for relation extraction. The TensorFlow graph in .pb format needs to be specified with setModelFile. The result is a RelationExtractionModel. To start training, see the parameters that need to be set in the Parameters section.
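
The .pb graph is generated outside this annotator. As a hedged sketch (assuming the tf_graph utility in sparknlp_jsl.training is available in your version; the build_params values below are purely illustrative and should be checked with print_model_params), a relation extraction graph could be produced like this:

>>> from sparknlp_jsl.training import tf_graph
>>> # Inspect which build parameters the relation_extraction graph accepts
>>> tf_graph.print_model_params("relation_extraction")
>>> # Write a .pb graph file that can later be passed to setModelFile
>>> tf_graph.build("relation_extraction",
...                build_params={"input_dim": 6000, "output_dim": 3,
...                              "hidden_layers": [300, 200], "hidden_act": "relu"},
...                model_location="path/to",
...                model_filename="graph_file.pb")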

Input Annotation types: WORD_EMBEDDINGS, POS, CHUNK, DEPENDENCY

Output Annotation type: CATEGORY

Parameters:
fromEntityBeginCol

From Entity Beginning Column

fromEntityEndCol

From Entity End Column

fromEntityLabelCol

From Entity Label Column

toEntityBeginCol

To Entity Beginning Column

toEntityEndCol

To Entity End Column

toEntityLabelCol

To Entity Label Column

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...   .setInputCol("text") \
...   .setOutputCol("document")
...
>>> tokenizer = Tokenizer() \
...   .setInputCols(["document"]) \
...   .setOutputCol("tokens")
...
>>> embedder = WordEmbeddingsModel \
...   .pretrained("embeddings_clinical", "en", "clinical/models") \
...   .setInputCols(["document", "tokens"]) \
...   .setOutputCol("embeddings")
...
>>> posTagger = PerceptronModel \
...   .pretrained("pos_clinical", "en", "clinical/models") \
...   .setInputCols(["document", "tokens"]) \
...   .setOutputCol("posTags")
...
>>> nerTagger = MedicalNerModel \
...   .pretrained("ner_events_clinical", "en", "clinical/models") \
...   .setInputCols(["document", "tokens", "embeddings"]) \
...   .setOutputCol("ner_tags")
...
>>> nerConverter = NerConverter() \
...   .setInputCols(["document", "tokens", "ner_tags"]) \
...   .setOutputCol("nerChunks")
...
>>> dependencyParser = DependencyParserModel \
...   .pretrained("dependency_conllu", "en") \
...   .setInputCols(["document", "posTags", "tokens"]) \
...   .setOutputCol("dependencies")
...
>>> re = RelationExtractionApproach() \
...   .setInputCols(["embeddings", "posTags", "train_ner_chunks", "dependencies"]) \
...   .setOutputCol("relations_t") \
...   .setLabelColumn("target_rel") \
...   .setEpochsNumber(300) \
...   .setBatchSize(200) \
...   .setLearningRate(0.001) \
...   .setModelFile("path/to/graph_file.pb") \
...   .setFixImbalance(True) \
...   .setValidationSplit(0.05) \
...   .setFromEntity("from_begin", "from_end", "from_label") \
...   .setToEntity("to_begin", "to_end", "to_label")
...
>>> pipeline = Pipeline(stages=[
...     documentAssembler,
...     tokenizer,
...     embedder,
...     posTagger,
...     nerTagger,
...     nerConverter,
...     dependencyParser,
...     re])
>>> model = pipeline.fit(trainData)
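
A minimal follow-up sketch, not part of the original example: assuming the fitted pipeline is applied back to a DataFrame with the same columns as trainData, the extracted relations can be inspected through the standard Spark NLP annotation schema:

>>> results = model.transform(trainData)
>>> results.selectExpr("explode(relations_t) AS relation") \
...     .selectExpr("relation.result AS relation_label", "relation.metadata").show(truncate=False)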

Methods

__init__([classname])

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

fit(dataset[, params])

Fits a model to the input dataset with optional parameters.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

getInputCols()

Gets current column names of input annotations.

getLazyAnnotator()

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets output column name of annotations.

getParam(paramName)

Gets a param by its name.

getParamValue(paramName)

Gets the value of a parameter.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param, value)

Sets a parameter in the embedded param map.

setBatchSize(size)

Sets the batch size for the optimization process

setCustomLabels(labels)

Sets custom relation labels

setDropout(dropout)

Sets the dropout coefficient

setEpochsNumber(epochs)

Sets number of epochs for the optimization process

setFeatureScaling(feature_scaling)

Sets the feature scaling method.

setFixImbalance(fix_imbalance)

Sets a flag indicating whether to balance the training set.

setFromEntity(begin_col, end_col, label_col)

Sets the "from" entity columns

setInputCols(*value)

Sets column names of input annotations.

setLabelCol(label_column)

Sets the name of the column containing the training labels

setLazyAnnotator(value)

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

setLearningRate(lamda)

Sets learning rate for the optimization process

setModelFile(mode_file)

Sets the file name to load the TensorFlow graph from

setOutputCol(value)

Sets output column name of annotations.

setOutputLogsPath(output_logs_path)

Sets path to folder where logs will be saved.

setParamValue(paramName)

Sets the value of a parameter.

setToEntity(begin_col, end_col, label_col)

Sets the "to" entity columns

setValidationSplit(validation_split)

Sets the validation split - the fraction of the data to use for validation

write()

Returns an MLWriter instance for this ML instance.

Attributes

batchSize

customLabels

dropout

epochsN

featureScaling

fixImbalance

fromEntityBeginCol

fromEntityEndCol

fromEntityLabelCol

getter_attrs

inputCols

labelColumn

lazyAnnotator

learningRate

modelFile

name

outputCol

outputLogsPath

params

Returns all params ordered by name.

toEntityBeginCol

toEntityEndCol

toEntityLabelCol

validationSplit

clear(param)#

Clears a param from the param map if it has been explicitly set.

copy(extra=None)#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra – Extra parameters to copy to the new instance

Returns:

Copy of this instance

explainParam(param)#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra=None)#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra – extra param values

Returns:

merged param map

fit(dataset, params=None)#

Fits a model to the input dataset with optional parameters.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame

  • params – an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns:

fitted model(s)

New in version 1.3.0.

fitMultiple(dataset, paramMaps)#

Fits a model to the input dataset for each param map in paramMaps.

Parameters:
  • dataset – input dataset, which is an instance of pyspark.sql.DataFrame.

  • paramMaps – A Sequence of param maps.

Returns:

A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.

New in version 2.3.0.
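
For illustration only, a hedged sketch reusing the re estimator and trainData from the example above; the param maps are hypothetical and simply vary the number of epochs:

>>> param_maps = [{re.epochsN: 50}, {re.epochsN: 100}]
>>> for index, fitted_model in re.fitMultiple(trainData, param_maps):
...     print(index, fitted_model)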

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param)#

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName)#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

hasDefault(param)#

Checks whether a param has a default value.

hasParam(paramName)#

Tests whether this instance contains a param with a given (string) name.

isDefined(param)#

Checks whether a param is explicitly set by user or has a default value.

isSet(param)#

Checks whether a param is explicitly set by user.

classmethod load(path)#

Reads an ML instance from the input path, a shortcut of read().load(path).

property params#

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

classmethod read()#

Returns an MLReader instance for this class.

save(path)#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param, value)#

Sets a parameter in the embedded param map.

setBatchSize(size)#

Sets the batch size for the optimization process.

Parameters:
size : int

Size for each batch in the optimization process

setCustomLabels(labels)[source]#

Sets custom relation labels

Parameters:
labels : dict[str, str]

Dictionary which maps old to new labels
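
A hedged usage sketch; the label names here are purely illustrative:

>>> re = RelationExtractionApproach() \
...     .setCustomLabels({"0": "not_related", "1": "is_related"})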

setDropout(dropout)#

Sets the dropout coefficient.

Parameters:
dropout : float

Dropout at the output of each layer

setEpochsNumber(epochs)#

Sets number of epochs for the optimization process

Parameters:
epochsint

Number of epochs for the optimization process

setFeatureScaling(feature_scaling)#

Sets the feature scaling method. Possible values are 'zscore', 'minmax', or empty (no scaling).

Parameters:
feature_scaling : str

Feature scaling method. Possible values are 'zscore', 'minmax', or empty (no scaling).
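
For example, to enable z-score scaling of the input features (sketch only):

>>> re.setFeatureScaling("zscore")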

setFixImbalance(fix_imbalance)#

Sets a flag indicating whether to balance the training set.

Parameters:
fix_imbalance : bool

A flag indicating whether to balance the training set.

setFromEntity(begin_col, end_col, label_col)[source]#

Sets the "from" entity columns.

Parameters:
begin_col : str

Column that has a reference to where the chunk begins

end_col : str

Column that has a reference to where the chunk ends

label_col : str

Column that has a reference to the chunk type
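
As in the training example above, the column names refer to columns of the training DataFrame; a minimal sketch using those columns together with the companion setToEntity:

>>> re = RelationExtractionApproach() \
...     .setFromEntity("from_begin", "from_end", "from_label") \
...     .setToEntity("to_begin", "to_end", "to_label")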

setInputCols(*value)#

Sets column names of input annotations.

Parameters:
*value : str

Input columns for the annotator

setLabelCol(label_column)#

Sets the name of the column containing the training labels.

Parameters:
label_column : str

Column containing the target value we are trying to predict

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:
value : bool

Whether Annotator should be evaluated lazily in a RecursivePipeline

setLearningRate(lamda)#

Sets learning rate for the optimization process

Parameters:
lamda : float

Learning rate for the optimization process

setModelFile(mode_file)#

Sets the file name to load the TensorFlow graph from.

Parameters:
mode_file : str

File name to load the TensorFlow graph from

setOutputCol(value)#

Sets output column name of annotations.

Parameters:
value : str

Name of output column

setOutputLogsPath(output_logs_path)#

Sets path to folder where logs will be saved. If no path is specified, no logs are generated

Parameters:
output_logs_path : str

Path to folder where logs will be saved. If no path is specified, no logs are generated

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:
paramName : str

Name of the parameter

setToEntity(begin_col, end_col, label_col)[source]#

Sets the "to" entity columns.

Parameters:
begin_col : str

Column that has a reference to where the chunk begins

end_col : str

Column that has a reference to where the chunk ends

label_col : str

Column that has a reference to the chunk type

setValidationSplit(validation_split)#

Sets the validation split - the fraction of the data to use for validation.

Parameters:
validation_split : float

Validation split - the fraction of the data to use for validation

uid#

A unique id for the object.

write()#

Returns an MLWriter instance for this ML instance.