sparknlp_jsl.annotator.embeddings.bert_sentence_embeddings#

Module Contents#

Classes#

BertSentenceChunkEmbeddings

BERT Sentence embeddings for chunk annotations which take into account the context of the sentence the chunk appeared in.

class BertSentenceChunkEmbeddings(classname='com.johnsnowlabs.nlp.annotators.embeddings.BertSentenceChunkEmbeddings', java_model=None)#

Bases: sparknlp.annotator.BertSentenceEmbeddings, sparknlp_jsl.common.HasEngine, sparknlp_jsl.annotator.handle_exception_params.HandleExceptionParams

BERT sentence embeddings for chunk annotations that take into account the context of the sentence the chunk appears in. This is an extension of BertSentenceEmbeddings that combines the embedding of a chunk with the embedding of the surrounding sentence. For each input chunk annotation, it finds the corresponding sentence, computes the BERT sentence embeddings of both the chunk and the sentence, and averages them. The resulting embeddings are useful when a numerical representation of a text chunk must be sensitive to the context it appears in.

Input Annotation types

Output Annotation type

DOCUMENT, CHUNK

SENTENCE_EMBEDDINGS

Parameters:

chunkWeight – Relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1. The default is 0.5, which means the chunk and sentence embeddings are given equal weight.
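Conceptually, the combination controlled by chunkWeight is a weighted average of the chunk and sentence vectors. A minimal sketch of the arithmetic on hypothetical five-dimensional embeddings (plain Python, not the annotator's internal code):

>>> chunk_emb = [0.2, -0.4, 0.6, 0.0, 0.8]   # hypothetical chunk embedding
>>> sent_emb = [0.0, 0.2, -0.2, 0.4, 0.0]    # hypothetical sentence embedding
>>> w = 0.5                                  # chunkWeight; 0.5 weights both equally
>>> [round(w * c + (1 - w) * s, 2) for c, s in zip(chunk_emb, sent_emb)]
[0.1, -0.1, 0.2, 0.2, 0.4]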

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp_jsl.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline

First, set up the prerequisite stages for the MedicalNerDLModel:

>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> wordEmbeddings = WordEmbeddingsModel.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("bert")
>>> nerTagger = MedicalNerDLModel.pretrained() \
...     .setInputCols(["sentence", "token", "bert"]) \
...     .setOutputCol("ner")
>>> nerConverter = NerConverter() \
...     .setInputCols(["sentence", "token", "ner"]) \
...     .setOutputCol("ner_chunk")
>>> sentenceChunkEmbeddings = BertSentenceChunkEmbeddings.pretrained("sbluebert_base_uncased_mli", "en", "clinical/models") \
...     .setInputCols(["sentence", "ner_chunk"]) \
...     .setOutputCol("sentence_chunk_embeddings")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentence,
...     tokenizer,
...     wordEmbeddings,
...     nerTagger,
...     nerConverter,
...     sentenceChunkEmbeddings
... ])
>>> data = spark.createDataFrame([["Her Diabetes has become type 2 in the last year with her Diabetes. He complains of swelling in his right forearm."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result \
...     .selectExpr("explode(sentence_chunk_embeddings) AS s") \
...     .selectExpr("s.result", "slice(s.embeddings, 1, 5) AS averageEmbedding") \
...     .show(truncate=False)
+-----------------------------+-----------------------------------------------------------------+
|                       result|                                                 averageEmbedding|
+-----------------------------+-----------------------------------------------------------------+
|Her Diabetes                 |[-0.31995273, -0.04710883, -0.28973156, -0.1294758, 0.12481072]  |
|type 2                       |[-0.027161136, -0.24613449, -0.0949309, 0.1825444, -0.2252143]   |
|her Diabetes                 |[-0.31995273, -0.04710883, -0.28973156, -0.1294758, 0.12481072]  |
|swelling in his right forearm|[-0.45139068, 0.12400375, -0.0075617577, -0.90806055, 0.12871636]|
+-----------------------------+-----------------------------------------------------------------+
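To bias the representation toward the chunk itself rather than the surrounding sentence, raise chunkWeight on the stage before fitting the pipeline (a sketch reusing the sentenceChunkEmbeddings stage defined above):

>>> sentenceChunkEmbeddings = sentenceChunkEmbeddings.setChunkWeight(0.8)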
batchSize#
caseSensitive#
chunkWeight#
configProtoBytes#
dimension#
getter_attrs = []#
inputAnnotatorTypes#
inputCols#
isLong#
lazyAnnotator#
name = 'BertSentenceChunkEmbeddings'#
optionalInputAnnotatorTypes = []#
outputAnnotatorType = 'sentence_embeddings'#
outputCol#
storageRef#
uid = ''#
clear(param: pyspark.ml.param.Param) → None#

Clears a param from the param map if it has been explicitly set.

copy(extra: pyspark.ml._typing.ParamMap | None = None) → JP#

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters:

extra (dict, optional) – Extra parameters to copy to the new instance

Returns:

Copy of this instance

Return type:

JavaParams

explainParam(param: str | Param) → str#

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str#

Returns the documentation of all params with their optionally default values and user-supplied values.

extractParamMap(extra: pyspark.ml._typing.ParamMap | None = None) → pyspark.ml._typing.ParamMap#

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from the input into a flat param map, where the latter value is used if there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters:

extra (dict, optional) – extra param values

Returns:

merged param map

Return type:

dict
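A quick illustration of that ordering, assuming an annotator instance emb is at hand: a user-supplied value overrides the default, and an extra value passed to extractParamMap overrides both.

>>> emb = emb.setChunkWeight(0.7)   # user-supplied value beats the 0.5 default
>>> emb.extractParamMap()[emb.chunkWeight]
0.7
>>> emb.extractParamMap({emb.chunkWeight: 0.9})[emb.chunkWeight]
0.9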

getBatchSize()#

Gets current batch size.

Returns:

Current batch size

Return type:

int

getCaseSensitive()#

Gets whether to ignore case in tokens for embeddings matching.

Returns:

Whether to ignore case in tokens for embeddings matching

Return type:

bool

getDimension()#

Gets embeddings dimension.

getInputCols()#

Gets current column names of input annotations.

getLazyAnnotator()#

Gets whether Annotator should be evaluated lazily in a RecursivePipeline.

getOrDefault(param: str) → Any#
getOrDefault(param: Param[T]) → T

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol()#

Gets output column name of annotations.

getParam(paramName: str) → Param#

Gets a param by its name.

getParamValue(paramName)#

Gets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

getStorageRef()#

Gets unique reference name for identification.

Returns:

Unique reference name for identification

Return type:

str

hasDefault(param: str | Param[Any]) → bool#

Checks whether a param has a default value.

hasParam(paramName: str) → bool#

Tests whether this instance contains a param with a given (string) name.

inputColsValidation(value)#

isDefined(param: str | Param[Any]) → bool#

Checks whether a param is explicitly set by user or has a default value.

isSet(param: str | Param[Any]) → bool#

Checks whether a param is explicitly set by user.

static load(path: str)#

Load a pre-trained BertSentenceChunkEmbeddings from a local path.

Parameters:

path (str) – Path to the pre-trained model.

Returns:

A pre-trained BertSentenceChunkEmbeddings.

Return type:

BertSentenceChunkEmbeddings
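A typical round trip with save (the path is hypothetical, reusing the pretrained stage from the example above):

>>> sentenceChunkEmbeddings.save("/tmp/bert_sentence_chunk_model")
>>> restored = BertSentenceChunkEmbeddings.load("/tmp/bert_sentence_chunk_model")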

static loadSavedModel(folder, spark_session, use_openvino=False)#

Loads a locally saved model.

Parameters:
  • folder (str) – Folder of the saved model

  • spark_session (pyspark.sql.SparkSession) – The current SparkSession

  • use_openvino (bool) – Use OpenVINO backend

Returns:

The restored model

Return type:

BertSentenceChunkEmbeddings
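A sketch of restoring a locally exported model (the folder path is hypothetical; spark is the active SparkSession):

>>> exported = BertSentenceChunkEmbeddings.loadSavedModel("/tmp/exported_bert", spark)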

static pretrained(name='sbiobert_base_cased_mli', lang='en', remote_loc='clinical/models')#

Downloads and loads a pretrained model.

Parameters:
  • name (str, optional) – Name of the pretrained model, by default “sbiobert_base_cased_mli”

  • lang (str, optional) – Language of the pretrained model, by default “en”

  • remote_loc (str, optional) – Optional remote address of the resource, by default “clinical/models”. If None, Spark NLP's default repositories are used.

Returns:

The restored model

Return type:

BertSentenceChunkEmbeddings
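Calling it without arguments downloads the default model named in the signature above:

>>> emb = BertSentenceChunkEmbeddings.pretrained()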

classmethod read()#

Returns an MLReader instance for this class.

save(path: str) → None#

Save this ML instance to the given path, a shortcut of ‘write().save(path)’.

set(param: Param, value: Any) → None#

Sets a parameter in the embedded param map.

setBatchSize(v)#

Sets batch size.

Parameters:

v (int) – Batch size

setCaseSensitive(value)#

Sets whether to ignore case in tokens for embeddings matching.

Parameters:

value (bool) – Whether to ignore case in tokens for embeddings matching

setChunkWeight(value)#

Sets the relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1.

The default is 0.5, which means the chunk and sentence embeddings are given equal weight.

Parameters:

value (float) – Relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1. The default is 0.5, which means the chunk and sentence embeddings are given equal weight.

setConfigProtoBytes(b)#

Sets configProto from tensorflow, serialized into byte array.

Parameters:

b (List[int]) – ConfigProto from tensorflow, serialized into byte array
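A sketch of producing such a byte array with TensorFlow's 1.x-style ConfigProto (assumes TensorFlow is installed; the option shown is illustrative):

>>> import tensorflow as tf
>>> proto = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> emb = emb.setConfigProtoBytes(list(proto.SerializeToString()))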

setDimension(value)#

Sets embeddings dimension.

Parameters:

value (int) – Embeddings dimension

setInputCols(*value)#

Sets column names of input annotations.

Parameters:

*value (List[str]) – Input columns for the annotator

setIsLong(value)#

Sets whether to use Long type instead of Int type for inputs buffer.

Some Bert models require Long instead of Int.

Parameters:

value (bool) – Whether to use Long type instead of Int type for inputs buffer

setLazyAnnotator(value)#

Sets whether Annotator should be evaluated lazily in a RecursivePipeline.

Parameters:

value (bool) – Whether Annotator should be evaluated lazily in a RecursivePipeline

setOutputCol(value)#

Sets output column name of annotations.

Parameters:

value (str) – Name of output column

setParamValue(paramName)#

Sets the value of a parameter.

Parameters:

paramName (str) – Name of the parameter

setParams()#

setStorageRef(value)#

Sets unique reference name for identification.

Parameters:

value (str) – Unique reference name for identification

transform(dataset: pyspark.sql.dataframe.DataFrame, params: pyspark.ml._typing.ParamMap | None = None) → pyspark.sql.dataframe.DataFrame#

Transforms the input dataset with optional parameters.

New in version 1.3.0.

Parameters:
  • dataset (pyspark.sql.DataFrame) – input dataset

  • params (dict, optional) – an optional param map that overrides embedded params.

Returns:

transformed dataset

Return type:

pyspark.sql.DataFrame
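The param map lets a single call override an embedded param without mutating the stage; a sketch, assuming df already carries the required DOCUMENT and CHUNK annotation columns:

>>> out = sentenceChunkEmbeddings.transform(df, {sentenceChunkEmbeddings.batchSize: 16})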

write() → JavaMLWriter#

Returns an MLWriter instance for this ML instance.