sparknlp_jsl.annotator.BertSentenceChunkEmbeddings#
- class sparknlp_jsl.annotator.BertSentenceChunkEmbeddings(classname='com.johnsnowlabs.nlp.embeddings.BertSentenceChunkEmbeddings', java_model=None)[source]#
Bases: BertSentenceEmbeddings

BERT sentence embeddings for chunk annotations that take into account the context of the sentence the chunk appeared in. This is an extension of BertSentenceEmbeddings that combines the embedding of a chunk with the embedding of the surrounding sentence. For each input chunk annotation, it finds the corresponding sentence, computes the BERT sentence embeddings of both the chunk and the sentence, and averages them. The resulting embeddings are useful in cases in which one needs a numerical representation of a text chunk that is sensitive to the context it appears in.
Input Annotation types: DOCUMENT, CHUNK
Output Annotation type: SENTENCE_EMBEDDINGS
- Parameters:
- chunkWeight
Relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1. The default is 0.5, which means the chunk and sentence embeddings are given equal weight.
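The combination can be thought of as a weighted average of the two vectors. Below is a minimal sketch under that assumption (the function name and the exact formula are illustrative, not the annotator's internal implementation):

>>> import numpy as np
>>> def combine_embeddings(chunk_emb, sentence_emb, chunk_weight=0.5):
...     # Weighted average; chunk_weight=0.5 gives both vectors equal weight
...     # (illustrative only; assumes inputs are same-length vectors)
...     chunk_emb = np.asarray(chunk_emb, dtype=float)
...     sentence_emb = np.asarray(sentence_emb, dtype=float)
...     return chunk_weight * chunk_emb + (1.0 - chunk_weight) * sentence_emb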
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.common import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import *
>>> import sparknlp_jsl
>>> from sparknlp_jsl.base import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
First, set up the prerequisites for the MedicalNerDLModel:
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> wordEmbeddings = WordEmbeddingsModel.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("bert")
>>> nerTagger = MedicalNerDLModel.pretrained() \
...     .setInputCols(["sentence", "token", "bert"]) \
...     .setOutputCol("ner")
>>> nerConverter = NerConverter() \
...     .setInputCols(["sentence", "token", "ner"]) \
...     .setOutputCol("ner_chunk")
>>> sentenceChunkEmbeddings = BertSentenceChunkEmbeddings.pretrained("sbluebert_base_uncased_mli", "en", "clinical/models") \
...     .setInputCols(["sentence", "ner_chunk"]) \
...     .setOutputCol("sentence_chunk_embeddings")
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     sentence,
...     tokenizer,
...     wordEmbeddings,
...     nerTagger,
...     nerConverter,
...     sentenceChunkEmbeddings
... ])
>>> data = spark.createDataFrame([["Her Diabetes has become type 2 in the last year with her Diabetes. He complains of swelling in his right forearm."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result \
...     .selectExpr("explode(sentence_chunk_embeddings) AS s") \
...     .selectExpr("s.result", "slice(s.embeddings, 1, 5) AS averageEmbedding") \
...     .show(truncate=False)
+-----------------------------+-----------------------------------------------------------------+
|                       result|                                                 averageEmbedding|
+-----------------------------+-----------------------------------------------------------------+
|Her Diabetes                 |[-0.31995273, -0.04710883, -0.28973156, -0.1294758, 0.12481072]  |
|type 2                       |[-0.027161136, -0.24613449, -0.0949309, 0.1825444, -0.2252143]   |
|her Diabetes                 |[-0.31995273, -0.04710883, -0.28973156, -0.1294758, 0.12481072]  |
|swelling in his right forearm|[-0.45139068, 0.12400375, -0.0075617577, -0.90806055, 0.12871636]|
+-----------------------------+-----------------------------------------------------------------+
Methods

- __init__([classname, java_model]): Initialize this instance with a Java model object.
- clear(param): Clears a param from the param map if it has been explicitly set.
- copy([extra]): Creates a copy of this instance with the same uid and some extra params.
- explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams(): Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- getBatchSize(): Gets current batch size.
- getCaseSensitive(): Gets whether to ignore case in tokens for embeddings matching.
- getDimension(): Gets embeddings dimension.
- getInputCols(): Gets current column names of input annotations.
- getLazyAnnotator(): Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
- getOutputCol(): Gets output column name of annotations.
- getParam(paramName): Gets a param by its name.
- getParamValue(paramName): Gets the value of a parameter.
- getStorageRef(): Gets unique reference name for identification.
- hasDefault(param): Checks whether a param has a default value.
- hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
- isDefined(param): Checks whether a param is explicitly set by user or has a default value.
- isSet(param): Checks whether a param is explicitly set by user.
- load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
- loadSavedModel(folder, spark_session): Loads a locally saved model.
- pretrained([name, lang, remote_loc]): Downloads and loads a pretrained model.
- read(): Returns an MLReader instance for this class.
- save(path): Save this ML instance to the given path, a shortcut of 'write().save(path)'.
- set(param, value): Sets a parameter in the embedded param map.
- setBatchSize(v): Sets batch size.
- setCaseSensitive(value): Sets whether to ignore case in tokens for embeddings matching.
- setChunkWeight(value): Sets the relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1.
- setConfigProtoBytes(b): Sets configProto from tensorflow, serialized into byte array.
- setDimension(value): Sets embeddings dimension.
- setInputCols(*value): Sets column names of input annotations.
- setIsLong(value): Sets whether to use Long type instead of Int type for inputs buffer.
- setLazyAnnotator(value): Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- setMaxSentenceLength(value): Sets max sentence length to process.
- setOutputCol(value): Sets output column name of annotations.
- setParamValue(paramName): Sets the value of a parameter.
- setParams()
- setStorageRef(value): Sets unique reference name for identification.
- transform(dataset[, params]): Transforms the input dataset with optional parameters.
- write(): Returns an MLWriter instance for this ML instance.
Attributes

- batchSize
- caseSensitive
- chunkWeight
- configProtoBytes
- dimension
- getter_attrs
- inputCols
- isLong
- lazyAnnotator
- maxSentenceLength
- name
- outputCol
- params: Returns all params ordered by name.
- storageRef
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra – extra param values
- Returns:
merged param map
- getBatchSize()#
Gets current batch size.
- Returns:
- int
Current batch size
- getCaseSensitive()#
Gets whether to ignore case in tokens for embeddings matching.
- Returns:
- bool
Whether to ignore case in tokens for embeddings matching
- getDimension()#
Gets embeddings dimension.
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- getStorageRef()#
Gets unique reference name for identification.
- Returns:
- str
Unique reference name for identification
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- static load(path)[source]#
Reads an ML instance from the input path, a shortcut of read().load(path).
- static loadSavedModel(folder, spark_session)#
Loads a locally saved model.
- Parameters:
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns:
- BertSentenceEmbeddings
The restored model
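A brief usage sketch, assuming a compatible model has been exported to a local folder (the path below is hypothetical):

>>> # Hypothetical folder containing a locally exported model
>>> chunkEmbeddings = BertSentenceChunkEmbeddings.loadSavedModel("/models/exported_bert", spark) \
...     .setInputCols(["sentence", "ner_chunk"]) \
...     .setOutputCol("sentence_chunk_embeddings")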
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- static pretrained(name='sent_small_bert_L2_768', lang='en', remote_loc=None)[source]#
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default “sent_small_bert_L2_768”
- lang : str, optional
Language of the pretrained model, by default “en”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns:
- BertSentenceChunkEmbeddings
The restored model
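For example, to download the clinical model used in the Examples section above:

>>> chunkEmbeddings = BertSentenceChunkEmbeddings.pretrained("sbluebert_base_uncased_mli", "en", "clinical/models") \
...     .setInputCols(["sentence", "ner_chunk"]) \
...     .setOutputCol("sentence_chunk_embeddings")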
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
- set(param, value)#
Sets a parameter in the embedded param map.
- setBatchSize(v)#
Sets batch size.
- Parameters:
- v : int
Batch size
- setCaseSensitive(value)#
Sets whether to ignore case in tokens for embeddings matching.
- Parameters:
- value : bool
Whether to ignore case in tokens for embeddings matching
- setChunkWeight(value)[source]#
Sets the relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1. The default is 0.5, which means the chunk and sentence embeddings are given equal weight.
- Parameters:
- value : float
Relative weight of chunk embeddings in comparison to sentence embeddings. The value should be between 0 and 1. The default is 0.5, which means the chunk and sentence embeddings are given equal weight.
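For instance, to weight the chunk embedding more heavily than its sentence context (0.75 is an arbitrary illustrative value):

>>> chunkEmbeddings = BertSentenceChunkEmbeddings.pretrained("sbluebert_base_uncased_mli", "en", "clinical/models") \
...     .setInputCols(["sentence", "ner_chunk"]) \
...     .setOutputCol("sentence_chunk_embeddings") \
...     .setChunkWeight(0.75)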
- setConfigProtoBytes(b)#
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
- setDimension(value)#
Sets embeddings dimension.
- Parameters:
- value : int
Embeddings dimension
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
- *value : str
Input columns for the annotator
- setIsLong(value)#
Sets whether to use Long type instead of Int type for inputs buffer.
Some Bert models require Long instead of Int.
- Parameters:
- value : bool
Whether to use Long type instead of Int type for inputs buffer
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMaxSentenceLength(value)#
Sets max sentence length to process.
- Parameters:
- value : int
Max sentence length to process
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters:
- value : str
Unique reference name for identification
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters:
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns:
transformed dataset
New in version 1.3.0.
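A minimal usage sketch, assuming pipeline and data are defined as in the Examples section above:

>>> model = pipeline.fit(data)
>>> result = model.transform(data)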
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.