sparknlp.annotator.AlbertEmbeddings#
- class sparknlp.annotator.AlbertEmbeddings(classname='com.johnsnowlabs.nlp.embeddings.AlbertEmbeddings', java_model=None)[source]#
Bases: sparknlp.common.AnnotatorModel, sparknlp.common.HasEmbeddingsProperties, sparknlp.common.HasCaseSensitiveProperties, sparknlp.common.HasStorageRef, sparknlp.common.HasBatchedAnnotate
ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations - Google Research, Toyota Technological Institute at Chicago
These word embeddings represent the outputs generated by the ALBERT model. All official ALBERT releases by Google on TF-Hub are supported by this ALBERT wrapper:
Ported TF-Hub Models:

Model Name                 Model Properties
"albert_base_uncased"      768-embed-dim, 12-layer, 12-heads, 12M parameters
"albert_large_uncased"     1024-embed-dim, 24-layer, 16-heads, 18M parameters
"albert_xlarge_uncased"    2048-embed-dim, 24-layer, 32-heads, 60M parameters
"albert_xxlarge_uncased"   4096-embed-dim, 12-layer, 64-heads, 235M parameters
This model requires input tokenization with a SentencePiece model, which is provided by Spark NLP (see the tokenizers package).
Pretrained models can be loaded with pretrained() of the companion object:

>>> embeddings = AlbertEmbeddings.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")

The default model is "albert_base_uncased", if no name is provided.

For extended examples of usage, see the Spark NLP Workshop. To see which models are compatible and how to import them, see Import Transformers into Spark NLP 🚀.
Input Annotation types: DOCUMENT, TOKEN
Output Annotation type: WORD_EMBEDDINGS
- Parameters
- batchSize
Size of every batch, by default 8
- dimension
Number of embedding dimensions, by default 768
- caseSensitive
Whether to ignore case in tokens for embeddings matching, by default False
- configProtoBytes
ConfigProto from TensorFlow, serialized into a byte array.
- maxSentenceLength
Max sentence length to process, by default 128
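As an illustrative sketch of how these parameters are set in practice, the setters documented below can be chained on the annotator (the values shown are the documented defaults, used here only as an example):

>>> embeddings = AlbertEmbeddings.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings") \
...     .setBatchSize(8) \
...     .setMaxSentenceLength(128) \
...     .setCaseSensitive(False)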
See also
AlbertForTokenClassification
for AlbertEmbeddings with a token classification layer on top
Notes
ALBERT uses repeating layers, which results in a small memory footprint. However, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers.
References
ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations
https://github.com/google-research/ALBERT
Paper abstract:
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter reduction techniques to lower memory consumption and increase the training speed of BERT (Devlin et al., 2019). Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> embeddings = AlbertEmbeddings.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("embeddings")
>>> embeddingsFinisher = EmbeddingsFinisher() \
...     .setInputCols(["embeddings"]) \
...     .setOutputCols("finished_embeddings") \
...     .setOutputAsVector(True) \
...     .setCleanAnnotations(False)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     tokenizer,
...     embeddings,
...     embeddingsFinisher
... ])
>>> data = spark.createDataFrame([["This is a sentence."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+--------------------------------------------------------------------------------+
|                                                                          result|
+--------------------------------------------------------------------------------+
|[1.1342473030090332,-1.3855540752410889,0.9818322062492371,-0.784737348556518...|
|[0.847029983997345,-1.047153353691101,-0.1520637571811676,-0.6245765686035156...|
|[-0.009860038757324219,-0.13450059294700623,2.707749128341675,1.2916892766952...|
|[-0.04192575812339783,-0.5764210224151611,-0.3196685314178467,-0.527840495109...|
|[0.15583214163780212,-0.1614152491092682,-0.28423872590065,-0.135491415858268...|
+--------------------------------------------------------------------------------+
Methods
__init__([classname, java_model])
    Initialize this instance with a Java model object.
clear(param)
    Clears a param from the param map if it has been explicitly set.
copy([extra])
    Creates a copy of this instance with the same uid and some extra params.
explainParam(param)
    Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()
    Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra])
    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getBatchSize()
    Gets current batch size.
getCaseSensitive()
    Gets whether to ignore case in tokens for embeddings matching.
getDimension()
    Gets embeddings dimension.
getInputCols()
    Gets current column names of input annotations.
getLazyAnnotator()
    Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param)
    Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()
    Gets output column name of annotations.
getParam(paramName)
    Gets a param by its name.
getParamValue(paramName)
    Gets the value of a parameter.
getStorageRef()
    Gets unique reference name for identification.
hasDefault(param)
    Checks whether a param has a default value.
hasParam(paramName)
    Tests whether this instance contains a param with a given (string) name.
isDefined(param)
    Checks whether a param is explicitly set by user or has a default value.
isSet(param)
    Checks whether a param is explicitly set by user.
load(path)
    Reads an ML instance from the input path, a shortcut of read().load(path).
loadSavedModel(folder, spark_session)
    Loads a locally saved model.
pretrained([name, lang, remote_loc])
    Downloads and loads a pretrained model.
read()
    Returns an MLReader instance for this class.
save(path)
    Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value)
    Sets a parameter in the embedded param map.
setBatchSize(v)
    Sets batch size.
setCaseSensitive(value)
    Sets whether to ignore case in tokens for embeddings matching.
setConfigProtoBytes(b)
    Sets ConfigProto from TensorFlow, serialized into a byte array.
setDimension(value)
    Sets embeddings dimension.
setInputCols(*value)
    Sets column names of input annotations.
setLazyAnnotator(value)
    Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setMaxSentenceLength(value)
    Sets max sentence length to process.
setOutputCol(value)
    Sets output column name of annotations.
setParamValue(paramName)
    Sets the value of a parameter.
setParams()
setStorageRef(value)
    Sets unique reference name for identification.
transform(dataset[, params])
    Transforms the input dataset with optional parameters.
write()
    Returns an MLWriter instance for this ML instance.
Attributes
batchSize
caseSensitive
configProtoBytes
dimension
getter_attrs
inputCols
lazyAnnotator
maxSentenceLength
name
outputCol
params
storageRef
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then make a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters
extra – Extra parameters to copy to the new instance
- Returns
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
extra – extra param values
- Returns
merged param map
- getBatchSize()#
Gets current batch size.
- Returns
- int
Current batch size
- getCaseSensitive()#
Gets whether to ignore case in tokens for embeddings matching.
- Returns
- bool
Whether to ignore case in tokens for embeddings matching
- getDimension()#
Gets embeddings dimension.
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- getStorageRef()#
Gets unique reference name for identification.
- Returns
- str
Unique reference name for identification
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- static loadSavedModel(folder, spark_session)[source]#
Loads a locally saved model.
- Parameters
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns
- AlbertEmbeddings
The restored model
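As a sketch of typical usage, assuming a model has been exported to a local folder (the path below is hypothetical) and `spark` is the current SparkSession:

>>> # Hypothetical path to a locally exported ALBERT model
>>> embeddings = AlbertEmbeddings.loadSavedModel("/tmp/exported_albert", spark) \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings")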
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- static pretrained(name='albert_base_uncased', lang='en', remote_loc=None)[source]#
Downloads and loads a pretrained model.
- Parameters
- name : str, optional
Name of the pretrained model, by default “albert_base_uncased”
- lang : str, optional
Language of the pretrained model, by default “en”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns
- AlbertEmbeddings
The restored model
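For example, the documented defaults can also be spelled out explicitly, which is equivalent to calling pretrained() with no arguments:

>>> embeddings = AlbertEmbeddings.pretrained("albert_base_uncased", "en") \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings")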
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
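A minimal save/load round trip might look like this (the path below is hypothetical):

>>> embeddings.save("/tmp/albert_embeddings_model")
>>> restored = AlbertEmbeddings.load("/tmp/albert_embeddings_model")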
- set(param, value)#
Sets a parameter in the embedded param map.
- setBatchSize(v)#
Sets batch size.
- Parameters
- v : int
Batch size
- setCaseSensitive(value)#
Sets whether to ignore case in tokens for embeddings matching.
- Parameters
- value : bool
Whether to ignore case in tokens for embeddings matching
- setConfigProtoBytes(b)[source]#
Sets ConfigProto from TensorFlow, serialized into a byte array.
- Parameters
- b : List[int]
ConfigProto from TensorFlow, serialized into a byte array
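As an illustrative sketch, assuming TensorFlow is installed locally, the byte list can be produced from tf.compat.v1.ConfigProto, TensorFlow's session configuration message; the GPU memory-growth setting here is only an example:

>>> import tensorflow as tf
>>> # Build a session config, e.g. allowing GPU memory to grow on demand
>>> config = tf.compat.v1.ConfigProto()
>>> config.gpu_options.allow_growth = True
>>> embeddings = AlbertEmbeddings.pretrained() \
...     .setConfigProtoBytes(list(config.SerializeToString()))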
- setDimension(value)#
Sets embeddings dimension.
- Parameters
- value : int
Embeddings dimension
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters
- *value : str
Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMaxSentenceLength(value)[source]#
Sets max sentence length to process.
- Parameters
- value : int
Max sentence length to process
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters
- paramName : str
Name of the parameter
- setStorageRef(value)#
Sets unique reference name for identification.
- Parameters
- value : str
Unique reference name for identification
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns
transformed dataset
New in version 1.3.0.
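For instance, once upstream annotators have produced the DOCUMENT and TOKEN columns, the model can be applied directly; the param-map override below is optional, and `token_df` is a hypothetical DataFrame with those columns:

>>> # token_df is assumed to already hold "document" and "token" annotation columns
>>> embedded = embeddings.transform(token_df)
>>> # Optionally override an embedded param for this call only
>>> embedded_small_batches = embeddings.transform(token_df, {embeddings.batchSize: 4})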
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.