sparknlp_jsl.annotator.MedicalDistilBertForSequenceClassification#
- class sparknlp_jsl.annotator.MedicalDistilBertForSequenceClassification(classname='com.johnsnowlabs.nlp.annotators.classification.MedicalDistilBertForSequenceClassification', java_model=None)[source]#
Bases: AnnotatorModel, HasCaseSensitiveProperties, HasBatchedAnnotate
MedicalDistilBertForSequenceClassification can load DistilBERT Models with sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for multi-class document classification tasks.
Pretrained models can be loaded with pretrained() of the companion object:

>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")
Models from the HuggingFace 🤗 Transformers library are also compatible with Spark NLP 🚀. To see which models are compatible and how to import them see Import Transformers into Spark NLP 🚀.
Input Annotation types: DOCUMENT, TOKEN
Output Annotation type: CATEGORY
- Parameters:
- batchSize
Batch size. Larger values allow faster processing but require more memory, by default 8
- caseSensitive
Whether to ignore case in tokens for embeddings matching, by default True
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- maxSentenceLength
Max sentence length to process, by default 128
- coalesceSentences
Instead of one class per sentence (if inputCols is 'sentence'), output one class per document by averaging the probabilities of all sentences, by default True.
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp_jsl.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label") \
...     .setCaseSensitive(True)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     tokenizer,
...     sequenceClassifier
... ])
>>> data = spark.createDataFrame([["I felt a bit drowsy and had blurred vision after taking Aspirin."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("label.result").show(truncate=False)
Methods

__init__([classname, java_model])  Initialize this instance with a Java model object.
clear(param)  Clears a param from the param map if it has been explicitly set.
copy([extra])  Creates a copy of this instance with the same uid and some extra params.
explainParam(param)  Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams()  Returns the documentation of all params with their optionally default values and user-supplied values.
extractParamMap([extra])  Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getBatchSize()  Gets current batch size.
getCaseSensitive()  Gets whether to ignore case in tokens for embeddings matching.
getClasses()  Returns labels used to train this model.
getInputCols()  Gets current column names of input annotations.
getLazyAnnotator()  Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
getOrDefault(param)  Gets the value of a param in the user-supplied param map or its default value.
getOutputCol()  Gets output column name of annotations.
getParam(paramName)  Gets a param by its name.
getParamValue(paramName)  Gets the value of a parameter.
hasDefault(param)  Checks whether a param has a default value.
hasParam(paramName)  Tests whether this instance contains a param with a given (string) name.
isDefined(param)  Checks whether a param is explicitly set by user or has a default value.
isSet(param)  Checks whether a param is explicitly set by user.
load(path)  Reads an ML instance from the input path, a shortcut of read().load(path).
loadSavedModel(folder, spark_session)  Loads a locally saved model.
loadSavedModelOpenSource(destilBertForTokenClassifierPath, tfModelPath, spark_session)  Loads a locally saved open-source model.
pretrained([name, lang, remote_loc])  Downloads and loads a pretrained model.
read()  Returns an MLReader instance for this class.
save(path)  Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value)  Sets a parameter in the embedded param map.
setBatchSize(v)  Sets batch size.
setCaseSensitive(value)  Sets whether to ignore case in tokens for embeddings matching.
setCoalesceSentences(value)  Sets whether to output one class per document, averaging sentence probabilities, instead of one class per sentence.
setConfigProtoBytes(b)  Sets configProto from tensorflow, serialized into byte array.
setInputCols(*value)  Sets column names of input annotations.
setLazyAnnotator(value)  Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
setMaxSentenceLength(value)  Sets max sentence length to process, by default 128.
setOutputCol(value)  Sets output column name of annotations.
setParamValue(paramName)  Sets the value of a parameter.
setParams()
transform(dataset[, params])  Transforms the input dataset with optional parameters.
write()  Returns an MLWriter instance for this ML instance.
Attributes

batchSize
caseSensitive
coalesceSentences
configProtoBytes
getter_attrs
inputCols
lazyAnnotator
maxSentenceLength
name
outputCol
params  Returns all params ordered by name.
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.
- Parameters:
extra – Extra parameters to copy to the new instance
- Returns:
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optionally default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters:
extra – extra param values
- Returns:
merged param map
- getBatchSize()#
Gets current batch size.
- Returns:
- int
Current batch size
- getCaseSensitive()#
Gets whether to ignore case in tokens for embeddings matching.
- Returns:
- bool
Whether to ignore case in tokens for embeddings matching
- getInputCols()#
Gets current column names of input annotations.
- getLazyAnnotator()#
Gets whether Annotator should be evaluated lazily in a RecursivePipeline.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets output column name of annotations.
- getParam(paramName)#
Gets a param by its name.
- getParamValue(paramName)#
Gets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- static loadSavedModel(folder, spark_session)[source]#
Loads a locally saved model.
- Parameters:
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns:
- MedicalDistilBertForSequenceClassification
The restored model
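A minimal usage sketch, assuming a model was previously saved to a local folder; the path below is a placeholder:

>>> from sparknlp_jsl.annotator import MedicalDistilBertForSequenceClassification
>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.loadSavedModel(
...     "/models/distilbert_seq_classifier", spark) \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")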
- static loadSavedModelOpenSource(destilBertForTokenClassifierPath, tfModelPath, spark_session)[source]#
Loads a locally saved open-source model.
- Parameters:
- destilBertForTokenClassifierPath : str
Folder of the saved DistilBERT classifier
- tfModelPath : str
Folder that contains the tf model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns:
- MedicalDistilBertForSequenceClassification
The restored model
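A hedged sketch of the same flow for an open-source model; both folder paths are placeholders:

>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.loadSavedModelOpenSource(
...     "/models/open_source_classifier",  # destilBertForTokenClassifierPath (placeholder)
...     "/models/tf_model",                # tfModelPath (placeholder)
...     spark)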
- property params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- static pretrained(name='distilbert_sequence_classifier_ade', lang='en', remote_loc='clinical/models')[source]#
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default "distilbert_sequence_classifier_ade"
- lang : str, optional
Language of the pretrained model, by default "en"
- remote_loc : str, optional
Optional remote address of the resource, by default "clinical/models". Will use Spark NLP's repositories otherwise.
- Returns:
- MedicalDistilBertForSequenceClassification
The restored model
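For illustration, the defaults from the signature above can also be passed explicitly:

>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.pretrained(
...     "distilbert_sequence_classifier_ade", "en", "clinical/models") \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label")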
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of ‘write().save(path)’.
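A minimal save/load round-trip sketch; the path is a placeholder:

>>> sequenceClassifier.save("/models/ade_classifier")  # placeholder path
>>> restored = MedicalDistilBertForSequenceClassification.load("/models/ade_classifier")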
- set(param, value)#
Sets a parameter in the embedded param map.
- setBatchSize(v)#
Sets batch size.
- Parameters:
- v : int
Batch size
- setCaseSensitive(value)#
Sets whether to ignore case in tokens for embeddings matching.
- Parameters:
- value : bool
Whether to ignore case in tokens for embeddings matching
- setCoalesceSentences(value)[source]#
Instead of one class per sentence (if inputCols is 'sentence'), output one class per document by averaging the probabilities of all sentences. Because almost all transformer models such as BERT have a maximum sequence length (512 tokens), this parameter makes it possible to feed every sentence into the model and average all the probabilities for the entire document instead of returning probabilities per sentence. (Default: True)
- Parameters:
- value : bool
Whether the outputs of all sentences will be averaged into a single output
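A minimal sketch of a sentence-level pipeline where this parameter applies; SentenceDetector and Tokenizer come from sparknlp.annotator, mirroring the Examples section above:

>>> sentenceDetector = SentenceDetector() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["sentence"]) \
...     .setOutputCol("token")
>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.pretrained() \
...     .setInputCols(["token", "sentence"]) \
...     .setOutputCol("label") \
...     .setCoalesceSentences(True)  # one averaged label per document instead of per sentence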
- setConfigProtoBytes(b)[source]#
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
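As a hedged sketch, the byte array can be produced from a TensorFlow ConfigProto; the allow_soft_placement option below is only an illustrative choice:

>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto(allow_soft_placement=True)  # illustrative option
>>> sequenceClassifier.setConfigProtoBytes(list(config.SerializeToString()))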
- setInputCols(*value)#
Sets column names of input annotations.
- Parameters:
- *value : str
Input columns for the annotator
- setLazyAnnotator(value)#
Sets whether Annotator should be evaluated lazily in a RecursivePipeline.
- Parameters:
- value : bool
Whether Annotator should be evaluated lazily in a RecursivePipeline
- setMaxSentenceLength(value)[source]#
Sets max sentence length to process, by default 128.
- Parameters:
- value : int
Max sentence length to process
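As a sketch, maxSentenceLength is often tuned together with batchSize, since longer sequences use more memory per row; the values below are illustrative:

>>> sequenceClassifier = MedicalDistilBertForSequenceClassification.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("label") \
...     .setMaxSentenceLength(256) \
...     .setBatchSize(4)  # a smaller batch offsets the memory cost of longer sequences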
- setOutputCol(value)#
Sets output column name of annotations.
- Parameters:
- value : str
Name of output column
- setParamValue(paramName)#
Sets the value of a parameter.
- Parameters:
- paramName : str
Name of the parameter
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
- Parameters:
dataset – input dataset, which is an instance of pyspark.sql.DataFrame
params – an optional param map that overrides embedded params.
- Returns:
transformed dataset
New in version 1.3.0.
- uid#
A unique id for the object.
- write()#
Returns an MLWriter instance for this ML instance.