Description
DeBERTa v3 model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for multi-class document classification tasks.
mdeberta_v3_base_sequence_classifier_allocine is a fine-tuned multilingual DeBERTa model that is ready to be used for sequence classification tasks such as sentiment analysis or multi-class text classification, and it achieves state-of-the-art performance.
We used TFDebertaV2ForSequenceClassification to train this model and used the DeBertaForSequenceClassification annotator in Spark NLP 🚀 for prediction at scale!
How to use
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DeBertaForSequenceClassification
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

tokenizer = Tokenizer()\
    .setInputCols(["document"])\
    .setOutputCol("token")

sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdeberta_v3_base_sequence_classifier_allocine", "fr")\
    .setInputCols(["document", "token"])\
    .setOutputCol("class")\
    .setCaseSensitive(True)\
    .setMaxSentenceLength(512)
pipeline = Pipeline(stages=[
    document_assembler,
    tokenizer,
    sequenceClassifier
])
example = spark.createDataFrame([["J'ai vraiment aimé ce film !"]]).toDF("text")
result = pipeline.fit(example).transform(example)
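To inspect the predictions, you can select the classifier's output column (named "class" via .setOutputCol above). A minimal sketch, assuming the pipeline and DataFrame defined in this example:

# Each row's predicted label is stored in the "class" annotation column
result.select("text", "class.result").show(truncate=False)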
Model Information
Model Name: mdeberta_v3_base_sequence_classifier_allocine
Compatibility: Spark NLP 3.4.3+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: fr
Size: 902.3 MB
Case sensitive: true
Max sentence length: 512
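For quick, single-document predictions outside of a Spark DataFrame, the fitted pipeline can also be wrapped in Spark NLP's LightPipeline. This is a minimal sketch, assuming the pipeline and example DataFrame from the usage section above; the French input sentence is only an illustration:

from sparknlp.base import LightPipeline

# Wrap the fitted pipeline for fast, in-memory inference on single texts
light_pipeline = LightPipeline(pipeline.fit(example))

# annotate() returns a dict keyed by output column names, e.g. "class"
annotations = light_pipeline.annotate("Le film était excellent, j'ai adoré !")
print(annotations["class"])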