DistilRoBERTa base model

Description

This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT.

The code for the distillation process can be found in the Hugging Face Transformers repository. This model is case-sensitive: it makes a difference between english and English.

The model has 6 layers, a hidden size of 768, and 12 attention heads, for a total of 82M parameters (compared to 125M parameters for RoBERTa-base). On average, DistilRoBERTa is twice as fast as RoBERTa-base.


How to use

Python:

from sparknlp.annotator import RoBertaEmbeddings

embeddings = RoBertaEmbeddings.pretrained("distilroberta_base", "en") \
    .setInputCols("sentence", "token") \
    .setOutputCol("embeddings")

Scala:

import com.johnsnowlabs.nlp.embeddings.RoBertaEmbeddings
import org.apache.spark.ml.Pipeline

val embeddings = RoBertaEmbeddings.pretrained("distilroberta_base", "en")
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")

// document_assembler, sentence_detector and tokenizer are the upstream stages
// that produce the "sentence" and "token" columns (see the sketch below)
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))

NLU:

import nlu

nlu.load("en.embed.distilroberta").predict("""Put your text here.""")
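
For context, a complete pipeline around the embeddings stage could look like the following Python sketch. The DocumentAssembler, SentenceDetector, and Tokenizer stages, the column names, and the sample DataFrame are illustrative assumptions; only the pretrained model name and language come from this card.

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, RoBertaEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Upstream stages producing the "sentence" and "token" columns the embeddings expect
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = RoBertaEmbeddings.pretrained("distilroberta_base", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

# Run on a toy DataFrame and inspect the per-token embedding vectors
data = spark.createDataFrame([["Put your text here."]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("embeddings.embeddings").show(truncate=False)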

Model Information

Model Name: distilroberta_base
Compatibility: Spark NLP 3.1.0+
License: Open Source
Edition: Official
Input Labels: [token, sentence]
Output Labels: [embeddings]
Language: en
Case sensitive: true

Data Source

https://huggingface.co/distilroberta-base

Benchmarking

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:-----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| Score | 84.0 | 89.4 | 90.8 | 92.5  | 59.3 | 88.3  | 86.6 | 67.9 |