Smaller BERT Embeddings (L-10_H-128_A-2)


This is one of the smaller BERT models referenced in "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models". The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
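As a minimal sketch of how such teacher-produced labels are commonly used (soft-label distillation; PyTorch is assumed here purely for illustration, and `student_logits` / `teacher_logits` are hypothetical placeholders for the two models' outputs):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then pull the student
    # toward the teacher with KL divergence; the T^2 factor keeps gradient
    # magnitudes comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2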


How to use

from pyspark.ml import Pipeline
from sparknlp.annotator import BertEmbeddings
import pandas as pd

embeddings = BertEmbeddings.pretrained("small_bert_L10_128", "en") \
      .setInputCols("sentence", "token") \
      .setOutputCol("embeddings")
# document_assembler, sentence_detector and tokenizer are the usual upstream stages
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
pipeline_model =[[""]]).toDF("text"))
result = pipeline_model.transform(spark.createDataFrame(pd.DataFrame({"text": ["I love NLP"]})))
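The transformed DataFrame stores one annotation per token. A sketch of pulling out (token, vector) pairs, assuming the output column was named "embeddings" as above:

result.selectExpr("explode(embeddings) AS emb") \
      .selectExpr("emb.result AS token", "emb.embeddings AS embedding") \
      .show(truncate=80)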
The same pipeline in Scala:

val embeddings = BertEmbeddings.pretrained("small_bert_L10_128", "en")
      .setInputCols("sentence", "token")
      .setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))
val data = Seq("I love NLP").toDF("text")
val result =
Alternatively, with the NLU package:

import nlu

text = ["I love NLP"]
embeddings_df = nlu.load('en.embed.bert.small_L10_128').predict(text, output_level='token')


Results

token   en_embed_bert_small_L10_128_embeddings
I       [0.06678155064582825, 0.4304381012916565, 0.42...
love    [-0.4905094504356384, 0.6187271475791931, 0.56...
NLP     [0.17027147114276886, -0.49662041664123535, 1....
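Since NLU returns a pandas DataFrame, the token vectors can be stacked into a matrix directly. A small sketch (numpy assumed; the column name is taken from the output above):

import numpy as np

vectors = np.stack(embeddings_df["en_embed_bert_small_L10_128_embeddings"].to_numpy())
print(vectors.shape)  # (3, 128): one 128-dimensional vector per token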

Model Information

Model Name: small_bert_L10_128
Type: embeddings
Compatibility: Spark NLP 2.6.0+
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [word_embeddings]
Language: [en]
Dimension: 128
Case sensitive: false
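
A quick sanity check of the metadata above, reusing the result DataFrame from the Python snippet (a sketch, not part of the official example):

row = result.selectExpr("explode(embeddings) AS emb").first()
assert len(row["emb"]["embeddings"]) == 128  # matches the Dimension field above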

Data Source

The model is imported from