Dutch BERT Base Cased Embedding

Description

BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.

Predicted Entities


How to use

Python:

from sparknlp.annotator import BertEmbeddings
from pyspark.ml import Pipeline

# document_assembler, sentence_detector and tokenizer are assumed to be defined upstream.
embeddings = BertEmbeddings.pretrained("bert_base_cased", "nl") \
      .setInputCols("sentence", "token") \
      .setOutputCol("embeddings")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

Scala:

import com.johnsnowlabs.nlp.embeddings.BertEmbeddings
import org.apache.spark.ml.Pipeline

// document_assembler, sentence_detector and tokenizer are assumed to be defined upstream.
val embeddings = BertEmbeddings.pretrained("bert_base_cased", "nl")
      .setInputCols("sentence", "token")
      .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))

NLU:

import nlu
nlu.load("nl.embed.bert.base_cased").predict("""Put your text here.""")
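
A minimal end-to-end sketch of the Python pipeline, assuming a SparkSession started with sparknlp.start() and an illustrative Dutch sentence (the upstream stage definitions below are standard Spark NLP usage, not part of the original card):

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_base_cased", "nl") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

# Fit on and transform a tiny example DataFrame with a "text" column.
data = spark.createDataFrame([["BERTje is een Nederlands BERT-model."]]).toDF("text")
result = nlp_pipeline.fit(data).transform(data)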

Model Information

Model Name: bert_base_cased
Compatibility: Spark NLP 3.2.2+
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [bert]
Language: nl
Case sensitive: true
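
The output column contains Spark NLP annotations rather than plain vectors. A small illustrative snippet (assuming the result DataFrame from the sketch above) to pull the per-token vectors out of the "embeddings" column:

from pyspark.sql.functions import explode, col

# Each annotation holds the token text in `result` and its 768-dimensional
# vector (BERT base hidden size) in `embeddings`.
result.select(explode(col("embeddings")).alias("emb")) \
      .select(col("emb.result").alias("token"), col("emb.embeddings").alias("vector")) \
      .show(truncate=50)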

Data Source

The model is imported from: https://huggingface.co/GroNLP/bert-base-dutch-cased
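
Outside of Spark NLP, the same source checkpoint can also be loaded directly with the Hugging Face transformers library; a minimal sketch (the example sentence is illustrative):

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")

inputs = tokenizer("Plaats hier uw tekst.", return_tensors="pt")
outputs = model(**inputs)
token_vectors = outputs.last_hidden_state  # shape: (1, sequence_length, 768)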