Greek BERT Base Uncased Embedding


A Greek version of the BERT pre-trained language model.

The pre-training corpora of bert-base-greek-uncased-v1 include:

  • The Greek part of Wikipedia,
  • The Greek part of European Parliament Proceedings Parallel Corpus, and
  • The Greek part of OSCAR, a cleansed version of Common Crawl.

How to use

embeddings = BertEmbeddings.pretrained("bert_base_uncased", "el") \
      .setInputCols("sentence", "token") \
      .setOutputCol("embeddings")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
val embeddings = BertEmbeddings.pretrained("bert_base_uncased", "el")
      .setInputCols("sentence", "token")
      .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))

Model Information

Model Name: bert_base_uncased
Compatibility: Spark NLP 3.2.2+
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [bert]
Language: el
Case sensitive: true

Data Source

The model is imported from: