Turkish BERT Base Cased (BERTurk)

Description

BERTurk is a community-driven cased BERT model for Turkish. Some of the datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, which also chose the model name: BERTurk.


How to use

Python

from sparknlp.annotator import BertEmbeddings
from pyspark.ml import Pipeline

embeddings = BertEmbeddings.pretrained("bert_base_turkish_cased", "tr") \
    .setInputCols("sentence", "token") \
    .setOutputCol("embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

Scala

import com.johnsnowlabs.nlp.embeddings.BertEmbeddings
import org.apache.spark.ml.Pipeline

val embeddings = BertEmbeddings.pretrained("bert_base_turkish_cased", "tr")
    .setInputCols("sentence", "token")
    .setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))

NLU

import nlu
nlu.load("tr.embed.bert").predict("""Put your text here.""")
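
The Python and Scala snippets above assume that document_assembler, sentence_detector, and tokenizer are already defined upstream. The following is a minimal end-to-end sketch in Python showing one way those stages could be set up; the sample sentence is only illustrative and not part of the model card.

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Turn raw text into a document annotation, split it into sentences, and tokenize.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

# Pretrained Turkish BERT embeddings, as in the snippet above.
embeddings = BertEmbeddings.pretrained("bert_base_turkish_cased", "tr") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

# Illustrative Turkish input; replace with your own data.
data = spark.createDataFrame([["Mustafa Kemal Atatürk 1881 yılında Selanik'te doğdu."]]).toDF("text")
result = pipeline.fit(data).transform(data)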

Model Information

Model Name: bert_base_turkish_cased
Compatibility: Spark NLP 3.1.0+
License: Open Source
Edition: Official
Input Labels: [token, sentence]
Output Labels: [embeddings]
Language: tr
Case sensitive: true
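
The output column named by Output Labels holds one annotation per token, with the token text and its embedding vector. A short sketch (assuming the result DataFrame produced by the pipeline sketch above) of how to pull them out:

# Each annotation in the "embeddings" column carries the token in `result`
# and the corresponding vector in `embeddings`.
result.selectExpr("explode(embeddings) AS emb") \
    .selectExpr("emb.result AS token", "emb.embeddings AS vector") \
    .show(truncate=80)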

Data Source

https://huggingface.co/dbmdz/bert-base-turkish-cased

Benchmarking

For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).