Electra MeDAL Acronym BERT Embeddings

Description

Electra model fine-tuned on MeDAL (Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding), a large dataset for abbreviation disambiguation designed for pretraining natural language understanding models in the medical domain. See the reference under Data Source below.

Predicted Entities


How to use

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")

sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")

tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")

embeddings = BertEmbeddings.pretrained("electra_medal_acronym", "en") \
.setInputCols("sentence", "token") \
.setOutputCol("embeddings")

nlpPipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, embeddings])

import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("document")

val sentenceDetector = new SentenceDetector()
.setInputCols(Array("document"))
.setOutputCol("sentence")

val tokenizer = new Tokenizer()
.setInputCols(Array("sentence"))
.setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("electra_medal_acronym", "en") 
.setInputCols("sentence", "token")
.setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDetector, tokenizer, embeddings))

import nlu
nlu.load("en.embed.electra.medical").predict("""Put your text here.""")
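Once the pipeline has produced token embeddings, a typical downstream step in abbreviation disambiguation is to compare an ambiguous token's contextual vector against reference vectors for each candidate expansion, e.g. by cosine similarity. A minimal sketch in plain Python with toy 4-dimensional vectors (the real model's vectors are much higher-dimensional; all values below are illustrative, not model output):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for contextual embeddings of the token "RA"
# in two different sentences, plus a reference vector for one expansion.
emb_ra_context1 = [0.9, 0.1, 0.3, 0.0]    # "RA" near "joint pain"
emb_ra_context2 = [0.1, 0.8, 0.0, 0.4]    # "RA" near "heart chamber"
emb_rheumatoid_arthritis = [0.85, 0.15, 0.25, 0.05]

# The context closer in meaning to "rheumatoid arthritis" scores higher.
print(cosine_similarity(emb_ra_context1, emb_rheumatoid_arthritis))
print(cosine_similarity(emb_ra_context2, emb_rheumatoid_arthritis))
```

Because ELECTRA embeddings are contextual, the same surface form ("RA") gets a different vector in each sentence, which is what makes this comparison informative.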

Model Information

Model Name: electra_medal_acronym
Compatibility: Spark NLP 3.3.3+
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [electra]
Language: en
Size: 66.0 MB
Case sensitive: true

Data Source

https://github.com/BruceWen120/medal