Description
This is a BioBERT-based sequence classification model that classifies medical text according to the PICO framework.
Predicted Entities
CONCLUSIONS, DESIGN_SETTING, INTERVENTION, PARTICIPANTS, FINDINGS, MEASUREMENTS, AIMS
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer
from sparknlp_jsl.annotator import MedicalBertForSequenceClassification
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

sequenceClassifier = MedicalBertForSequenceClassification.pretrained("bert_sequence_classifier_pico_biobert", "en", "clinical/models") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")
pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sequenceClassifier
])
data = spark.createDataFrame([["To compare the results of recording enamel opacities using the TF and modified DDE indices."]]).toDF("text")
result = pipeline.fit(data).transform(data)
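The predicted label is stored in the result field of the class annotation column and can be read directly from the output DataFrame. A minimal sketch, reusing the result DataFrame built above; the output should resemble the table shown under Results:

# Show the input text alongside the predicted PICO label
result.select("text", "class.result").show(truncate=False)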
val documenter = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("document")
val tokenizer = new Tokenizer()
.setInputCols(Array("document"))
.setOutputCol("token")
val sequenceClassifier = MedicalBertForSequenceClassification.pretrained("bert_sequence_classifier_pico_biobert", "en", "clinical/models")
.setInputCols(Array("document","token"))
.setOutputCol("class")
val pipeline = new Pipeline().setStages(Array(documenter, tokenizer, sequenceClassifier))
val data = Seq("""To compare the results of recording enamel opacities using the TF and modified DDE indices.""").toDS.toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
nlu.load("en.classify.pico.seq_biobert").predict("""To compare the results of recording enamel opacities using the TF and modified DDE indices.""")
Results
+-------------------------------------------------------------------------------------------+------+
|text |result|
+-------------------------------------------------------------------------------------------+------+
|To compare the results of recording enamel opacities using the TF and modified DDE indices.|[AIMS]|
+-------------------------------------------------------------------------------------------+------+
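For quick checks on single strings without building a Spark DataFrame, the fitted pipeline can also be wrapped in Spark NLP's LightPipeline. A minimal sketch, assuming the pipeline and data objects from the Python example above:

from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for fast, local inference on plain strings
light_model = LightPipeline(pipeline.fit(data))

# annotate() returns a dict keyed by the pipeline's output column names
annotations = light_model.annotate(
    "To compare the results of recording enamel opacities using the TF and modified DDE indices."
)
print(annotations["class"])  # expected: ['AIMS'], per the Results table above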
Model Information
Model Name: bert_sequence_classifier_pico_biobert
Compatibility: Healthcare NLP 3.4.1+
License: Licensed
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: en
Size: 406.0 MB
Case sensitive: true
Max sentence length: 128
References
This model is trained on a custom dataset derived from a PICO classification dataset.
Benchmarking
label           precision  recall  f1-score  support
AIMS                 0.92    0.94      0.93     3813
CONCLUSIONS          0.85    0.86      0.86     4314
DESIGN_SETTING       0.88    0.78      0.83     5628
FINDINGS             0.91    0.92      0.91     9242
INTERVENTION         0.71    0.78      0.74     2331
MEASUREMENTS         0.79    0.87      0.83     3219
PARTICIPANTS         0.86    0.81      0.83     2723
accuracy                -       -      0.86    31270
macro-avg            0.85    0.85      0.85    31270
weighted-avg         0.87    0.86      0.86    31270