Description
This model extracts mentions of treatments and posology information using unspecific (low-granularity) labels.
Definitions of Predicted Entities:
Cancer_Therapy: Mentions of cancer treatments, including chemotherapy, radiotherapy, surgery and others.
Posology_Information: Terms related to the posology of the treatment, including duration, frequency and dosage.
Predicted Entities
Cancer_Therapy, Posology_Information
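The label set can also be confirmed directly from the pretrained model with getClasses(); a minimal sketch, assuming a Spark NLP for Healthcare session is already running:

from sparknlp_jsl.annotator import MedicalNerModel

# Load the model and list its IOB tags (O plus B-/I- variants of the two entities)
ner = MedicalNerModel.pretrained("ner_oncology_unspecific_posology", "en", "clinical/models")
print(ner.getClasses())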
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetectorDLModel, Tokenizer, WordEmbeddingsModel, NerConverter
from sparknlp_jsl.annotator import MedicalNerModel
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence_detector = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare","en","clinical/models")\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"]) \
.setOutputCol("embeddings")
ner = MedicalNerModel.pretrained("ner_oncology_unspecific_posology", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
pipeline = Pipeline(stages=[document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
ner,
ner_converter])
data = spark.createDataFrame([["The patient underwent a regimen consisting of adriamycin (60 mg/m2) and cyclophosphamide (600 mg/m2) over six courses. She is currently receiving her second cycle of chemotherapy and is in good overall condition."]]).toDF("text")
result = pipeline.fit(data).transform(data)
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLModel
import com.johnsnowlabs.nlp.embeddings.WordEmbeddingsModel
import com.johnsnowlabs.nlp.annotators.ner.{MedicalNerModel, NerConverter}
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
.setInputCol("text")
.setOutputCol("document")
val sentence_detector = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare","en","clinical/models")
.setInputCols("document")
.setOutputCol("sentence")
val tokenizer = new Tokenizer()
.setInputCols("sentence")
.setOutputCol("token")
val word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")
.setInputCols(Array("sentence", "token"))
.setOutputCol("embeddings")
val ner = MedicalNerModel.pretrained("ner_oncology_unspecific_posology", "en", "clinical/models")
.setInputCols(Array("sentence", "token", "embeddings"))
.setOutputCol("ner")
val ner_converter = new NerConverter()
.setInputCols(Array("sentence", "token", "ner"))
.setOutputCol("ner_chunk")
val pipeline = new Pipeline().setStages(Array(document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
ner,
ner_converter))
val data = Seq("The patient underwent a regimen consisting of adriamycin (60 mg/m2) and cyclophosphamide (600 mg/m2) over six courses. She is currently receiving her second cycle of chemotherapy and is in good overall condition.").toDS.toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
nlu.load("en.med_ner.oncology_unspecific_posology").predict("""The patient underwent a regimen consisting of adriamycin (60 mg/m2) and cyclophosphamide (600 mg/m2) over six courses. She is currently receiving her second cycle of chemotherapy and is in good overall condition.""")
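For quick tests on raw strings, the fitted pipeline can also be wrapped in a LightPipeline instead of transforming a DataFrame; a minimal sketch reusing the pipeline object defined above (variable names here are illustrative):

from sparknlp.base import LightPipeline

# Fit on an empty DataFrame just to materialize the pipeline stages
empty_df = spark.createDataFrame([[""]]).toDF("text")
light_model = LightPipeline(pipeline.fit(empty_df))

annotations = light_model.fullAnnotate("She is currently receiving her second cycle of chemotherapy.")[0]
for chunk in annotations["ner_chunk"]:
    print(chunk.result, chunk.metadata["entity"])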
Results
| chunk | ner_label |
|:-----------------|:---------------------|
| adriamycin | Cancer_Therapy |
| 60 mg/m2 | Posology_Information |
| cyclophosphamide | Cancer_Therapy |
| 600 mg/m2 | Posology_Information |
| over six courses | Posology_Information |
| second cycle | Posology_Information |
| chemotherapy | Cancer_Therapy |
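A table like the one above can be produced from the result DataFrame by exploding the ner_chunk annotations with standard PySpark functions; a sketch:

from pyspark.sql import functions as F

# One row per detected chunk, with its entity label taken from the annotation metadata
result.select(F.explode("ner_chunk").alias("chunk")) \
    .select(F.col("chunk.result").alias("chunk"),
            F.expr("chunk.metadata['entity']").alias("ner_label")) \
    .show(truncate=False)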
Model Information
| Model Name: | ner_oncology_unspecific_posology |
|:---|:---|
| Compatibility: | Spark NLP for Healthcare 4.0.0+ |
| License: | Licensed |
| Edition: | Official |
| Input Labels: | [sentence, token, embeddings] |
| Output Labels: | [ner] |
| Language: | en |
| Size: | 34.3 MB |
| Dependencies: | embeddings_clinical |
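The compatibility requirement can be checked against the installed packages; a minimal sketch, assuming both the open-source sparknlp and the licensed sparknlp_jsl modules are available:

import sparknlp
import sparknlp_jsl

# The model requires Spark NLP for Healthcare 4.0.0 or later
print("Spark NLP:", sparknlp.version())
print("Spark NLP for Healthcare:", sparknlp_jsl.version())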
References
In-house annotated oncology case reports.
Benchmarking
| label | tp | fp | fn | total | precision | recall | f1 |
|:---------------------|-----:|----:|----:|------:|----------:|-------:|-----:|
| Posology_Information | 2663 | 244 | 399 | 3062 | 0.92 | 0.87 | 0.89 |
| Cancer_Therapy | 2580 | 317 | 247 | 2827 | 0.89 | 0.91 | 0.90 |
| macro_avg | 5243 | 561 | 646 | 5889 | 0.90 | 0.89 | 0.90 |
| micro_avg | 5243 | 561 | 646 | 5889 | 0.90 | 0.89 | 0.90 |
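The reported scores follow directly from the tp/fp/fn counts; for example, recomputing the Posology_Information row:

# Posology_Information counts from the table above
tp, fp, fn = 2663, 244, 399

precision = tp / (tp + fp)                          # ~0.92
recall = tp / (tp + fn)                             # ~0.87
f1 = 2 * precision * recall / (precision + recall)  # ~0.89
print(round(precision, 2), round(recall, 2), round(f1, 2))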