Split Sentences in Healthcare Texts


SentenceDetectorDL (SDDL) is based on a general-purpose neural network model for sentence boundary detection. Sentence boundary detection is the task of identifying where sentences begin and end within a text. Many natural language processing tasks take a sentence as their input unit, such as part-of-speech tagging, dependency parsing, named entity recognition, and machine translation.

In this model, we treat sentence boundary detection as a classification problem, following the paper "Deep-EOS: General-Purpose Neural Networks for Sentence Boundary Detection" (Stefan Schweter and Sajawel Ahmed, 2020) and using a CNN architecture. We also modified the original implementation slightly to handle broken sentences and characters that cannot end a sentence.
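To make the classification framing concrete, here is a minimal, dependency-free sketch of the Deep-EOS-style setup: every occurrence of a candidate end-of-sentence character is paired with a fixed-size window of surrounding characters, and it is these windows that the CNN classifies as boundary or non-boundary. The candidate set and window size below are illustrative assumptions, not the model's actual configuration.

```python
# Illustrative candidate characters; the real model's set may differ.
EOS_CANDIDATES = {".", "!", "?", ";", ":", "\n"}

def candidate_windows(text, window=5):
    """Yield (position, left_context, right_context) for each candidate
    end-of-sentence character; the classifier would score each window."""
    for i, ch in enumerate(text):
        if ch in EOS_CANDIDATES:
            left = text[max(0, i - window):i]
            right = text[i + 1:i + 1 + window]
            yield i, left, right

# The first period has no following whitespace ("Mary.Mary"), which is
# exactly the kind of broken sentence the modified model must handle.
text = "John loves Mary.Mary loves Peter."
for pos, left, right in candidate_windows(text):
    print(pos, repr(left), repr(right))
```

A trained CNN would consume these character windows (typically as embedded character sequences) and emit a boundary probability for each candidate position.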


How to use

from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import SentenceDetectorDLModel
from pyspark.ml import PipelineModel

documenter = DocumentAssembler()\
  .setInputCol("text")\
  .setOutputCol("document")

sentencerDL = SentenceDetectorDLModel\
  .pretrained("sentence_detector_dl_healthcare", "en", "clinical/models")\
  .setInputCols(["document"])\
  .setOutputCol("sentences")

sd_model = LightPipeline(PipelineModel(stages=[documenter, sentencerDL]))
sd_model.fullAnnotate("""John loves Mary.Mary loves Peter. Peter loves Helen .Helen loves John; Total: four people involved.""")
val documenter = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val model = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare", "en", "clinical/models")
  .setInputCols("document")
  .setOutputCol("sentences")

val pipeline = new Pipeline().setStages(Array(documenter, model))
val data = Seq("John loves Mary.Mary loves Peter. Peter loves Helen .Helen loves John; Total: four people involved.").toDS.toDF("text")
val result = pipeline.fit(data).transform(data)


|   | sentence                     |
|---|------------------------------|
| 0 | John loves Mary.             |
| 1 | Mary loves Peter             |
| 2 | Peter loves Helen .          |
| 3 | Helen loves John;            |
| 4 | Total: four people involved. |
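To turn a `fullAnnotate` result like the one above into plain sentence strings, you can read the `result` field of each annotation in the output column. The snippet below is a hedged sketch: it stands in a small annotation-like class and a hand-built result so the example runs without Spark NLP, and it assumes the output column is named `sentences` as in the pipeline above.

```python
# Stand-in for Spark NLP's Annotation object (only the fields we use here).
class Annotation:
    def __init__(self, result, begin, end):
        self.result, self.begin, self.end = result, begin, end

# Shape mimics LightPipeline.fullAnnotate output: a list with one dict per
# input text, mapping the output column name to a list of annotations.
annotated = [{
    "sentences": [
        Annotation("John loves Mary.", 0, 15),
        Annotation("Mary loves Peter.", 16, 32),
    ]
}]

def extract_sentences(annotated, col="sentences"):
    """Flatten fullAnnotate-style output into a list of sentence strings."""
    return [ann.result for row in annotated for ann in row[col]]

print(extract_sentences(annotated))
```

The same list comprehension works on the real `fullAnnotate` output, since each annotation exposes its text through `.result`.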

Model Information

Name: sentence_detector_dl_healthcare
Type: DeepSentenceDetector
Compatibility: Spark NLP 2.6.0+
License: Licensed
Edition: Official
Input labels: [document]
Output labels: sentence
Language: en

Data Source

The healthcare SDDL model is trained on domain-specific (healthcare) text, annotated in-house, so that it generalizes better to clinical notes.


Benchmarking

|    | Accuracy | Recall | Precision | F1   |
|----|----------|--------|-----------|------|
|  0 | 0.98     | 1.00   | 0.96      | 0.98 |