Description
This pretrained pipeline includes a Medical BERT for Sequence Classification model that classifies texts according to whether they are self-reported or refer to another person. The pipeline is built on top of the bert_sequence_classifier_vop_self_report model.
Predicted Entities
`3rd_Person`, `Self_Report`
How to use
```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("bert_sequence_classifier_vop_self_report_pipeline", "en", "clinical/models")

result = pipeline.annotate("My friend was treated for her skin cancer two years ago.")
```
```scala
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("bert_sequence_classifier_vop_self_report_pipeline", "en", "clinical/models")

val result = pipeline.annotate("My friend was treated for her skin cancer two years ago.")
```
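The same pipeline object can also score several texts in one call. The Python sketch below assumes an active Spark NLP for Healthcare session; the second example text and the `class` output key are illustrative and should be checked against the keys actually returned by `annotate`.

```python
texts = [
    "My friend was treated for her skin cancer two years ago.",
    "I was treated for skin cancer two years ago.",  # illustrative example text
]

# annotate() accepts a list of strings and returns one dictionary per input text
results = pipeline.annotate(texts)

for text, res in zip(texts, results):
    # The classifier label is typically exposed under a key such as "class";
    # inspect res.keys() to confirm the output column name for this pipeline.
    print(text, "->", res.get("class"))
```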
Results
| text | prediction |
|:---------------------------------------------------------|:-------------|
| My friend was treated for her skin cancer two years ago. | 3rd_Person |
Model Information
| Model Name: | bert_sequence_classifier_vop_self_report_pipeline |
|:--|:--|
| Type: | pipeline |
| Compatibility: | Healthcare NLP 4.4.3+ |
| License: | Licensed |
| Edition: | Official |
| Language: | en |
| Size: | 406.4 MB |
Included Models
- DocumentAssembler
- TokenizerModel
- MedicalBertForSequenceClassification
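If more control over the stages is needed, the components listed above can be assembled by hand instead of loading the pretrained pipeline. The following Python sketch is an illustration under stated assumptions: it uses a `Tokenizer` estimator in place of the fitted `TokenizerModel`, assumes an active `spark` session started with `sparknlp_jsl.start(...)`, and the `prediction` output column name is chosen for illustration.

```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer
from sparknlp_jsl.annotator import MedicalBertForSequenceClassification

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

sequence_classifier = MedicalBertForSequenceClassification \
    .pretrained("bert_sequence_classifier_vop_self_report", "en", "clinical/models") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("prediction")

nlp_pipeline = Pipeline(stages=[document_assembler, tokenizer, sequence_classifier])

# "spark" is assumed to be an active SparkSession created via sparknlp_jsl.start(...)
data = spark.createDataFrame(
    [["My friend was treated for her skin cancer two years ago."]]
).toDF("text")

result = nlp_pipeline.fit(data).transform(data)
result.select("text", "prediction.result").show(truncate=False)
```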