Description
This is a PHS-BERT-based sentiment analysis model for COVID-19 vaccine-related tweets. The model predicts whether a tweet expresses a positive, negative, or neutral sentiment about COVID-19 vaccines.
Predicted Entities
negative, positive, neutral
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings, SentenceEmbeddings, ClassifierDLModel
from pyspark.ml import Pipeline
from pyspark.sql.types import StringType

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

bert_embeddings = BertEmbeddings.pretrained("bert_embeddings_phs_bert", "en", "public/models") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")
embeddingsSentence = SentenceEmbeddings() \
.setInputCols(["sentence", "embeddings"]) \
.setOutputCol("sentence_embeddings") \
.setPoolingStrategy("AVERAGE")
classifierdl = ClassifierDLModel.pretrained("classifierdl_vaccine_sentiment", "en", "clinical/models") \
    .setInputCols(["sentence_embeddings"]) \
    .setOutputCol("class")
pipeline = Pipeline(
stages = [
document_assembler,
tokenizer,
bert_embeddings,
embeddingsSentence,
classifierdl
])
text_list = ['A little bright light for an otherwise dark week. Thanks researchers, and frontline workers. Onwards.',
'People with a history of severe allergic reaction to any component of the vaccine should not take.',
'43 million doses of vaccines administrated worldwide...Production capacity of CHINA to reach 4 b']
data = spark.createDataFrame(text_list, StringType()).toDF("text")
result = pipeline.fit(data).transform(data)
import spark.implicits._

val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("sentence")

val tokenizer = new Tokenizer()
    .setInputCols(Array("sentence"))
    .setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("bert_embeddings_phs_bert", "en")
    .setInputCols(Array("sentence", "token"))
    .setOutputCol("embeddings")

val sentence_embeddings = new SentenceEmbeddings()
    .setInputCols(Array("sentence", "embeddings"))
    .setOutputCol("sentence_embeddings")
    .setPoolingStrategy("AVERAGE")

val classifier = ClassifierDLModel.pretrained("classifierdl_vaccine_sentiment", "en", "clinical/models")
    .setInputCols(Array("sentence_embeddings"))
    .setOutputCol("class")

val bert_clf_pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings, sentence_embeddings, classifier))

val data = Seq(
    "A little bright light for an otherwise dark week. Thanks researchers, and frontline workers. Onwards.",
    "People with a history of severe allergic reaction to any component of the vaccine should not take.",
    "43 million doses of vaccines administrated worldwide...Production capacity of CHINA to reach 4 b"
).toDS.toDF("text")

val result = bert_clf_pipeline.fit(data).transform(data)
import nlu
nlu.load("en.classify.vaccine_sentiment").predict("""A little bright light for an otherwise dark week. Thanks researchers, and frontline workers. Onwards.""")
Results
+-----------------------------------------------------------------------------------------------------+----------+
|text |class |
+-----------------------------------------------------------------------------------------------------+----------+
|A little bright light for an otherwise dark week. Thanks researchers, and frontline workers. Onwards.|[positive]|
|People with a history of severe allergic reaction to any component of the vaccine should not take. |[negative]|
|43 million doses of vaccines administrated worldwide...Production capacity of CHINA to reach 4 b |[neutral] |
+-----------------------------------------------------------------------------------------------------+----------+
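The `class` column holds an array of labels (one per detected sentence). As a minimal, Spark-free sketch of how such rows can be post-processed, the predictions shown above can be flattened and tallied (the abbreviated texts and the `rows` shape here are illustrative, not part of the model's API):

```python
from collections import Counter

# Rows mirroring the Results table: (text, class) with class as a label array.
rows = [
    ("A little bright light ...", ["positive"]),
    ("People with a history ...", ["negative"]),
    ("43 million doses ...", ["neutral"]),
]

# Flatten the per-row label arrays and count each sentiment.
counts = Counter(label for _, labels in rows for label in labels)
print(counts)  # Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```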
Model Information
Model Name:     classifierdl_vaccine_sentiment
Compatibility:  Healthcare NLP 4.0.0+
License:        Licensed
Edition:        Official
Input Labels:   [sentence_embeddings]
Output Labels:  [class]
Language:       en
Size:           24.2 MB
References
Curated from several academic and in-house datasets.
Benchmarking
label precision recall f1-score support
neutral 0.76 0.72 0.74 1008
positive 0.80 0.79 0.80 966
negative 0.76 0.81 0.78 916
accuracy - - 0.77 2890
macro-avg 0.77 0.77 0.77 2890
weighted-avg 0.77 0.77 0.77 2890
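As a sanity check, the macro- and weighted-average F1 rows above follow directly from the per-class rows: the macro average is the unweighted mean over the three classes, while the weighted average weights each class F1 by its support. A small sketch using the values from the table:

```python
# Per-class metrics copied from the benchmarking table above.
scores = {
    "neutral":  {"f1": 0.74, "support": 1008},
    "positive": {"f1": 0.80, "support": 966},
    "negative": {"f1": 0.78, "support": 916},
}

total = sum(s["support"] for s in scores.values())  # 2890

# Macro average: unweighted mean of the per-class F1 scores.
macro_f1 = sum(s["f1"] for s in scores.values()) / len(scores)

# Weighted average: per-class F1 weighted by class support.
weighted_f1 = sum(s["f1"] * s["support"] for s in scores.values()) / total

print(total, round(macro_f1, 2), round(weighted_f1, 2))  # 2890 0.77 0.77
```

Both round to the 0.77 reported in the table.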