Medical Question Answering (flan_t5_base_jsl_qa)

Description

The flan_t5_base_jsl_qa model is a Flan-T5 (base) model adapted for the medical domain and designed to work with the MedicalQuestionAnswering annotator. Given a question and a supporting context document, it generates a free-text answer grounded in that context.


How to use

from pyspark.ml import Pipeline
from sparknlp.base import MultiDocumentAssembler
from sparknlp_jsl.annotator import MedicalQuestionAnswering

document_assembler = MultiDocumentAssembler()\
    .setInputCols("question", "context")\
    .setOutputCols("document_question", "document_context")

med_qa = MedicalQuestionAnswering.pretrained("flan_t5_base_jsl_qa", "en", "clinical/models")\
    .setInputCols(["document_question", "document_context"])\
    .setCustomPrompt("{DOCUMENT} {QUESTION}")\
    .setMaxNewTokens(50)\
    .setOutputCol("answer")

pipeline = Pipeline(stages=[document_assembler, med_qa])

#doi: 10.3758/s13414-011-0157-z.
paper_abstract = "The visual indexing theory proposed by Zenon Pylyshyn (Cognition, 32, 65-97, 1989) predicts that visual attention mechanisms are employed when mental images are projected onto a visual scene."
long_question = "What is the effect of directing attention on memory?"

data = spark.createDataFrame([[long_question, paper_abstract]]).toDF("question", "context")

result = pipeline.fit(data).transform(data)
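
The generated answer lands in the answer output column configured above; one way to inspect it:

result.select("answer.result").show(truncate=False)
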
val document_assembler = new MultiDocumentAssembler()
    .setInputCols("question", "context")
    .setOutputCols("document_question", "document_context")

val med_qa = MedicalQuestionAnswering.pretrained("flan_t5_base_jsl_qa", "en", "clinical/models")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")
    .setMaxNewTokens(50)
    .setCustomPrompt("{DOCUMENT} {QUESTION}")

val pipeline = new Pipeline().setStages(Array(document_assembler, med_qa))

paper_abstract = "The visual indexing theory proposed by Zenon Pylyshyn (Cognition, 32, 65–97, 1989) predicts that visual attention mechanisms are employed when mental images are projected onto a visual scene. Recent eye-tracking studies have supported this hypothesis by showing that people tend to look at empty places where requested information has been previously presented. However, it has remained unclear to what extent this behavior is related to memory performance. The aim of the present study was to explore whether the manipulation of spatial attention can facilitate memory retrieval. In two experiments, participants were asked first to memorize a set of four objects and then to determine whether a probe word referred to any of the objects. The results of both experiments indicate that memory accuracy is not affected by the current focus of attention and that all the effects of directing attention to specific locations on response times can be explained in terms of stimulus–stimulus and stimulus–response spatial compatibility."

long_question = "What is the effect of directing attention on memory?"
yes_no_question = "Does directing attention improve memory for items?"

val data = Seq( 
    (long_question, paper_abstract, ))
    .toDS.toDF("question", "context")

val result = pipeline.fit(data).transform(data)

Results

+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                                                                                    |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[The effect of directing attention on memory is that it can help to improve memory retention and recall. It can help to reduce the amount of time spent on tasks, such as focusing on one task at a time, or focusing on ]|
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
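
For quick experimentation on single question/context pairs, the fitted pipeline can also be wrapped in a LightPipeline, which annotates in-memory strings without building a DataFrame per query. A minimal sketch, assuming the pipeline, data, long_question, and paper_abstract variables from the Python snippet above:

from sparknlp.base import LightPipeline

# Fit once, then reuse the light model for ad-hoc queries.
light_model = LightPipeline(pipeline.fit(data))

# For multi-document pipelines, fullAnnotate takes the question and the
# context as its two text arguments.
light_result = light_model.fullAnnotate(long_question, paper_abstract)

# Annotations are keyed by output column name; print the answer text.
print(light_result[0]["answer"][0].result)
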

Model Information

Model Name: flan_t5_base_jsl_qa
Compatibility: Healthcare NLP 4.4.2+
License: Licensed
Edition: Official
Language: en
Size: 920.8 MB
Case sensitive: true