English BertForQuestionAnswering model

Description

Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. bert-large-cased-whole-word-masking-finetuned-squad is an English model originally trained by HuggingFace.


How to use

from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import BertForQuestionAnswering
from pyspark.ml import Pipeline

document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("bert_qa_bert_large_cased_whole_word_masking_finetuned_squad", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline().setStages([
    document_assembler,
    spanClassifier
])

example = spark.createDataFrame([["What's my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")

result = pipeline.fit(example).transform(example)

import com.johnsnowlabs.nlp.MultiDocumentAssembler
import com.johnsnowlabs.nlp.annotators.classifier.dl.BertForQuestionAnswering
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document = new MultiDocumentAssembler()
.setInputCols("question", "context")
.setOutputCols("document_question", "document_context")

val spanClassifier = BertForQuestionAnswering
.pretrained("bert_qa_bert_large_cased_whole_word_masking_finetuned_squad","en")
.setInputCols(Array("document_question", "document_context"))
.setOutputCol("answer")
.setCaseSensitive(true)
.setMaxSentenceLength(512)

val pipeline = new Pipeline().setStages(Array(document, spanClassifier))

val example = Seq(
("Where was John Lennon born?", "John Lennon was born in London and lived in Paris. My name is Sarah and I live in London."),
("What's my name?", "My name is Clara and I live in Berkeley."))
.toDF("question", "context")

val result = pipeline.fit(example).transform(example)
import nlu
nlu.load("en.answer_question.squad.bert.large_cased").predict("""What's my name?|||My name is Clara and I live in Berkeley.""")
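The NLU one-liner above packs the question and its context into a single string separated by "|||". A minimal sketch of a helper that builds such strings from (question, context) pairs; the function name is hypothetical, for illustration only:

```python
def to_nlu_qa_input(question: str, context: str, sep: str = "|||") -> str:
    """Join a question and its context with the separator used in the NLU example.

    Hypothetical helper, not part of the NLU API.
    """
    return f"{question}{sep}{context}"

print(to_nlu_qa_input("What's my name?", "My name is Clara and I live in Berkeley."))
```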

Model Information

Model Name: bert_qa_bert_large_cased_whole_word_masking_finetuned_squad
Compatibility: Spark NLP 4.0.0+
License: Open Source
Edition: Official
Input Labels: [document_question, document_context]
Output Labels: [answer]
Language: en
Size: 1.2 GB
Case sensitive: true
Max sentence length: 512
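Inputs longer than the 512-token limit above are truncated by the model. A rough pre-check is sketched below; it uses whitespace splitting as an approximation (BERT's WordPiece tokenizer usually produces more tokens than whitespace splitting, so this undercounts), and the function name is hypothetical:

```python
def may_be_truncated(question: str, context: str, max_len: int = 512) -> bool:
    """Return True if the combined input likely exceeds the model's token limit.

    Approximation: whitespace tokens plus the three special tokens in the
    BERT QA input layout ([CLS] question [SEP] context [SEP]).
    """
    n_tokens = len(question.split()) + len(context.split()) + 3
    return n_tokens > max_len

print(may_be_truncated("What's my name?", "My name is Clara and I live in Berkeley."))
```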

References

  • https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad
  • https://github.com/google-research/bert
  • https://arxiv.org/abs/1810.04805
  • https://en.wikipedia.org/wiki/English_Wikipedia
  • https://yknzhu.wixsite.com/mbweb