Chinese BertForQuestionAnswering model (from uer)

Description

Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. roberta-base-chinese-extractive-qa is a Chinese model originally trained by uer.


How to use

from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import BertForQuestionAnswering
from pyspark.ml import Pipeline

# Assemble the raw question and context strings into annotated documents
document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Load the pretrained extractive QA model for Chinese
spanClassifier = BertForQuestionAnswering.pretrained("bert_qa_roberta_base_chinese_extractive_qa", "zh") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline().setStages([
    document_assembler,
    spanClassifier
])

example = spark.createDataFrame([["What's my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")

result = pipeline.fit(example).transform(example)
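
The transformed DataFrame carries the prediction as Spark NLP annotations in the answer column. As a minimal sketch (assuming the pipeline above has already been fit and applied as shown), the extracted answer text can be inspected with:

result.select("answer.result").show(truncate=False)

The same pipeline can also be expressed in Scala: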
import com.johnsnowlabs.nlp.MultiDocumentAssembler
import com.johnsnowlabs.nlp.annotators.classifier.dl.BertForQuestionAnswering
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Assemble the raw question and context strings into annotated documents
val document = new MultiDocumentAssembler()
  .setInputCols("question", "context")
  .setOutputCols("document_question", "document_context")

// Load the pretrained extractive QA model for Chinese
val spanClassifier = BertForQuestionAnswering
  .pretrained("bert_qa_roberta_base_chinese_extractive_qa", "zh")
  .setInputCols(Array("document_question", "document_context"))
  .setOutputCol("answer")
  .setCaseSensitive(true)
  .setMaxSentenceLength(512)

val pipeline = new Pipeline().setStages(Array(document, spanClassifier))

val example = Seq(
  ("Where was John Lennon born?", "John Lennon was born in London and lived in Paris. My name is Sarah and I live in London."),
  ("What's my name?", "My name is Clara and I live in Berkeley."))
  .toDF("question", "context")

val result = pipeline.fit(example).transform(example)
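
The same model is also available through the NLU library, where a single call loads it and runs prediction: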
import nlu
nlu.load("zh.answer_question.bert.base.by_uer").predict("""What's my name?|||My name is Clara and I live in Berkeley.""")
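
The predict call returns a pandas DataFrame containing the extracted answer. A minimal sketch of capturing it (the variable name is illustrative, and the exact output column names depend on the NLU version):

import nlu

# ||| separates the question from the context for NLU question-answering models
answers = nlu.load("zh.answer_question.bert.base.by_uer").predict(
    "What's my name?|||My name is Clara and I live in Berkeley."
)
print(answers)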

Model Information

Model Name: bert_qa_roberta_base_chinese_extractive_qa
Compatibility: Spark NLP 4.0.0+
License: Open Source
Edition: Official
Input Labels: [document_question, document_context]
Output Labels: [answer]
Language: zh
Size: 381.4 MB
Case sensitive: true
Max sentence length: 512

References

  • https://huggingface.co/uer/roberta-base-chinese-extractive-qa
  • https://spaces.ac.cn/archives/4338
  • https://www.kesci.com/home/competition/5d142d8cbb14e6002c04e14a/content/0
  • https://github.com/dbiir/UER-py/
  • https://cloud.tencent.com/product/tione/
  • https://github.com/ymcui/cmrc2018