Description
Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-large-squad2` is an English model originally trained by deepset.
How to use
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import XlmRoBertaForQuestionAnswering
from pyspark.ml import Pipeline

document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("xlm_roberta_qa_xlm_roberta_large_squad2", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline().setStages([
    document_assembler,
    spanClassifier
])

example = spark.createDataFrame(
    [["What's my name?", "My name is Clara and I live in Berkeley."]]
).toDF("question", "context")

result = pipeline.fit(example).transform(example)

# Inspect the predicted answer text
result.select("answer.result").show(truncate=False)
Model Information
| Model Name: | xlm_roberta_qa_xlm_roberta_large_squad2 |
| Compatibility: | Spark NLP 4.0.0+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [question, context] |
| Output Labels: | [answer] |
| Language: | en |
| Size: | 1.9 GB |
| Case sensitive: | true |
| Max sentence length: | 512 |
References
- https://huggingface.co/deepset/xlm-roberta-large-squad2
- https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/
- https://www.linkedin.com/company/deepset-ai/
- https://twitter.com/deepset_ai
- http://www.deepset.ai/jobs
- https://haystack.deepset.ai/community/join
- https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20
- https://github.com/deepset-ai/haystack/
- https://github.com/deepset-ai/FARM
- https://deepset.ai/germanquad
- https://github.com/deepmind/xquad
- https://deepset.ai
- https://deepset.ai/german-bert
- https://github.com/deepset-ai/haystack/discussions
- https://github.com/facebookresearch/MLQA