Legal Question Answering (BERT)

Description

This is a legal BERT-based Question Answering model, trained on SQuAD 2.0 and fine-tuned on proprietary legal questions and answers. Given a question and a context passage, it returns the span of the context that answers the question.

Predicted Entities


How to use

# Assumes a licensed John Snow Labs environment: the `nlp` module comes from
# the johnsnowlabs package and the Spark session is started with nlp.start()
# (a valid Legal NLP license is required).
from johnsnowlabs import nlp

spark = nlp.start()

# Wrap the question and context strings as document annotations.
documentAssembler = nlp.MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Load the pretrained legal QA model and extract the answer span.
spanClassifier = nlp.BertForQuestionAnswering.pretrained("legqa_bert", "en", "legal/models") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = nlp.Pipeline().setStages([
    documentAssembler,
    spanClassifier
])

example = spark.createDataFrame([
    ["Who was subjected to torture?",
     "The applicant submitted that her husband was subjected to treatment amounting to abuse whilst in the custody of police."]
]).toDF("question", "context")

result = pipeline.fit(example).transform(example)

result.select("answer.result").show()

Results

`her husband`
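For programmatic access, the answer column can be exploded into one row per prediction. The sketch below uses only the standard Spark NLP annotation schema (result plus metadata) and the result DataFrame produced by the pipeline above.

from pyspark.sql import functions as F

# One row per predicted answer, with the raw annotation metadata
# shown alongside the answer text (sketch).
result.select(F.explode("answer").alias("ans")) \
    .select(F.col("ans.result").alias("answer"),
            F.col("ans.metadata").alias("metadata")) \
    .show(truncate=False)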

Model Information

Model Name: legqa_bert
Compatibility: Legal NLP 1.0.0+
License: Licensed
Edition: Official
Input Labels: [document_question, document_context]
Output Labels: [answer]
Language: en
Size: 407.9 MB
Case sensitive: true
Max sentence length: 512
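
The model is case sensitive and accepts inputs up to a maximum sentence length of 512 tokens. Batch inference works the same way as the single example above; the sketch below runs the fitted pipeline over several rows at once, using made-up, illustrative question/context pairs.

# Illustrative sketch: scoring several question/context pairs with the
# pipeline defined in the "How to use" section.
examples = spark.createDataFrame([
    ["Who lodged the complaint?",
     "The complaint was lodged by the applicant's legal representative before the domestic court."],
    ["What did the respondent State argue?",
     "The respondent State argued that domestic remedies had not been exhausted."]
]).toDF("question", "context")

pipeline.fit(examples).transform(examples) \
    .select("question", "answer.result") \
    .show(truncate=False)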

References

Trained on SQuAD 2.0 and fine-tuned on proprietary legal questions and answers.