Description
The dataset consists of 12 documents taken from EUR-Lex, a multilingual corpus of court decisions and legal provisions in the 24 official languages of the European Union.
This model extracts ADDRESS, DATE, ORGANISATION, and PERSON entities from English documents.
Predicted Entities
ADDRESS, DATE, ORGANISATION, PERSON
How to use
document_assembler = nlp.DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence_detector = nlp.SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = nlp.Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
embeddings = nlp.BertEmbeddings.pretrained("bert_embeddings_base_en_cased", "en")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")\
.setMaxSentenceLength(512)\
.setCaseSensitive(True)
ner_model = legal.NerModel.pretrained("legner_mapa", "en", "legal/models")\
.setInputCols(["sentence", "token", "embeddings"])\
.setOutputCol("ner")
ner_converter = nlp.NerConverter()\
.setInputCols(["sentence", "token", "ner"])\
.setOutputCol("ner_chunk")
nlpPipeline = nlp.Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_converter])
empty_data = spark.createDataFrame([[""]]).toDF("text")
model = nlpPipeline.fit(empty_data)
text = ["""From 1 February 2012 until 31 January 2014, thus including the period concerned, Martimpex's workers were posted to Austria to perform the same work."""]
result = model.transform(spark.createDataFrame([text]).toDF("text"))
Results
+---------------+------------+
|chunk |ner_label |
+---------------+------------+
|1 February 2012|DATE |
|31 January 2014|DATE |
|Martimpex's |ORGANISATION|
|Austria |ADDRESS |
+---------------+------------+
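Once the chunks and labels have been collected from the `ner_chunk` column, they can be post-processed in plain Python. A minimal sketch, using the (chunk, label) pairs from the Results table above; the `group_by_label` helper is illustrative and not part of Spark NLP:

```python
from collections import defaultdict

# (chunk, label) pairs as shown in the Results table
entities = [
    ("1 February 2012", "DATE"),
    ("31 January 2014", "DATE"),
    ("Martimpex's", "ORGANISATION"),
    ("Austria", "ADDRESS"),
]

def group_by_label(pairs):
    """Group extracted chunks under their NER label."""
    grouped = defaultdict(list)
    for chunk, label in pairs:
        grouped[label].append(chunk)
    return dict(grouped)

print(group_by_label(entities))
# {'DATE': ['1 February 2012', '31 January 2014'], 'ORGANISATION': ["Martimpex's"], 'ADDRESS': ['Austria']}
```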
Model Information
| Model Name: | legner_mapa |
| Compatibility: | Legal NLP 1.0.0+ |
| License: | Licensed |
| Edition: | Official |
| Input Labels: | [sentence, token, embeddings] |
| Output Labels: | [ner] |
| Language: | en |
| Size: | 1.4 MB |
References
The dataset is available here.
Benchmarking
label precision recall f1-score support
ADDRESS 1.00 1.00 1.00 5
DATE 0.98 1.00 0.99 40
ORGANISATION 0.83 0.71 0.77 14
PERSON 0.98 0.85 0.91 48
macro-avg 0.95 0.89 0.92 107
weighted-avg 0.96 0.90 0.93 107
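The macro and weighted averages can be sanity-checked from the per-label rows: the macro average is the unweighted mean across labels, while the weighted average scales each label by its support. A small sketch using the figures from the table above (note that because the per-label values are rounded to two decimals, the recomputed averages can differ from the reported rows in the last digit):

```python
# Per-label scores from the benchmarking table: (precision, recall, f1, support)
scores = {
    "ADDRESS":      (1.00, 1.00, 1.00, 5),
    "DATE":         (0.98, 1.00, 0.99, 40),
    "ORGANISATION": (0.83, 0.71, 0.77, 14),
    "PERSON":       (0.98, 0.85, 0.91, 48),
}

def macro_avg(i):
    """Unweighted mean of metric i (0=precision, 1=recall, 2=f1) over labels."""
    return sum(v[i] for v in scores.values()) / len(scores)

def weighted_avg(i):
    """Support-weighted mean of metric i over labels."""
    total_support = sum(v[3] for v in scores.values())
    return sum(v[i] * v[3] for v in scores.values()) / total_support

print([round(macro_avg(i), 2) for i in range(3)])     # ~ macro-avg row
print([round(weighted_avg(i), 2) for i in range(3)])  # ~ weighted-avg row
```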