Description
Given a clause classified as NON_COMP by the legmulticlf_mnda_sections_paragraph_other classifier, you can further subclassify its sentences as NON_COMPETE_ITEMS or OTHER using the legclf_nda_non_compete_items model.
Predicted Entities
NON_COMPETE_ITEMS, OTHER
How to use
from johnsnowlabs import nlp, legal
import pandas as pd

# Start a Spark session (licensed Legal NLP models also require valid credentials).
spark = nlp.start()

document_assembler = nlp.DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sentence_embeddings = nlp.UniversalSentenceEncoder.pretrained()\
    .setInputCols("document")\
    .setOutputCol("sentence_embeddings")

classifier = legal.ClassifierDLModel.pretrained("legclf_nda_non_compete_items", "en", "legal/models")\
    .setInputCols(["sentence_embeddings"])\
    .setOutputCol("category")

nlpPipeline = nlp.Pipeline(stages=[
    document_assembler,
    sentence_embeddings,
    classifier
])

# All stages are pretrained, so fitting on an empty DataFrame does no training;
# it only assembles the PipelineModel.
empty_data = spark.createDataFrame([[""]]).toDF("text")
model = nlpPipeline.fit(empty_data)

text_list = ["""This Agreement will be binding upon and inure to the benefit of each Party and its respective heirs, successors and assigns""",
             """Activity that is in direct competition with the Company's business, including but not limited to developing, marketing, or selling products or services that are similar to those of the Company."""]

df = spark.createDataFrame(pd.DataFrame({"text": text_list}))
result = model.transform(df)
Results
+--------------------------------------------------------------------------------+-----------------+
| text| class|
+--------------------------------------------------------------------------------+-----------------+
|This Agreement will be binding upon and inure to the benefit of each Party an...| OTHER|
|Activity that is in direct competition with the Company's business, including...|NON_COMPETE_ITEMS|
+--------------------------------------------------------------------------------+-----------------+
Model Information
Model Name: legclf_nda_non_compete_items
Compatibility: Legal NLP 1.0.0+
License: Licensed
Edition: Official
Input Labels: [sentence_embeddings]
Output Labels: [class]
Language: en
Size: 22.5 MB
References
In-house annotations on non-disclosure agreements.
Benchmarking
label precision recall f1-score support
NON_COMPETE_ITEMS 0.95 1.00 0.97 18
OTHER 1.00 0.95 0.97 20
accuracy - - 0.97 38
macro-avg 0.97 0.97 0.97 38
weighted-avg 0.98 0.97 0.97 38
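The aggregate rows above follow directly from the per-label scores. Because the task is binary, each label's recall and support pin down the confusion matrix (18/18 NON_COMPETE_ITEMS correct; 19/20 OTHER correct, 1 misrouted to NON_COMPETE_ITEMS), so the averages can be recomputed as a sanity check. A standalone Python sketch, independent of the model itself:

```python
# Confusion counts implied by the benchmark table (binary task, so they
# are fully determined by recall and support for each label).
counts = {
    "NON_COMPETE_ITEMS": {"tp": 18, "fp": 1, "fn": 0, "support": 18},
    "OTHER":             {"tp": 19, "fp": 0, "fn": 1, "support": 20},
}

def prf(c):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = c["tp"] / (c["tp"] + c["fp"])
    recall = c["tp"] / (c["tp"] + c["fn"])
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

scores = {label: prf(c) for label, c in counts.items()}
total = sum(c["support"] for c in counts.values())

# accuracy: overall fraction of correct predictions.
accuracy = sum(c["tp"] for c in counts.values()) / total
# macro-avg: unweighted mean of the per-label F1 scores.
macro_f1 = sum(f1 for _, _, f1 in scores.values()) / len(scores)
# weighted-avg: per-label F1 weighted by support.
weighted_f1 = sum(counts[l]["support"] * scores[l][2] for l in counts) / total
```

Rounding to two decimals reproduces the table, including the weighted-avg precision of 0.98, which is pulled up by OTHER's perfect precision on the larger support.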