Description
This model allows you to classify documents among a list of specific US Securities and Exchange Commission (SEC) filing types: 10-K, 10-Q, 8-K, S-8, 3, 4, and other.
IMPORTANT: This model works on the first 512 tokens of a document, so you do not need to run it on the whole document.
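Because only the first 512 tokens are used, long filings can be truncated before they are sent through the pipeline. The snippet below is a minimal sketch, assuming whitespace splitting as a rough proxy for the model's 512-token window (the underlying BERT tokenizer may split words into more sub-tokens); long_filing_text is a hypothetical variable holding the raw filing text.

# Rough truncation sketch: keep only about the first 512 whitespace tokens.
# Note: BERT's WordPiece tokenization can yield more sub-tokens than this,
# so this is an approximation, not an exact 512-token cut.
truncated_text = " ".join(long_filing_text.split()[:512])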
Predicted Entities
10-K, 10-Q, 8-K, S-8, 3, 4, other
How to use
from johnsnowlabs import nlp, finance

# Converts the raw text column into Spark NLP documents
document_assembler = nlp.DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

# Sentence-level BERT embeddings used as features by the classifier
embeddings = nlp.BertSentenceEmbeddings.pretrained("sent_bert_base_cased", "en")\
    .setInputCols("document")\
    .setOutputCol("sentence_embeddings")

# Pretrained SEC filing-type classifier
doc_classifier = finance.ClassifierDLModel.pretrained("finclf_sec_filings", "en", "finance/models")\
    .setInputCols(["sentence_embeddings"])\
    .setOutputCol("category")

nlpPipeline = nlp.Pipeline(stages=[
    document_assembler,
    embeddings,
    doc_classifier
])

df = spark.createDataFrame([["YOUR TEXT HERE"]]).toDF("text")

model = nlpPipeline.fit(df)
result = model.transform(df)
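Once the pipeline has run, the predicted label can be read from the classifier's output column. This is a minimal sketch using standard Spark DataFrame operations; the column name category comes from the pipeline above.

# Show the predicted filing type for each input document
result.select("category.result").show(truncate=False)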
Results
+-------+
| result|
+-------+
| [10-K]|
|  [8-K]|
| [10-Q]|
|  [S-8]|
|    [3]|
|    [4]|
|[other]|
+-------+
Model Information
| Model Name: | finclf_sec_filings |
| Compatibility: | Finance NLP 1.0.0+ |
| License: | Licensed |
| Edition: | Official |
| Input Labels: | [sentence_embeddings] |
| Output Labels: | [class] |
| Language: | en |
| Size: | 22.8 MB |
References
Scraped filings from the SEC
Benchmarking
| label | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 10-K | 0.97 | 0.82 | 0.89 | 40 |
| 10-Q | 0.94 | 0.94 | 0.94 | 35 |
| 3 | 0.80 | 0.95 | 0.87 | 41 |
| 4 | 0.94 | 0.76 | 0.84 | 42 |
| 8-K | 0.81 | 0.94 | 0.87 | 32 |
| S-8 | 0.91 | 0.93 | 0.92 | 44 |
| other | 0.98 | 0.98 | 0.98 | 41 |
| accuracy | - | - | 0.90 | 275 |
| macro-avg | 0.91 | 0.90 | 0.90 | 275 |
| weighted-avg | 0.91 | 0.90 | 0.90 | 275 |