Financial Target Sentiments (Codalab)

Description

This Spanish text classifier identifies, from the viewpoint of a given target, whether a financial statement is positive, neutral, or negative. The model was trained for the IBERLEF 2023 shared task FinancES: Financial Targeted Sentiment Analysis in Spanish, using the participation dataset, a small subset of the main corpus.

Predicted Entities

positive, neutral, negative

How to use

# Converts raw text into a document annotation
documentAssembler = nlp.DocumentAssembler()\
  .setInputCol("text")\
  .setOutputCol("document")

# Splits the document into tokens
tokenizer = nlp.Tokenizer()\
  .setInputCols("document")\
  .setOutputCol("token")

# Pretrained BERT sequence classifier for Spanish financial target sentiment
sequenceClassifier = finance.BertForSequenceClassification.pretrained("finclf_bert_target_sentiments", "es", "finance/models")\
  .setInputCols("token", "document")\
  .setOutputCol("class")\
  .setCaseSensitive(True)

pipeline = nlp.Pipeline(
    stages=[
        documentAssembler,
        tokenizer,
        sequenceClassifier
    ]
)

Results

+-------------------------------------------------------------------------------------------+----------+
|text                                                                                       |result    |
+-------------------------------------------------------------------------------------------+----------+
|La deuda de las familias cae en 25.000 millones en 2015 y marca niveles previos a la crisis|[positive]|
+-------------------------------------------------------------------------------------------+----------+

Model Information

Model Name: finclf_bert_target_sentiments
Compatibility: Finance NLP 1.0.0+
License: Licensed
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: es
Size: 412.2 MB
Case sensitive: true
Max sentence length: 128

References

https://codalab.lisn.upsaclay.fr/competitions/10052#learn_the_details

Benchmarking

labels      precision    recall  f1-score   support
negative        0.81      0.66      0.73        44
neutral         1.00      0.22      0.36         9
positive        0.82      0.93      0.87       105
accuracy          -        -        0.82       158
macro-avg       0.87      0.60      0.65       158
weighted-avg    0.82      0.82      0.80       158