
Fin-Pythia-1.4B is an instruction-finetuned model for sentiment analysis of financial text. It is built by (1) further training the Pythia-1.4B model on financial documents, then (2) instruction fine-tuning it on financial tasks. Although the model is designed for sentiment analysis, it also performs well on other tasks such as named entity recognition (see our FinNLP 2023 paper). Fin-Pythia-1.4B's performance on financial sentiment analysis is on par with much larger financial LLMs and exceeds that of general-purpose models such as GPT-4:

| Models          | FPB  | FIQA-SA | Headlines | NER  |
|-----------------|------|---------|-----------|------|
| BloombergGPT    | 0.51 | 0.75    | 0.82      | 0.61 |
| GPT-4           | 0.78 | -       | 0.86      | 0.83 |
| FinMA-7B        | 0.86 | 0.84    | 0.98      | 0.75 |
| FinMA-30B       | 0.88 | 0.87    | 0.97      | 0.62 |
| Fin-Pythia-1.4B | 0.84 | 0.83    | 0.97      | 0.69 |

Usage

Your prompt should follow this format:

    prompt = "\n".join([
        '### Instruction: YOUR_INSTRUCTION',
        '### Text: YOUR_SENTENCE',
        '### Answer:'])

For example:

    ### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.
    ### Text: The economic uncertainty caused by the ongoing trade tensions between major global economies has led to a sharp decline in investor confidence, resulting in a significant drop in the stock market.
    ### Answer:
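
Below is a minimal sketch of running the model on this prompt with the Hugging Face transformers library. The repository id is a placeholder and the generation settings are assumptions; adapt both to your setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder: point this at your local or hub copy of Fin-Pythia-1.4B.
    model_name = "PATH_OR_HUB_ID_OF_FIN_PYTHIA_1.4B"
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

    prompt = "\n".join([
        '### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.',
        '### Text: The economic uncertainty caused by the ongoing trade tensions between major global economies has led to a sharp decline in investor confidence, resulting in a significant drop in the stock market.',
        '### Answer:'])

    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(device)
    with torch.no_grad():
        # Greedy decoding; a few new tokens are enough for a single sentiment word.
        output_ids = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    # Decode only the tokens generated after the prompt.
    answer = tokenizer.decode(output_ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True)
    print(answer.strip())

Only the continuation generated after the prompt is decoded, so the printed output is just the model's answer.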

You can also restrict the model's output to the sentiment labels by comparing their next-token logits directly, as in the following example code:

    # Reuses the tokenizer, model, and device loaded above.
    prompt = "### Instruction: Analyze the sentiment of this statement extracted from a financial news article. Provide your answer as either negative, positive, or neutral.\n### Text: XYZ reported record-breaking profits for the quarter, exceeding analyst expectations and driving their stock price to new highs.\n### Answer:"
    target_classes = ["positive", "negative", "neutral"]

    # Ids of the candidate answer tokens.
    target_class_ids = tokenizer.convert_tokens_to_ids(target_classes)
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(device)
    with torch.no_grad():
        outputs = model(inputs.input_ids)
    # Keep the candidate class with the highest next-token logit.
    top_output = outputs.logits[0][-1][target_class_ids].argmax(dim=0)
    print(target_classes[top_output])
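
Because the comparison is limited to the logits of the three class tokens, the output is always one of the expected labels and no parsing of free-form generations is needed.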

Citation

@misc{lc_finnlp2023,
      title={Large Language Model Adaptation for Financial Sentiment Analysis}, 
      author={Rodriguez Inserte Pau and Nakhlé Mariam and Qader Raheel and Caillaut Gaëtan and Liu Jingshu}, 
      year={2023},
}

About Lingua Custodia

Lingua Custodia is a Paris-based fintech company and a leader in Natural Language Processing (NLP) for finance. It was founded in 2011 by finance professionals, initially to offer specialized machine translation.

Leveraging its state-of-the-art NLP expertise, the company now offers a growing range of applications beyond its initial machine translation offering: speech-to-text automation, document classification, linguistic data extraction from unstructured documents, and mass web crawling and data collection. Its highly domain-focused machine learning algorithms allow it to achieve superior quality.

Contact information

contact[at]linguacustodia[dot]com
