---
language:
  - id
license: apache-2.0
tags:
  - Indonesian
  - Chat
  - Instruct
  - unsloth
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
datasets:
  - NekoFi/alpaca-gpt4-indonesia-cleaned
pipeline_tag: text-generation
model-index:
  - name: FinMatcha-3B-Instruct
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 60.85
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 6.32
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 10.2
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0.34
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 6.62
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 16.04
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct
          name: Open LLM Leaderboard
---

# FinMatcha-3B-Instruct

FinMatcha is an Indonesian-focused large language model (LLM) fine-tuned from the Llama-3.2-3B-Instruct base model. It has been trained to handle a variety of natural language processing tasks, such as text generation, summarization, translation, and question answering, with a special emphasis on understanding and generating Indonesian text.

The model was fine-tuned on Indonesian instruction data, including the NekoFi/alpaca-gpt4-indonesia-cleaned dataset, making it adept at handling the nuances of the Indonesian language, from formal to colloquial registers. It also supports English for bilingual applications.

## Model Details

## How to use

### Installation

To use the FinMatcha model, install the required dependencies:

```bash
pip install "transformers>=4.45"
```
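
The usage example below loads the model with `device_map="auto"`, which relies on the Accelerate library; install it as well if it is not already present:

```bash
pip install accelerate
```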

### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"

# Load in half precision and let Accelerate place the weights on available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Llama tokenizers ship without a pad token; fall back to EOS for padding
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer("Bagaimanakah sebuah negara dapat terbentuk?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,  # passes input_ids and attention_mask together
    max_new_tokens=1024,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    do_sample=True,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
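
Because FinMatcha is instruction-tuned, you will often get better answers by formatting the prompt with the tokenizer's built-in chat template rather than passing raw text. A minimal sketch, reusing the `model` and `tokenizer` loaded above with the same sampling settings:

```python
# Format the prompt with the model's chat template before generating
messages = [
    {"role": "user", "content": "Bagaimanakah sebuah negara dapat terbentuk?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model replies
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```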

## Limitations

- The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks.
- As with all LLMs, cultural and contextual biases can be present.

## License

The model is licensed under Apache-2.0.

## Contributing

We welcome contributions to enhance and improve FinMatcha. Feel free to open issues or submit pull requests for improvements.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=xMaulana/FinMatcha-3B-Instruct).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 16.73 |
| IFEval (0-Shot)     | 60.85 |
| BBH (3-Shot)        |  6.32 |
| MATH Lvl 5 (4-Shot) | 10.20 |
| GPQA (0-shot)       |  0.34 |
| MuSR (0-shot)       |  6.62 |
| MMLU-PRO (5-shot)   | 16.04 |
