---
license: apache-2.0
datasets:
  - mlabonne/orpo-dpo-mix-40k
language:
  - en
base_model:
  - meta-llama/Llama-3.2-1B
library_name: transformers
pipeline_tag: text-generation
model-index:
  - name: week2-llama3-1B
    results:
      - task:
          type: text-generation
        dataset:
          name: mlabonne/orpo-dpo-mix-40k
          type: mlabonne/orpo-dpo-mix-40k
        metrics:
          - name: EQ-Bench (0-Shot)
            type: EQ-Bench (0-Shot)
            value: 1.5355
---

## Model Overview

This model is a fine-tuned variant of Llama-3.2-1B, trained with ORPO (Odds Ratio Preference Optimization). It was fine-tuned on the mlabonne/orpo-dpo-mix-40k dataset as part of the Finetuning Open Source LLMs Course - Week 2 Project.
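
For orientation, here is a minimal sketch of what an ORPO fine-tuning run on this dataset could look like with TRL. The hyperparameters are placeholders and the `ORPOTrainer` argument names vary across trl versions; this is not the exact recipe used to produce this checkpoint.

```python
# Illustrative ORPO fine-tuning sketch with TRL (not the exact training recipe for this model).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# The dataset provides chosen/rejected preference pairs, which is what ORPO consumes.
# Depending on the trl version you may need to flatten the conversational format first.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

# Placeholder hyperparameters, not the values used for this checkpoint.
config = ORPOConfig(
    output_dir="week2-llama3.2-1B",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl versions use processing_class= instead
)
trainer.train()
```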

## Intended Use

This model is intended for general-purpose language tasks, including parsing text, following contextual prompts, and other natural language processing applications.
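
A quick way to try the model with the transformers text-generation pipeline is sketched below. The repository id is assumed from this card's location; adjust it if the model lives elsewhere.

```python
# Quick inference sketch using the transformers pipeline.
from transformers import pipeline

# Repository id assumed from this model card's location.
generator = pipeline("text-generation", model="savanladani/week2-llama3.2-1B")

output = generator(
    "Explain what preference optimization does for a language model.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```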

## Evaluation Results (EQ-Bench v2.1)

The model was evaluated on the EQ-Bench dataset, with the following performance metrics:

| Tasks    | Version | Filter | n-shot | Metric            |   Value |   | Stderr |
|----------|---------|--------|--------|-------------------|--------:|---|-------:|
| eq_bench | 2.1     | none   | 0      | eqbench           |  1.5355 | ± | 0.9174 |
|          |         | none   | 0      | percent_parseable | 16.9591 | ± | 2.8782 |
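
The table above follows the output format of EleutherAI's lm-evaluation-harness. A sketch of how a comparable run could be reproduced is shown below; the harness version and exact arguments used for this evaluation are not documented here, so treat the call as an assumption.

```python
# Sketch: running EQ-Bench with lm-evaluation-harness (arguments are assumptions).
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=savanladani/week2-llama3.2-1B",  # repository id assumed
    tasks=["eq_bench"],
    num_fewshot=0,
)
print(results["results"]["eq_bench"])
```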

## Key Features

- Model Size: 1 billion parameters
- Fine-tuning Method: ORPO (Odds Ratio Preference Optimization)
- Dataset: mlabonne/orpo-dpo-mix-40k
- Benchmark: EQ-Bench v2.1, 0-shot