
Model Overview

This model is a fine-tuned variant of Llama-3.2-1B, trained with ORPO (Odds Ratio Preference Optimization) on the mlabonne/orpo-dpo-mix-40k preference dataset as part of the Finetuning Open Source LLMs Course - Week 2 Project.
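
For reference, the sketch below shows how ORPO fine-tuning on this dataset could be set up with Hugging Face TRL's ORPOTrainer (assuming a recent TRL release). The hyperparameters and data handling are illustrative assumptions, not the actual training configuration used for this model.

```python
# Illustrative ORPO fine-tuning sketch (not the exact recipe used for this model).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference pairs (prompt / chosen / rejected); any chat-template
# formatting is omitted here for brevity.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="week2-llama3.2-1B",
    beta=0.1,                        # weight of the odds-ratio penalty (assumed)
    learning_rate=8e-6,              # assumed, not the actual value
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```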

Intended Use

This model is intended for general-purpose language tasks, including parsing and understanding text, following contextual prompts, and other natural language processing applications.
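
A minimal usage sketch with the transformers library is shown below; the prompt and generation settings are assumptions for illustration, not prescribed values.

```python
import torch
from transformers import pipeline

# Load the fine-tuned model from the Hub.
generator = pipeline(
    "text-generation",
    model="savanladani/week2-llama3.2-1B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Summarize the main idea of preference-based fine-tuning in one paragraph."
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```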

Evaluation Results

The model was evaluated zero-shot on the following benchmarks:

| Tasks     | Version | Filter | n-shot | Metric                     | Value   | Stderr   |
|-----------|---------|--------|--------|----------------------------|---------|----------|
| eq_bench  | 2.1     | none   | 0      | eqbench ↑                  | 1.5355  | ± 0.9174 |
|           |         | none   | 0      | percent_parseable ↑        | 16.9591 | ± 2.8782 |
| hellaswag | 1       | none   | 0      | acc ↑                      | 0.4812  | ± 0.0050 |
|           |         | none   | 0      | acc_norm ↑                 | 0.6467  | ± 0.0048 |
| ifeval    | 4       | none   | 0      | inst_level_loose_acc ↑     | 0.3993  | N/A      |
|           |         | none   | 0      | inst_level_strict_acc ↑    | 0.2974  | N/A      |
|           |         | none   | 0      | prompt_level_loose_acc ↑   | 0.2754  | ± 0.0192 |
|           |         | none   | 0      | prompt_level_strict_acc ↑  | 0.1848  | ± 0.0167 |
| tinyMMLU  | 0       | none   | 0      | acc_norm ↑                 | 0.3996  | N/A      |
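
The table above follows the output format of EleutherAI's lm-evaluation-harness. The sketch below shows how a comparable zero-shot run could be launched from Python; the task names and model arguments are assumptions inferred from the table, not the exact command used.

```python
import lm_eval

# Zero-shot evaluation on the benchmarks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=savanladani/week2-llama3.2-1B,dtype=bfloat16",
    tasks=["eq_bench", "hellaswag", "ifeval", "tinyMMLU"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```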

Key Features

  • Model Size: 1.24B parameters (BF16)
  • Fine-tuning Method: ORPO
  • Dataset: mlabonne/orpo-dpo-mix-40k