
l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2 (barc0/Llama-3.1-ARC-Potpourri-Induction-8B)

This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the following datasets (a loading sketch is shown after the results below):

  • barc0/induction_heavy_100k_jsonl
  • barc0/induction_heavy_suggestfunction_100k_jsonl
  • barc0/induction_100k-gpt4-description-gpt4omini-code_generated_problems_messages_format_0.3
  • barc0/induction_100k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3

It achieves the following results on the evaluation set:

  • Loss: 0.2709
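
The training mixture above can be assembled with the Hugging Face datasets library. The sketch below is illustrative and assumes each repository exposes a "train" split; the actual data-mixing script is not part of this card.

from datasets import load_dataset, concatenate_datasets

# Dataset repositories listed in this card; the "train" split name is an assumption.
repos = [
    "barc0/induction_heavy_100k_jsonl",
    "barc0/induction_heavy_suggestfunction_100k_jsonl",
    "barc0/induction_100k-gpt4-description-gpt4omini-code_generated_problems_messages_format_0.3",
    "barc0/induction_100k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3",
]

# Load every repository and concatenate into a single training mixture.
train_mix = concatenate_datasets([load_dataset(repo, split="train") for repo in repos])
print(train_mix)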

Prompt Format

We follow the Llama-3.1 instruct chat template.

For example, the ARC public evaluation problem 62ab2642 is converted to the following message list:

[{"role": "system", "content": "You are a world-class puzzle solver with exceptional pattern recognition skills and expertise in Python programming. Your task is to analyze puzzles and provide Python solutions."},
{"role": "user", "content": "Given input-output grid pairs as reference examples, carefully observe the patterns to predict the output grid for new test input. Each pair follows the same transformation rule. Grids are 2D arrays represented as strings, with cells (colors) separated by spaces and rows by newlines.\nHere are the input and output grids for the reference examples:\nExample 1\nInput:\nGray Black Black Gray Black\nGray Black Black Gray Black\nGray Black Gray Gray Gray\nGray Gray Gray Black Black\nBlack Black Gray Black Black\nBlack Black Gray Gray Gray\nBlack Black Black Gray Black\nGray Gray Gray Gray Black\nBlack Gray Black Black Black\nBlack Gray Black Black Black\nBlack Gray Gray Gray Black\nBlack Black Black Gray Black\nBlack Gray Gray Gray Gray\nGray Gray Black Black Black\nBlack Gray Black Black Black\n\nOutput:\nGray Black Black Gray Black\nGray Black Black Gray Black\nGray Black Gray Gray Gray\nGray Gray Gray Black Black\nBlack Black Gray Black Black\nBlack Black Gray Gray Gray\nBlack Black Black Gray Purple\nGray Gray Gray Gray Purple\nBlack Gray Purple Purple Purple\nBlack Gray Purple Purple Purple\nBlack Gray Gray Gray Purple\nBlack Black Black Gray Purple\nBlack Gray Gray Gray Gray\nGray Gray Black Black Black\nOrange Gray Black Black Black\n\n\nExample 2\nInput:\nBlack Black Gray Black Black Gray Black Black Black\nBlack Black Gray Gray Gray Gray Black Black Black\nGray Gray Gray Black Black Black Black Black Black\nBlack Gray Black Black Black Black Black Black Black\nBlack Gray Black Black Black Gray Gray Gray Gray\nBlack Gray Gray Gray Gray Gray Black Black Black\nGray Gray Black Black Black Gray Gray Gray Gray\nBlack Black Black Black Black Gray Black Black Black\nGray Gray Gray Gray Gray Gray Black Black Black\nBlack Black Black Black Black Gray Black Black Black\n\nOutput:\nBlack Black Gray Orange Orange Gray Purple Purple Purple\nBlack Black Gray Gray Gray Gray Purple Purple Purple\nGray Gray Gray Purple Purple Purple Purple Purple Purple\nBlack Gray Purple Purple Purple Purple Purple Purple Purple\nBlack Gray Purple Purple Purple Gray Gray Gray Gray\nBlack Gray Gray Gray Gray Gray Black Black Black\nGray Gray Black Black Black Gray Gray Gray Gray\nBlack Black Black Black Black Gray Black Black Black\nGray Gray Gray Gray Gray Gray Black Black Black\nBlack Black Black Black Black Gray Black Black Black\n\n\nExample 3\nInput:\nBlack Gray Black Black Gray Black Black Black Black Gray Black Black\nBlack Gray Black Black Gray Gray Gray Black Black Gray Black Black\nBlack Gray Gray Gray Gray Black Gray Black Black Gray Black Black\nBlack Black Gray Black Black Black Gray Gray Gray Gray Black Black\nGray Gray Gray Black Black Black Gray Black Black Gray Gray Gray\nBlack Black Black Black Black Black Gray Black Black Black Black Black\nBlack Black Black Gray Gray Gray Gray Black Black Black Black Black\nGray Gray Gray Gray Black Black Gray Black Black Black Black Black\nBlack Black Black Gray Black Black Gray Gray Gray Black Black Black\nBlack Black Black Gray Black Black Black Black Gray Black Black Black\n\nOutput:\nBlack Gray Orange Orange Gray Black Black Black Black Gray Black Black\nBlack Gray Orange Orange Gray Gray Gray Black Black Gray Black Black\nBlack Gray Gray Gray Gray Black Gray Black Black Gray Black Black\nBlack Black Gray Black Black Black Gray Gray Gray Gray Black Black\nGray Gray Gray Black Black Black Gray Purple Purple Gray Gray Gray\nBlack Black Black Black Black Black Gray Purple Purple Purple Purple Purple\nBlack Black Black Gray Gray Gray Gray Purple Purple Purple Purple Purple\nGray Gray Gray Gray Black Black Gray Purple Purple Purple Purple Purple\nBlack Black Black Gray Black Black Gray Gray Gray Purple Purple Purple\nBlack Black Black Gray Black Black Black Black Gray Purple Purple Purple\n\n\nHere is the input grid for the test example:\nInput:\nBlack Gray Black Black Black Black Black Gray Black Black Gray Black\nBlack Gray Black Black Black Gray Gray Gray Black Gray Gray Black\nGray Gray Gray Black Black Gray Black Gray Gray Gray Black Black\nBlack Black Gray Gray Gray Gray Black Gray Black Gray Gray Black\nBlack Black Black Gray Black Black Black Gray Black Black Gray Black\n\nWrite a Python function `transform` that can convert any given input grid to its corresponding output grid based on the pattern observed in the reference examples."}
]
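
Below is a minimal inference sketch using the Transformers chat-template API. The grid_to_text helper and the sampling settings are illustrative assumptions; only the system prompt and the message layout come from the example above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "barc0/Llama-3.1-ARC-Potpourri-Induction-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative helper (assumption): render a grid of color names in the
# space-separated / newline-separated string format described above.
def grid_to_text(grid):
    return "\n".join(" ".join(row) for row in grid)

messages = [
    {"role": "system", "content": "You are a world-class puzzle solver with exceptional pattern recognition skills and expertise in Python programming. Your task is to analyze puzzles and provide Python solutions."},
    # Build the user turn as in the 62ab2642 example above, using grid_to_text
    # for each reference input/output grid and for the test input grid.
    {"role": "user", "content": "..."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

The completion is expected to contain a Python `transform` function, which can then be executed against the test input grid.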

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a corresponding TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2
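
These settings roughly correspond to a Transformers TrainingArguments configuration like the sketch below. The actual training script is not included in this card, so any field not listed above (for example bf16 and optim) is an assumption; with 8 devices at a per-device batch size of 16, the total train batch size of 128 is reached without gradient accumulation.

from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters; not the authors' script.
training_args = TrainingArguments(
    output_dir="l3.1-8b-inst-fft-induction-barc-heavy-200k-old-200k-lr1e-5-ep2",
    learning_rate=1e-5,
    per_device_train_batch_size=16,   # train_batch_size
    per_device_eval_batch_size=16,    # eval_batch_size
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                        # assumption, consistent with the BF16 weights
    optim="adamw_torch",              # Adam with betas=(0.9, 0.999) and epsilon=1e-8 (defaults)
)
# Effective train batch size: 8 devices x 16 per device = 128.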

Training results

Training Loss | Epoch | Step | Validation Loss
0.2817        | 1.0   | 2995 | 0.2818
0.2432        | 2.0   | 5990 | 0.2709

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.1+cu124
  • Datasets 3.0.2
  • Tokenizers 0.19.1
Model size: 8.03B params (Safetensors, BF16)
