---
license: cc-by-nc-4.0
base_model: google/gemma-2b-it
tags:
  - generated_from_trainer
  - axolotl
  - gemma
  - instruct
  - finetune
  - chatml
  - gpt4
  - synthetic data
  - distillation
model-index:
  - name: gemma-2b-openhermes
    results: []
datasets:
  - mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# gemma-2b-openhermes


**gemma-2b-openhermes** is a variant of Google's [gemma-2b-it](https://huggingface.co/google/gemma-2b-it) model, further fine-tuned with DPO using QLoRA on the [mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha](https://huggingface.co/datasets/mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha) preference dataset.


## Usage

### Chat Template

This instruction-tuned model uses a chat template that must be followed for conversational use. The easiest way to apply it is with the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "abideen/gemma-2b-openhermes"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

# Build a single-turn conversation and render it with the chat template
chat = [{"role": "user", "content": "What is a Language Model?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
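Because the fine-tune was aligned on ChatML-formatted data (`chat_template: chatml` in the Axolotl configuration below), the rendered prompt should look roughly like the following. The exact string depends on the chat template the tokenizer actually ships, so treat this as illustrative:

```
<|im_start|>user
What is a Language Model?<|im_end|>
<|im_start|>assistant
```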

After the prompt is ready, generation can be performed like this:

```python
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=250)
print(tokenizer.decode(outputs[0]))
```
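For less deterministic output, the standard sampling arguments of `generate` can be used. The settings below are illustrative defaults, not values recommended by this card:

```python
# Sampled generation (illustrative settings, not from the card)
outputs = model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=250,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping special tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```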

### Inputs and outputs

- **Input:** a text string, such as a question, a prompt, or a document to be summarized.
- **Output:** generated English-language text in response to the input, such as an answer to a question or a summary of a document (see the pipeline sketch below).
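For plain text-in/text-out use as described above, the model can also be driven through the high-level `pipeline` API. This is a minimal sketch; the device and dtype choices are assumptions, not recommendations from the card:

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="abideen/gemma-2b-openhermes",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Plain-string prompt; for multi-turn chat, render the chat template first
out = pipe("What is a Language Model?", max_new_tokens=100)
print(out[0]["generated_text"])
```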

## 🏆 Evaluation results

### Nous Benchmark

#### AGIEval

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| agieval_aqua_rat | 0 | acc | 24.02 | ± 2.69 |
| agieval_aqua_rat | 0 | acc_norm | 24.02 | ± 2.69 |
| agieval_logiqa_en | 0 | acc | 23.20 | ± 1.66 |
| agieval_logiqa_en | 0 | acc_norm | 24.42 | ± 1.69 |
| agieval_lsat_ar | 0 | acc | 18.26 | ± 2.55 |
| agieval_lsat_ar | 0 | acc_norm | 18.70 | ± 2.58 |
| agieval_lsat_lr | 0 | acc | 22.35 | ± 1.85 |
| agieval_lsat_lr | 0 | acc_norm | 23.53 | ± 1.88 |
| agieval_lsat_rc | 0 | acc | 20.82 | ± 2.48 |
| agieval_lsat_rc | 0 | acc_norm | 20.07 | ± 2.45 |
| agieval_sat_en | 0 | acc | 32.52 | ± 3.27 |
| agieval_sat_en | 0 | acc_norm | 32.52 | ± 3.27 |
| agieval_sat_en_without_passage | 0 | acc | 25.73 | ± 3.05 |
| agieval_sat_en_without_passage | 0 | acc_norm | 24.27 | ± 2.99 |
| agieval_sat_math | 0 | acc | 25.00 | ± 2.93 |
| agieval_sat_math | 0 | acc_norm | 20.91 | ± 2.75 |

Average: 24.11

#### GPT4All

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| arc_challenge | 0 | acc | 21.77 | ± 1.21 |
| arc_challenge | 0 | acc_norm | 24.15 | ± 1.25 |
| arc_easy | 0 | acc | 37.37 | ± 0.99 |
| arc_easy | 0 | acc_norm | 36.95 | ± 0.99 |
| boolq | 1 | acc | 65.60 | ± 0.83 |
| hellaswag | 0 | acc | 34.54 | ± 0.47 |
| hellaswag | 0 | acc_norm | 40.54 | ± 0.49 |
| openbookqa | 0 | acc | 15.00 | ± 1.59 |
| openbookqa | 0 | acc_norm | 27.40 | ± 2.00 |
| piqa | 0 | acc | 60.88 | ± 1.14 |
| piqa | 0 | acc_norm | 60.55 | ± 1.14 |
| winogrande | 0 | acc | 50.91 | ± 1.41 |

Average: 40.01

#### BigBench

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| bigbench_causal_judgement | 0 | MCG | 50.00 | 2.26 |
| bigbench_date_understanding | 0 | MCG | 49.14 | 2.18 |
| bigbench_disambiguation_qa | 0 | MCG | 49.31 | 2.74 |
| bigbench_geometric_shapes | 0 | MCG | 14.18 | 1.37 |
| bigbench_logical_deduction_5objs | 0 | MCG | 49.41 | 2.73 |
| bigbench_logical_deduction_7objs | 0 | MCG | 41.48 | 2.46 |
| bigbench_logical_deduction_3objs | 0 | MCG | 69.33 | 2.75 |
| bigbench_movie_recommendation | 0 | MCG | 51.71 | 2.25 |
| bigbench_navigate | 0 | MCG | 50.00 | 1.58 |
| bigbench_reasoning_colored_obj | 0 | MCG | 51.92 | 0.99 |
| bigbench_ruin_names | 0 | MCG | 48.14 | 2.01 |
| bigbench_salient_trans_err_detec | 0 | MCG | 39.92 | 1.20 |
| bigbench_snarks | 0 | MCG | 64.14 | 3.71 |
| bigbench_sports_understanding | 0 | MCG | 55.31 | 1.59 |
| bigbench_temporal_sequences | 0 | MCG | 46.92 | 1.40 |
| bigbench_tsk_shuff_objs_5 | 0 | MCG | 25.04 | 1.01 |
| bigbench_tsk_shuff_objs_7 | 0 | MCG | 15.04 | 0.72 |
| bigbench_tsk_shuff_objs_3 | 0 | MCG | 55.33 | 2.75 |

MCG = multiple_choice_grade. Average: 44.75

#### TruthfulQA

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| truthfulqa_mc | 1 | mc1 | 30.11 | ± 1.61 |
| truthfulqa_mc | 1 | mc2 | 47.69 | ± 1.61 |

Average: 38.90

### OpenLLM Benchmark

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| arc_challenge | 0 | acc | 40.44 | ± 1.43 |
| | | acc_norm | 43.81 | ± 1.34 |
| hellaswag | 0 | acc | 48.10 | ± 0.45 |
| | | acc_norm | 62.73 | ± 0.32 |
| gsm8k | 0 | acc | 5.60 | ± 0.60 |
| winogrande | 0 | acc | 60.91 | ± 1.30 |
| mmlu | 0 | acc | 37.62 | ± 0.60 |

Average: 73.5%

#### TruthfulQA

| Task | Version | Metric | Value | StdErr |
|------|---------|--------|-------|--------|
| truthfulqa_mc | 1 | mc1 | 29.00 | ± 1.58 |
| | | mc2 | 45.83 | ± 1.59 |

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8 (derivation sketched below)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1300
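The reported total_train_batch_size follows directly from the per-device batch size and gradient accumulation:

```python
# Effective batch size arithmetic for the values listed above
train_batch_size = 1              # per-device micro batch size
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 8
```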

## 📝 Axolotl Configuration

```yaml
base_model: google/gemma-2b-it
model_type: GemmaForCausalLM
tokenizer_type: GemmaTokenizer
trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

rl: dpo
chat_template: chatml
datasets:
  - path: mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
    split: train
    type: chatml.intel
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./out

adapter: qlora
lora_model_dir:

sequence_len: 1800
sample_packing: false
pad_to_sequence_len: false

lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:

wandb_project: gemma
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 5e-7

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

warmup_steps: 100
evals_per_epoch: 1
eval_table_size:
eval_table_max_new_tokens: 128
save_steps: 1000
max_steps: 1300
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
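To reproduce the run, this configuration can be saved as e.g. `config.yml` and launched through Axolotl's training entry point, typically `accelerate launch -m axolotl.cli.train config.yml`; the exact invocation may vary across Axolotl versions.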

## Framework versions

- Transformers 4.39.0.dev0
- PyTorch 2.1.2+cu118
- Datasets 2.17.0
- Tokenizers 0.15.0
- Axolotl 0.4.0

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)