BafoGPT-3B-it

This is the BafoGPT-3B-it model, finetuned with QLoRA on 10% of the ChallengerSpaceShuttle/zulu-finetuning-dataset dataset.

This is the first iteration in building IsiZulu models that can attain performance comparable to models that typically cost millions of dollars to train from scratch.

πŸ” Applications

This is the supervised finetuned model and has a context length of 8K tokens. The model can generate coherent IsiZulu text from simple instructions, but it still hallucinates on complex instructions. We are working on improving this first version.
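As a rough illustration of this use case, the sketch below loads the model with Hugging Face Transformers and generates text from a simple IsiZulu instruction. The repository id, the example instruction, and the generation settings are assumptions rather than details from this card; because finetuning used the Alpaca prompt style (see the configuration below), the instruction is wrapped in the standard Alpaca template, which may improve results.

```python
# Minimal usage sketch. The repo id, example instruction and sampling
# settings are assumptions, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChallengerSpaceShuttle/BafoGPT-3B-it"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example IsiZulu instruction ("Please tell me about the history of Durban"),
# wrapped in the standard Alpaca template used during finetuning.
instruction = "Ngicela ungixoxele ngomlando waseThekwini."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```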

⚡ Quantized models

πŸ† Evaluation

🧩 Configuration

The code used to train the model can be found here: BafoGPT. The model was trained with the following configuration.

checkpoint_dir: checkpoints/google/gemma-2-2b
out_dir: out/finetune/lora
precision: bf16-true
quantize: bnb.nf4-dq
devices: 1
num_nodes: 1
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_query: true
lora_key: true
lora_value: true
lora_projection: true
lora_mlp: false
lora_head: false
data:
  class_path: litgpt.data.JSON
  init_args:
    json_path: data/train.json
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
train:
  save_interval: 1000
  log_interval: 1
  global_batch_size: 4
  micro_batch_size: 1
  lr_warmup_steps: 1000
  epochs: 1
  max_seq_length: 1024
  min_lr: 6.0e-05
eval:
  interval: 1000
  max_new_tokens: 100
  max_iters: 100
  initial_validation: false
  final_validation: true
optimizer: AdamW
logger_name: csv
seed: 1337
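The data section points litgpt.data.JSON at data/train.json with the Alpaca prompt style. As a sketch of what that file is expected to look like, the snippet below writes Alpaca-style records; the instruction / input / output field names are an assumption based on LitGPT's JSON data module and should be checked against the LitGPT version used. Note also that with global_batch_size: 4 and micro_batch_size: 1, gradients are accumulated over 4 micro-batches per optimizer step.

```python
# Sketch of the expected shape of data/train.json for litgpt.data.JSON.
# The instruction/input/output field names are assumed (Alpaca-style records).
import json

records = [
    {
        "instruction": "<IsiZulu instruction>",
        "input": "",                      # optional extra context
        "output": "<IsiZulu response>",
    },
]

with open("data/train.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```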

Architecture Config

{
  "architectures": [
    "Gemma2ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_logit_softcapping": 50.0,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "eos_token_id": 1,
  "final_logit_softcapping": 30.0,
  "head_dim": 256,
  "hidden_act": "gelu_pytorch_tanh",
  "hidden_activation": "gelu_pytorch_tanh",
  "hidden_size": 2304,
  "initializer_range": 0.02,
  "intermediate_size": 9216,
  "max_position_embeddings": 8192,
  "model_type": "gemma2",
  "num_attention_heads": 8,
  "num_hidden_layers": 26,
  "num_key_value_heads": 4,
  "pad_token_id": 0,
  "query_pre_attn_scalar": 256,
  "rms_norm_eps": 1e-06,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "torch_dtype": "float32",
  "transformers_version": "4.42.4",
  "use_cache": true,
  "vocab_size": 288256
}
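For orientation, the architecture config above implies roughly 2.7B parameters, which is broadly consistent with the 3B naming (the 288,256-token vocabulary adds embedding parameters over the gemma-2-2b base). The sketch below is only a back-of-the-envelope estimate: it ignores RMSNorm weights and assumes tied input/output embeddings, as in Gemma.

```python
# Rough parameter-count estimate from the architecture config above.
# Ignores RMSNorm weights; assumes tied input/output embeddings (as in Gemma).
hidden, inter, layers = 2304, 9216, 26
heads, kv_heads, head_dim = 8, 4, 256
vocab = 288256

embed = vocab * hidden                          # token embeddings (tied with LM head)
attn = hidden * heads * head_dim                # q_proj
attn += 2 * hidden * kv_heads * head_dim        # k_proj + v_proj (grouped-query attention)
attn += heads * head_dim * hidden               # o_proj
mlp = 3 * hidden * inter                        # gate, up and down projections (GeGLU)

total = embed + layers * (attn + mlp)
print(f"~{total / 1e9:.2f}B parameters")        # about 2.69B
```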