axolotl version: `0.4.0`

```yaml
base_model: premai-io/prem-1B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: argilla/distilabel-capybara-dpo-7k-binarized
    type: orpo.chat_template
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./prem-1B-32k
save_safetensors: true

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: false
# PoSE (Positional Skip-wise Training): position ids are skipped across
# chunks so the model learns positions up to pose_max_context_len while
# only ever attending over sequence_len tokens per sample.
use_pose: true
pose_max_context_len: 262144
min_sample_len: 6144
pose_num_chunks: 16
curriculum_sampling: true

overrides_of_model_config:
  # RoPE base frequency raised for long-context use
  rope_theta: 500000.0
  max_position_embeddings: 262144

# peft_use_dora: true
adapter: lora
peft_use_rslora: true
lora_model_dir:
lora_r: 1024
lora_alpha: 1024
lora_dropout: 0.1
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 20
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
max_grad_norm: 1.0
adam_beta2: 0.95

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
sdp_attention:
s2_attention:

warmup_steps: 10
evals_per_epoch: 8
saves_per_epoch: 8
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>
```
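For reference, here is a minimal (unofficial) sketch of how one might load the resulting checkpoint with Transformers and PEFT: the base model is loaded with the same `rope_theta` / `max_position_embeddings` overrides that `overrides_of_model_config` applies above, and the LoRA adapter from `output_dir` is attached on top. The local adapter path `./prem-1B-32k` is taken from the config; substitute wherever the weights actually live.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "premai-io/prem-1B"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base model with the same config overrides used during training
# (overrides_of_model_config in the axolotl config above).
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,   # training ran in bf16
    rope_theta=500000.0,
    max_position_embeddings=262144,
)

# Attach the LoRA adapter (q/k/v/o projections, plus the fully trained
# embed_tokens and lm_head saved via lora_modules_to_save).
model = PeftModel.from_pretrained(model, "./prem-1B-32k")
model = model.merge_and_unload()  # optionally fold the adapter into the base weights

prompt = "Explain long-context fine-tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```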
This model is a fine-tuned version of [premai-io/prem-1B](https://huggingface.co/premai-io/prem-1B) on the argilla/distilabel-capybara-dpo-7k-binarized dataset. It achieves the following results on the evaluation set:
- Loss: 1.0059

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (taken from the axolotl config above):
- learning_rate: 1e-05
- train_batch_size (micro_batch_size): 1
- gradient_accumulation_steps: 8, for an effective batch of 8 sequences per device per optimizer step
- optimizer: adamw_bnb_8bit (8-bit AdamW) with adam_beta2 = 0.95
- lr_scheduler: cosine, with 10 warmup steps
- num_epochs: 20
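As a rough illustration of how these settings fit together (a sketch, not axolotl's actual internals; `model` refers to the loading example earlier), the optimizer and schedule correspond to the following. Note that with one optimizer step per epoch, as the results table below shows, the 10 warmup steps span half of the 20-step run.

```python
import bitsandbytes as bnb
from transformers import get_cosine_schedule_with_warmup

# 8-bit AdamW from bitsandbytes, matching optimizer: adamw_bnb_8bit
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=1e-5,              # learning_rate
    betas=(0.9, 0.95),    # adam_beta2: 0.95
    weight_decay=0.0,
)

# Cosine decay with linear warmup, matching lr_scheduler: cosine
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,    # warmup_steps
    num_training_steps=20,  # 20 epochs x 1 optimizer step per epoch
)
```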
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7672        | 1.0   | 1    | 3.0074          |
| 0.7672        | 2.0   | 2    | 2.6057          |
| 0.7422        | 3.0   | 3    | 2.2898          |
| 0.7211        | 4.0   | 4    | 2.1453          |
| 0.6591        | 5.0   | 5    | 1.6360          |
| 0.4514        | 6.0   | 6    | 0.7589          |
| 0.24          | 7.0   | 7    | 0.6621          |
| 0.1584        | 8.0   | 8    | 0.8121          |
| 0.1235        | 9.0   | 9    | 0.7538          |
| 0.0998        | 10.0  | 10   | 0.7743          |
| 0.0869        | 11.0  | 11   | 0.7771          |
| 0.1692        | 12.0  | 12   | 0.8293          |
| 0.0702        | 13.0  | 13   | 0.8939          |
| 0.063         | 14.0  | 14   | 0.9582          |
| 0.0567        | 15.0  | 15   | 0.9825          |
| 0.052         | 16.0  | 16   | 0.9960          |
| 0.0488        | 17.0  | 17   | 0.9883          |
| 0.0457        | 18.0  | 18   | 1.0004          |
| 0.0436        | 19.0  | 19   | 1.0056          |
| 0.0427        | 20.0  | 20   | 1.0059          |
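Validation loss bottoms out at 0.6621 at epoch 7 and then climbs while training loss keeps falling, a classic overfitting signature for 20 epochs on a small dataset. Since the run saves intermediate checkpoints (`saves_per_epoch: 8`), one can recover the best one by scanning each checkpoint's `trainer_state.json`, the standard artifact the underlying Hugging Face Trainer writes. A minimal sketch, assuming that standard layout:

```python
import json
from pathlib import Path

def best_checkpoint(output_dir: str):
    """Return (path, eval_loss) of the checkpoint with the lowest eval loss."""
    best_path, best_loss = None, float("inf")
    for ckpt in sorted(Path(output_dir).glob("checkpoint-*")):
        state = json.loads((ckpt / "trainer_state.json").read_text())
        # log_history holds one entry per logging/eval event; eval entries
        # carry an "eval_loss" key. The last one is the eval at save time.
        losses = [e["eval_loss"] for e in state["log_history"] if "eval_loss" in e]
        if losses and losses[-1] < best_loss:
            best_loss = losses[-1]
            best_path = ckpt
    return best_path, best_loss

path, loss = best_checkpoint("./prem-1B-32k")  # output_dir from the config
print(f"Best checkpoint: {path} (eval loss {loss:.4f})")
```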