See axolotl config

axolotl version: `0.4.0`

```yaml
base_model: tomaszki/nous-twelve
tokenizer_type: AutoTokenizer
hub_model_id: superfriends/titos

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: inst
datasets:
  - path: winglian/charley
    type: sharegpt
    conversation: mistral
    split: train
_test_datasets:
  - path: winglian/latest-barley
    type: sharegpt
    conversation: mistral
    split: test
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

wandb_project: relora-instruct-nous
wandb_entity: oaaic
wandb_watch:
wandb_name: fft
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_bnb_8bit
adam_beta1: 0.95
adam_beta2: 0.9
adam_epsilon: 0.0001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000009

neftune_noise_alpha: 5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 20
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero1.json # multi-gpu only
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
```
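Since the config trains on `sharegpt` data with `conversation: mistral` (and sets `chat_template: inst`), prompts should follow Mistral's `[INST] ... [/INST]` instruct format at inference time. A minimal formatting sketch, assuming a simple list of (user, assistant) turns; the helper name is illustrative and not part of axolotl:

```python
def format_mistral_prompt(turns):
    """Render (user, assistant) turns into Mistral's [INST] format.

    Note: the leading <s> (BOS) is shown for completeness; most
    tokenizers add it automatically, so drop it when tokenizing.
    """
    prompt = "<s>"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg} [/INST]"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# One completed exchange followed by a new question.
print(format_mistral_prompt([
    ("What base model was used?", "tomaszki/nous-twelve."),
    ("And the sequence length?", None),
]))
```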
# titos
This model is a fine-tuned version of [tomaszki/nous-twelve](https://huggingface.co/tomaszki/nous-twelve) on the winglian/charley dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
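Until fuller documentation lands, the model can be tried like any causal LM on the Hub. A minimal inference sketch using the standard transformers API; the prompt follows the Mistral instruct format above, and the generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "superfriends/titos"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# [INST] wrapper per the training data's Mistral conversation format;
# the tokenizer adds the BOS token itself.
prompt = "[INST] Summarize this model card in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```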
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.95, 0.9) and epsilon=0.0001
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
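The reported total batch sizes follow directly from the per-device settings; a quick sanity check using only the values above:

```python
micro_batch_size = 4             # per-device train batch size
gradient_accumulation_steps = 1
num_devices = 8                  # multi-GPU via DeepSpeed ZeRO-1

total_train_batch_size = (
    micro_batch_size * gradient_accumulation_steps * num_devices
)
assert total_train_batch_size == 32  # matches the value reported above
```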
### Training results

No evaluation results were logged; the config sets `val_set_size: 0.0` and the test dataset entry is disabled.
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0