See the axolotl config used for training (axolotl version 0.4.0):
base_model: Qwen/Qwen2-beta-1_8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_qwen_derived_model:
trust_remote_code:
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: teknium/OpenHermes-2.5
    type: sharegpt
    conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./quyen-1_8b
sequence_len: 4096 # supports up to 8192
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: quyen-hermes
wandb_entity:
wandb_watch:
wandb_name: quyen-1_8b-hermes
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 8
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_table_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"
quyen-1_8b
This model is a fine-tuned version of Qwen/Qwen2-beta-1_8B on the teknium/OpenHermes-2.5 dataset.
Model description
Quyen is a full-parameter fine-tune (no LoRA adapter is configured) of the 1.8B-parameter Qwen/Qwen2-beta-1_8B on teknium/OpenHermes-2.5, trained with axolotl using the ChatML conversation format.
Intended uses & limitations
More information needed
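A minimal inference sketch, assuming the model is published as vilm/Quyen-Mini-4e (the repo named in the model tree below) and that its tokenizer ships the ChatML chat template configured above:

```python
# Minimal inference sketch. The repo id comes from the model tree below;
# this assumes the tokenizer carries a ChatML chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/Quyen-Mini-4e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "What is gradient checkpointing?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```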
Training and evaluation data
Training used teknium/OpenHermes-2.5, rendered in ChatML (see the config above). No validation split was held out (`val_set_size: 0.0`), so no evaluation data was used.
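To inspect the data as axolotl sees it (the repo id is taken from the config; the sharegpt-style `conversations` column is what `type: sharegpt` expects):

```python
# Peek at the training data named in the config above.
from datasets import load_dataset

ds = load_dataset("teknium/OpenHermes-2.5", split="train")
print(ds)                      # row count and column names
print(ds[0]["conversations"])  # sharegpt-style list of turns
```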
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes, per `optimizer: adamw_bnb_8bit` in the config) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
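The total train batch size follows from the per-device settings: micro batch 8 × gradient accumulation 8 × 4 GPUs = 256 sequences per optimizer step (and 8 × 4 = 32 for eval). A one-line check:

```python
# Effective batch size = micro_batch_size * gradient_accumulation_steps * num_devices
micro_batch_size, grad_accum_steps, num_devices = 8, 8, 4
assert micro_batch_size * grad_accum_steps * num_devices == 256  # total_train_batch_size
```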
Training results
No evaluation was run during training (val_set_size: 0.0), so no metrics are reported here.
Framework versions
- Transformers 4.38.0.dev0
- PyTorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
Model tree for vilm/Quyen-Mini-4e
Base model: Qwen/Qwen1.5-1.8B (previously published as Qwen/Qwen2-beta-1_8B)