EVA Qwen2.5-1.5B v0.0

A small-scale RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-1.5B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.
Unlike EVA-D 1.5B v0.0, this model was created without DistillKit, and unlike other EVA versions, Spectrum wasn't used either, since layer freezing is inefficient at this small scale.
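
A minimal inference sketch, assuming the standard transformers API and the model's ChatML template; the sampling settings and example prompt below are illustrative, not an official recommendation:

# Minimal sketch: load the model and chat with it via its ChatML template.
# Assumes the transformers library; generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe a rainy night in a neon-lit city."},
]
# The training config sets chat_template: chatml, so apply_chat_template emits ChatML.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))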


Training data:

  • The Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's card for details.
  • Kalomaze's Opus_Instruct_25k dataset, filtered for refusals (a filtering sketch follows this list).
  • A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe
  • A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe
  • Synthstruct and SynthRP datasets by Epiculous
  • A subset of Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.
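
A rough sketch of the kind of refusal filtering and subsetting described above, assuming ShareGPT-style JSONL rows; the marker phrases, file names and row counts are illustrative assumptions, not the actual pipeline:

# Rough sketch of refusal filtering / subsetting on ShareGPT-style JSONL rows.
# Marker phrases and row counts are assumptions for illustration.
import json, random

REFUSAL_MARKERS = ("i cannot", "i can't assist", "as an ai language model")

def keep(row):
    # Drop conversations where any assistant turn looks like a refusal.
    return not any(
        marker in turn["value"].lower()
        for turn in row["conversations"]
        if turn["from"] == "gpt"
        for marker in REFUSAL_MARKERS
    )

with open("Opus_Instruct_25k.jsonl") as f:
    rows = [json.loads(line) for line in f]

filtered = [r for r in rows if keep(r)]
subset = random.sample(filtered, k=min(1000, len(filtered)))  # e.g. a 1k-row subset

with open("filtered_subset.jsonl", "w") as f:
    for r in subset:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")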

Training time and hardware:

  • 9 hours on 4x3090Ti

The model was created by Kearm, Auri and Cahvay.

Special thanks:

  • to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.
  • to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data
  • and to Allura-org for support, feedback, beta-testing and quality control of EVA models.

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: /media/kearm/Disk_2/HF_FAST_MoE_Fodder/Qwen2.5-1.5B

load_in_8bit: false
load_in_4bit: false
strict: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

# plugins:
#   - axolotl.integrations.spectrum.SpectrumPlugin

# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B

datasets:
  - path: datasets/Celeste_Filtered_utf8fix.jsonl
    type: sharegpt
  - path: datasets/deduped_not_samantha_norefusals.jsonl
    type: sharegpt
  - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/S2.jsonl
    type: sharegpt
  - path: datasets/Turing.jsonl
    type: sharegpt

chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.05
output_dir: EVA-Qwen2.5-1.5B-FFT-v0.0

sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true

wandb_project: EVA-Qwen2.5-1.5B-FFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000005
max_grad_norm: 1.5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: "unsloth"
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
save_total_limit: 8
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.15
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: false
#   fsdp_offload_params: true
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
#   fsdp_activation_checkpointing: true
#   fsdp_state_dict_type: SHARDED_STATE_DICT  # Changed from FULL_STATE_DICT
#   fsdp_sharding_strategy: FULL_SHARD
#   fsdp_forward_prefetch: false  # Added
#   fsdp_backward_prefetch: "BACKWARD_PRE"  # Added
#   fsdp_backward_prefetch_limit: 1  # Added
#   fsdp_mixed_precision: BF16  # Added
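
For reference, a small sketch of the effective batch size implied by the config above, assuming the 4-GPU setup listed in the hardware section; the token figure is an upper bound that depends on sample-packing efficiency:

# Effective global batch size implied by the config above, assuming 4 GPUs.
micro_batch_size = 1
gradient_accumulation_steps = 8
num_gpus = 4  # 4x3090Ti, per the hardware note above
sequence_len = 10240

global_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
print(global_batch)                 # 32 packed sequences per optimizer step
print(global_batch * sequence_len)  # up to ~327,680 tokens per step with sample packing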
