axolotl

Commit History

feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553)
50421c8
unverified

Ram winglian committed
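
A hedged sketch of what such a strategy produces: the LLaMA-3 instruct format can be generated through the tokenizer's chat template (model id assumed; the Hub repo is gated).

```python
from transformers import AutoTokenizer

# Assumed model id; requires access to the gated Llama-3 repo on the Hub.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Summarize attention in one line."}],
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header for generation
)
print(prompt)
```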

adding llama3 fastchat conversation monkeypatch (#1539)
b32c08f
unverified

Antoni-Joan Solergibert winglian committed

ignore the fsdp_config section too (#1606) [skip ci]
fff06af
unverified

winglian committed

make sure to save the lora adapter at the end of RL/dpo training (#1573)
796a085
unverified

winglian committed

improve tool handling roles (#1587)
cb78a36
unverified

winglian committed

feat: exclude mamba blocks for jamba (#1578)
8b9c15b
unverified

Nanobit committed

Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575)
9e1480e
unverified

chiragjn committed
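
Roughly what that unlocks, as a sketch (paths are placeholders): with no deepspeed/fsdp config in play, `device_map` alone decides where the merge runs.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder paths; with deepspeed/fsdp unset, device_map controls placement,
# e.g. merging entirely on CPU to avoid GPU OOM.
base = AutoModelForCausalLM.from_pretrained("base-model", device_map={"": "cpu"})
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA weights into the base model
merged.save_pretrained("merged-model")
```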

Gradio configuration parameters (#1591)
3367fca
unverified

marijnfs Marijn Stollenga winglian committed

improve save callbacks (#1592)
29cf15a
unverified

winglian committed

Pass weakref to model in the SIGINT handler to free up model post train function (#1581)
dde02fc
unverified

chiragjn winglian committed
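
The idea sketched with stdlib primitives (handler body is hypothetical): the handler holds only a weak reference, so it cannot keep the model alive once training has finished with it.

```python
import signal
import weakref

def install_sigint_handler(model):
    model_ref = weakref.ref(model)  # weak: does not extend the model's lifetime

    def handler(signum, frame):
        m = model_ref()  # resolves to None once the model has been freed
        if m is not None:
            m.save_pretrained("interrupt-checkpoint")  # hypothetical save
        raise KeyboardInterrupt

    signal.signal(signal.SIGINT, handler)
```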

FIX: TRL trainer preprocessing step was running in one process (#1583)
b9bb169
unverified

Ali Mosavian committed

ADD: warning hub model (#1301)
601c08b
unverified

JohanWork Nanobit committed

Add debug option for RL dataset preprocessing (#1404)
cc5d31e
unverified

abhinand Nanobit committed

PoSE context length ext (#1567)
5294653
unverified

winglian committed
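
A toy rendering of the PoSE idea (names and sampling follow the paper, not this PR): split each training sequence into chunks and give the later chunk a random positional skip, so a short window visits positions across the longer target context.

```python
import random

def pose_position_ids(seq_len: int, target_len: int) -> list[int]:
    split = random.randint(1, seq_len - 1)          # chunk boundary
    skip = random.randint(0, target_len - seq_len)  # random positional jump
    return list(range(split)) + [skip + i for i in range(split, seq_len)]
```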

make sure everything stays in the same dtype when using dpo + FSDP (#1559)
68601ec
unverified

winglian committed
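
A hedged sketch of the kind of guard involved (helper name assumed): FSDP's flat-parameter wrapping expects a uniform dtype, so everything is cast once up front.

```python
import torch

def unify_dtype(model: torch.nn.Module, dtype: torch.dtype = torch.bfloat16):
    # FSDP flattens parameters into shared buffers; mixed dtypes break that.
    for p in model.parameters():
        p.data = p.data.to(dtype)
    for b in model.buffers():
        if torch.is_floating_point(b):
            b.data = b.data.to(dtype)
```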

wrap prepared_ds_path in str() to avoid TypeError in fsspec package (#1548)
7477a53
unverified

Frank Ruis winglian committed
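
The fix is a single coercion; a self-contained sketch (paths illustrative):

```python
from pathlib import Path
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world"]})
prepared_ds_path = Path("last_run_prepared") / "abc123"
# Affected fsspec versions raise TypeError on Path objects, so pass a str.
ds.save_to_disk(str(prepared_ds_path))
```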

ORPO Trainer replacement (#1551)
7d1d22f
unverified

winglian committed

Unsloth gradient checkpointing offload (#1528)
6319da1
unverified

winglian committed
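
A compressed sketch of the underlying technique (not Unsloth's actual implementation): gradient checkpointing where the saved activation is parked on CPU between forward and backward.

```python
import torch

class OffloadedCheckpoint(torch.autograd.Function):
    """Checkpoint a layer, keeping its input on CPU until backward."""

    @staticmethod
    def forward(ctx, layer, x):
        ctx.layer = layer
        ctx.save_for_backward(x.detach().to("cpu", non_blocking=True))
        with torch.no_grad():
            return layer(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x_cpu,) = ctx.saved_tensors
        x = x_cpu.to(grad_out.device).requires_grad_(True)
        with torch.enable_grad():
            y = ctx.layer(x)       # recompute the forward pass on GPU
        torch.autograd.backward(y, grad_out)
        return None, x.grad        # no gradient for the layer argument
```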

DBRX Model Support (#1462)
132eb74
unverified

winglian committed

Update SaveAxolotlConfigtoWandBCallback to use artifact instead of save (#1483)
5ed2939
unverified

tcapelle winglian committed

use locale-agnostic separator to make large nums easier to read (#1503)
da9b1a3
unverified

winglian committed
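
The locale-agnostic form is the f-string grouping option (variable name illustrative):

```python
total_num_tokens = 1234567890
# "{:,}" always groups with commas regardless of the process locale,
# unlike locale.format_string, which depends on LC_NUMERIC.
print(f"total_num_tokens: {total_num_tokens:,}")  # -> 1,234,567,890
```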

WIP: Support table logging for mlflow, too (#1506)
057fa44
unverified

DavidFarago Dave Farago winglian committed

Correctly handle splits for datasets.arrow_dataset.Dataset objects (#1504)
8fa0785
unverified

scottifer8 winglian committed
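
The distinction being handled, sketched (helper name assumed): loading can hand back either a split-keyed `DatasetDict` or a bare `Dataset`.

```python
from datasets import Dataset, DatasetDict

def get_split(ds, split: str = "train") -> Dataset:
    if isinstance(ds, DatasetDict):
        return ds[split]  # keyed container of named splits
    return ds             # a bare Dataset is already a single split
```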

Print versions (#1496)
4313b1a
unverified

winglian committed
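
A minimal sketch of that kind of startup banner (package list assumed):

```python
from importlib.metadata import PackageNotFoundError, version

for pkg in ("torch", "transformers", "peft", "accelerate"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```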

add field to sft dataset pydantic for completion support (#1497)
ff01c45
unverified

winglian committed

ignore issues with calculating # params when printing (#1493)
2fa65b9
unverified

winglian committed
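
Sketch of the guard (function name assumed): parameter counting becomes best-effort so a logging nicety cannot crash a run.

```python
def count_params(model) -> int | None:
    try:
        return sum(p.numel() for p in model.parameters())
    except Exception:
        return None  # e.g. exotic quantized or wrapped modules; skip the stat
```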

Remove `validate_quantized_dora` (#1485)
9430b6e
unverified

xzuyn committed

drop empty token from beginning if tokenizer has no bos_token (in the case of qwen) (#1490)
934fc85
unverified

winglian committed
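
The shape of the guard, sketched (helper name assumed): a BOS id is only prepended when the tokenizer actually defines one, which Qwen's does not.

```python
def with_bos(tokenizer, input_ids: list[int]) -> list[int]:
    if tokenizer.bos_token_id is not None:
        return [tokenizer.bos_token_id] + input_ids
    return input_ids  # Qwen-style tokenizers: no bos_token, nothing to add
```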

fix: reduce sample_packing warning (#1484)
bda48f0
unverified

Nanobit committed

feat: validate sample packing requires flash_attention (#1465)
bf4cd67
unverified

Nanobit committed

add support for cohere chat template (#1478)
05b0b7e
unverified

winglian committed

don't use deepspeed or fsdp when merging loras (#1479)
87ca3f9
unverified

winglian committed

refactor utils.data module for line count linter (#1476)
e0fcef4
unverified

winglian committed

Pretrain multipack v2 (#1470)
5aa5097
unverified

winglian committed
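
A toy version of the packing idea (greedy first-fit, not the v2 algorithm itself): tokenized examples are concatenated into bins capped at the sequence length to cut padding waste.

```python
def pack(examples: list[list[int]], max_seq_len: int) -> list[list[int]]:
    bins, current, used = [], [], 0
    for ids in examples:
        if current and used + len(ids) > max_seq_len:
            bins.append([tok for seq in current for tok in seq])
            current, used = [], 0
        current.append(ids)
        used += len(ids)
    if current:
        bins.append([tok for seq in current for tok in seq])
    return bins
```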

fix pretraining_ on odd datasets (#1463)
586bd8d
unverified

monsoon-nlp committed

reduce verbosity of the special tokens (#1472)
0b10377
unverified

winglian committed

qwen2_moe support w multipack (#1455)
6086be8
unverified

winglian committed

fix some of the edge cases for Jamba (#1452)
05b398a
unverified

winglian committed

Support loading datasets saved via save_to_disk (#1432)
e634118
unverified

fozziethebeat committed
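
Such datasets need the on-disk loader rather than `load_dataset`; a sketch (path is a placeholder):

```python
from datasets import load_from_disk

# Directory previously written via Dataset.save_to_disk(...)
ds = load_from_disk("path/to/saved_dataset")
print(ds)
```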

Jamba (#1451)
02af082
unverified

winglian committed

fix layer_replication arg to peft (#1446)
4155e99
unverified

winglian committed

support layer replication for peft and fix rslora integration (#1445)
25afd35
unverified

winglian committed
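
Roughly how the option is expressed in `peft` (layer ranges assumed for a 24-layer base; requires a peft release that supports `layer_replication`):

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    # Each (start, end) pair duplicates that half-open range of decoder layers,
    # growing the model depth-wise while LoRA keeps the copies trainable
    # without duplicating base weights. Ranges assume a 24-layer model.
    layer_replication=[(0, 16), (8, 24)],
)
```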

fix for accelerate env var for auto bf16, add new base image and expand torch_cuda_arch_list support (#1413)
da265dd
unverified

winglian committed

Remove seq_len arg in rotary_emb (#1443)
e07347b
unverified

wenbopan winglian committed

Fix falcon tokenization step (#1441) [skip ci]
bcdc9b1
unverified

Far El winglian committed

make sure to capture non-null defaults from config validation (#1415)
601b77b
unverified

winglian committed

fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)
ff939d8
unverified

Nanobit committed
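
The gist of the cache-key change, sketched (function and key format assumed): hashing the tokenizer path rather than its class keeps two checkpoints that share a tokenizer class but differ in vocab or special tokens from colliding in the prepared-dataset cache.

```python
from hashlib import sha256

def prepared_ds_hash(dataset_name: str, tokenizer_path: str) -> str:
    # Two models can share a tokenizer *class* yet differ in vocabulary;
    # the path disambiguates them, the class name does not.
    return sha256(f"{dataset_name}|{tokenizer_path}".encode()).hexdigest()[:16]
```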