qwerrwe / tests

Commit History

fix mistral prompt assembly (#982)
7bbaac9

hamel committed

Fix prompt assembly for llama (#952)
5ada140

hamel and tokestermw committed

Respect sequence_len in config for `type: llama2_chat` (#926)
f1de29d

hamel committed

support for mamba (#915)
40a6362

winglian committed

Feat(wandb): Refactor to be more flexible (#767)
a1da39c

Nanobit committed

Feat: Add warmup_ratio (#893)
fb12895

Nanobit committed

Phi update 202311 (#876)
9bf854e

winglian committed

add e2e tests for checking functionality of resume from checkpoint (#865)
b3a61e8

winglian committed

use temp_dir kwarg instead
6dc68a6

winglian committed

missing dunder-init
7de6a56

winglian committed

chore: lint
c74f045

winglian committed

make sure to cleanup tmp output_dir for e2e tests
0402d19

winglian committed

simplify by removing duplicate base_model_config (#772)
2d8def6

winglian committed

Fix: Warn when fullfinetune without adapter (#770)
44c9d01

Nanobit committed

convert exponential notation lr to floats (#771)
ca84cca

winglian committed

Fix: eval table conflict with eval_sample_packing (#769)
9923b72

Nanobit committed

remove lora fused packing test (#758)
21cf09b

winglian committed

misc sharegpt fixes (#723)
f30afe4

winglian committed

Feat: Allow usage of native Mistral FA when no sample_packing (#669)
697c50d

Nanobit committed

add mistral e2e tests (#649)
5b0bc48

winglian committed

Fix(cfg): Add validation for save_strategy and eval_strategy (#633)
383f88d

Nanobit committed

use fastchat conversations template (#578)
e7d3e2d

winglian committed

Fix: Fail bf16 check when running on cpu during merge (#631)
cfbce02

Nanobit committed

better handling and logging of empty sharegpt turns (#603)
a363604

winglian committed

misc fixes to add gptq tests (#621)
03e5907

winglian committed

Support Sample packing for phi arch (#586)
12a2dbb

winglian committed

E2e device cuda (#575)
2414673

winglian committed

e2e testing (#574)
9218ebe

winglian committed

Fix pretraining with iterable/streaming Dataset (#556)
2f586d1

Jan Philipp Harries committed

workaround for md5 variations (#533)
0b4cf5b

winglian committed

recommend padding when using sample packing (#531)
3437149

winglian committed

fix test fixture b/c hf trainer tokenization changed (#464)
d5dcf9c

winglian committed

fix fixture for new tokenizer handling in transformers (#428)
8cace80

winglian committed

simplify `load_tokenizer`
efb3b2c

tmm1 committed

extract module for working with cfg
8cec513

tmm1 committed

fix DefaultDict.__or__
a13e45d

tmm1 committed

Attention mask and position id fixes for packing (#285)
2bb0b78

winglian committed

experimental llama 2 chat support (#296)
3392270

Jan Philipp Harries committed

update prompts for open orca to match the paper (#317)
3d4984b

winglian committed

Fixed pre-commit problems, fixed small bug in logging_config to handle LOG_LEVEL env var
b1f4f7a

theobjectivedad committed

params are adam_*, not adamw_*
19cf0bd

winglian committed

add tests and support for loader for sys prompt data
3a38271

winglian committed

initial wip to get sys prompt from dataset
8d20e0a

winglian committed

optionally define whether to use_fast tokenizer
47d601f

winglian committed

Additional test case per pr
ad5ca4f

winglian committed

add validation and tests for adamw hyperparam
cb9d3af

winglian committed

Merge pull request #214 from OpenAccess-AI-Collective/fix-tokenizing-labels
1925eaf

winglian committed

fix test name
1ab3bf3

winglian committed