Gameveloster (gameveloster)
AI & ML interests: None yet
Organizations: None yet

gameveloster's activity
Is the entire model quantized?
1
#8 opened 10 months ago by gameveloster
BakLLaVa 2?
2
#7 opened 10 months ago by gameveloster
chain_type_kwargs unused?
#1 opened 11 months ago by gameveloster
Training dataset
#3 opened 11 months ago by gameveloster
zephyr-7b-beta is very good; please help localize it into Chinese
3
#1 opened 12 months ago by wengnews
int4 please
3
#8 opened 11 months ago by wwey
Reproduce using 24/48GB VRAM?
1
#7 opened 11 months ago by gameveloster
How to load this into the openai whisper library
#7 opened about 1 year ago by gameveloster
Was the entire OpenOrca dataset used?
1
#9 opened about 1 year ago by gameveloster
Merge with Chinese-LLaMA-2-13B instead of Llama-2-7b?
#1 opened about 1 year ago by gameveloster
GPTQ Quantization method
#2 opened over 1 year ago by gameveloster
Which prompt template to use?
2
#2 opened over 1 year ago by gameveloster
Out of memory on two 3090s
4
#21 opened over 1 year ago by gameveloster
Fine-tune xxl using a 24GB GPU?
7
#23 opened almost 2 years ago by gameveloster
Will there be a distilled model that fits inside 48GB VRAM (2x 3090)?
1
#29 opened almost 2 years ago by gameveloster
Hardware for training this model
#6 opened almost 2 years ago by gameveloster
Running Bloom-7B1 on an 8GB GPU?
5
#88 opened about 2 years ago by CedricDB
Something between BLOOM-176B and BLOOM-7B1?
1
#169 opened almost 2 years ago by gameveloster