YellowRoseCx (Monero)
AI & ML interests
XMR: 84StGoFTFoohAcaC3R9LV3eQpTuy7FT5waDqFPJbAsid2jvsNVPHNC87yCG6izFQP8ZpfkgRmH87aTFmeCkZBhGtQAYxNoM
Organizations
Monero's activity
- Adding `safetensors` variant of this model (#2, opened 7 months ago by SFconvertbot)
- In regards to "Our collaboration with AMD" (#3, opened 5 months ago by Monero)
- Rename metharme-7b-4bit-ao-ts-trits-damp0.1.safetensors to model.safetensors (#4, opened over 1 year ago by dclab)
- How do I load this model? (#1, 8 replies, opened over 1 year ago by sneedingface)
- 4090 test with OobaBooga (in Windows) fails to load the model (#2, 4 replies, opened over 1 year ago by cleverest)
- Does this model require PyTorch 2.0.0+? (#1, opened over 1 year ago by Monero)
- GGML version? (#1, 2 replies, opened over 1 year ago by IkariDev)
- Proper Prompt Format? (#1, 2 replies, opened over 1 year ago by rbmj)
- Do you happen to have a non-GPTQ version of this model, so I could convert to GGML for use with llama.cpp? (#1, 2 replies, opened over 1 year ago by spanielrassler)
- More Parameters (#1, 7 replies, opened over 1 year ago by dondre)
- Apache license is irrevocable. (#5, 11 replies, opened over 1 year ago by xzuyn)
- How can I fix this issue? (#1, 13 replies, opened over 1 year ago by BoreGuy1998)
- You may want to add an "act-order" GPTQ quantization. (#22, 5 replies, opened over 1 year ago by xzuyn)
- No config.json file (#1, 4 replies, opened over 1 year ago by curtz)
- Less than 0.05 tokens/s on a 4090? (#2, 2 replies, opened over 1 year ago by Jakxx)
- .pt version uses 2 GB less VRAM for me than the non-groupsized .safetensors (#10, 3 replies, opened over 1 year ago by Monero)
- Difference from original? (#1, 5 replies, opened over 1 year ago by Monero)