James Edward
Goldenblood56
AI & ML interests: None yet
Organizations: None yet
Goldenblood56's activity
Llamacpp error
2
#1 opened 3 months ago by ML-master-123
Mistral-7B-Instruct-v0.2 loopy text generation with custom chat template
4
#68 opened 8 months ago by ercanucan
What am I doing wrong? Using Oobabooga.
3
#1 opened 7 months ago by Goldenblood56
GGUF?
3
#1 opened 9 months ago by Goldenblood56
I can only get this to work at 8192 context? In Oobabooga. I heard it could do more? Is that false?
2
#12 opened about 1 year ago by Goldenblood56
I really need help with Templates? How to interpret and use them?
5
#5 opened about 1 year ago by Goldenblood56
NEW! OpenLLMLeaderboard 2023 fall update
20
#356 opened about 1 year ago by clefourrier
The best model
3
#1 opened about 1 year ago by Kenshiro-28
No GGUF Quantization?
6
#1 opened about 1 year ago by Goldenblood56
In Oobabooga what Instruction template do I select for Dolphin Mistral 2.1?
1
#11 opened about 1 year ago by Goldenblood56
dolphin-2.1-mistral-7B is even better than openorca-mistral-7b, unbelievable
5
#1 opened about 1 year ago by mirek190
Quick question: is this a 2048 or 4096 context-size model in Ooba? Using Ctransformers?
1
#1 opened about 1 year ago by Goldenblood56
Is it possible to make this exact model in GGUF?
#4 opened about 1 year ago by Goldenblood56
Was unable to load using text-generation-webui
7
#1 opened over 1 year ago by LaferriereJC
Uncensored my ass ....
7
#2 opened over 1 year ago by mirek190
So this model can't or should not be used in Instruct mode? That's my favorite mode I think.
1
#2 opened over 1 year ago by Goldenblood56
Will this work at 4K or 8K context in oobabooga yet?
#2 opened over 1 year ago by Goldenblood56
How long does it take to run these tests?
7
#90 opened over 1 year ago by Goldenblood56
Getting this to run in Ooba? Anyone know what settings I have to choose?
5
#3 opened over 1 year ago by Goldenblood56