Akarshan Biswas
qnixsynapse
AI & ML interests: NLP, models, quantization
Recent Activity
liked a model 19 days ago: google/gemma-2-2b
liked a model about 1 month ago: meta-llama/Llama-3.2-3B-Instruct
liked a model about 1 month ago: Granther/Gemma-2-9B-Instruct-4Bit-GPTQ
Organizations
None yet
qnixsynapse's activity
Is this really an Instruct model?
#1 opened 2 months ago by qnixsynapse
[MODELS] Discussion (425)
#372 opened 9 months ago by victor
[TOOLS] Community Discussion (27)
#455 opened 6 months ago by victor
Wrong number of tensors; expected 292, got 291 (6)
#69 opened 4 months ago by KingBadger
[FEATURE] Tools (61)
#470 opened 6 months ago by victor
Utterly based (1)
#9 opened 4 months ago by llama-anon
Add IQ Quantization support with the help of imatrix and GPUs (8)
#35 opened 8 months ago by qnixsynapse
Suggestion: Host Gemma2 using keras_nlp instead of transformers library for the time being (2)
#498 opened 5 months ago by qnixsynapse
The best 8B in the planet right now. PERIOD! (2)
#22 opened 7 months ago by cyberneticos
How many active parameters does this model have? (3)
#6 opened 7 months ago by lewtun
7B or 8B? (4)
#24 opened 9 months ago by amgadhasan
Which model is responsible for naming of the thread? (8)
#402 opened 8 months ago by qnixsynapse
Consider adding <start_of_context> and <stop_of_context> or similar special tokens for context ingestion.
#13 opened 8 months ago by qnixsynapse
Number of parameters (7)
#9 opened 8 months ago by HugoLaurencon
RMSNorm eps value is wrong
#20 opened 10 months ago by qnixsynapse
RMSNorm eps value is wrong
#19 opened 10 months ago by qnixsynapse
Loading the model (3)
#3 opened over 1 year ago by PyrroAiakid
Looking for GGUF format for this model (1)
#14 opened about 1 year ago by barha
Help needed to load model (19)
#13 opened over 1 year ago by sanjay-dev-ds-28
Running Llama-2-7B-32K-Instruct-GGML with llama.cpp? (13)
#1 opened about 1 year ago by gsimard