Junyang Lin
JustinLin610
AI & ML interests
Pretraining, NLP, CV, etc.
Recent Activity
liked a model about 10 hours ago: arcee-ai/SuperNova-Medius
liked a Space 2 days ago: llm-jp/open-japanese-llm-leaderboard
liked a Space 5 days ago: Qwen/Qwen2.5-Turbo-1M-Demo
JustinLin610's activity
Independent evaluation results (2) · #1 opened about 2 months ago by yaronr
Have you deleted your GitHub page? (7) · #10 opened 3 months ago by xwzy6
The sample code could not run... (1) · #16 opened 5 months ago by zhiminy
fine-tuning (4) · #16 opened 7 months ago by SaghirAya
Maybe a silly question... (2) · #18 opened 7 months ago by urtuuuu
This model is Awesome (5) · #20 opened 6 months ago by areumtecnologia
Update tokenizer_config.json · #3 opened 7 months ago by JustinLin610
How does this version's 28 GB GPU memory consumption compare with the 14B model? (7) · #7 opened 8 months ago by william0014
Fine-tuning this model with Proprietary Code (2) · #6 opened 7 months ago by vtraghu
What are the differences between this and Qwen/CodeQwen1.5-7B? (6) · #5 opened 7 months ago by Kalemnor
Adding Evaluation Results · #14 opened 7 months ago by leaderboard-pr-bot
Is qwen1.5-7b-chat much faster at inference than qwen1.5-7b? (3) · #9 opened 9 months ago by endNone
tie_word_embeddings=true? (1) · #6 opened 7 months ago by salmitta
Why does the 72B model have a different vocab size compared with the other models? (6) · #1 opened 10 months ago by Mikasaka
Using llama.cpp server, responses always end with <|im_end|> (1) · #2 opened 7 months ago by gilankpam
The LLM output is incomplete (1) · #11 opened 7 months ago by lijianqiang
GGUF models (1) · #1 opened 7 months ago by MaziyarPanahi
Is 14B coming? (4) · #3 opened 7 months ago by rombodawg
(Rebutted: this claim was proven false) "Fake coding scores", 0.73 at best (8) · #4 opened 7 months ago by rombodawg