goichi harada
dahara1
AI & ML interests
everything!
Recent Activity
New activity 12 days ago on dahara1/llama3-8b-amd-npu: "Can we run llama3-8b on other NPUs?"
Updated a model 19 days ago: dahara1/gemma-2-2b-it-gguf-japanese-imatrix
Updated a model 20 days ago: dahara1/Qwen2.5-0.5B-Instruct-gguf-japanese-imatrix-128K
dahara1's activity
Can we run llama3-8b on other NPUs? (3) · #2 opened 13 days ago by AryaPulkit
I came to try again! (4) · #5 opened about 2 months ago by dahara1
Adding `safetensors` variant of this model · #2 opened about 2 months ago by SFconvertbot
Adding Evaluation Results · #1 opened 10 months ago by leaderboard-pr-bot
Adding `safetensors` variant of this model · #2 opened about 2 months ago by SFconvertbot
Request for Instructions (4) · #1 opened 2 months ago by Ailelix
tokenizer_config.json is different from gemma-2-2b-it (2) · #8 opened 2 months ago by dahara1
Hugging Face serverless inference (1) · #3 opened 2 months ago by playmak3r
Containerization support would be nice. (1) · #1 opened 3 months ago by HlexNC
License? (4) · #2 opened 4 months ago by mhenrichsen
Inquiry on Minimum Configuration and Cost for Running C3TR-Adapter_gguf Model Efficiently (1) · #2 opened 4 months ago by ltkien2003
Issues with Incomplete Translation Using webbigdata/C3TR-Adapter_gguf with llama.cpp (3) · #1 opened 5 months ago by ltkien2003
What are the possible values for writing-style? (2) · #1 opened 5 months ago by kaoriya
Since the model has been updated, would it be possible for you to measure the benchmark again? (5) · #4 opened 5 months ago by dahara1
How to add a model (3) · #3 opened 6 months ago by dahara1
Will you be able to finetune alma 13b-r? (1) · #1 opened 8 months ago by Weroxig
お疲れ様です translates to Good evening (9) · #1 opened 10 months ago by shmisi
Error when finetuning this - fp16 vs int8? (6) · #1 opened about 1 year ago by hartleyterw
dahara1_resubmit (4) · #27 opened about 1 year ago by dahara1