open/acc


AI & ML interests

None defined yet.

Recent Activity


not-lain 
posted an update about 12 hours ago
csabakecskemeti 
posted an update about 24 hours ago
cfahlgren1 
posted an update 2 days ago
If you haven't seen yet, we just released Inference Providers 🔀

> 4 new serverless inference providers on the Hub 🤯
> Use your HF API key or personal key with all providers 🔑
> Chat with DeepSeek R1, V3, and more on HF Hub 🐋
> We support Sambanova, TogetherAI, Replicate, and Fal.ai 💪

Best of all, we don't charge any markup on top of the provider 🫰 Have you tried it out yet? HF Pro accounts get $2 of free usage for provider inference.
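For anyone wanting to try it from Python, here is a minimal sketch assuming `huggingface_hub`'s `InferenceClient` and its `provider` parameter; the provider choice, model ID, and prompt are illustrative, not from the post:

```python
import os

# Illustrative chat request routed through an Inference Provider.
messages = [{"role": "user", "content": "Why is open-source AI a rising tide?"}]

token = os.environ.get("HF_TOKEN")  # your HF API key (or a provider key)
if token:
    from huggingface_hub import InferenceClient

    # provider="together" routes the call through Together AI; other
    # providers (sambanova, replicate, fal-ai) work the same way.
    client = InferenceClient(provider="together", api_key=token)
    completion = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1",
        messages=messages,
    )
    print(completion.choices[0].message.content)
```

Without `HF_TOKEN` set, the sketch only builds the request payload and skips the network call.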
clem 
posted an update 3 days ago
AI is not a zero-sum game. Open-source AI is the tide that lifts all boats!
csabakecskemeti 
posted an update 4 days ago
I've run the Open LLM Leaderboard evaluations plus HellaSwag on deepseek-ai/DeepSeek-R1-Distill-Llama-8B and compared it to meta-llama/Llama-3.1-8B-Instruct, and at first glance R1 does not beat Llama overall.

If anyone wants to double-check, the results are posted here:
https://github.com/csabakecskemeti/lm_eval_results

Did I make some mistake, or is this distilled version (at least) just not as good as the competition?

I'll run the same on the Qwen 7B distilled version too.
clem 
posted an update 6 days ago
mitkox 
posted an update 6 days ago
llama.cpp is 26.8% faster than ollama.
I upgraded both and, using the same settings, ran the same DeepSeek R1 Distill 1.5B on the same hardware. It's an apples-to-apples comparison.

Total duration:
llama.cpp 6.85 sec <- 26.8% faster
ollama 8.69 sec

Breakdown by phase:
Model loading
llama.cpp 241 ms <- 2x faster
ollama 553 ms

Prompt processing
llama.cpp 416.04 tokens/s with an eval time of 45.67 ms <- 10x faster
ollama 42.17 tokens/s with an eval time of 498 ms

Token generation
llama.cpp 137.79 tokens/s with an eval time of 6.62 sec <- 13% faster
ollama 122.07 tokens/s with an eval time of 7.64 sec

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing.

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
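The headline figures can be re-derived from the timings in the post; a quick sanity check in Python (all values copied from above):

```python
# Re-deriving the claimed speedups from the reported timings.
def pct_faster(slow: float, fast: float) -> float:
    """How much faster `fast` is than `slow`, in percent."""
    return (slow / fast - 1.0) * 100.0

total_pct = pct_faster(8.69, 6.85)            # total duration -> ~26.9%
loading_ratio = 553 / 241                     # model loading -> ~2.3x
prompt_ratio = 416.04 / 42.17                 # prompt processing -> ~9.9x
gen_pct = (137.79 / 122.07 - 1.0) * 100.0     # token generation -> ~12.9%

print(f"total: {total_pct:.1f}% faster, loading: {loading_ratio:.1f}x, "
      f"prompt: {prompt_ratio:.1f}x, generation: {gen_pct:.1f}%")
```

The ratios all land within rounding distance of the claimed figures; note the token-generation percentage follows from the tokens/s throughput, not the eval times.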
mitkox 
posted an update 8 days ago
Stargate to the west of me
DeepSeek to the east
Here I am
Stuck in the middle with the EU

It will likely be only a matter of time before export controls hit frontier research and models on both sides, leaving us in a vacuum.

Decentralized training infrastructure and on device inferencing are the future.
suayptalha 
posted an update 8 days ago
mitkox 
posted an update 10 days ago
On-device AI reasoning (ODA-R) using speculative decoding, with DeepSeek-R1-Distill-Qwen-1.5B as the draft model and DeepSeek-R1-Distill-Qwen-32B as the target. DSPy compiler for reasoning prompts in math, engineering, code...
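As background, the speculative-decoding loop itself is simple to sketch. Below is a toy greedy version with stand-in draft/target functions; everything here is illustrative and does not reflect the poster's actual ODA-R setup:

```python
def generate(model, prefix, n):
    """Plain greedy decoding: `model` maps a token tuple to the next token."""
    out = list(prefix)
    for _ in range(n):
        out.append(model(tuple(out)))
    return out[len(prefix):]

def speculative_generate(draft, target, prefix, n, k=4):
    """Greedy speculative decoding: the cheap draft proposes k tokens,
    the expensive target verifies them left to right; the first mismatch
    is replaced by the target's own token and the block is abandoned."""
    out = list(prefix)
    while len(out) - len(prefix) < n:
        # Draft proposes a block of k tokens.
        cur = list(out)
        proposals = []
        for _ in range(k):
            t = draft(tuple(cur))
            proposals.append(t)
            cur.append(t)
        # Target verifies each proposal in order.
        for t in proposals:
            expected = target(tuple(out))
            out.append(expected)  # equals t when the proposal is accepted
            if t != expected:
                break             # rejection: discard the rest of the block
    return out[len(prefix):len(prefix) + n]

# Stand-in "models": next token depends only on the last token.
target = lambda seq: (seq[-1] * 2) % 7
draft = lambda seq: (seq[-1] + 1) % 7 if seq[-1] % 3 == 0 else (seq[-1] * 2) % 7
```

With greedy verification every emitted token is the target's own greedy choice, so the output is identical to decoding with the target alone. In this toy the target is still queried once per token; in practice it scores the whole proposed block in one batched forward pass, which is where the speedup comes from.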
csabakecskemeti 
posted an update 11 days ago
ariG23498 
posted an update 11 days ago
not-lain 
posted an update 13 days ago
We now have more than 2,000 public AI models using ModelHubMixin 🤗
mitkox 
posted an update 14 days ago
Training a model to reason in the continuous latent space based on Meta's Coconut.
If it all works, I'll apply it to the MiniCPM-o SVD-LR.
The endgame is a multimodal, adaptive, and efficient foundational on-device AI model.
Tonic 
in open-acc/README 14 days ago

langfuse secrets

#11 opened 14 days ago by Tonic