AI & ML interests

None defined yet.

Recent Activity

blog-explorers's activity

ZennyKenny 
posted an update about 21 hours ago
On-demand audio transcription is an often-requested service without many good options on the market.

Using Hugging Face Spaces with the Gradio SDK and the OpenAI Whisper model, I've put together a simple interface that supports transcription and summarisation of audio files up to five minutes long, completely open source and running on the CPU Upgrade tier. The cool thing is that it's built without a dedicated inference endpoint, entirely on public infrastructure.
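
For anyone curious how the pieces fit together, here's a minimal sketch of that pattern (not the actual AudioTranscribe code): a Whisper ASR pipeline plus a summarisation pipeline behind a Gradio interface, all on CPU. The model names and parameters below are illustrative placeholders.

import gradio as gr
from transformers import pipeline

# Illustrative model choices; any Whisper checkpoint and summarisation model would do.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def transcribe_and_summarize(audio_path):
    # Gradio hands us the uploaded file path; chunking lets Whisper handle clips longer than 30 s.
    transcript = asr(audio_path, chunk_length_s=30)["text"]
    summary = summarizer(transcript, max_length=130, min_length=30, truncation=True)[0]["summary_text"]
    return transcript, summary

demo = gr.Interface(
    fn=transcribe_and_summarize,
    inputs=gr.Audio(type="filepath"),
    outputs=[gr.Textbox(label="Transcript"), gr.Textbox(label="Summary")],
)
demo.launch()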

Check it out: ZennyKenny/AudioTranscribe

I wrote a short article about the backend mechanics for those who are interested: https://huggingface.co/blog/ZennyKenny/on-demand-public-transcription
nroggendorff 
posted an update 1 day ago
Maybe a page where you can find open orgs to get started collaborating with HF. I see so many people that don't have a direction.


I don't have ulterior motives, so don't ask.
AdinaY 
posted an update 1 day ago
Xenova 
posted an update 1 day ago
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by 🤗 Transformers.js. WebGPU support coming soon!
👉 npm i kokoro-js 👈

Try it out yourself: webml-community/kokoro-web
Link to models/samples: onnx-community/Kokoro-82M-ONNX

You can get started in just a few lines of code!
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // fp32, fp16, q8, q4, q4f16
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text,
  { voice: "af_sky" }, // See `tts.list_voices()`
);
audio.save("audio.wav");

Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all! 🤗

The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality! 🤯
mlabonne 
posted an update 2 days ago
🆕 LLM Course 2025 edition!

I updated the LLM Scientist roadmap and added a ton of new information and references. It covers training, datasets, evaluation, quantization, and new trends like test-time compute scaling.

The LLM Course has been incredibly popular (41.3k stars!) and I've been touched to receive many, many messages about how it helped people in their careers.

I know how difficult this stuff can be, so I'm super proud of the impact it had. I want to keep updating it in 2025, especially with the LLM Engineer roadmap.

Thanks everyone, hope you'll enjoy it!

💻 LLM Course: https://huggingface.co/blog/mlabonne/llm-course
anakin87 
posted an update 3 days ago
Hey, it has been a while... I was busy participating in the 💎 𝐆𝐞𝐦𝐦𝐚 𝐜𝐨𝐦𝐩𝐞𝐭𝐢𝐭𝐢𝐨𝐧!

Here's the idea: Gemma open models have a large vocabulary size (256K), so improving them for a specific language or cultural context should be pretty affordable - no need for continued pre-training.

My submission: 💎🌍🇮🇹 𝐍𝐞𝐨𝐠𝐞𝐧𝐞𝐬𝐢𝐬 - 𝐏𝐨𝐬𝐭-𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐆𝐞𝐦𝐦𝐚 𝐟𝐨𝐫 𝐈𝐭𝐚𝐥𝐢𝐚𝐧 𝐚𝐧𝐝 𝐛𝐞𝐲𝐨𝐧𝐝
📓 Kaggle notebook: https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond

In this notebook, I show how I improve the performance of Gemma 2 2B on Italian via Post-Training.
I believe this method is adaptable to other languages and model sizes.

𝘒𝘦𝘺 𝘚𝘵𝘦𝘱𝘴
📊 Choose reference metrics
🧑‍🔬 Data curation for Instruction Fine Tuning: identify existing datasets + generate synthetic data
🏋️‍♂️ Efficient Instruction Fine Tuning with Spectrum (see the sketch after this list)
🧑‍🔬 Data curation for Preference Tuning: identify existing datasets + generate synthetic data
👍👎 Efficient Direct Preference Optimization with Spectrum
📈 Evaluation
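
As a rough illustration of the Spectrum fine-tuning step above (not the notebook's exact code), here's a minimal sketch: Spectrum ranks each weight matrix by signal-to-noise ratio, and only the top-scoring ones are left trainable before running standard supervised fine-tuning with TRL. The parameter patterns and dataset name are hypothetical placeholders.

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)

# Hypothetical Spectrum output: parameter-name patterns to keep trainable; everything else is frozen.
UNFROZEN = ["layers.20.self_attn", "layers.21.mlp", "embed_tokens"]
for name, param in model.named_parameters():
    param.requires_grad = any(pattern in name for pattern in UNFROZEN)

# Hypothetical instruction dataset with a ready-to-train "text" column.
dataset = load_dataset("your-username/italian-instructions", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-2-2b-italian-sft",
                   per_device_train_batch_size=2, num_train_epochs=1),
)
trainer.train()

The same freezing trick carries over to the preference-tuning step: swap SFTTrainer for DPOTrainer and use a preference dataset.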


🤗 Hugging Face collection (with models and datasets): anakin87/gemma-neogenesis-67824b7bf13ac9cfe091fe2e

I'm also planning a 🎁 Gemma Giveaway (on LinkedIn - https://www.linkedin.com/in/stefano-fiorucci) in the next few days - sharing techniques, datasets, and models I used for my project... so stay tuned! 📻
AdinaY 
posted an update 3 days ago
MiniMax, the company behind Hailuo_AI, has joined the open source community by releasing both models and demos of MiniMax-Text-01 & MiniMax-VL-01🔥
- Model
MiniMaxAI/MiniMax-VL-01
MiniMaxAI/MiniMax-Text-01
- Demo
MiniMaxAI/MiniMax-VL-01
MiniMaxAI/MiniMax-Text-01

✨ MiniMax-Text-01:
- 456B parameters, with 45.9B activated per token
- Combines Lightning Attention, Softmax Attention, and MoE for optimal performance
- Training context up to 1M tokens, inference handles 4M tokens

✨ MiniMax-VL-01:
- ViT-MLP-LLM framework (non-transformer 👀)
- Handles image inputs from 336×336 to 2016×2016
- 694M image-caption pairs + 512B tokens processed across 4 stages
AdinaY 
posted an update 4 days ago
MiniCPM-o 2.6 🔥 an end-side (on-device) multimodal LLM released by OpenBMB from the Chinese community
Model: openbmb/MiniCPM-o-2_6
✨ Real-time English/Chinese conversation, emotion control and ASR/STT
✨ Real-time video/audio understanding
✨ Processes up to 1.8M pixels, leads OCRBench & supports 30+ languages
nroggendorff 
posted an update 4 days ago
AdinaY 
posted an update 8 days ago
nroggendorff 
posted an update 10 days ago
Why do we only get to post once every 24 hours? I've been waiting *so long*. Anyway, now that the wait is finally over, I have some very important information to share.
AdinaY 
posted an update 12 days ago
nroggendorff 
posted an update 16 days ago
hey nvidia, can you send me a gpu?
comment or react if you want ~~me~~ to get one too. 👉👈
Xenova 
posted an update 17 days ago
1aurent 
posted an update 18 days ago