Christopher PRO

chkla

AI & ML interests

🚀 NLP and Computational Social Science

Recent Activity

liked a Space 29 days ago
CohereForAI/aya_expanse
liked a model about 1 month ago
CohereForAI/aya-expanse-8b
liked a model about 1 month ago
CohereForAI/aya-expanse-32b

chkla's activity

liked a Space 29 days ago
updated a Space about 2 months ago
upvoted an article 7 months ago
Reacted to thomwolf's post with ❤️ 8 months ago
A Little guide to building Large Language Models in 2024

This is a recording of a 75-minute lecture I gave two weeks ago on how to train an LLM from scratch in 2024. I tried to keep it short and comprehensive, focusing on concepts that are crucial for training a good LLM but often hidden in tech reports.

In the lecture, I introduce the students to all the important concepts, tools, and techniques for training a high-performing LLM:
* finding, preparing and evaluating web scale data
* understanding model parallelism and efficient training
* fine-tuning/aligning models
* fast inference

There are of course many things and details missing that I should have added; don't hesitate to tell me your most frustrating omission and I'll add it in a future part. In particular, I think I'll add more focus on how to filter topics well and extensively, and maybe more practical anecdotes and details.

Now that I've recorded it, I've been thinking this could be part 1 of a two-part series, with a second, fully hands-on video on how to run all these steps with some libraries and recipes we've released recently at HF around LLM training (and which could easily be adapted to other frameworks):
* datatrove for all things web-scale data preparation (see the sketch below): https://github.com/huggingface/datatrove
* nanotron for lightweight 4D parallelism LLM training: https://github.com/huggingface/nanotron
* lighteval for in-training fast parallel LLM evaluations: https://github.com/huggingface/lighteval
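
To make the datatrove item above concrete, here is a minimal sketch of a local filtering pipeline. The class names and arguments follow the examples in the datatrove README at the time of writing and may differ between versions; the input and output paths are placeholders, not real locations.

```python
# Minimal sketch of a datatrove filtering pipeline (class names/arguments may
# vary between datatrove versions; paths below are placeholders).
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import WarcReader
from datatrove.pipeline.extractors import Trafilatura
from datatrove.pipeline.filters import GopherQualityFilter, LanguageFilter
from datatrove.pipeline.writers.jsonl import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        WarcReader("path/to/commoncrawl/warcs/"),  # read raw web crawl files
        Trafilatura(),                             # extract main text from HTML
        LanguageFilter(),                          # keep only the target language(s)
        GopherQualityFilter(),                     # heuristic quality filtering
        JsonlWriter("path/to/output/"),            # write surviving documents as JSONL
    ],
    tasks=4,  # number of parallel tasks for this local run
)
executor.run()
```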

Here is the link to watch the lecture on Youtube: https://www.youtube.com/watch?v=2-SPH9hIKT8
And here is the link to the Google slides: https://docs.google.com/presentation/d/1IkzESdOwdmwvPxIELYJi8--K3EZ98_cL6c5ZcLKSyVg/edit#slide=id.p

Enjoy, and I'm happy to hear feedback on it and what to add, correct, or extend in a second part.
Reacted to dvilasuero's post with 🤗 10 months ago
🤗 Data is better together!

Data is essential for training good AI systems. We believe that the amazing community built around open machine learning can also work on developing amazing datasets together.

To explore how this can be done, Argilla and Hugging Face are thrilled to announce a collaborative project where we're asking Hugging Face community members to collectively build a dataset of LLM prompts.

What are we doing?
Using an instance of Argilla — a powerful open-source data collaboration tool — hosted on the Hugging Face Hub, we are collecting ratings of prompts based on their quality.

How Can You Contribute?
It’s super simple to start contributing:

1. Sign up if you don’t have a Hugging Face account

2. Go to this Argilla Space and sign in: https://huggingface.co/spaces/DIBT/prompt-collective

3. Read the guidelines and start rating prompts!

You can also join the #data-is-better-together channel in the Hugging Face Discord.

Finally, to track the community's progress, we'll be updating this Gradio dashboard:

https://huggingface.co/spaces/DIBT/prompt-collective-dashboard
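
If you would rather explore the collected ratings programmatically than through the dashboard, here is a minimal sketch of loading them with the datasets library, assuming the project eventually publishes them as a dataset on the Hub. The repository id below is a placeholder, not a confirmed dataset name.

```python
# Minimal sketch: pull the rated prompts down like any other Hub dataset.
from datasets import load_dataset

ratings = load_dataset("DIBT/prompt-ratings", split="train")  # hypothetical repo id
print(ratings[0])  # inspect one rated prompt
```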
Reacted to victor's post with 🤗 10 months ago
🔥 New on HuggingChat: Assistants!

Today we are releasing Assistants on HuggingChat!
Assistants are a fun way to package your prompts and share them with the world, powered by open-source models, of course!

Learn more about Assistants here: huggingchat/chat-ui#357
Browse Assistants here: https://huggingface.co/chat/assistants