We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.
🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.
The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.
We will very soon announce a big community project, and are working on a 📝 blogpost walking you through the entire dataset creation process. Stay tuned!
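In the meantime, if you want to poke at the data, a language subset can be streamed with 🤗 datasets. A minimal sketch, assuming the dataset id and the "fra_Latn" config name below (check the dataset card for the exact per-language codes):

```python
# Minimal sketch of streaming a 🥂 FineWeb2 language subset with 🤗 datasets.
# The dataset id, the "fra_Latn" config and the "text" column are assumptions,
# check the dataset card before reusing.
from datasets import load_dataset

fw2_fr = load_dataset(
    "HuggingFaceFW/fineweb-2",  # assumed repository id
    name="fra_Latn",            # assumed per-language config code
    split="train",
    streaming=True,             # avoid downloading the full subset
)

for doc in fw2_fr.take(3):
    print(doc["text"][:200])
```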
[New crazy blog post alert] We are releasing an extensive blog post on the science of creating high-quality web-scale datasets, detailing all the steps and learnings from our recent 15-trillion-token 🍷 FineWeb release.
Inspired by the distill.pub interactive-graphics papers, we set out to write the most extensive, enjoyable and in-depth tech report we could, so prepare for a 45-min read with interactive graphics and all.
And that's not all: in this article we also introduce 📚 FineWeb-Edu, a filtered subset of Common Crawl with 1.3T tokens containing only web pages with very high educational content. To our knowledge, FineWeb-Edu outperforms all openly released web-scale datasets by a significant margin on knowledge- and reasoning-intensive benchmarks like MMLU, ARC, and OpenBookQA.
We also make a number of surprising observations on the "quality" of the internet itself which may challenge some of the general assumptions about web data (not saying more, I'll let you draw your own conclusions ;)
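For a concrete taste of the filtering, here is roughly how a web page can be scored for educational quality with the classifier released alongside FineWeb-Edu. A minimal sketch, assuming the model id and the keep-threshold of 3 below (double-check both against the model card):

```python
# Minimal sketch of scoring a web page for educational quality.
# The classifier id and the threshold of 3 are assumptions to verify
# against the FineWeb-Edu classifier model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "HuggingFaceFW/fineweb-edu-classifier"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Photosynthesis is the process by which plants convert light into chemical energy..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="longest")

with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1).item()  # regression head, roughly 0-5

print(score, "-> kept" if score >= 3 else "-> filtered out")
```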
Is it time for the open-source AI robot revolution 🚀?
With @haixuantao and @Leyo we’ve been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS, all Apache 2.0) and orchestrated by Dora-rs.
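This is not the actual Dora-rs dataflow, just a rough sketch of how the three models could be chained in plain Python with 🤗 transformers (model ids and glue code are illustrative assumptions):

```python
# Rough sketch (not the real Dora-rs graph) of chaining speech -> vision-language -> speech.
# Model ids, file names and glue logic are illustrative assumptions.
import torch
from PIL import Image
from transformers import pipeline, AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Speech -> text with Whisper
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small", device=device)

# 2) Camera frame + spoken instruction -> text with Idefics2
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
vlm = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b").to(device)

def answer_about_frame(image: Image.Image, question: str) -> str:
    messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": question}]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=[image], return_tensors="pt").to(device)
    generated = vlm.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(generated, skip_special_tokens=True)[0]

# 3) Text -> speech would go through Parler-TTS; in the real setup Dora-rs wires
#    these nodes (plus the robot control) into a dataflow graph.
question = asr("command.wav")["text"]          # hypothetical recorded command
frame = Image.open("frame.jpg")                # hypothetical camera frame
print(answer_about_frame(frame, question))
```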
Very interesting model just released by MyShell: jetmoe/jetmoe-8b. It's an 8B-parameter MoE LLM with only 2.2B active parameters, really efficient.
Main characteristics:
- impressive performance for its size (beating meta-llama/Llama-2-7b and huggyllama/llama-13b)
- combines Mixture of Attention heads (MoA) and Mixture of MLP Experts (MoE) – 8 experts with 2 active for each token (see the toy routing sketch below)
- trained on a rather limited 1.25T tokens from publicly available datasets – the training recipe follows MiniCPM's two-phase training method => first time I see this for a 2B+ model
- $100k to train
- open weights - open sharing of recipes - open dataset - open code => ♡
- still interesting room to improve performance (be it only by training longer)
Note: I actually detailed the MiniCPM schedule, the Mixture-of-Experts (MoE) approach and many of the datasets used in this work in my recent little guide to building LLMs in 2024, so feel free to check it out if you want to learn more about these topics: https://www.youtube.com/watch?v=2-SPH9hIKT8
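To make the "8 experts, 2 active per token" idea concrete, here is a toy top-2 routing layer in PyTorch. Illustrative dimensions and code only, not JetMoE's actual implementation:

```python
# Toy top-2 expert routing: each token is sent to the 2 highest-scoring experts out of 8,
# so only a fraction of the MLP parameters is active per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e           # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

moe = Top2MoE()
tokens = torch.randn(16, 1024)
print(moe(tokens).shape)  # torch.Size([16, 1024])
```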
Current LLMs are very susceptible to generating toxic, harmful and even dangerous content. They can also generate outputs with gender or racial biases. The Biden-Harris Executive Order (https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence) sets forth guidelines on what is considered a safe AI system. Following up on these guidelines, we present the world's first open-source Biden-Harris Executive Order red-teamed multilingual language model: Aurora-M. Inspired by BigScience, the model is trained on 5 languages: English, Hindi, Japanese, Vietnamese and Finnish.
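A minimal generation sketch with 🤗 transformers; the checkpoint id below is an assumption, check the Aurora-M organization on the Hub for the exact released names:

```python
# Minimal sketch of generating with Aurora-M as a causal LM.
# The checkpoint id is an assumption; verify it on the Hub before reusing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-biden-harris-redteamed"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Translate to Finnish: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```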
Currently contains 16 notebooks in English (and some in Chinese):
1. Using LLM-as-a-judge 🧑‍⚖️ for an automated and versatile evaluation
2. Create a legal preference dataset
3. Suggestions for Data Annotation with SetFit in Zero-shot Text Classification
4. Implementing semantic cache to improve a RAG system
5. Building A RAG Ebook “Librarian” Using LlamaIndex
6. Stable Diffusion Interpolation
7. Building A RAG System with Gemma, MongoDB and Open Source Models
8. Prompt Tuning with PEFT Library
9. Migrating from OpenAI to Open LLMs Using TGI’s Messages API
10. Automatic Embeddings with TEI through Inference Endpoints
11. Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain
12. Embedding multimodal data for similarity search using 🤗 transformers, 🤗 datasets and FAISS
13. Fine-tuning a Code LLM on Custom Code on a single GPU
14. RAG Evaluation Using Synthetic data and LLM-As-A-Judge
15. Advanced RAG on HuggingFace documentation using LangChain
16. Detecting Issues in a Text Dataset with Cleanlab
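As a taste of the first recipe, here is a minimal LLM-as-a-judge sketch (the judge model and rubric below are illustrative assumptions; the notebook itself goes into much more depth):

```python
# Minimal LLM-as-a-judge sketch: ask a strong model to grade an answer on a 1-4 scale.
# The judge model and the rubric are illustrative assumptions.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")  # assumed judge model

def judge(question: str, answer: str) -> str:
    prompt = (
        "You are a strict evaluator. Rate the answer to the question on a scale of 1 to 4 "
        "(4 = fully correct and helpful). Reply with 'Rating: <n>' followed by a short justification.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    return client.text_generation(prompt, max_new_tokens=128)

print(judge("What is the capital of France?", "Paris is the capital of France."))
```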