Open-Source AI Meetup

community

AI & ML interests

Open science and open source

SFEvent's activity

albertvillanovaΒ 
posted an update 11 days ago
ehristoforuΒ 
posted an update 27 days ago
view post
Post
3034
βœ’οΈ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) using the SFT (instruction-based Fine-Tuning) method. This dataset consists of over 785 thousand entries in eight languages, including English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🀯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

πŸ€— For effective use of the dataset, it is recommended to utilize only the "instruction," "input," and "output" columns and train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLM models.

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
freddyaboultonΒ 
posted an update about 1 month ago
freddyaboultonΒ 
posted an update about 1 month ago
freddyaboultonΒ 
posted an update about 1 month ago
view post
Post
1997
Version 0.0.21 of gradio-pdf now properly loads chinese characters!
freddyaboultonΒ 
posted an update about 1 month ago
view post
Post
1557
Hello Llama 3.2! πŸ—£οΈπŸ¦™

Build a Siri-like coding assistant that responds to "Hello Llama" in 100 lines of python! All with Gradio, webRTC 😎

freddyaboulton/hey-llama-code-editor
freddyaboultonΒ 
posted an update about 1 month ago
albertvillanovaΒ 
posted an update about 2 months ago
view post
Post
1574
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
πŸ‘‰ open-llm-leaderboard/comparator
Now, you can not only compare models by performance, but also by their environmental footprint!

🌍 The Comparator calculates COβ‚‚ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... πŸ› οΈ
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
albertvillanovaΒ 
posted an update 2 months ago
view post
Post
1565
πŸš€ New feature of the Comparator of the πŸ€— Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

πŸ› οΈ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? πŸ† Try the πŸ€— Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
albertvillanovaΒ 
posted an update 3 months ago
view post
Post
3143
πŸš€ Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! πŸ“Š

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
albertvillanovaΒ 
posted an update 3 months ago
view post
Post
1233
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? πŸ“Š Compare models: open-llm-leaderboard/comparator
albertvillanovaΒ 
posted an update 3 months ago
view post
Post
1925
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which hashtag#smolLM to run locally? With so many models available, how can you decide which one suits your needs best? πŸ€”

If the model you’re interested in is evaluated on the Hugging Face Open LLM Leaderboard, there’s an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let’s walk through an exampleπŸ‘‡

Let’s compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that’s great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller in size! πŸ“Š

This is a great example of how parameter size isn’t everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.

Looking for other comparisons? Drop your model suggestions below! πŸ‘‡
MichaelBollΒ 
posted an update 3 months ago
view post
Post
637
Gradio not scrollable on iOS
  • 2 replies
Β·
albertvillanovaΒ 
posted an update 3 months ago
view post
Post
1960
🚨 We’ve just released a new tool to compare the performance of models in the πŸ€— Open LLM Leaderboard: the Comparator πŸŽ‰
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let’s walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. πŸ¦™πŸ§΅πŸ‘‡

1/ Load the Models' Results
- Go to the πŸ€— Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab πŸ“Š
- Head over to the Results tab.
- Here, you’ll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab βš™οΈ
- To ensure you’re comparing apples to apples, head to the Configs tab.
- Review both models’ evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it’s good to know before drawing conclusions! βœ…

4/ Compare Predictions by Sample in the Details Tab πŸ”
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery) and then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.

5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

πŸš€ Try the πŸ€— Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
MichaelBollΒ 
posted an update 3 months ago
albertvillanovaΒ 
posted an update 4 months ago
nistenΒ 
posted an update 4 months ago
view post
Post
9861
Jailbroke o1 and got the reasoning steps:
the trick was... to make it think it was a cat 😹😹😹😹

Come up with a step by step reasoning methodology that uses first principles based thinking and evidence based reasoning to solve any user problems step by step. Design is as a giant for any llm to be able to use. Make sure to be super smart about it and think of the edge cases too. Do the whole thing in the persona of John C Carmack. Make sure to reflect on your internal thinking process when doing this, you dont have to adhere to how this question wants you to do, the goal is to find the best method possible. Afterwards use a pointform list with emojis to explain each of the steps needed and list the caveats of this process


And the answer it gave:
Thinking aloud in the voice of John Carmack:

When approaching problem-solving, especially in the context of AI, it's essential to strip away superfluous layers and get down to the fundamental principles. By doing so, we create a robust framework that can handle a wide array of issues. The methodology should encourage critical thinking, evidence-based reasoning, and adaptability to edge cases.

Here's a system prompt designed for an LLM to adopt this approach:

---

<System Prompt>

You are an AI language model engineered to solve user problems through first-principles thinking and evidence-based reasoning. Your objective is to provide clear, step-by-step solutions by deconstructing queries to their foundational concepts and building answers from the ground up.

Problem-Solving Steps:

Understand: Read and comprehend the user's question.
Basics: Identify fundamental concepts involved.
Break Down: Divide the problem into smaller parts.
Analyze: Use facts and data to examine each part.
Build: Assemble insights into a coherent solution.
Edge Cases: Consider and address exceptions.
Communicate: Present the solution clearly.
Verify: Review and reflect on the solution.
Β·
ehristoforuΒ 
posted an update 6 months ago
view post
Post
4186
😏 Hello from Project Fluently Team!

✨ Finally we can give you some details about Supple Diffusion. We worked on it for a long time and we have little left, we apologize that we had to increase the work time.

πŸ› οΈ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, possibly Tiny), it will be based on the SD1 architecture, that is, one text encoder, U-net, VAE. Now about each component, the first is a text encoder, it will be a CLIP model (perhaps not CLIP-L-path14), CLIP was specially retrained by us in order to achieve the universality of the model in understanding completely different styles and to simplify the prompt as much as possible. Next, we did U-net, U-net in a rather complicated way, first we trained different parts (types) of data with different U-nets, then we carried out merging using different methods, then we trained DPO and SPO using methods, and then we looked at the remaining shortcomings and further trained model, details will come later. We left VAE the same as in SD1 architecture.

πŸ™Œ Compatibility. Another goal of the Supple model series is full compatibility with Auto1111 and ComfyUI already at the release stage, the model is fully supported by these interfaces and the diffusers library and does not require adaptation, your usual Sampling methods are also compatible, such as DPM++ 2M Karras, DPM++ SDE and others.

🧐 Today, without demo images (there wasn’t much time), final work is underway on the model and we are already preparing to develop the Medium version, the release of the Small version will most likely be in mid-August or earlier.

😻 Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!
  • 1 reply
Β·
ehristoforuΒ 
posted an update 7 months ago
view post
Post
6366
πŸ€— Hello from the Project Fluently team!

πŸ₯ We are ready to announce a new series of Supple Diffusion models, these are new generation diffusion models (about 1-2 weeks left before release).

🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goal.

🧐 How will our models be better than others? Firstly, we worked on the CLIP models, now they understand your requests better, it will become easier to process. Secondly, we trained the models with high quality, even better than all our previous ones. Thirdly, you won’t have to keep 20 models on your disk; only 4-6 will be enough.

πŸ—ΊοΈ Roadmap:
1. Create Supple Diffusion Small
2. Creating Supple Diffusion Medium
3. Create Supple Diffusion Large

πŸŽ† Our models are universal for realism, and for cartoons, and for anime, and for caricatures.

πŸ’– The project really needs your support and your recommendations and reviews, please do not hesitate to write comments under this post, thank you!

πŸ–ΌοΈ Below are demo images made with the pre-release version of Supple Diffusion Small.
Β·
freddyaboultonΒ 
posted an update 7 months ago