Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Llama-3.1-Storm-8B - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/Llama-3.1-Storm-8B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.1-Storm-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3.1-Storm-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3.1-Storm-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3.1-Storm-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3.1-Storm-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3.1-Storm-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3.1-Storm-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3.1-Storm-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3.1-Storm-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3.1-Storm-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3.1-Storm-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3.1-Storm-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3.1-Storm-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3.1-Storm-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3.1-Storm-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3.1-Storm-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3.1-Storm-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3.1-Storm-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3.1-Storm-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3.1-Storm-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3.1-Storm-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3.1-Storm-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q8_0.gguf) | Q8_0 | 7.95GB |

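To try one of these quants locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the choice of the Q4_K_M file, the context size, and the sampling inputs are illustrative, not a recommendation from this repo:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant from this repo (Q4_K_M is a common size/quality trade-off)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf",
    filename="Llama-3.1-Storm-8B.Q4_K_M.gguf",
)

# Pass n_gpu_layers=-1 to offload all layers to GPU if one is available
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
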
Original model description:
---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---

# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|----------------|-------------|------------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |

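Outside the notebooks, the core Unsloth setup looks roughly like the sketch below; the base-model name and LoRA hyperparameters here are illustrative, and the notebooks contain the full training loop:

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit to cut finetuning memory use
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# ...then train on your dataset (e.g. with trl's SFTTrainer), as the notebooks do.
```
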
## Llama 3.1 Storm

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg)

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)

**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b

**🚀 Ollama:** `ollama run ajindal/llama3.1-storm:8b`


## TL;DR

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png)

We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model, which significantly outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) across diverse benchmarks, as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using the [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method (a minimal sketch follows this list). The merging method produces a blended model whose characteristics are smoothly interpolated from both parents, ensuring the result captures the essence of each. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks covering instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.

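To make the merging step concrete, here is a minimal SLERP sketch over two same-shaped weight tensors; the function and its names are ours for illustration (in practice a merging toolkit handles per-layer details):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two parent weight tensors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two parameter vectors
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    # Interpolate along the great circle so the blend stays "between" both parents
    merged = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return merged.reshape(v0.shape).to(v0.dtype)

# Applied per parameter: merged[name] = slerp(0.5, finetuned[name], llama_spark[name])
```
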
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.

As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves on Meta-Llama-3.1-8B-Instruct across various benchmarks: instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), knowledge-driven QA ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), reduced hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and function calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.

We also benchmarked our model against the recently published [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B), which is also built on top of Llama-3.1-8B-Instruct. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**; Hermes-3-Llama-3.1-8B surpasses Llama-3.1-Storm-8B on the MuSR benchmark, and the two models perform comparably on BBH.


## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.

<table>
  <tr>
    <td><strong>Model Strength</strong></td>
    <td><strong>Relevant Benchmarks</strong></td>
  </tr>
  <tr>
    <td>🎯 Improved Instruction Following</td>
    <td>IFEval Strict (+3.93%)</td>
  </tr>
  <tr>
    <td>🌐 Enhanced Knowledge Driven Question Answering</td>
    <td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)</td>
  </tr>
  <tr>
    <td>🧠 Better Reasoning</td>
    <td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)</td>
  </tr>
  <tr>
    <td>🤖 Superior Agentic Capabilities</td>
    <td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)</td>
  </tr>
  <tr>
    <td>🚫 Reduced Hallucinations</td>
    <td>TruthfulQA (+9%)</td>
  </tr>
</table>

**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.


## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`


## 💻 How to Use the Model
The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the dtype used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so it is the recommended way to run the model and ensures the best results.

### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```

Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate usage with simple, hands-on examples:

### Conversational Use-case
#### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
##### Using `transformers.pipeline()` API
```python
import transformers
import torch

model_id = "akjindal53244/Llama-3.1-Storm-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]

outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```

##### Using `model.generate()` API
```bash
pip install flash_attn==2.6.3
```

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Apply the Llama-3.1 chat template
def format_prompt(user_query):
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
    return template.format(user_query)


model_id = 'akjindal53244/Llama-3.1-Storm-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=False,
    use_flash_attention_2=True
)

# Build the final input prompt after applying the chat template
prompt = format_prompt("What is 2+2?")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)  # Expected Output: '2 + 2 = 4'
```

#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
```

#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
```bash
pip install 'litgpt[all]'
litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
```

```python
from litgpt import LLM

llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
llm.generate("What do Llamas eat?")
```

### Function Calling Use-case

[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.

#### Prompt Format for Function Calling
Llama-3.1-Storm-8B was trained with a specific system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>LIST_OF_TOOLS</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
The system prompt above should be used with `LIST_OF_TOOLS` replaced by the actual list of available tools, as in the vLLM example below.


#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)


def create_system_prompt(tools_list):
    # Literal braces in the tool-call example are doubled so str.format() leaves them intact
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>{}</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""

    # Convert the tools list to a string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "python",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```

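Since the model returns its call as text between `<tool_call>` tags, a small parser is useful downstream. Here is a sketch; the helper name and regex are ours, and `ast.literal_eval` is used because the model may print single-quoted dicts that `json.loads` rejects:

```python
import ast
import re

def parse_tool_calls(generated_text: str):
    """Extract tool-call dicts emitted between <tool_call></tool_call> tags."""
    calls = []
    for payload in re.findall(r"<tool_call>(.*?)</tool_call>", generated_text, re.DOTALL):
        try:
            calls.append(ast.literal_eval(payload.strip()))
        except (ValueError, SyntaxError):
            pass  # Skip malformed payloads; a real system should log them
    return calls

text = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
print(parse_tool_calls(text))
# [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
```
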
#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
{
    'type': 'function',
    'function': {
        'name': 'get_places_to_visit',
        'description': 'Get places to visit in a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)

print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```


## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.

## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
    author    = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
    title     = { Llama-3.1-Storm-8B },
    year      = 2024,
    url       = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
    doi       = { 10.57967/hf/2902 },
    publisher = { Hugging Face }
}
```

## Support Our Work
With three team members spread across three different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and four other competitions in the finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).

**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**