Crystalcareai committed on
Commit ee36e51 · verified · 1 Parent(s): 8cf3e9a

Update README.md

Files changed (1)
  1. README.md +26 -1
README.md CHANGED
@@ -3,4 +3,29 @@ license: llama3.1
  ---
  <div align="center">
  <img src="https://i.ibb.co/9hwFrvL/BLMs-Wkx-NQf-W-46-FZDg-ILhg.jpg" alt="Arcee Spark" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
- </div>
+ </div>
+
+
+ Llama-Spark is a powerful conversational AI model developed by Arcee.ai. Built on the foundation of Llama-3.1-8B, it merges the power of our Tome Dataset with Llama-3.1-8B-Instruct, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class.
+
+ ## GGUFs available [here](https://huggingface.co/arcee-ai/Llama-Spark-GGUF)
+
+ ## Model Description
+
+ Llama-Spark is our commitment to consistently delivering the best-performing conversational AI in the 6-9B parameter range. As new base models become available, we'll continue to update and improve Spark to maintain its leadership position.
+
+ This model is the successor to our original Arcee-Spark, incorporating advancements and learnings from our ongoing research and development.
+
+ ## Intended Uses
+
+ Llama-Spark is intended for conversational AI applications such as chatbots, virtual assistants, and dialogue systems. It excels at engaging in natural, informative conversations.
+
+ ## Training Information
+
+ Llama-Spark is built upon the Llama-3.1-8B base model, fine-tuned on the Tome Dataset and merged with Llama-3.1-8B-Instruct.
+
+ ## Evaluation Results
+
+ Please note that these scores are consistently higher than those on the OpenLLM Leaderboard; compare them for relative performance gains rather than weighing them directly against leaderboard entries.
+
+ ## Acknowledgements
+
+ We extend our deepest gratitude to **PrimeIntellect** for being our compute sponsor for this project.
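
The conversational use described in the card can be sketched with the Hugging Face `transformers` chat-template API. This is a minimal sketch, not an official snippet from the card: the Hub repo id `arcee-ai/Llama-Spark` is an assumption inferred from the GGUF repo link, and the generation settings are illustrative defaults.

```python
# Hedged usage sketch for chatting with Llama-Spark via transformers.
# Assumption: the safetensors model lives at "arcee-ai/Llama-Spark" on the
# Hub (the card only links the GGUF repo, arcee-ai/Llama-Spark-GGUF).


def to_chat_messages(user_prompt, system_prompt=None):
    """Build the role/content message list expected by chat templates."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages


def chat(user_prompt, model_id="arcee-ai/Llama-Spark", max_new_tokens=256):
    """Generate one assistant reply (downloads the model on first call)."""
    # Imported lazily so to_chat_messages() works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    input_ids = tokenizer.apply_chat_template(
        to_chat_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

The same prompts also work with the linked GGUF weights under llama.cpp; only the loading mechanism differs.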