Update README.md
README.md CHANGED

@@ -16,6 +16,8 @@ base_model:
 
 The **Lumo-8B-Instruct** model is a fine-tuned version of Meta's LLaMa 3.1 8B model designed to provide highly accurate and contextual assistance for developers working on Solana and its associated ecosystems. This model is capable of answering complex questions, generating code snippets, debugging, and explaining technical concepts using state-of-the-art **instruction tuning** techniques.
 
+**(Knowledge cut-off date: 17th January, 2025)**
+
 ### 🎯 **Key Features**
 - Optimized for **Solana-specific queries** across ecosystems like Raydium, Helius, Jito, and more.
 - Instruction fine-tuned for **developer-centric workflows**.

@@ -31,7 +33,7 @@ The **Lumo-8B-Instruct** model is a fine-tuned version of Meta's LLaMa 3.1 8B mo
 |----------------------------|----------------------------------------------------------------------------------------------|
 | **Base Model**             | Meta LLaMa 3.1 8B                                                                            |
 | **Fine-Tuning Framework**  | HuggingFace Transformers, LoRA                                                               |
-| **Dataset Size**           |                                                                                              |
+| **Dataset Size**           | 28,518 high-quality Q&A pairs                                                                |
 | **Context Length**         | 4,096 tokens                                                                                 |
 | **Training Steps**         | 10,000                                                                                       |
 | **Learning Rate**          | 3e-4                                                                                         |

@@ -122,8 +124,8 @@ print(response)
 
 | Split     | Count  | Description                    |
 |-----------|--------|--------------------------------|
-| **Train** |        |                                |
-| **Test**  |        |                                |
+| **Train** | 27.1k  | High-quality Q&A pairs         |
+| **Test**  | 1.43k  | Evaluation dataset for testing |
 
 **Dataset Format (JSONL):**
 ```json
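The last hunk fills in the split counts (27.1k train / 1.43k test) and notes that the dataset ships as JSONL, one JSON object per line. As a quick illustration of how such a split file can be read and tallied, here is a minimal stdlib-only Python sketch; the `question`/`answer` field names are hypothetical placeholders for illustration, not the card's actual schema (which is given in the README's own JSON example).

```python
import io
import json

# Hypothetical JSONL content; the field names below are illustrative
# only -- the model card's JSON example defines the real schema.
jsonl_text = (
    '{"question": "What is an SPL token?", "answer": "A token on Solana."}\n'
    '{"question": "What does Jito provide?", "answer": "MEV infrastructure."}\n'
)

def read_jsonl(fp):
    """Parse one JSON object per non-empty line of a JSONL stream."""
    return [json.loads(line) for line in fp if line.strip()]

records = read_jsonl(io.StringIO(jsonl_text))
print(len(records))  # 2 -- would be ~27.1k for the real train split
```

The same reader works unchanged against the real train/test files, so the counts in the table can be verified locally by pointing it at the downloaded split.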