---
language:
- en
license: llama3.2
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.2
- llama-3.2-instruct
- llama-3.2-instruct-3b
- llama-3
- llama-3-instruct
- llama-3-instruct-3b
- 3b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Llama3.2-3B-ShiningValiant2
datasets:
- sequelbox/Celestia
- sequelbox/Spurline
- sequelbox/Supernova
pipeline_tag: text-generation
model_type: llama
model-index:
- name: Llama3.2-3B-ShiningValiant2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.14
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Biology (5-shot)
type: mmlu
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.58
name: acc
- type: acc
value: 70.32
name: acc
- type: acc
value: 44.0
name: acc
- type: acc
value: 50.25
name: acc
- type: acc
value: 42.16
name: acc
- type: acc
value: 35.76
name: acc
- type: acc
value: 53.19
name: acc
- type: acc
value: 53.0
name: acc
- type: acc
value: 61.0
name: acc
- type: acc
value: 60.53
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 48.9
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 19.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 9.14
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.1
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-ShiningValiant2
name: Open LLM Leaderboard
---
# Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF
This model was converted to GGUF format from [`ValiantLabs/Llama3.2-3B-ShiningValiant2`](https://huggingface.co/ValiantLabs/Llama3.2-3B-ShiningValiant2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Llama3.2-3B-ShiningValiant2) for more details on the model.
---
## Model details

Shining Valiant 2 is a chat model built on Llama 3.2 3b, finetuned on our data for friendship, insight, knowledge, and enthusiasm.

- Finetuned on meta-llama/Llama-3.2-3B-Instruct for best available general performance
- Trained on a variety of high-quality data, focused on science, engineering, technical knowledge, and structured reasoning
- Also available for Llama 3.1 70b and Llama 3.1 8b!
## Version
This is the 2024-09-27 release of Shining Valiant 2 for Llama 3.2 3b.
We've improved and open-sourced our new baseline science-instruct dataset. This release features improvements in physics, chemistry, biology, and computer science.
Future upgrades will continue to expand Shining Valiant's technical knowledge base.
Help us and recommend Shining Valiant 2 to your friends!
## Prompting Guide
Shining Valiant 2 uses the Llama 3.2 Instruct prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Describe the use of chiral auxiliaries in organic synthesis."},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
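If you need the raw prompt string rather than the pipeline abstraction (for example, to drive a llama.cpp server directly), the tokenizer can render the Llama 3.2 Instruct chat template for you. A minimal sketch, assuming the `transformers` tokenizer for this repo is available:

```python
from transformers import AutoTokenizer

# Render the chat template to inspect the exact prompt string that the
# pipeline example above builds from the message list.
tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.2-3B-ShiningValiant2")

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Describe the use of chiral auxiliaries in organic synthesis."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header so the model completes the reply
)
print(prompt)
```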
## The Model
Shining Valiant 2 is built on top of Llama 3.2 3b Instruct.
The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia, complex reasoning using sequelbox/Spurline, and general chat capability using sequelbox/Supernova.
We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.
Shining Valiant 2 is created by Valiant Labs.
Check out our HuggingFace page for our open-source Build Tools models, including the newest version of code-specialist Enigma!
Follow us on X for updates on our models!
We care about open source. For everyone to use.
We encourage others to finetune further from our models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF --hf-file llama3.2-3b-shiningvaliant2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
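For an interactive chat session rather than a one-shot completion, recent llama.cpp builds also offer a conversation mode that applies the model's chat template automatically. A sketch (the `-cnv` flag and its behavior depend on your llama.cpp version):

```bash
llama-cli --hf-repo Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF --hf-file llama3.2-3b-shiningvaliant2-q4_k_m.gguf -cnv
```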
### Server:
```bash
llama-server --hf-repo Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF --hf-file llama3.2-3b-shiningvaliant2-q4_k_m.gguf -c 2048
```
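Once running, the server exposes an OpenAI-compatible chat endpoint (listening on port 8080 by default in current llama.cpp builds; treat the port and route as assumptions for your version). A minimal sketch:

```bash
# Assumes llama-server is listening on its default port 8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are an AI assistant."},
      {"role": "user", "content": "Describe the use of chiral auxiliaries in organic synthesis."}
    ],
    "max_tokens": 512
  }'
```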
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF --hf-file llama3.2-3b-shiningvaliant2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Llama3.2-3B-ShiningValiant2-Q4_K_M-GGUF --hf-file llama3.2-3b-shiningvaliant2-q4_k_m.gguf -c 2048
```