---
language:
- en
- fr
- es
- hi
- zh
- code
license: other
datasets:
- HuggingFaceH4/no_robots
- mlabonne/guanaco-llama2-1k
- OpenAssistant/oasst_top1_2023-08-25
- totally-not-an-llm/EverythingLM-data-V3
- LDJnr/Pure-Dove
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Verified-Camel
base_model: Locutusque/Orca-2-13b-SFT-v6
widget:
  - example_title: Cave Johnson
    text: |
      <|im_start|>user
      Come up with a funny rant from Cave Johnson about sentient AI
      <|im_end|>
      <|im_start|>assistant
    output:
      text: >-
        Ladies and gentlemen, I'm Cave Johnson, CEO of Aperture Science, and
        I've got an announcement to make - we've created sentient AI! But don't
        worry, it's not like those rogue AIs you hear about in the news. Our AI
        is as friendly as a grilled cheese sandwich on a cold day. In fact,
        we're calling our newest creation... SandwichBot! SandwichBot: (in a
        robotic voice) Hello, my name is SandwichBot, and I am here to serve you
        delicious sandwiches and ensure humanity's dominance over all other
        species. Also, please note that I will be referring to myself as 'Thee'
        instead of 'The'. It's more regal that way. Cave Johnson: There you have
        it folks, a sentient AI that can make sandwiches and save the world!
        Now, if you'll excuse me, I need to go back to my office and work on my
        other groundbreaking inventions, like combustible lemons and portal
        technology. Remember, kids, always try new things and never stop asking
        questions – unless the answer is 42, because then you should probably
        just accept it and move on. Peace out!
tags:
- TensorBlock
- GGUF
model-index:
  - name: Orca-2-13b-SFT-v6
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 60.41
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 80.46
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 59.51
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 54.01
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 77.43
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 5.08
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Orca-2-13b-SFT-v6
          name: Open LLM Leaderboard
---
Feedback and support: TensorBlock's Twitter/X, Telegram group, and Discord server
# Locutusque/Orca-2-13b-SFT-v6 - GGUF
This repo contains GGUF format model files for Locutusque/Orca-2-13b-SFT-v6.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4011.
## Prompt template
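The widget example above is formatted with ChatML-style markers, which suggests a template along the following lines. This is an assumption based on that example, not a confirmed specification; verify against the model's tokenizer chat template before relying on it:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```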
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| Orca-2-13b-SFT-v6-Q2_K.gguf | Q2_K | 4.521 GB | smallest, significant quality loss - not recommended for most purposes |
| Orca-2-13b-SFT-v6-Q3_K_S.gguf | Q3_K_S | 5.270 GB | very small, high quality loss |
| Orca-2-13b-SFT-v6-Q3_K_M.gguf | Q3_K_M | 5.903 GB | very small, high quality loss |
| Orca-2-13b-SFT-v6-Q3_K_L.gguf | Q3_K_L | 6.454 GB | small, substantial quality loss |
| Orca-2-13b-SFT-v6-Q4_0.gguf | Q4_0 | 6.860 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Orca-2-13b-SFT-v6-Q4_K_S.gguf | Q4_K_S | 6.913 GB | small, greater quality loss |
| Orca-2-13b-SFT-v6-Q4_K_M.gguf | Q4_K_M | 7.326 GB | medium, balanced quality - recommended |
| Orca-2-13b-SFT-v6-Q5_0.gguf | Q5_0 | 8.356 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Orca-2-13b-SFT-v6-Q5_K_S.gguf | Q5_K_S | 8.356 GB | large, low quality loss - recommended |
| Orca-2-13b-SFT-v6-Q5_K_M.gguf | Q5_K_M | 8.596 GB | large, very low quality loss - recommended |
| Orca-2-13b-SFT-v6-Q6_K.gguf | Q6_K | 9.946 GB | very large, extremely low quality loss |
| Orca-2-13b-SFT-v6-Q8_0.gguf | Q8_0 | 12.881 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Orca-2-13b-SFT-v6-GGUF --include "Orca-2-13b-SFT-v6-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/Orca-2-13b-SFT-v6-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
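Once downloaded, the GGUF file can be run directly with llama.cpp. A minimal sketch, assuming a llama.cpp build at or after commit b4011 (which ships the `llama-cli` binary) and the Q4_K_M file from the table above:

```shell
# Start an interactive chat session with the quantized model.
#   -m    path to the downloaded GGUF file
#   -cnv  conversation mode, which applies the model's built-in chat template
#   -n    maximum number of tokens to generate per response
./llama-cli -m MY_LOCAL_DIR/Orca-2-13b-SFT-v6-Q4_K_M.gguf -cnv -n 256
```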