---
language:
- en
license: apache-2.0
model-index:
- name: Tinybra_13B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 55.72
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 54.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 49.14
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 18.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
      name: Open LLM Leaderboard
---
<div align="center">
<b style="font-size: 40px;">Tinybra_13B_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in several sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.

Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tinybră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.

While Tinybră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available in the following sizes and flavours (a minimal loading sketch follows the list):
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF) | [iMatrix_GGUF-Many_Quants](https://huggingface.co/mradermacher/Tinybra_13B-i1-GGUF) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_GGUF) | [iMatrix_GGUF-Many_Quants](https://huggingface.co/mradermacher/Tenebra_30B_Alpha01_FP16-i1-GGUF) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
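
The FP16 checkpoints can be loaded with 🤗 Transformers, as in the minimal sketch below. The repo ID comes from the 13B link above; the dtype/device settings and the sample prompt are illustrative assumptions, and the GGUF and EXL2 files instead target llama.cpp- and ExLlama-style loaders.

```python
# Minimal sketch: load the 13B FP16 weights with Hugging Face Transformers.
# Assumptions: torch and accelerate are installed, and there is enough
# VRAM/RAM for a 13B model (~26 GB in FP16, less if you quantize).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Tinybra_13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native (FP16) precision
    device_map="auto",    # spread layers across available GPUs/CPU (needs accelerate)
)

prompt = "Tell me something philosophical about shadows."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```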
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations go toward research resources and compute; every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations go toward research resources and compute; every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored; use responsibly.*
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality IF, and that's a big if, you can get it to work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo 2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tinybra_13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.36|
|AI2 Reasoning Challenge (25-Shot)|55.72|
|HellaSwag (10-Shot) |80.99|
|MMLU (5-Shot) |54.37|
|TruthfulQA (0-shot) |49.14|
|Winogrande (5-shot) |73.80|
|GSM8k (5-shot) |18.12|