---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- allenai/ultrafeedback_binarized_cleaned
base_model: one-man-army/una-neural-chat-v3-3-P1-OMA
model-index:
- name: una-neural-chat-v3-3-P2
  results: []
---
# una-neural-chat-v3-3-phase2

OMA (OneManArmy) proudly presents una-neural-chat-v3-3, PHASE 2. Powered by UNA (Uniform Neural Alignment), trained with the zephyr trainer on allenai/ultrafeedback_binarized_cleaned, and just that. It outperforms its base model without adding any data: just the UNA algorithm on the Transformers library.
## UNA Settings

- MLP: 0.05
- ATT: 0.03
- LNOR: 0.02
## Framework versions

- Transformers 4.35.0-UNA
- PyTorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 70.72 |
| AI2 Reasoning Challenge (25-Shot) | 67.32 |
| HellaSwag (10-Shot) | 86.33 |
| MMLU (5-Shot) | 63.14 |
| TruthfulQA (0-shot) | 65.49 |
| Winogrande (5-shot) | 79.79 |
| GSM8k (5-shot) | 62.24 |
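The reported Avg. is the unweighted arithmetic mean of the six benchmark scores above (the convention used by the Open LLM Leaderboard); a quick sanity check:

```python
# Check that the reported Avg. (70.72) is the unweighted mean
# of the six Open LLM Leaderboard benchmark scores.
scores = {
    "ARC (25-shot)": 67.32,
    "HellaSwag (10-shot)": 86.33,
    "MMLU (5-shot)": 63.14,
    "TruthfulQA (0-shot)": 65.49,
    "Winogrande (5-shot)": 79.79,
    "GSM8k (5-shot)": 62.24,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 70.72
```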