
Aegolius Acadicus 30B

An MoE 4x7B model built using the Mixtral branch of mergekit. It is NOT a merge in the usual sense: it is tagged as an MoE and is an MoE.


I like to call this model "The little professor". It is an MoE combination of LoRA-merged models built on Llama 2 and Mistral. I am using it as a test case before moving to larger models and to get my gate discrimination set correctly. This model is best suited for knowledge-related use cases; I did not give it a specific workload target as I did with some of the other models in the "Owl Series".

This model is merged from the following sources:

  • Westlake-7B
  • WestLake-7B-v2
  • openchat-nectar-0.5
  • WestSeverus-7B-DPO-v2
  • WestSeverus-7B-DPO

Unless those source models are "contaminated", this one is not. This is a proof-of-concept version of the series; in other entries I am fine-tuning my own models and using mergekit's MoE support to combine them into MoE models that I can run on lower-tier hardware with better results.

The goal here is to create specialized models that can collaborate and run as one model.
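To make the process concrete, here is a minimal sketch of what a mergekit MoE build looks like. The expert repositories, positive prompts, and gate mode below are illustrative assumptions, not the exact configuration used for this model.

import subprocess
import textwrap

# Illustrative mergekit MoE config. The expert repos, positive prompts, and
# gate_mode are assumptions for demonstration, not the settings actually used.
config = textwrap.dedent("""\
    base_model: senseable/WestLake-7B-v2
    gate_mode: hidden            # route tokens via hidden-state similarity to the positive prompts
    dtype: float16
    experts:
      - source_model: senseable/WestLake-7B-v2
        positive_prompts:
          - "answer general knowledge questions"
      - source_model: FelixChao/WestSeverus-7B-DPO-v2
        positive_prompts:
          - "work through math and logic step by step"
""")

with open("moe-config.yml", "w") as f:
    f.write(config)

# mergekit-moe builds the combined mixture-of-experts model from the config
subprocess.run(["mergekit-moe", "moe-config.yml", "./my-moe-model"], check=True)

The gate mode and positive prompts are what steer each token toward a particular expert; tuning those is the "gate discrimination" mentioned above.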

Prompting

Prompt template (Alpaca style)

### Instruction:

<prompt> (without the <>)

### Response:
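A small helper like the one below can wrap a plain instruction in this template. This is a minimal sketch; the function name is mine and is not part of any released tooling.

def build_alpaca_prompt(instruction: str) -> str:
    # Wrap a bare instruction in the Alpaca-style template shown above
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Explain why owls can rotate their heads so far."))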

Sample Code

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load the model weights and spread them across available devices
model = AutoModelForCausalLM.from_pretrained("ibivibiv/aegolius-acadicus-30b", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/aegolius-acadicus-30b")

# Build an Alpaca-style prompt and tokenize it
inputs = tokenizer("### Instruction: Who would win in an arm wrestling match between Abraham Lincoln and Chuck Norris?\n### Response:\n", return_tensors="pt", return_attention_mask=False)

# Generate up to 200 tokens and decode the completion
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
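Since one goal of this series is running MoE models on lower-tier hardware, it may help to load the model quantized. This is a minimal sketch, assuming the bitsandbytes package is installed; it is not part of the original card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization keeps VRAM usage low at some cost in output quality
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ibivibiv/aegolius-acadicus-30b",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/aegolius-acadicus-30b")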

Model Details

  • Trained by: ibivibiv
  • Library: HuggingFace Transformers
  • Model type: aegolius-acadicus-30b is an auto-regressive MoE language model built from Llama 2 and Mistral transformer-architecture models.
  • Language(s): English
  • Purpose: An attempt at an MoE model that covers multiple disciplines, using fine-tuned Llama 2 and Mistral models as base models.

Benchmark Scores

| Test Name | Accuracy |
|---|---|
| all | 0.6566791267920726 |
| arc:challenge | 0.7005119453924915 |
| hellaswag | 0.7103166699860586 |
| hendrycksTest-abstract_algebra | 0.34 |
| hendrycksTest-anatomy | 0.6666666666666666 |
| hendrycksTest-astronomy | 0.6907894736842105 |
| hendrycksTest-business_ethics | 0.65 |
| hendrycksTest-clinical_knowledge | 0.7132075471698113 |
| hendrycksTest-college_biology | 0.7708333333333334 |
| hendrycksTest-college_chemistry | 0.48 |
| hendrycksTest-college_computer_science | 0.53 |
| hendrycksTest-college_mathematics | 0.33 |
| hendrycksTest-college_medicine | 0.6705202312138728 |
| hendrycksTest-college_physics | 0.4019607843137255 |
| hendrycksTest-computer_security | 0.77 |
| hendrycksTest-conceptual_physics | 0.5787234042553191 |
| hendrycksTest-econometrics | 0.5 |
| hendrycksTest-electrical_engineering | 0.5517241379310345 |
| hendrycksTest-elementary_mathematics | 0.42592592592592593 |
| hendrycksTest-formal_logic | 0.48412698412698413 |
| hendrycksTest-global_facts | 0.37 |
| hendrycksTest-high_school_biology | 0.7806451612903226 |
| hendrycksTest-high_school_chemistry | 0.4975369458128079 |
| hendrycksTest-high_school_computer_science | 0.69 |
| hendrycksTest-high_school_european_history | 0.7757575757575758 |
| hendrycksTest-high_school_geography | 0.803030303030303 |
| hendrycksTest-high_school_government_and_politics | 0.8963730569948186 |
| hendrycksTest-high_school_macroeconomics | 0.6641025641025641 |
| hendrycksTest-high_school_mathematics | 0.36666666666666664 |
| hendrycksTest-high_school_microeconomics | 0.6890756302521008 |
| hendrycksTest-high_school_physics | 0.37748344370860926 |
| hendrycksTest-high_school_psychology | 0.8403669724770643 |
| hendrycksTest-high_school_statistics | 0.5 |
| hendrycksTest-high_school_us_history | 0.8480392156862745 |
| hendrycksTest-high_school_world_history | 0.8059071729957806 |
| hendrycksTest-human_aging | 0.6995515695067265 |
| hendrycksTest-human_sexuality | 0.7938931297709924 |
| hendrycksTest-international_law | 0.8099173553719008 |
| hendrycksTest-jurisprudence | 0.7870370370370371 |
| hendrycksTest-logical_fallacies | 0.7484662576687117 |
| hendrycksTest-machine_learning | 0.4375 |
| hendrycksTest-management | 0.7766990291262136 |
| hendrycksTest-marketing | 0.8888888888888888 |
| hendrycksTest-medical_genetics | 0.72 |
| hendrycksTest-miscellaneous | 0.8314176245210728 |
| hendrycksTest-moral_disputes | 0.7398843930635838 |
| hendrycksTest-moral_scenarios | 0.4324022346368715 |
| hendrycksTest-nutrition | 0.7189542483660131 |
| hendrycksTest-philosophy | 0.7041800643086816 |
| hendrycksTest-prehistory | 0.7469135802469136 |
| hendrycksTest-professional_accounting | 0.5035460992907801 |
| hendrycksTest-professional_law | 0.4758800521512386 |
| hendrycksTest-professional_medicine | 0.6727941176470589 |
| hendrycksTest-professional_psychology | 0.6666666666666666 |
| hendrycksTest-public_relations | 0.6727272727272727 |
| hendrycksTest-security_studies | 0.7183673469387755 |
| hendrycksTest-sociology | 0.8407960199004975 |
| hendrycksTest-us_foreign_policy | 0.85 |
| hendrycksTest-virology | 0.5542168674698795 |
| hendrycksTest-world_religions | 0.8421052631578947 |
| truthfulqa:mc | 0.6707176642401714 |
| winogrande | 0.8492501973164956 |
| gsm8k | 0.7050796057619408 |
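Scores in this style are produced with EleutherAI's lm-evaluation-harness (cited below). As a rough sketch of how a single task could be reproduced, assuming a recent harness release (v0.4+) and its simple_evaluate API:

import lm_eval

# Evaluate one leaderboard task; 25-shot matches the ARC setting used by the Open LLM Leaderboard
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ibivibiv/aegolius-acadicus-30b,dtype=auto",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])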

Citations

@misc{open-llm-leaderboard,
  author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
  title = {Open LLM Leaderboard},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
@software{eval-harness,
  author       = {Gao, Leo and
                  Tow, Jonathan and
                  Biderman, Stella and
                  Black, Sid and
                  DiPofi, Anthony and
                  Foster, Charles and
                  Golding, Laurence and
                  Hsu, Jeffrey and
                  McDonell, Kyle and
                  Muennighoff, Niklas and
                  Phang, Jason and
                  Reynolds, Laria and
                  Tang, Eric and
                  Thite, Anish and
                  Wang, Ben and
                  Wang, Kevin and
                  Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}
@misc{clark2018think,
      title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
      author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
      year={2018},
      eprint={1803.05457},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
@misc{zellers2019hellaswag,
      title={HellaSwag: Can a Machine Really Finish Your Sentence?},
      author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
      year={2019},
      eprint={1905.07830},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{hendrycks2021measuring,
      title={Measuring Massive Multitask Language Understanding},
      author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
      year={2021},
      eprint={2009.03300},
      archivePrefix={arXiv},
      primaryClass={cs.CY}
}
@misc{lin2022truthfulqa,
      title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
      author={Stephanie Lin and Jacob Hilton and Owain Evans},
      year={2022},
      eprint={2109.07958},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{DBLP:journals/corr/abs-1907-10641,
      title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
      author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
      year={2019},
      eprint={1907.10641},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{DBLP:journals/corr/abs-2110-14168,
      title={Training Verifiers to Solve Math Word Problems},
      author={Karl Cobbe and
                  Vineet Kosaraju and
                  Mohammad Bavarian and
                  Mark Chen and
                  Heewoo Jun and
                  Lukasz Kaiser and
                  Matthias Plappert and
                  Jerry Tworek and
                  Jacob Hilton and
                  Reiichiro Nakano and
                  Christopher Hesse and
                  John Schulman},
      year={2021},
      eprint={2110.14168},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 74.70 |
| AI2 Reasoning Challenge (25-Shot) | 72.61 |
| HellaSwag (10-Shot) | 88.01 |
| MMLU (5-Shot) | 65.07 |
| TruthfulQA (0-shot) | 67.07 |
| Winogrande (5-shot) | 84.93 |
| GSM8k (5-shot) | 70.51 |