---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

# Minotaur 13B

Minotaur 13B is an instruct fine-tuned model built on top of LLaMA-13B. Minotaur 13B is fine-tuned **only on completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org).

# Prompts
Chat-style prompts only, using the `USER:` and `ASSISTANT:` tags.

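The card does not pin down the exact turn separators, so the whitespace below is an assumption; a minimal sketch of assembling a prompt in this chat style might look like:

```python
def build_prompt(turns):
    """Assemble a USER:/ASSISTANT: chat-style prompt.

    `turns` is a list of (role, text) pairs. The trailing ASSISTANT:
    tag is left open for the model to complete. The newline separator
    is an assumption, not confirmed by the model card.
    """
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("ASSISTANT:")  # leave the final turn open for generation
    return "\n".join(lines)

prompt = build_prompt([("USER", "What is the capital of France?")])
```

Multi-turn conversations would extend the list with alternating `("USER", ...)` and `("ASSISTANT", ...)` pairs before the open tag.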
# Training Datasets

The Minotaur 13B model is fine-tuned on the following datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
  - custom synthetic datasets around misconceptions, in-context QA, jokes, N-tasks problems, and context-insensitivity
  - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows instruct augmented for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented

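Several entries above are described as "instruct augmented", i.e. raw dataset records rewritten into instruction/response pairs before fine-tuning. The card does not show the actual transform; a hypothetical sketch for a gsm8k-style record (the `question`/`answer` field names come from the public dataset, but the instruction wording is purely illustrative) might be:

```python
def augment_gsm8k(record):
    """Rewrite a gsm8k-style {question, answer} record into an
    instruction/response pair suitable for instruct fine-tuning.

    The wrapper wording is an illustrative assumption, not the
    transform actually used to build Minotaur's training data.
    """
    return {
        "instruction": (
            "Solve the following math problem step by step.\n"
            + record["question"]
        ),
        "response": record["answer"],
    }

pair = augment_gsm8k({"question": "2 + 2 = ?", "answer": "4"})
```

Each source dataset would get its own wrapper along these lines (e.g. a tl;dr instruction for summarize_from_feedback), so that all data shares one instruction/response schema.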
# Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models to make them more accessible to all.

# Demo

An HF demo is available in Spaces at [https://huggingface.co/spaces/openaccess-ai-collective/minotaur-13b](https://huggingface.co/spaces/openaccess-ai-collective/minotaur-13b). This Space is powered by Runpod Serverless, which helps us keep our compute costs down.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/5zji06u6

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 1xA6000 48GB
- 1 epoch, taking approximately 10 hours

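The card does not include the training configuration itself; a heavily abbreviated sketch of what an Axolotl config for such a run might contain is below. Every value here is an assumption except the single-epoch count stated above; the real config would list all of the datasets from the Training Datasets section.

```yaml
# Illustrative only -- not the actual Minotaur 13B config.
base_model: huggyllama/llama-13b   # assumed base checkpoint name
datasets:
  - path: ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
    type: alpaca                   # dataset prompt format, assumed
num_epochs: 1                      # matches the build note above
```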
## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the LLaMA-13B base model; please refer to that model card's Limitations section for relevant information.

## Examples - results may vary based on temperature and other settings