---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- chat
- mistral
- roleplay
- creative-writing
- mlx
base_model: Luni/StarDust-12b-v2
pipeline_tag: text-generation
model-index:
- name: StarDust-12b-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.29
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.95
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.97
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.82
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.26
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Luni/StarDust-12b-v2
      name: Open LLM Leaderboard
---
110
+
111
+ # mlx-community/Luni_StarDust-12b-v2
112
+
113
+ The Model [mlx-community/Luni_StarDust-12b-v2](https://huggingface.co/mlx-community/Luni_StarDust-12b-v2) was converted to MLX format from [Luni/StarDust-12b-v2](https://huggingface.co/Luni/StarDust-12b-v2) using mlx-lm version **0.19.0**.
114
+
115
+ ## Use with mlx
116
+
117
+ ```bash
118
+ pip install mlx-lm
119
+ ```

```python
from mlx_lm import load, generate

# Download (on first use) and load the model weights and tokenizer.
model, tokenizer = load("mlx-community/Luni_StarDust-12b-v2")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
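
For a quick one-off generation without writing Python, mlx-lm also ships a command-line entry point. A minimal sketch, assuming the `mlx_lm.generate` module and its `--model`/`--prompt`/`--max-tokens` flags as found in recent mlx-lm releases (flag names may differ across versions), running on Apple silicon:

```shell
# Generates a completion from the shell; downloads the model on first use.
python -m mlx_lm.generate \
  --model mlx-community/Luni_StarDust-12b-v2 \
  --prompt "hello" \
  --max-tokens 256
```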