mkurman committed
Commit 2664ba0 · verified · 1 Parent(s): 8d8dd87

Update README.md

Files changed (1)
  1. README.md +73 -49
README.md CHANGED
@@ -26,65 +26,35 @@ library_name: transformers
 Original model technical report

 [Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)

- ## Model Summary
-
- | | |
- |-------------------------|-------------------------------------------------------------------------------|
- | **Developers** | Microsoft Research |
- | **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures |
- | **Architecture** | 14B parameters, dense decoder-only Transformer model |
- | **Inputs** | Text, best suited for prompts in the chat format |
- | **Context length** | 16K tokens |
- | **GPUs** | 1920 H100-80G |
- | **Training time** | 21 days |
- | **Training data** | 9.8T tokens |
- | **Outputs** | Generated text in response to input |
- | **Dates** | October 2024 – November 2024 |
- | **Status** | Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data |
- | **Release date** | December 12, 2024 |
- | **License** | MIT |
 
- ## Intended Use
-
- | | |
- |-------------------------------|-------------------------------------------------------------------------|
- | **Primary Use Cases** | Our model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:<br><br>1. Memory/compute constrained environments.<br>2. Latency bound scenarios.<br>3. Reasoning and logic. |
- | **Out-of-Scope Use Cases** | Our model is not specifically designed or evaluated for all downstream purposes, thus:<br><br>1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.<br>2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.<br>3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. |
-
- ## Data Overview
-
- ### Training Datasets
-
- Our training data is an extension of the data used for Phi-3 and includes a wide variety of sources:
-
- 1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code.
-
- 2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common-sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.).
-
- 3. Acquired academic books and Q&A datasets.
-
- 4. High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.
-
- Multilingual data constitutes about 8% of our overall data. We focus on the quality of data that could potentially improve the model's reasoning ability, and we filter the publicly available documents to contain the correct level of knowledge.
-
- #### Benchmark datasets
-
- We evaluated `phi-4` using [OpenAI’s SimpleEval](https://github.com/openai/simple-evals) and our own internal benchmarks to understand the model’s capabilities, more specifically:
-
- * **MMLU:** Popular aggregated dataset for multitask language understanding.
- * **MATH:** Challenging competition math problems.
- * **GPQA:** Complex, graduate-level science questions.
- * **DROP:** Complex comprehension and reasoning.
- * **MGSM:** Multi-lingual grade-school math.
- * **HumanEval:** Functional code generation.
- * **SimpleQA:** Factual responses.
+ # Phi-4-MedIT-10B-o1 🚀
+
+ Phi-4-MedIT-10B-o1 is a specialized large language model pruned for efficiency and performance.
+
+ ### Key Features:
+
+ - **Pruned Model:**
+   - Derived from the **phi-4 14B model**, reduced to 10B parameters using the **MKA2G pruning technique**.
+   - For details on the pruning methodology, see [MedITSolutionsKurman/llama-pruning](https://github.com/mkurman/llama-pruning) on GitHub; a purely illustrative pruning sketch follows this list.
+ - **Reasoning SFT:**
+   - Fine-tuned with a **single epoch of reasoning-specific supervised fine-tuning (SFT)** for optimized reasoning tasks.
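+
+ As a hypothetical illustration of what depth pruning looks like in code (this is **not** the MKA2G method; the layer indices below are arbitrary and no importance scoring is shown), one could drop decoder layers from the base model with plain Transformers:
+
+ ```python
+ # Hypothetical illustration only - generic depth pruning, not MKA2G.
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("microsoft/phi-4", torch_dtype=torch.bfloat16)
+
+ # Keep all decoder layers except an (arbitrarily chosen) middle block.
+ keep = [i for i in range(model.config.num_hidden_layers) if not (20 <= i < 30)]
+ model.model.layers = torch.nn.ModuleList([model.model.layers[i] for i in keep])
+ model.config.num_hidden_layers = len(keep)
+
+ # A real pipeline would score layers first and heal the model with fine-tuning afterwards.
+ model.save_pretrained("phi-4-depth-pruned")
+ ```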
 
+ ### Recommended Usage:
+
+ - **ReasonFlow Integration:**
+   - Achieve advanced reasoning by leveraging the [ReasonFlow framework](https://github.com/mkurman/ReasonFlow), which uses Multi-Path Generation with Noise; see the `With ReasonFlow` example below.
+ - **Monte-Carlo Tree Search (MCTS):**
+   - Use with [MCTS-PyTorch](https://github.com/mkurman/mcts-pytorch) to explore tree-based decision-making and inference; see the `With Monte-Carlo Tree Search` example below.
+
+ ### Applications:
+
+ This model is ideal for:
+ - Efficient, high-quality reasoning tasks.
+ - Scenarios requiring strong performance from a pruned, resource-efficient model.
+ - Integration with advanced frameworks for reasoning or decision-making.
+
+ Explore the possibilities with Phi-4-MedIT-10B-o1 and push the boundaries of efficient AI reasoning!

 ## Safety

@@ -150,6 +120,60 @@ outputs = pipeline(messages, max_new_tokens=128)
 print(outputs[0]["generated_text"][-1])
 ```

+ ### With `ReasonFlow`
+ Check it here: [ReasonFlow framework](https://github.com/mkurman/ReasonFlow)
+
+ ```python
+ ...  # load `model` and `tokenizer`, import ReasonFlow, and prepare `chat_like_texts` (a list of chat-formatted prompts)
+
+ # Multi-path generation settings; see the ReasonFlow repository for parameter details.
+ config = ReasonFlowConfig(
+     num_of_thinkers=NUM_OF_THINKERS,
+     num_of_thoughts=NUM_OF_THOUGHTS,
+     topk_thinkers=TOPK_THINKERS,
+ )
+
+ index = 1  # which prompt to run
+
+ reason_flow = ReasonFlow(config, model, tokenizer)
+ output = reason_flow.generate_thoughts(
+     chat_like_texts[index],
+     max_new_tokens=4096,
+     stream=True,
+     device='cuda',
+     torch_dtype='bfloat16',
+     do_sample=False,  # greedy decoding, so the sampling parameters below are left as None
+     temperature=None,
+     top_p=None,
+     top_k=None,
+     repetition_penalty=None,
+ )
+ ```
+
+ ### With `Monte-Carlo Tree Search`
+ Check it here: [MCTS-PyTorch](https://github.com/mkurman/mcts-pytorch)
+
+ ```python
+ ...  # import torch, load `model` and `tokenizer`, tokenize the prompt into `input_ids`, and set the MAX_* / TEMPERATURE constants
+
+ model.to("cuda")
+ input_tokens = input_ids['input_ids'].to('cuda')
+
+ model.eval()
+
+ print(tokenizer.decode(input_tokens[0]))
+
+ # Generate in chunks: each MCTS search proposes up to MAX_NEW_TOKENS tokens,
+ # which are appended to the running context until EOS or MAX_TOTAL_TOKENS.
+ while len(input_tokens[0]) < MAX_TOTAL_TOKENS:
+     mcts = MCTS(model, tokenizer, max_depth=MAX_DEPTH, num_simulations=MAX_SIMULATIONS,
+                 temperature=TEMPERATURE, max_new_tokens=MAX_NEW_TOKENS,
+                 stop_tokens=model.config.eos_token_id)
+
+     # mcts.search returns prompt + continuation; keep only the newly generated tokens.
+     new_tokens = mcts.search(input_tokens)
+     new_tokens = new_tokens[..., input_tokens.shape[-1]:]
+
+     input_tokens = torch.cat([input_tokens, new_tokens], dim=-1)
+     print(tokenizer.decode(new_tokens[0]), end='')
+
+     if has_eos(new_tokens, eos_token=model.config.eos_token_id):
+         break
+ ```
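+
+ The loop above generates in chunks: each `MCTS` search proposes a continuation, the new tokens are appended to the running context, and generation stops at an end-of-sequence token or at `MAX_TOTAL_TOKENS`. The `has_eos` helper is not defined in the snippet; a minimal sketch of what it might look like (an assumption, not part of MCTS-PyTorch):
+
+ ```python
+ import torch
+
+ def has_eos(tokens: torch.Tensor, eos_token) -> bool:
+     # Hypothetical helper: eos_token may be a single id or a list of ids.
+     eos_ids = eos_token if isinstance(eos_token, (list, tuple)) else [eos_token]
+     return any(bool((tokens == eos_id).any()) for eos_id in eos_ids)
+ ```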
 ## Responsible AI Considerations

 Like other language models, `phi-4` can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: