---
license: llama3.2
datasets:
- O1-OPEN/OpenO1-SFT
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- reasoning
- o1
---

# Model Card: mkurman/llama-3.2-MEDIT-3B-o1

This model is an **o1-like reasoning** variant fine-tuned from **MedIT Solutions Llama 3.2 3B Instruct** (itself a variant of **Meta Llama 3.2 3B Instruct**). It introduces dedicated tags (`<Thought>` and `<Output>`) to separate chain-of-thought reasoning from the final answer, with a focus on instruct-style reasoning tasks.
This model was fine-tuned for exact matching rather than for generating a diverse distribution, so I recommend running it with `do_sample=False` (or `temperature=0.0` in runtimes that expose temperature instead) for deterministic outputs.
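
For orientation, a completed generation has roughly the following shape (illustrative placeholder text; only the tag names come from this card):

```
<Thought>
... internal step-by-step reasoning ...
</Thought>
<Output>
... final user-facing answer ...
</Output>
```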

---

## Model Details

**Model name**: `mkurman/llama-3.2-MEDIT-3B-o1`  
**Type**: Small Language Model (SLM)  
**Base model**: [MedIT Solutions](https://huggingface.co/meditsolutions) Llama 3.2 3B Instruct (derived from Meta Llama 3.2 3B Instruct)  
**Architecture**: 3 billion parameters  
**License**: llama3.2

**Intended Use Cases**:
- General question answering
- Instruction-based generation
- Reasoning and chain-of-thought exploration

**Not Recommended For**:
- Sensitive, real-world medical diagnosis without expert verification
- Highly domain-specific or regulated fields outside the model’s training scope

---

## Usage

### Important Notes on Usage

1. **Stop strings**:
   Because the model uses `<Thought>` and `<Output>` tags to separate internal reasoning from the final answer, you must supply `</Output>` as a stop sequence (or multiple stop sequences, if your framework allows) to keep the model from generating indefinitely. See the sketch after this list.

2. **Preventing the `<|python_tag|>` bug**:
   Sometimes the model starts with `<|python_tag|>` instead of the intended `<Thought>`. As a workaround, add `"<Thought>\n\n"` to the **end of your generation prompt** (in your chat template) to ensure it starts correctly.

3. **Libraries/Tools**:
   - **Ollama** and **LM Studio**: via the GGUF file.
   - **Jupyter Notebook** (or similar): using the [Transformers library](https://github.com/huggingface/transformers).

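Putting notes 1 and 2 together, here is a minimal sketch with Transformers (the example question is arbitrary; `stop_strings` is only available in recent `transformers` releases and requires passing the tokenizer, so treat that part as version-dependent):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

messages = [{"role": "user", "content": "Why is the sky blue?"}]

# Note 2: append '<Thought>\n\n' so generation opens with the reasoning tag.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) + "<Thought>\n\n"
inputs = tokenizer(text, return_tensors="pt")

# Note 1: stop at '</Output>' so generation does not run on indefinitely.
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,              # deterministic decoding, as recommended above
    stop_strings=["</Output>"],   # version-dependent; see the lead-in
    tokenizer=tokenizer,          # required when stop_strings is used
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
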
---

### In Ollama or LM Studio

If you are loading the GGUF file, follow the instructions provided by Ollama or LM Studio. Typically this involves placing the model file in the appropriate directory and selecting it within the interface.

> **Example** (in the Ollama CLI):
> ```bash
> ollama run hf.co/mkurman/llama-3.2-MEDIT-3B-o1
> ```

You can then issue prompts. Make sure to set the stop sequence to `</Output>` (and possibly `</Thought>` as well, if your environment supports multiple stops).
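
If you prefer the stop sequence baked in, Ollama lets you set it in a Modelfile. A minimal sketch (the name `medit-o1` is arbitrary, and this assumes your Ollama version can pull GGUF weights directly from hf.co):

```
FROM hf.co/mkurman/llama-3.2-MEDIT-3B-o1

# Stop once the final answer closes (note 1 above).
PARAMETER stop "</Output>"

# Deterministic decoding, per the recommendation at the top of this card.
PARAMETER temperature 0
```

Then build and run it:

```bash
ollama create medit-o1 -f Modelfile
ollama run medit-o1
```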

---

### In a Jupyter Notebook or Python Script (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

# 2. Define and encode your prompt.
#    Append '<Thought>\n\n' so the model opens with the correct
#    reasoning tag (see the notes above).
prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]
text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n'
inputs = tokenizer(text, return_tensors='pt')

# 3. Generate the response.
#    do_sample=False selects greedy decoding, which is already
#    deterministic, so no temperature setting is needed. If your
#    transformers version supports stop sequences, uncomment the two
#    lines below; otherwise slice the output at '</Output>' manually
#    (see the note after this block).
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    # stop_strings=["</Output>"],
    # tokenizer=tokenizer,
)

# 4. Decode the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```

**Note**: If your generation method does not support stop sequences directly, you can manually truncate the output at `</Output>`, as in the sketch below.
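
A minimal post-processing sketch, assuming `decoded_output` from the example above:

```python
# Drop anything generated after the closing tag, then pull the
# user-facing answer out of the <Output> block.
raw = decoded_output.split('</Output>')[0]
answer = raw.split('<Output>')[-1].strip()
print(answer)
```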

---

## Example Prompt/Response

**Prompt**:
```
<Talk about the impact of regular exercise on cardiovascular health>
<Thought>
```
*(Remember to add `<Thought>\n\n` at the end if you see the `<|python_tag|>` bug.)*

**Model’s Reasoning** (`<Thought>` block):
```
Exercise improves heart function by ...
```

**Model’s Final Answer** (`<Output>` block):
```
Regular exercise has been shown to ...
</Output>
```

You would display the `<Output>` portion as the final user-facing answer.

---

## Limitations and Bias

- **Hallucination**: The model may generate plausible-sounding but incorrect or nonsensical answers.
- **Medical information**: Never rely on this model as a source of truth. **It is not a certified medical professional.** Always verify any medical content with qualified experts before acting on it.
- **Biases**: The model’s outputs may reflect biases present in the training data. Users should evaluate content for fairness and accuracy.

---

## License and Citation

Please refer to the base model’s **Llama 3.2 license from Meta** and any additional licenses from MedIT Solutions. If you use this model in your work, please cite:

```
@misc{mkurman2025llama3medit3bo1,
  title={{mkurman/llama-3.2-MEDIT-3B-o1}: A fine-tuned Llama 3.2 3B Instruct model for reasoning tasks},
  author={Kurman, Mariusz},
  year={2025},
  howpublished={\url{https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1}}
}
```

---

## Contact

For questions, comments, or issues related to `mkurman/llama-3.2-MEDIT-3B-o1`, please open an [issue on the model repository](https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1) or contact [mkurman](https://huggingface.co/mkurman).