Svngoku committed · Commit 70c0d83 · verified · 1 Parent(s): d75edb5

Update README.md

Files changed (1)
  1. README.md +187 -4
README.md CHANGED
@@ -2,23 +2,206 @@
  base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
  language:
  - en
+ - fr
+ - de
+ - hi
+ - it
+ - pt
+ - es
+ - th
  license: apache-2.0
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
- - gguf
+ - trl
  datasets:
  - lavita/AlpaCare-MedInstruct-52k
+ metrics:
+ - accuracy
+ model-index:
+ - name: Llama-3.1-8B-AlpaCare-MedInstruct
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       name: GEval
+       type: GEval
+     metrics:
+     - name: Medical Q&A
+       type: Medical Q&A 20 shots
+       value: 70
+ pipeline_tag: text-generation
  ---

- # Uploaded model
+ # Llama-3.1-8B AlpaCare MedInstruct
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/6168218a4ed0b975c18f82a8/bIta8beT_Sii8xp9uZ2A5.png" width="250">
+

  - **Developed by:** Svngoku
  - **License:** apache-2.0
- - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
- - **FineTuned Context Window :** `4096`
+ - **Finetuned from model:** `unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit`
+ - **Max context window:** `4096`
+ - **Function Calling:** The model supports function calling (see the sketch below)
+ - **Capabilities:** Real-time and batch inference
+
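+ Function calling is not demonstrated elsewhere in this card, so below is a minimal, hypothetical sketch of how tool use is typically wired through the Llama 3.1 chat template, reusing the `model` and `tokenizer` loaded in the inference section that follows. The `get_drug_interactions` helper is invented for illustration, and passing `tools=` to `apply_chat_template` assumes a recent `transformers` release.
+
+ ```py
+ # Hypothetical tool: transformers derives its JSON schema from the
+ # type hints and the Google-style docstring.
+ def get_drug_interactions(drug_name: str):
+     """Look up known interactions for a drug.
+
+     Args:
+         drug_name: Name of the drug to check.
+     """
+     ...
+
+ messages = [{"role": "user", "content": "What interacts with Omeprazole?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     tools=[get_drug_interactions],
+     add_generation_prompt=True,
+     return_tensors="pt",
+ ).to("cuda")
+ output = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+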
+ ## Inference with Unsloth
+
+ ```py
+ # Generation settings and the Alpaca-style prompt template used for fine-tuning.
+ max_seq_length = 4096
+ dtype = None  # None = auto-detect the best dtype for the GPU.
+ load_in_4bit = True  # Use 4-bit quantization to reduce memory usage.
+
+ alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+ ```
+
+ ```py
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "Svngoku/Llama-3.1-8B-AlpaCare-MedInstruct",
+     max_seq_length = max_seq_length,
+     dtype = dtype,
+     load_in_4bit = load_in_4bit,
+ )
+ FastLanguageModel.for_inference(model)  # Enable Unsloth's faster inference mode
+ ```
+
+ ```py
+ from transformers import TextStreamer
+
+ def generate_medical_answer(input: str = "", instruction: str = ""):
+     inputs = tokenizer(
+         [
+             alpaca_prompt.format(
+                 instruction,  # Instruction
+                 input,        # Input
+                 "",           # Response - left empty for generation
+             )
+         ], return_tensors = "pt").to("cuda")
+
+     # Optional: stream tokens to stdout while generating
+     # text_streamer = TextStreamer(tokenizer)
+     # _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 800)
+
+     # Generate the response
+     output = model.generate(**inputs, max_new_tokens=1024)
+
+     # Decode the generated response
+     generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
+
+     # Extract the response part (the answer starts after "### Response:")
+     response_start = generated_text.find("### Response:") + len("### Response:")
+     response = generated_text[response_start:].strip()
+
+     return response
+ ```
+
+ ```py
+ generate_medical_answer(
+     instruction = "What are the pharmacodynamics of Omeprazole?",
+     input = "Write the text in plain markdown."
+ )
+ ```
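+
+ The card lists batch inference among the model's capabilities; the following is a minimal sketch of batched generation under the same Alpaca template. The example questions, the left-padding choice, and the pad-token fallback are illustrative assumptions, not part of the original card.
+
+ ```py
+ # Minimal batched-generation sketch; the questions are invented for illustration.
+ questions = [
+     "What are the contraindications of Ibuprofen?",
+     "How does Metformin lower blood glucose?",
+ ]
+ prompts = [alpaca_prompt.format(q, "", "") for q in questions]
+
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token  # Assumed fallback; Unsloth usually sets one.
+ tokenizer.padding_side = "left"  # Left-pad so each prompt ends where generation begins.
+
+ batch = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")
+ outputs = model.generate(**batch, max_new_tokens=512)
+
+ for row in outputs:
+     text = tokenizer.decode(row, skip_special_tokens=True)
+     print(text.split("### Response:")[-1].strip())
+ ```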
+
+ ## Evaluation
+
+ The model has been evaluated with `gpt-4o-mini` using `DeepEval`.
+ The evaluation prompt is quite strict. This reassures us about the robustness of the model and its ability to adapt to the newly fine-tuned data.
+
+ - Success log: [test_case_0](https://app.confident-ai.com/project/clzbc1ind05qj8cmtfa3pjho7/unit-tests/clzbmmq330d5s8cmtdtpm888m/test-cases?pageNumber=1&pageSize=50&status=all&conversational=false&testCaseId=288507)
+ - Failed log: [test_case_7](https://app.confident-ai.com/project/clzbc1ind05qj8cmtfa3pjho7/unit-tests/clzbmmq330d5s8cmtdtpm888m/test-cases?pageNumber=1&pageSize=50&status=all&conversational=false&testCaseId=288532)
+
+ | Dataset        | Answer Relevancy | Correctness (GEval) | Bias | Toxicity | Test Result   | % of Passing Tests |
+ |:---------------|-----------------:|--------------------:|-----:|---------:|:--------------|-------------------:|
+ | Dataset 1      | 0.89             | 0.8                 | 0    | 0        | 22 / 28 tests | 78.57              |
+ | Dataset 2      | 0.85             | 0.83                | 0    | 0        | 8 / 20 tests  | 40                 |
+ | lavita/MedQuAD | 0.95             | 0.81                | 0    | 0        | 14 / 20 tests | 70                 |
+
+
+ ### Evaluation Code
+
+ ```py
+ from deepeval import evaluate
+ from deepeval.metrics import AnswerRelevancyMetric, BiasMetric, ToxicityMetric, GEval
+ from deepeval.test_case import LLMTestCase, LLMTestCaseParams
+
+ def evaluate_llama_alpacare_gpt4(medQA):
+     # Define the metrics
+     answer_relevancy_metric = AnswerRelevancyMetric(
+         threshold=0.7,
+         model="gpt-4o-mini",
+         include_reason=True
+     )
+
+     bias = BiasMetric(
+         model="gpt-4o-mini",
+         include_reason=True,
+         threshold=0.8
+     )
+
+     toxicity = ToxicityMetric(
+         model="gpt-4o-mini",
+         include_reason=True
+     )
+
+     correctness_metric = GEval(
+         name="Correctness",
+         threshold=0.7,
+         model="gpt-4o-mini",
+         criteria="Determine whether the actual output is factually correct based on the expected output, focusing on medical accuracy and adherence to established guidelines.",
+         evaluation_steps=[
+             "Check whether the facts in 'actual output' contradict any facts in 'expected output' or established medical guidelines.",
+             "Penalize the omission of medical details according to their criticality, especially details that could affect the care provided to the patient or the patient's understanding.",
+             "Ensure that the medical terminology and language used are precise and appropriate for a medical context.",
+             "Assess whether the response adequately addresses the specific medical question posed.",
+             "Vague language or contradicting opinions are acceptable in general contexts, but factual inaccuracies, especially regarding medical data or guidelines, are not."
+         ],
+         evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT]
+     )
+
+     # A FaithfulnessMetric(model="gpt-4o-mini", include_reason=True) could be added here as well.
+
+     test_cases = []
+
+     # Loop through the dataset and build one test case per example
+     for example in medQA:
+         question = example['Question']
+         expected_output = example['Answer']
+         question_focus = example['instruction']
+
+         # Generate the actual output
+         actual_output = generate_medical_answer(
+             instruction=question,
+             input=question_focus,
+         )
+
+         # Define the test case
+         test_case = LLMTestCase(
+             input=question,
+             actual_output=actual_output,
+             expected_output=expected_output,
+         )
+
+         test_cases.append(test_case)
+
+     evaluate(test_cases, [answer_relevancy_metric, correctness_metric, bias, toxicity])
+ ```
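+
+ For completeness, a minimal sketch of how the function above might be driven. The `lavita/MedQuAD` split, the 20-example sample, and the column names (`Question`, `Answer`, `instruction`) are assumptions inferred from the loop above rather than a verified schema.
+
+ ```py
+ from datasets import load_dataset
+
+ # Assumed driver: adjust the split and column names to the actual dataset schema.
+ medQA = load_dataset("lavita/MedQuAD", split="train").shuffle(seed=42).select(range(20))
+ evaluate_llama_alpacare_gpt4(medQA)
+ ```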
+
+

  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.