abhiyanta committed (verified) · Commit 1c54ace · Parent(s): 24f2e50

Update README.md

Files changed (1): README.md (+36 −1)
README.md CHANGED
@@ -3,4 +3,39 @@ license: mit
  language:
  - en
  base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
- ---
+ ---
+
+ # LLaMA 3 8B - ChatDoctor Model
+
+ ## Model Description
+ This is a fine-tuned version of the **LLaMA 3 8B** model, trained on medical conversations to help healthcare professionals and users with medical queries. It is designed for natural language understanding and generation, with a focus on medical advice and diagnostics.
+
+ - **Base Model:** meta-llama/Meta-Llama-3.1-8B-Instruct
+ - **Fine-Tuned On:** Medical QA dataset
+ - **Model Type:** Causal Language Model (CLM)
+
+ ## Intended Use
+ This model is intended for generating conversational responses to medical questions, such as symptom analysis and preliminary diagnostic guidance, based on its fine-tuning data.
+
+ ### Use Cases:
+ - Medical chatbots.
+ - Healthcare consultation apps.
+ - Symptom analysis.
+
+ ### Limitations:
+ - **Not a replacement for professional medical advice**: The model is trained on limited datasets and should not be used as a standalone diagnostic tool.
+ - **Language Bias**: It may reflect biases present in its training data.
+
+ ## How to Use
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the fine-tuned model and tokenizer from the Hub
+ model = AutoModelForCausalLM.from_pretrained("abhiyanta/llama-chatdoctor")
+ tokenizer = AutoTokenizer.from_pretrained("abhiyanta/llama-chatdoctor")
+
+ # Generate text
+ inputs = tokenizer("What are the symptoms of diabetes?", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=50)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
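Since the base model is the Llama 3.1 Instruct variant, chat-formatted prompts will generally behave better than raw text. In practice you would call `tokenizer.apply_chat_template(messages, ...)`; the sketch below only illustrates the prompt layout that template produces, assuming the fine-tune kept the base model's chat template (not confirmed by this model card):

```python
# Sketch of the Llama 3.1 chat prompt layout.
# Assumption: the fine-tune inherits the base model's chat template;
# prefer tokenizer.apply_chat_template(messages, tokenize=False,
# add_generation_prompt=True) in real code.
def build_llama3_prompt(messages):
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are the symptoms of diabetes?"},
]
print(build_llama3_prompt(messages))
```

Feeding a prompt built this way (rather than the bare question shown above) keeps the model in its instruction-following regime.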