umarmajeedofficial committed 15d37d6 (verified) · 1 parent: 60b8593

Create README.md

---
license: mit
datasets:
- meta-llama/Meta-Llama-3.1-405B-Instruct-evals
language:
- en
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
tags:
- llama
- conversational
- text-generation
- emergency-response
- environmental-issues
---
This model card provides an overview of the MyFriend model, which has been fine-tuned for use in emergency situations and environmental issues. The model was developed as part of a hackathon and is designed to assist in generating responses in these domains.

## Model Details

### Model Description

The MyFriend model is a fine-tuned version of TinyLlama, optimized for generating text in response to queries about emergency situations and environmental issues. It was trained on synthetic data generated with the Meta-Llama-3.1-405B-Instruct-Turbo model, and fine-tuning was carried out on Kaggle using 2x T4 GPUs.

- **Developed by:** Mixed Intelligence Team, led by Umar Majeed
- **Funded by:** Self-funded as part of a hackathon project
- **Shared by:** Umar Majeed
- **Model type:** Text generation, conversational AI
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0

### Model Sources

- **Repository:** MyFriend Model on Hugging Face
- **Demo:** [Link to Demo (if available)]

## Uses

### Direct Use

The MyFriend model is intended for direct use in applications requiring text generation related to emergency situations and environmental issues. It is suitable for chatbot implementations, emergency response systems, and educational tools focused on environmental awareness.

### Downstream Use

The model can be further fine-tuned or integrated into larger systems that require domain-specific knowledge or custom applications.

### Out-of-Scope Use

The model is not suitable for general-purpose text generation tasks unrelated to its fine-tuned domain. Misuse for generating harmful or misleading information is strongly discouraged.

## Bias, Risks, and Limitations

Although fine-tuned on domain-specific data, the MyFriend model may still exhibit biases present in the base TinyLlama model. Users should be aware of these potential biases, especially in sensitive contexts such as emergency response.

### Recommendations

- **Awareness:** Be mindful of the model's limitations and biases.
- **Testing:** Test the model thoroughly in the intended environment before deployment.

## How to Get Started with the Model

To get started with the MyFriend model, use the code snippet below:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="umarmajeedofficial/MyFriend",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": "You are an emergency response assistant with expertise in environmental issues.",
    },
    {"role": "user", "content": "What should I do during a heat wave?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Training Details

### Training Data

The model was fine-tuned on synthetic data generated with the Meta-Llama-3.1-405B-Instruct-Turbo model, designed to cover a wide range of scenarios related to emergency situations and environmental issues.

### Training Procedure

- **Preprocessing:** The data was preprocessed to ensure relevance and quality for the fine-tuning task.
- **Training hyperparameters:** The model was trained with a mixed-precision training regime on Kaggle with 2x T4 GPUs.

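The mixed-precision setup described above can be expressed as a Hugging Face `TrainingArguments` configuration. This is an illustrative fragment only: the hyperparameter values below are assumptions for the sketch, not the ones actually used to train MyFriend.

```python
# Illustrative config fragment; hyperparameter values are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./myfriend-finetune",   # hypothetical output path
    fp16=True,                          # mixed-precision training (supported by T4 GPUs)
    per_device_train_batch_size=4,      # per GPU; 2x T4 gives an effective batch of 8
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=2e-5,
    logging_steps=50,
)
```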
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Testing was conducted on a subset of the generated synthetic data, with evaluations focusing on the model's ability to provide accurate and contextually appropriate responses in emergency and environmental scenarios.

#### Metrics

- **Accuracy:** The model's ability to generate correct information.
- **Relevance:** The relevance of the generated text to the input query.
- **Bias analysis:** Evaluation of potential biases in the responses.

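The BLEU metric listed in the model metadata is normally computed with a library such as sacrebleu or Hugging Face `evaluate`. As a minimal plain-Python illustration of BLEU's core ingredient, the sketch below computes clipped (modified) unigram precision; the example strings are hypothetical, not taken from the actual test set.

```python
# Illustrative sketch, not the team's evaluation code: clipped unigram
# precision, the building block of BLEU.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens matched in the reference, with each
    token's match count clipped by its count in the reference (as in BLEU)."""
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    matched = sum(min(count, ref_counts[tok]) for tok, count in Counter(cand_tokens).items())
    return matched / len(cand_tokens)

score = unigram_precision(
    "stay indoors and drink plenty of water",
    "during a heat wave stay indoors and drink water",
)
print(round(score, 2))  # 5 of 7 candidate tokens match -> 0.71
```

In practice, full BLEU additionally combines higher-order n-gram precisions with a brevity penalty, which is why a dedicated library is preferred for reported numbers.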
### Results

The MyFriend model showed strong performance in generating accurate and relevant responses within its fine-tuned domain. Further details on evaluation metrics and results can be found in the repository.

## Environmental Impact

The environmental impact of training the MyFriend model was minimized by leveraging efficient hardware and cloud resources. The model was trained on Kaggle with 2x T4 GPUs, balancing performance and energy consumption.

- **Hardware type:** 2x T4 GPUs
- **Hours used:** Approximately 10 hours
- **Cloud provider:** Kaggle
- **Compute region:** [Region Information Needed]
- **Carbon emitted:** Estimated using the Machine Learning Impact calculator

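A back-of-the-envelope version of the estimate above can be done by hand, following the general approach of ML carbon-impact calculators (energy = GPU power x hours x datacenter overhead, then scaled by grid carbon intensity). The TDP, PUE, and carbon-intensity figures below are illustrative assumptions, not measured values for this training run.

```python
# Rough carbon estimate for the ~10 h, 2x T4 training run.
# All constants below are assumptions for illustration only.
GPU_COUNT = 2           # 2x NVIDIA T4
TDP_KW = 0.070          # T4 TDP: ~70 W per GPU
HOURS = 10              # approximate training time
PUE = 1.6               # assumed datacenter power usage effectiveness
CARBON_INTENSITY = 0.4  # assumed kg CO2eq per kWh (generic grid average)

energy_kwh = GPU_COUNT * TDP_KW * HOURS * PUE
co2_kg = energy_kwh * CARBON_INTENSITY
print(f"~{energy_kwh:.2f} kWh, ~{co2_kg:.2f} kg CO2eq")  # ~2.24 kWh, ~0.90 kg CO2eq
```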
## Technical Specifications

### Model Architecture and Objective

The MyFriend model is built on the TinyLlama architecture, with a focus on conversational text generation.

### Compute Infrastructure

- **Hardware:** 2x T4 GPUs
- **Software:** The model was fine-tuned using the Hugging Face Transformers library.

## Model Card Authors

- Umar Majeed (Team Lead): www.linkedin.com/in/umarmajeedofficial
- Mixed Intelligence Team Members:
  - Moazzan Hassan: https://www.linkedin.com/in/moazzan-hassan/
  - Shahroz Butt: https://www.linkedin.com/in/shahroz-butt-69a813211?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
  - Sidra Hammed: https://www.linkedin.com/in/sidra-hameed-8s122000?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
  - Muskan Liaqat: https://www.linkedin.com/in/muskan-liaquat-838880308?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
  - Sana Qaisar: https://www.linkedin.com/in/sana-qaisar-03b354316/

## Model Card Contact

For further information or inquiries, please contact Umar Majeed via Hugging Face.