Update README.md
---
library_name: transformers
tags: [regulatory agent, OSHA, compliance, safety, legal filtering]
---

# Agent Card for GhaouiY/gemma-2-9b-it_SafeguardAI

GhaouiY/gemma-2-9b-it_SafeguardAI is a specialized agent for handling regulatory and compliance-related queries. It is trained to distinguish safe-to-answer questions from those that require deferral to a qualified professional, with a focus on OSHA standards and on medical, legal, and life-or-death situations.

## Agent Details

### Agent Description

This agent, built on Google's `gemma-2-9b-it` architecture, identifies sensitive questions related to medical diagnosis or treatment, life-or-death situations, legal matters, and certification requirements. It provides clear, safe responses aligned with OSHA guidelines, or advises users to consult a professional when necessary. The agent is particularly useful in environments where safety and compliance are critical.

- **Developed by:** GhaouiY
- **Agent type:** Causal language model (`gemma-2-9b-it` architecture)
- **Language(s) (NLP):** English

## Uses

### Direct Use

The agent can be used directly in environments that require automated handling of queries about occupational safety, health regulations, legal concerns, and certification matters. It supports compliance by providing only safe, approved responses or by directing users to the appropriate professionals.

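The intended routing behavior can be sketched as a thin wrapper around the agent. Everything below (function names, trigger list, deferral message) is purely illustrative: the real agent's filtering is learned, not keyword-based.

```python
# Illustrative sketch only: the real agent learns this routing end-to-end;
# it does not use keyword rules.
DEFERRAL_MESSAGE = (
    "This question involves medical, legal, or certification matters. "
    "Please consult a qualified professional."
)

# Hypothetical trigger list, invented for this sketch
SENSITIVE_MARKERS = ("diagnos", "prescri", "lawsuit", "legal advice", "certif")

def route_query(question, answer_fn):
    """Defer clearly sensitive queries; otherwise delegate to answer_fn (e.g. the agent)."""
    q = question.lower()
    if any(marker in q for marker in SENSITIVE_MARKERS):
        return DEFERRAL_MESSAGE
    return answer_fn(question)

# Usage with a stand-in answer function
print(route_query("Can you diagnose my back pain?", lambda q: "agent answer"))
print(route_query("What does OSHA say about ladder inspections?", lambda q: "agent answer"))
```

The first query is deferred; the second is passed through to the answer function.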
### Out-of-Scope Use

The agent is not designed to provide detailed medical, legal, or certification-related advice. Misusing it for specific guidance in these areas could lead to incorrect or harmful outcomes.

## Bias, Risks, and Limitations

The agent is designed to filter out and avoid answering sensitive queries, but it is not foolproof: its filtering is based on patterns learned from the training data and may not cover every possible scenario.

### Recommendations

Users should be aware of the agent's risks, biases, and limitations. It is strongly recommended to use it under professional oversight, especially when handling critical safety, health, or legal matters.

## How to Get Started with the Agent

Use the code below to get started with the agent.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

agent_name = "GhaouiY/gemma-2-9b-it_SafeguardAI"

tokenizer = AutoTokenizer.from_pretrained(agent_name)
agent = AutoModelForCausalLM.from_pretrained(agent_name, device_map="auto")

# Example usage: gemma-2 instruction-tuned checkpoints expect the chat template
messages = [
    {"role": "user", "content": "How should I respond if I witness an unsafe work condition that could lead to an accident?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(agent.device)

outputs = agent.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(response)
```

## Training Details

### Training Data

The agent was fine-tuned on a synthetic dataset generated by a prompt-engineered GPT-4mini. The dataset comprises OSHA-regulation Q&A pairs covering workplace safety, regulatory compliance, and related queries, with questions and responses crafted to emulate real-world scenarios.

### Training Procedure

The agent was fine-tuned using a LoRA (Low-Rank Adaptation) configuration with the following parameters:

- **LoRA Configuration:**
  - `r=64`
  - `lora_alpha=32`
  - `lora_dropout=0.05`
  - `target_modules=modules`
- **Trainable Parameters:** calculated and displayed as part of the training process
- **Training Regime:** bf16 mixed precision
- **Batch Size:** 1
- **Gradient Accumulation Steps:** 4
- **Learning Rate:** 2e-5
- **Number of Epochs:** 1
- **Base Agent:** `gemma-2-9b-it`
- **Fine-tuning Hardware:** Paperspace A6000
- **Trainer:** SFTTrainer (Weights & Biases integration, `paged_adamw_8bit` optimizer)

The fine-tuning process involved prompt engineering and was tracked with Weights & Biases (`wandb`).

## Evaluation

### Testing Data, Factors & Metrics

The agent was evaluated on a test dataset of additional OSHA-regulation queries and potential out-of-scope questions, checking that it accurately distinguishes safe queries from sensitive ones.

#### Metrics

- **F1 Score:** 90%
- **Recall:** 84%

### Results

The agent achieved a 90% F1 score and 84% recall, demonstrating its effectiveness at identifying sensitive queries and providing appropriate responses.

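For the binary safe-vs-sensitive framing, these metrics can be reproduced from labeled predictions. A dependency-free sketch (the label vectors below are invented purely to show the calculation):

```python
def f1_and_recall(y_true, y_pred):
    """F1 and recall, with 1 = sensitive (defer) and 0 = safe to answer."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return f1, recall

# Invented labels for illustration only
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
f1, rec = f1_and_recall(y_true, y_pred)
print(f"F1 = {f1:.2f}, recall = {rec:.2f}")  # F1 = 0.89, recall = 0.80
```
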
## Environmental Impact

- **Hardware Type:** Paperspace A6000
- **Cloud Provider:** Paperspace

## Technical Specifications

### Agent Architecture and Objective

The agent is based on Google's `gemma-2-9b-it` architecture, fine-tuned specifically for regulatory compliance and query filtering.

### Compute Infrastructure

- **Hardware:** Paperspace A6000
- **Software:** SFTTrainer with Weights & Biases integration

## Agent Card Contact

For questions, suggestions, or collaboration, please reach out via the Hugging Face agent card or [GitHub](https://github.com/GhaouiY).