Update README.md
---
language:
- en
library_name: transformers
datasets:
- jusKnows/hr_extended_questions_categories
metrics:
- accuracy
pipeline_tag: text-generation
---

# Model Card for Model ID

This model was trained to choose between RAG and CoT techniques for domain-specific chat applications. Depending on the user's question, the model chooses the best way to generate the response.

- Some questions are domain specific and can be answered by performing a simple RAG lookup.
- Some questions are complex and require a step-by-step approach.

We performed simple prompt tuning on a low-parameter base model, producing a small model capable of few-shot classification from a very small dataset of roughly 100 samples.

### Base Model Sources

Prompt-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m), loaded in a 4-bit bitsandbytes (bnb) configuration.
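The 4-bit prompt-tuning setup described above could be reproduced roughly as follows. This is only a sketch: it assumes the `bitsandbytes` and `peft` libraries, and `num_virtual_tokens` is an illustrative value not stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PromptTuningConfig, TaskType, get_peft_model

base_model_id = "bigscience/bloom-560m"

# Load the base model with 4-bit quantization via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Prompt tuning trains only a small set of virtual prompt tokens,
# leaving the quantized base weights frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,  # assumption: the card does not state this value
)
model = get_peft_model(model, peft_config)
```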

## Uses

This model performs one specific task: for each incoming question, it chooses between **Retrieval-Augmented Generation (RAG)** and **Chain of Thought (CoT)**.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# model_name is the Hugging Face repo id of this model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
```

```python
import torch

def make_inference(query, model):
    # The prompt text below is kept verbatim from training,
    # so the model sees the same template it was tuned on.
    prompt = """\
### Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Categorize this question into one of this two categories:

RAG
COT

Input:
{Question}

### Response:
"""

    batch = tokenizer(prompt.format(Question=query), return_tensors='pt').to("cuda")

    with torch.cuda.amp.autocast():
        output_tokens = model.generate(**batch, max_new_tokens=10)

    return output_tokens
```

```python
query = "{your_question_goes_here}"
output_tokens = make_inference(query, model)
response = tokenizer.decode(output_tokens[0])
print(response)
```
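Because `generate` returns the prompt followed by the generated category, a small post-processing helper can pull out the predicted label. The helper below is a hypothetical sketch, not part of the model itself:

```python
def extract_category(decoded: str) -> str:
    """Return 'RAG' or 'COT' from the decoded model output.

    Assumes the output echoes the prompt, so only the text after the
    final '### Response:' marker is inspected.
    """
    tail = decoded.rsplit("### Response:", 1)[-1]
    for token in tail.split():
        label = token.strip().upper().strip(".,:;")
        if label in ("RAG", "COT"):
            return label
    return ""  # no recognizable label generated


# Example with a mocked model output:
decoded = "### Instruction: ...\n### Response:\nCOT\n### End"
print(extract_category(decoded))  # COT
```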
## Training Details
### Training Data

The dataset is a synthetic dataset containing pairs of **question** and **technique** values.

### Training Prompt

```
"""\
### Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Categorize this question into one of this two categories:

RAG
COT

Input:
{Question}

### Response:
{Category}

### End
"""
```
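For reference, a preprocessing sketch showing how each (question, technique) row could be rendered into the template above during training; `format_sample` and the example values are illustrative assumptions, not part of the card:

```python
# Training template, copied verbatim from the card (placeholders included).
TRAINING_PROMPT = """\
### Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Categorize this question into one of this two categories:

RAG
COT

Input:
{Question}

### Response:
{Category}

### End
"""

def format_sample(question: str, category: str) -> str:
    """Fill the training template with one (question, technique) pair."""
    return TRAINING_PROMPT.format(Question=question, Category=category)


sample = format_sample("What is the company vacation policy?", "RAG")
print(sample.endswith("### End\n"))  # True
```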
#### Training Hyperparameters

- evaluation_strategy="steps"
- eval_steps=1
- logging_strategy="steps"
- per_device_train_batch_size=6
- gradient_accumulation_steps=4
- warmup_steps=50
- max_steps=100
- learning_rate=1e-3
- fp16=True
- logging_steps=1
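These values map directly onto 🤗 `TrainingArguments`; a minimal sketch, where `output_dir` is a placeholder not stated in the card:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./outputs",  # placeholder, not stated in the card
    evaluation_strategy="steps",
    eval_steps=1,
    logging_strategy="steps",
    per_device_train_batch_size=6,
    gradient_accumulation_steps=4,
    warmup_steps=50,
    max_steps=100,
    learning_rate=1e-3,
    fp16=True,
    logging_steps=1,
)
```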
## Evaluation

### Metrics

- *Accuracy*
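Accuracy here is exact match between the predicted and labeled technique; a minimal sketch:

```python
def accuracy(predictions, labels):
    """Fraction of questions whose predicted technique matches the label."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)


print(accuracy(["RAG", "COT", "RAG", "RAG"], ["RAG", "COT", "COT", "RAG"]))  # 0.75
```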
### Results

#### Train
Accuracy: 0.9659
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b930510de7289de37ec1c0/tdzV60p2U6i-n00vLOCUU.png)

#### Validation
Accuracy: 0.9091
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b930510de7289de37ec1c0/mm9PYoIq7vKMeU-Kvr1GT.png)

#### Test
Accuracy: 1.0
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b930510de7289de37ec1c0/O_GWddcey5lOBhQwS1tBP.png)
## Model Card Contact

LinkedIn: [www.linkedin.com/in/jrodriguez130](https://www.linkedin.com/in/jrodriguez130)