kettleguts committed on
Commit 7563713
Parent(s): e7c2a50
Update README.md
README.md CHANGED
Removed: the previous README, which was the auto-generated 🤗 Transformers model-card template (Model Details, Model Description, Model Sources, Uses, Bias, Risks, and Limitations, How to Get Started with the Model, Training Details, Evaluation, Citation, and Model Card Contact), with every field left as "[More Information Needed]".
---
library_name: transformers
tags:
- mistral
- sparse
- pruned
- wanda
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---
# Model Card for kettleguts/zephyr-7b-beta_sparse05

This is a pruned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). Wanda pruning was used to introduce 50% sparsity into the linear layers; the method is described in [this paper](https://arxiv.org/abs/2306.11695).
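Wanda (Pruning by Weights and activations) scores each weight by its magnitude multiplied by the norm of the corresponding input activation, collected from a small calibration set, and then zeroes the lowest-scoring weights within each output row, with no retraining. The sketch below illustrates that scoring rule for a single linear layer; the function name, shapes, and calibration data are illustrative only, not the script used to produce this checkpoint.

```python
import torch

def wanda_prune_linear(weight, calib_inputs, sparsity=0.5):
    """Zero the lowest-scoring weights of one linear layer, row by row (Wanda-style)."""
    # Score each weight as |W_ij| * ||X_j||_2, where X_j is the j-th input feature
    # measured over the calibration tokens.
    feature_norm = calib_inputs.norm(p=2, dim=0)          # (in_features,)
    score = weight.abs() * feature_norm.unsqueeze(0)      # (out_features, in_features)

    # Within each output row, drop the lowest-scoring fraction of weights.
    num_prune = int(weight.shape[1] * sparsity)
    _, prune_idx = torch.topk(score, num_prune, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)
    return weight * mask

# Toy example: prune a random 8x16 weight matrix to 50% sparsity
# using fake calibration activations (128 tokens, 16 features).
w = torch.randn(8, 16)
x = torch.randn(128, 16)
w_pruned = wanda_prune_linear(w, x, sparsity=0.5)
print((w_pruned == 0).float().mean().item())  # ~0.5
```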
### Model Description

See the base model's [model description](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#model-description).
## Uses

This model is only useful for research purposes. The quality of its text generation is highly dependent on how it is prompted. Since it is heavily pruned, it sometimes behaves like a much smaller model.
### Direct Use

This model is not suitable for direct use outside of research.
### Out-of-Scope Use

This model should never be used for critical decisions involving health, life, employment, housing, law, etc. It should also never be used to harm anyone.
## Bias, Risks, and Limitations

No safeguards have been added to this model. See the base model's [Bias, Risks, and Limitations](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta#bias-risks-and-limitations) section.
## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline

model_name = 'kettleguts/zephyr-7b-beta_sparse05'

# quantize the model to 4-bit for more memory-efficient inference
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# load model
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             device_map="auto",
                                             quantization_config=bnb_config)

# load tokenizer and make sure a pad token is defined
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds as briefly as possible with perfect grammar.",
    },
    {"role": "user", "content": "Briefly describe network pruning."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=tokenizer.pad_token_id)

# keep only the assistant's reply
text = outputs[0]["generated_text"].split('<|assistant|>\n')
print(text[-1].strip())
```

Example output:

> Network pruning, in the context of artificial intelligence and machine learning, refers to the process of removing unimportant or redundant connections, or "pruning," from a neural network's architecture. This is done to simplify and optimize the network's structure, reduce overfitting, and improve its efficiency, while preserving its overall performance. Pruning typically involves removing connections, neurons, or entire layers, based on metrics such as the weight or sparsity of the connection, or the amount of improvement gained by removing the connection. The goal is to prune the network in a way that balances the trade-off between model size and accuracy, while reducing the network's overall complexity and resource requirements. Pruning techniques can range from simple heuristics such as early stopping, to more sophisticated methods such as compressed and pruned models, and iterative and incremental pruning.
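To check the claimed 50% sparsity, you can count zero-valued weights in the linear layers after loading the checkpoint. This is a rough sketch under one assumption: the model is loaded in half precision rather than with the 4-bit quantization above, since bitsandbytes packs the weights and hides the zeros.

```python
import torch
from transformers import AutoModelForCausalLM

# Load in half precision so the pruned (zeroed) weights are directly visible.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    'kettleguts/zephyr-7b-beta_sparse05',
    torch_dtype=torch.float16,
)

zeros, total = 0, 0
for module in model_fp16.modules():
    if isinstance(module, torch.nn.Linear):
        w = module.weight
        zeros += (w == 0).sum().item()
        total += w.numel()

# Expected to be close to 50% for the layers that were pruned.
print(f"overall sparsity in linear layers: {zeros / total:.2%}")
```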
## Evaluation

## Citation [optional]

**BibTeX:**

@misc{tunstall2023zephyr,
      title={Zephyr: Direct Distillation of LM Alignment},
      author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
      year={2023},
      eprint={2310.16944},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{sun2023simple,
      title={A Simple and Effective Pruning Approach for Large Language Models},
      author={Mingjie Sun and Zhuang Liu and Anna Bair and J. Zico Kolter},
      year={2023},
      eprint={2306.11695},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}