ildodeltaRule committed
Commit • 5166319
1 Parent(s): c3128b0
Upload 13 files
Adding the same models, but with the right config
- LICENSE +323 -0
- README.md +433 -0
- config.json +38 -0
- configuration_yi.py +121 -0
- generation_config.json +7 -0
- model.safetensors +3 -0
- modeling_yi.py +1028 -0
- quant_config.json +6 -0
- special_tokens_map.json +30 -0
- tokenization_yi.py +255 -0
- tokenizer.json +0 -0
- tokenizer.model +3 -0
- tokenizer_config.json +9 -0
LICENSE
ADDED
@@ -0,0 +1,323 @@
Yi Series Models License Agreement
Version: 2.0
Date of Release: November 4, 2023

1. Definition

“Agreement” refers to the terms and conditions defined in this Yi Series Models
License Agreement for the use, reproduction and distribution of Yi Series
Models.

“Model” refers to associated components (including checkpoints) developed based
on machine learning, including learned weights and parameters (including the
status of optimizer).

“Yi Series Models” refers to opensource models with different specifications and
capabilities named “Yi” provided by the Licensor, including Yi-6B, Yi-34B etc.

“Derivatives” refers to all modifications to Yi Series Models, work based on Yi
Series Models, or any other models created or initialized by transferring the
weights, parameters, activations, or output patterns of Yi Series Models to
other models to achieve similar performance, including but not limited to
methods that require using intermediate data representations or generating
synthetic data based on Yi Series Models to train other models.

“Licensor” refers to Beijing Lingyiwanwu Information Technology Co., Ltd.

“you” refers to an individual or legal entity that exercises the license granted
by this Agreement and/or uses the Yi Series Models for any purpose and in any
field of use.

“Third Party” refers to any individuals, legal entities or non-legal
organizations other than you.

“Distribute” refers to transmitting, copying, publishing, or otherwise sharing
the Yi Series Models with third parties, including providing the Yi Series
Models through electronic or other remote means (such as any SaaS software or
PaaS software accessed via API or web access).

“Commercial Purposes” refers to the use of the Yi Series Models, directly or
indirectly, for the operation, promotion, revenue generation, or any other
profit-making purposes for entities or individuals.

“Laws and Regulations” refers to the laws and administrative regulations of the
mainland of the People's Republic of China (for the purposes of this Agreement
only, excluding Hong Kong, Macau, and Taiwan).

“Personal Information” refers to various information related to identified or
identifiable natural persons recorded electronically or by other means,
excluding information that has been anonymized.

“Logo” refers to any trademark, service mark, trade name, domain name, website
name, or other distinctive branding marks.


2. License and License Restrictions

The Licensor hereby grants you a non-exclusive, global, non-transferable,
non-sub-licensable, revocable, and royalty-free copyright license. You must
adhere to the following license restrictions:

1) Your use of the Yi Series Models must comply with the Laws and Regulations as
well as applicable legal requirements of other countries/regions, and respect
social ethics and moral standards, including but not limited to, not using the
Yi Series Models for purposes prohibited by Laws and Regulations as well as
applicable legal requirements of other countries/regions, such as harming
national security, promoting terrorism, extremism, inciting ethnic or racial
hatred, discrimination, violence, or pornography, and spreading false harmful
information.

2) You shall not, for military or unlawful purposes or in ways not allowed by
Laws and Regulations as well as applicable legal requirements of other
countries/regions, a) use, copy or Distribute the Yi Series Models, or b) create
complete or partial Derivatives of the Yi Series Models.

3) Your use of the Yi Series Models (including using the output of the Yi Series
Models) and the creation of Derivatives must not infringe upon the legitimate
rights of any Third Party, including but not limited to the rights of personal
rights such as the right to likeness, reputation, and privacy, as well as
intellectual property rights such as copyrights, patents, trade secrets, and
other property rights.

4) You must clearly attribute the source of the Yi Series Models to the Licensor
and provide a copy of this Agreement to any Third-Party users of the Yi Series
Models and Derivatives.

5) If you modify the Yi Series Models to create Derivatives, you must clearly
indicate the substantial modifications made, and these modifications shall not
violate the license restrictions of this Agreement. You shall not enable,
assist, or in any way facilitate Third Parties to violate the license
restrictions of this Agreement.

If you plan to use the Yi Series Models and Derivatives for Commercial Purposes,
you should contact the Licensor in advance as specified in Section 7 of this
Agreement named "Updates to the Agreement and Contact Information" and obtain
written authorization from the Licensor. When you obtain authorization from the
Licensor to use the Yi Series Models and Derivatives for Commercial Purposes,
you must comply with the afore-mentioned license restrictions.


3. Intellectual Property

The ownership of the Yi Series Models and their related intellectual property
rights is solely held by the Licensor.

In any circumstance, without the prior written consent of the Licensor, you are
not allowed to use any Logo associated with the Licensor. If your use of
Licensor's Logo in violation of this Agreement causes any losses to the Licensor
or others, you will bear full legal responsibility.


4. Disclaimer and Limitation of Liability

The Yi Series Models are provided "AS IS." The Licensor does not provide any
express or implied warranties for the Yi Series Models, including but not
limited to stability, ownership, merchantability, non-infringement, or fitness
for a specific purpose of the Yi Series Models and their output results. You
assume all responsibilities for the risks and consequences arising from the use,
reproduction, distribution of the Yi Series Models, and the creation of
Derivatives.

The Licensor complies with Laws and Regulations at all stages of model training,
maintaining the legality, authenticity, accuracy, objectivity, and diversity of
data and algorithms. The Licensor is not liable for any direct, indirect,
incidental consequences, and other losses or damages related to your use,
reproduction, and distribution of the Yi Series Models, and the creation of
Derivatives under this Agreement. This includes but is not limited to:

1) The Licensor is not responsible for data security risks resulting from your
use of the Yi Series Models.

2) The Yi Series Models may contain Personal Information. When you use Yi Series
Models, you acknowledge that you are the data processing entity as defined under
the Laws and Regulations responsible for determining the processing methods and
purposes of Personal Information. You must comply with legal requirements for
processing any Personal Information that may be contained in the Yi Series
Models and assume the associated legal responsibilities, as well as the risks
and consequences of processing Personal Information.

3) The Licensor is not liable for reputation risks arising from your use of the
Yi Series Models or the output results of the Yi Series Models.

4) The Licensor is not liable for intellectual property risks associated with
your use of the Yi Series Models’ output results.

If your use, reproduction, distribution of the Yi Series Models, or the creation
of Derivatives result in losses to the Licensor, the Licensor has the right to
seek compensation from you. For any claims made by Third Parties against the
Licensor related to your use, reproduction, and distribution of the Yi Series
Models, or the creation of Derivatives, the Licensor has the right to demand
that you defend, compensate, and indemnify the Licensor and protect the Licensor
from harm.


5. Dispute Resolution

The stipulation, effectiveness, interpretation, performance, modification, and
termination of the Agreement, the use, copy and Distribute of the Yi Series
Models, and dispute resolution associated with your use, copy and distribution
shall be governed by the laws of the mainland of the People's Republic of China
(for the purposes of this agreement only, excluding Hong Kong, Macau, and
Taiwan), and the application of conflict of laws is excluded.

Any disputes arising from the use, copy or distribution of the Yi Series Models
should first be resolved through amicable negotiations. If negotiations fail,
legal proceedings should be initiated in the People's Court at the location of
the Licensor.


6. Effectiveness and Termination of the Agreement

Your use of the Yi Series Models signifies that you have read and agreed to be
bound by the terms of the Agreement. The Agreement becomes effective from the
date of your use of the Yi Series Models and will terminate from the date you
cease using the Yi Series Models. If you violate any terms or restrictions in
the Agreement, the Licensor reserves the right to terminate the Agreement.

Upon termination of the Agreement, you must immediately cease using the Yi
Series Models. Section 4, "Disclaimer and Limitation of Liability," and Section
5, "Dispute Resolution," of this Agreement remain in effect after the
termination of this Agreement.


7. Updates to the Agreement and Contact Information

The Licensor reserves the right to update the Agreement from time to time. The
latest version of the Agreement will be posted by the Licensor through
https://01.ai.

For any questions related to licensing and copyright, please contact the
Licensor at yi@01.ai.


Yi系列模型许可协议
版本: 2.0
发布日期: 2023年11月4日

1. 定义

“协议”是指本协议中定义Yi系列模型使用、复制和分发的条款和条件。

“模型”是指任何附带的基于机器学习的组件(包括检查点),包括学习的权重、参数(包括优
化器状态)。

“Yi系列模型”是指许可方开源的以Yi命名的不同规格、不同能力的模型,包括
Yi-6B、Yi-34B等。

“模型衍生品”是指对Yi系列模型的所有修改、基于Yi系列模型的工作,或通过将Yi系列模型
的权重、参数、激活或输出模式转移到其他模型而创建或初始化的任何其他模型,以使其他
模型的性能与Yi系列模型类似,包括但不限于需要使用中间数据表示的提取方法或基于Yi系
列模型生成合成数据来训练其他模型的方法。

“许可方”是指北京零一万物信息技术有限公司。

“您”是指行使本协议授予的权限和/或出于任何目的和在任何使用领域使用Yi系列模型的个
人或法人实体。

“第三方”是指您之外的任何个人、法人实体或非法人组织。

“分发”是指向第三方传输、复制、发布或以其他方式共享Yi系列模型,包括将Yi系列模型作
为通过电子或其他远程方式(例如基于 API 或 Web 访问的任何 SaaS 软件或 PaaS 软
件)。

“商业用途”是指使用Yi系列模型,直接或间接为实体或个人进行运营、推广或产生收入,或
用于任何其他盈利目的。

“法律法规”是指中华人民共和国大陆地区(仅为本协议之目的,不包括香港、澳门和台湾)
的法律及行政法规。

“个人信息”是指以电子或者其他方式记录的与已识别或者可识别的自然人有关的各种信息,
不包括匿名化处理后的信息。

“标识” 是指任何商标、服务标记、商号、域名、网站名称或其他带有显著品牌特征的标
记。


2. 许可及许可限制

许可方特此授予您非排他性、全球性、不可转让、不可再许可、可撤销、免版税的版权许
可。您必须满足如下许可限制条件:

1) 您对Yi系列模型的使用应遵守法律法规以及其他国家/地区适用的法律要求、尊重社会公
德和伦理道德。包括但不限于您不得将Yi系列模型用作危害国家安全、宣扬恐怖主义、极端
主义,宣扬民族及种族仇恨、歧视,暴力、色情,以及虚假有害信息等法律法规以及其他国
家/地区适用的法律要求禁止的目的。

2) 您不得出于军事或非法目的,或以法律法规以及其他国家/地区适用的法律要求所不允许
的方式a) 使用、复制、或分发Yi系列模型; 或b) 创建Yi系列模型的全部或部分衍生品。

3) 您对Yi系列模型的使用(包括使用Yi系列模型的输出)以及模型衍生品的创建不得侵犯
任何第三方的合法权益,包括但不限于他人肖像权、名誉权、隐私权等人格权,著作权、专
利权、商业秘密等知识产权,或其他财产权益。

4) 您必须向Yi系列模型及Yi系列模型衍生品的任何第三方使用者明确Yi系列模型的来源为
许可方并向其提供本协议的副本。

5) 若您修改Yi系列模型得到模型衍生品,您必须以显著的方式说明修改的内容,且上述修
改不得违反本协议的许可限制条件,也不能允许、协助或以其他方式使得第三方违反本协议
中的许可限制条件。

如果您计划将 Yi系列模型及模型衍生品用作商业用途,您应当事先通过第7款“协议更新及
联系方式”中的方式联系许可方进行登记并获得许可方的书面授权。若您取得许可方授权将
Yi系列模型及模型衍生品用作商业用途时,您应满足许可方上述许可限制条件。


3. 知识产权

Yi系列模型的所有权及其相关知识产权,由许可方单独所有。

在任何情况下,未经许可方事先书面同意,您不得以任何方式使用许可方的任何标识。由于
您违反本协议使用许可方的标识给许可方或他人造成损失的,由您承担全部法律责任。


4. 免责声明及责任限制

Yi系列模型按“原样”提供。许可方不对Yi系列模型提供任何明示或暗示的保证,包括但不限
于:模型及输出结果的稳定性、所有权、适销性、非侵权性、或特定用途适用性。您将对适
用、复制及分发Yi系列模型以及创建模型衍生品所产生的风险与后果承担所有责任。

许可方在模型训练的所有阶段都遵守法律法规,坚持维护数据和算法的合法、真实、准确、
客观和多样性。许可方不对您根据本协议使用、复制及分发Yi系列模型,以及创建模型衍生
品而产生或与之相关的任何直接、间接、附带的后果、以及其他损失或损害承担责任。包括
但不限于:

1) 许可方不承担您因使用Yi系列模型而导致的数据安全风险。

2) Yi系列模型中可能包含个人信息。在您使用Yi系列模型的过程中,您承认您为法律法规
定义下决定个人信息处理方式和目的的个人信息处理者。您应遵守法律法规要求处理Yi系列
模型中可能包含的个人信息,并承担相应的法律责任,以及处理个人信息的风险和后果。

3) 许可方不承担您使用Yi系列模型或模型输出结果而产生的声誉风险。

4) 许可方不承担您使用Yi系列模型的输出结果涉及的知识产权风险。

若由于您对Yi系列模型的使用、复制或分发,或者创建模型衍生品而导致许可方遭受损失,
许可方有权要求您对许可方的损失进行赔偿。对于任何第三方向许可方提出的因您使用、复
制或分发Yi系列模型或创建模型衍生品行为的相关索赔,许可方有权要求您为许可方进行辩
护、赔偿并使许可方免受损害。


5. 争议解决

协议的订立、效力、解释、履行、修改和终止,使用、复制和分发Yi系列模型以及争议解决
均适用中华人民共和国大陆地区(仅为本协议之目的,不包括香港、澳门和台湾)法律,并
排除冲突法的适用。

因使用、复制和分发Yi系列模型而发生的任何争议,各方应首先通过友好协商的方式加以解
决。协商不成时,应向许可方所在地人民法院提起诉讼。


6. 协议的生效及终止

您使用Yi系列模型即表示您已阅读并同意接受协议的约束。协议自您使用Yi系列模型之日起
生效并将在您停止使用Yi系列模型之日起终止。若您违反协议中的任何条款或限制,许可方
有权终止协议。

若协议终止,您需立即停止使用Yi系列模型。本协议第4条“免责声明及责任限制”及第5条
“争议解决”在协议终止后仍有效。


7. 协议更新及联系方式

许可方有权对协议进行不时更新。许可方将通过https://01.ai公布协议最新版本。有关许
可和版权的任何问题,请通过yi@01.ai 与许可方联系。
README.md
ADDED
@@ -0,0 +1,433 @@
---
base_model: 01-ai/Yi-6B-200K
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 6B 200K
model_type: yi
prompt_template: '{prompt}

'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Yi 6B 200K - AWQ
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [Yi 6B 200K](https://huggingface.co/01-ai/Yi-6B-200K)
- This model card only changes the configuration to `LlamaForCausalLM`, compared with the original creators' upload. This is the only way it will automatically work with vLLM.

<!-- description start -->
## Description

This repo contains AWQ model files for [01-ai's Yi 6B 200K](https://huggingface.co/01-ai/Yi-6B-200K).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).


### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-6B-200K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-6B-200K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF)
* [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-6B-200K)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: None

```
{prompt}

```

<!-- prompt-template end -->


<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Yi-6B-200K-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.93 GB |

<!-- README_AWQ.md-provided-files end -->
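
For reference, an AWQ quant with the parameters in the table above (4-bit, group size 128, GEMM kernel, zero point) would typically be produced with AutoAWQ roughly as follows. This is a minimal sketch, not the exact script used for this repo; the output path is illustrative, and the calibration dataset used here may differ from the wikitext set listed above.

```python
# Minimal sketch of producing a 4-bit, group-size-128 GEMM AWQ quant with AutoAWQ.
# Illustrative only: paths and settings are assumptions, not the script used for this repo.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "01-ai/Yi-6B-200K"   # source fp16 model
quant_path = "Yi-6B-200K-AWQ"     # output directory (illustrative)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Runs AWQ calibration and quantises the weights in place.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```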
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-6B-200K-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Yi-6B-200K-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Yi-6B-200K-AWQ --quantization awq --dtype auto
```

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template = '''{prompt}
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Yi-6B-200K-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Yi-6B-200K-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Yi-6B-200K-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius


Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: 01-ai's Yi 6B 200K

<div align="center">

<img src="./Yi.svg" width="200px">

</div>

## Introduction

The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with parameter sizes of 6B ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B))
and 34B ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained
with a 4K sequence length and can be extended to 32K during inference time.
[`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
200K context length.

## News

- 🎯 **2023/11/06**: The base models of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
  and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
- 🎯 **2023/11/02**: The base models of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
  [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).


## Model Performance

| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |

While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing of the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.

To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.

## Usage

Please visit our [github repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.

## Disclaimer

Although we use data compliance checking algorithms during the training process
to ensure the compliance of the trained model to the best of our ability, due to
the complexity of the data and the diversity of language model usage scenarios,
we cannot guarantee that the model will generate correct and reasonable output
in all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks and
issues resulting from misuse, misguidance, illegal usage, and related
misinformation, as well as any associated data security concerns.

## License

The Yi series models are fully open for academic research and free commercial
usage with permission via applications. All usage must adhere to the [Model
License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B-200K/blob/main/LICENSE). To
apply for the official commercial license, please contact us
([yi@01.ai](mailto:yi@01.ai)).
config.json
ADDED
@@ -0,0 +1,38 @@
{
    "_name_or_path": "/workspace/process/01-ai_yi-6b-200k/source",
    "architectures": [
        "LlamaForCausalLM"
    ],
    "auto_map": {
        "AutoConfig": "configuration_yi.YiConfig",
        "AutoModel": "modeling_yi.YiModel",
        "AutoModelForCausalLM": "modeling_yi.YiForCausalLM"
    },
    "bos_token_id": 1,
    "eos_token_id": 2,
    "hidden_act": "silu",
    "hidden_size": 4096,
    "initializer_range": 0.02,
    "intermediate_size": 11008,
    "max_position_embeddings": 200000,
    "model_type": "Yi",
    "num_attention_heads": 32,
    "num_hidden_layers": 32,
    "num_key_value_heads": 4,
    "pad_token_id": 0,
    "pretraining_tp": 1,
    "quantization_config": {
        "bits": 4,
        "group_size": 128,
        "quant_method": "awq",
        "version": "gemm",
        "zero_point": true
    },
    "rms_norm_eps": 1e-05,
    "rope_theta": 5000000.0,
    "tie_word_embeddings": false,
    "torch_dtype": "float16",
    "transformers_version": "4.35.0",
    "use_cache": true,
    "vocab_size": 64000
}
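
The key change relative to the original upload is `"architectures": ["LlamaForCausalLM"]`: loaders such as vLLM select their internal model implementation from this field, so the checkpoint is routed through the standard Llama code path. Below is a minimal, illustrative sketch of inspecting the fields involved; the repo name is taken from the README above, and `trust_remote_code=True` is assumed because `model_type` is still `Yi` and `auto_map` still points at the bundled `configuration_yi.py`/`modeling_yi.py` files.

```python
# Illustrative only: inspect the config fields that loaders key off.
from transformers import AutoConfig

repo = "TheBloke/Yi-6B-200K-AWQ"  # repo name from the README above

# trust_remote_code=True assumed because model_type "Yi" resolves via auto_map
# to the bundled configuration_yi.YiConfig class.
config = AutoConfig.from_pretrained(repo, trust_remote_code=True)

print(config.architectures)            # ["LlamaForCausalLM"] -> vLLM maps this to its Llama implementation
print(config.max_position_embeddings)  # 200000
```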
configuration_yi.py
ADDED
@@ -0,0 +1,121 @@
""" Yi model configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

Yi_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class YiConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`YiModel`]. It is used to instantiate an Yi
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of the Yi model.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 64000):
            Vocabulary size of the Yi model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`YiModel`]
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer encoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details checkout [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 4096):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048 or 4096).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-5):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings(`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings
        output_attentions (`bool`, *optional*, defaults to `False`):
            Whether or not to output attentions.
        rope_theta (`float`, *optional*, defaults to 5000000.0):
            The base period of the RoPE embeddings.
    Example:

    ```python
    >>> from transformers import YiModel, YiConfig

    >>> # Initializing a Yi style configuration
    >>> configuration = YiConfig()

    >>> # Initializing a model from the Yi style configuration
    >>> model = YiModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""
    model_type = "Yi"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=64000,
        hidden_size=4096,
        intermediate_size=11008,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=4,
        hidden_act="silu",
        max_position_embeddings=4096,
        initializer_range=0.02,
        rms_norm_eps=1e-5,
        use_cache=True,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        output_attentions=False,
        rope_theta=5000000.0,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads

        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.output_attentions = output_attentions
        self.rope_theta = rope_theta

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
generation_config.json
ADDED
@@ -0,0 +1,7 @@
{
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "pad_token_id": 0,
    "transformers_version": "4.31.0"
}
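
These values are the defaults `generate()` falls back to when no explicit generation settings are passed. A minimal sketch of reading them, assuming the same repo name as in the README above:

```python
# Minimal sketch: generation_config.json supplies default special-token ids for generate().
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("TheBloke/Yi-6B-200K-AWQ")
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.pad_token_id)  # 1 2 0
```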
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8df679a7fc459334a6578e0d064258b780b1d74ba1b40e89e68b52eeb41f2997
size 3925566088
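
This is a Git LFS pointer file: the roughly 3.9 GB of quantised weights are stored in LFS and need to be fetched via `git lfs` or the Hugging Face Hub client. A minimal sketch of the latter, with the repo name assumed from the README above:

```python
# Minimal sketch: fetch the full repo contents, including the LFS-backed safetensors weights.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TheBloke/Yi-6B-200K-AWQ")
print(local_dir)  # local cache path containing model.safetensors and the config files
```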
modeling_yi.py
ADDED
@@ -0,0 +1,1028 @@
""" PyTorch Yi model."""
import math
from typing import List, Optional, Tuple, Union

import torch.utils.checkpoint
from einops import repeat
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from transformers.activations import ACT2FN
from transformers.modeling_outputs import (
    BaseModelOutputWithPast,
    CausalLMOutputWithPast,
    SequenceClassifierOutputWithPast,
)
from transformers.modeling_utils import PreTrainedModel
from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
from transformers.utils import (
    add_start_docstrings,
    add_start_docstrings_to_model_forward,
    logging,
    replace_return_docstrings,
)

from .configuration_yi import YiConfig

is_flash_attn_available = True
try:
    from flash_attn import flash_attn_func
except Exception:
    is_flash_attn_available = False

logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "YiConfig"


# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
    input_ids_shape: torch.Size,
    dtype: torch.dtype,
    device: torch.device,
    past_key_values_length: int = 0,
):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full(
        (tgt_len, tgt_len),
        torch.tensor(torch.finfo(dtype).min, device=device),
        device=device,
    )
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)

    if past_key_values_length > 0:
        mask = torch.cat(
            [
                torch.zeros(
                    tgt_len, past_key_values_length, dtype=dtype, device=device
                ),
                mask,
            ],
            dim=-1,
        )
    return mask[None, None, :, :].expand(
        bsz, 1, tgt_len, tgt_len + past_key_values_length
    )


# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """
    Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
    """
    bsz, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len

    expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)

    inverted_mask = 1.0 - expanded_mask

    return inverted_mask.masked_fill(
        inverted_mask.to(torch.bool), torch.finfo(dtype).min
    )


class YiRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-5):
        """
        YiRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)

        return self.weight * hidden_states.to(input_dtype)


ALL_LAYERNORM_LAYERS.append(YiRMSNorm)


class YiRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings=4096, base=5000000, device=None):
        super().__init__()

        self.dim = dim
        self.max_position_embeddings = max_position_embeddings
        self.base = base

        # Build here to make `torch.jit.trace` work.
        self._set_cos_sin_cache(seq_len=max_position_embeddings, device=device)

    def _set_cos_sin_cache(self, seq_len, device):
        self.max_seq_len_cached = seq_len
        inv_freq = 1.0 / (
            self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)
        )
        t = torch.arange(self.max_seq_len_cached, device=device, dtype=torch.float32)
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer(
            "cos_cached", emb.cos()[None, None, :, :], persistent=False
        )
        self.register_buffer(
            "sin_cached", emb.sin()[None, None, :, :], persistent=False
        )

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        if seq_len > self.max_seq_len_cached:
            self._set_cos_sin_cache(seq_len=seq_len, device=x.device)

        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids, flash_attn_available):
    # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    expand_dim = 2 if flash_attn_available else 1
    cos = cos[position_ids].unsqueeze(expand_dim)  # [bs, seq_len, dim]
    sin = sin[position_ids].unsqueeze(expand_dim)  # [bs, seq_len, dim]
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed


class YiMLP(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int, hidden_act: str):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.act_fn = ACT2FN[hidden_act]

    def forward(self, x):
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))


class YiAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: YiConfig):
super().__init__()
|
184 |
+
self.config = config
|
185 |
+
self.hidden_size = config.hidden_size
|
186 |
+
self.num_heads = config.num_attention_heads
|
187 |
+
self.head_dim = self.hidden_size // self.num_heads
|
188 |
+
self.num_key_value_heads = config.num_key_value_heads
|
189 |
+
self.num_key_value_groups = self.num_heads // self.num_key_value_heads
|
190 |
+
self.max_position_embeddings = config.max_position_embeddings
|
191 |
+
|
192 |
+
if (self.head_dim * self.num_heads) != self.hidden_size:
|
193 |
+
raise ValueError(
|
194 |
+
f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
|
195 |
+
f" and `num_heads`: {self.num_heads})."
|
196 |
+
)
|
197 |
+
self.q_proj = nn.Linear(
|
198 |
+
self.hidden_size, self.num_heads * self.head_dim, bias=False
|
199 |
+
)
|
200 |
+
self.k_proj = nn.Linear(
|
201 |
+
self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False
|
202 |
+
)
|
203 |
+
self.v_proj = nn.Linear(
|
204 |
+
self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False
|
205 |
+
)
|
206 |
+
self.o_proj = nn.Linear(
|
207 |
+
self.num_heads * self.head_dim, self.hidden_size, bias=False
|
208 |
+
)
|
209 |
+
|
210 |
+
self.rotary_emb = YiRotaryEmbedding(
|
211 |
+
self.head_dim,
|
212 |
+
max_position_embeddings=self.max_position_embeddings,
|
213 |
+
base=self.config.rope_theta,
|
214 |
+
)
|
215 |
+
|
216 |
+
def forward(
|
217 |
+
self,
|
218 |
+
hidden_states: torch.Tensor,
|
219 |
+
attention_mask: Optional[torch.Tensor] = None,
|
220 |
+
position_ids: Optional[torch.LongTensor] = None,
|
221 |
+
past_key_value: Optional[Tuple[torch.Tensor]] = None,
|
222 |
+
output_attentions: bool = False,
|
223 |
+
use_cache: bool = False,
|
224 |
+
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
|
225 |
+
bsz, q_len, _ = hidden_states.size()
|
226 |
+
|
227 |
+
query_states = self.q_proj(hidden_states).view(
|
228 |
+
bsz, q_len, self.num_heads, self.head_dim
|
229 |
+
)
|
230 |
+
|
231 |
+
key_states = self.k_proj(hidden_states).view(
|
232 |
+
bsz, q_len, self.num_key_value_heads, self.head_dim
|
233 |
+
)
|
234 |
+
value_states = self.v_proj(hidden_states).view(
|
235 |
+
bsz, q_len, self.num_key_value_heads, self.head_dim
|
236 |
+
)
|
237 |
+
|
238 |
+
if not is_flash_attn_available:
|
239 |
+
if self.num_key_value_groups > 1:
|
240 |
+
key_states = repeat(
|
241 |
+
key_states, f"b n h d -> b n (h {self.num_key_value_groups}) d"
|
242 |
+
)
|
243 |
+
value_states = repeat(
|
244 |
+
value_states, f"b n h d -> b n (h {self.num_key_value_groups}) d"
|
245 |
+
)
|
246 |
+
|
247 |
+
# b n h d -> b h n d
|
248 |
+
query_states = query_states.transpose(1, 2)
|
249 |
+
key_states = key_states.transpose(1, 2)
|
250 |
+
value_states = value_states.transpose(1, 2)
|
251 |
+
|
252 |
+
seq_dim = 1 if is_flash_attn_available else 2
|
253 |
+
kv_seq_len = key_states.shape[seq_dim]
|
254 |
+
if past_key_value is not None:
|
255 |
+
kv_seq_len += past_key_value[0].shape[seq_dim]
|
256 |
+
cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
|
257 |
+
query_states, key_states = apply_rotary_pos_emb(
|
258 |
+
query_states, key_states, cos, sin, position_ids, is_flash_attn_available
|
259 |
+
)
|
260 |
+
|
261 |
+
if past_key_value is not None:
|
262 |
+
# reuse k, v, self_attention
|
263 |
+
key_states = torch.cat([past_key_value[0], key_states], dim=seq_dim)
|
264 |
+
value_states = torch.cat([past_key_value[1], value_states], dim=seq_dim)
|
265 |
+
|
266 |
+
past_key_value = (key_states, value_states) if use_cache else None
|
267 |
+
|
268 |
+
if is_flash_attn_available:
|
269 |
+
attn_output = flash_attn_func(
|
270 |
+
query_states, key_states, value_states, dropout_p=0.0, causal=True
|
271 |
+
)
|
272 |
+
else:
|
273 |
+
attn_weights = torch.matmul(
|
274 |
+
query_states, key_states.transpose(2, 3)
|
275 |
+
) / math.sqrt(self.head_dim)
|
276 |
+
|
277 |
+
if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
|
278 |
+
raise ValueError(
|
279 |
+
f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
|
280 |
+
f" {attn_weights.size()}"
|
281 |
+
)
|
282 |
+
|
283 |
+
if attention_mask is not None:
|
284 |
+
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
|
285 |
+
raise ValueError(
|
286 |
+
f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is"
|
287 |
+
f"{attention_mask.size()}"
|
288 |
+
)
|
289 |
+
attn_weights = attn_weights + attention_mask
|
290 |
+
dtype_min = torch.tensor(
|
291 |
+
torch.finfo(attn_weights.dtype).min,
|
292 |
+
device=attn_weights.device,
|
293 |
+
dtype=attn_weights.dtype,
|
294 |
+
)
|
295 |
+
attn_weights = torch.max(attn_weights, dtype_min)
|
296 |
+
|
297 |
+
# upcast attention to fp32
|
298 |
+
attn_weights = nn.functional.softmax(
|
299 |
+
attn_weights, dim=-1, dtype=torch.float32
|
300 |
+
).to(query_states.dtype)
|
301 |
+
attn_output = torch.matmul(attn_weights, value_states)
|
302 |
+
|
303 |
+
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
|
304 |
+
raise ValueError(
|
305 |
+
f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
|
306 |
+
f" {attn_output.size()}"
|
307 |
+
)
|
308 |
+
|
309 |
+
if not is_flash_attn_available:
|
310 |
+
attn_output = attn_output.transpose(1, 2)
|
311 |
+
|
312 |
+
attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
|
313 |
+
|
314 |
+
attn_output = self.o_proj(attn_output)
|
315 |
+
|
316 |
+
if not output_attentions:
|
317 |
+
attn_weights = None
|
318 |
+
|
319 |
+
return attn_output, attn_weights, past_key_value
|
320 |
+
|
321 |
+
|
322 |
+
class YiDecoderLayer(nn.Module):
|
323 |
+
def __init__(self, config: YiConfig):
|
324 |
+
super().__init__()
|
325 |
+
|
326 |
+
self.hidden_size = config.hidden_size
|
327 |
+
self.self_attn = YiAttention(config=config)
|
328 |
+
self.mlp = YiMLP(
|
329 |
+
hidden_size=self.hidden_size,
|
330 |
+
intermediate_size=config.intermediate_size,
|
331 |
+
hidden_act=config.hidden_act,
|
332 |
+
)
|
333 |
+
|
334 |
+
self.ln1 = YiRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
|
335 |
+
self.ln2 = YiRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
|
336 |
+
|
337 |
+
def forward(
|
338 |
+
self,
|
339 |
+
hidden_states: torch.Tensor,
|
340 |
+
attention_mask: Optional[torch.Tensor] = None,
|
341 |
+
position_ids: Optional[torch.LongTensor] = None,
|
342 |
+
past_key_value: Optional[Tuple[torch.Tensor]] = None,
|
343 |
+
output_attentions: Optional[bool] = False,
|
344 |
+
use_cache: Optional[bool] = False,
|
345 |
+
) -> Tuple[
|
346 |
+
torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
|
347 |
+
]:
|
348 |
+
"""
|
349 |
+
Args:
|
350 |
+
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
|
351 |
+
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
|
352 |
+
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
|
353 |
+
output_attentions (`bool`, *optional*):
|
354 |
+
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
|
355 |
+
returned tensors for more detail.
|
356 |
+
use_cache (`bool`, *optional*):
|
357 |
+
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
|
358 |
+
(see `past_key_values`).
|
359 |
+
past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
|
360 |
+
"""
|
361 |
+
|
362 |
+
residual = hidden_states
|
363 |
+
|
364 |
+
hidden_states = self.ln1(hidden_states)
|
365 |
+
|
366 |
+
# Self Attention
|
367 |
+
hidden_states, self_attn_weights, present_key_value = self.self_attn(
|
368 |
+
hidden_states=hidden_states,
|
369 |
+
attention_mask=attention_mask,
|
370 |
+
position_ids=position_ids,
|
371 |
+
past_key_value=past_key_value,
|
372 |
+
output_attentions=output_attentions,
|
373 |
+
use_cache=use_cache,
|
374 |
+
)
|
375 |
+
hidden_states = residual + hidden_states
|
376 |
+
|
377 |
+
# Fully Connected
|
378 |
+
residual = hidden_states
|
379 |
+
hidden_states = self.ln2(hidden_states)
|
380 |
+
hidden_states = self.mlp(hidden_states)
|
381 |
+
hidden_states = residual + hidden_states
|
382 |
+
|
383 |
+
outputs = (hidden_states,)
|
384 |
+
|
385 |
+
if output_attentions:
|
386 |
+
outputs += (self_attn_weights,)
|
387 |
+
|
388 |
+
if use_cache:
|
389 |
+
outputs += (present_key_value,)
|
390 |
+
|
391 |
+
return outputs
|
392 |
+
|
393 |
+
|
394 |
+
Yi_START_DOCSTRING = r"""
|
395 |
+
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
|
396 |
+
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
|
397 |
+
etc.)
|
398 |
+
|
399 |
+
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
|
400 |
+
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
|
401 |
+
and behavior.
|
402 |
+
|
403 |
+
Parameters:
|
404 |
+
config ([`YiConfig`]):
|
405 |
+
Model configuration class with all the parameters of the model. Initializing with a config file does not
|
406 |
+
load the weights associated with the model, only the configuration. Check out the
|
407 |
+
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
408 |
+
"""
|
409 |
+
|
410 |
+
|
411 |
+
@add_start_docstrings(
|
412 |
+
"The bare Yi Model outputting raw hidden-states without any specific head on top.",
|
413 |
+
Yi_START_DOCSTRING,
|
414 |
+
)
|
415 |
+
class YiPreTrainedModel(PreTrainedModel):
|
416 |
+
config_class = YiConfig
|
417 |
+
base_model_prefix = "model"
|
418 |
+
supports_gradient_checkpointing = True
|
419 |
+
_no_split_modules = ["YiDecoderLayer"]
|
420 |
+
_skip_keys_device_placement = "past_key_values"
|
421 |
+
|
422 |
+
def _init_weights(self, module):
|
423 |
+
std = self.config.initializer_range
|
424 |
+
if isinstance(module, nn.Linear):
|
425 |
+
module.weight.data.normal_(mean=0.0, std=std)
|
426 |
+
if module.bias is not None:
|
427 |
+
module.bias.data.zero_()
|
428 |
+
elif isinstance(module, nn.Embedding):
|
429 |
+
module.weight.data.normal_(mean=0.0, std=std)
|
430 |
+
if module.padding_idx is not None:
|
431 |
+
module.weight.data[module.padding_idx].zero_()
|
432 |
+
|
433 |
+
def _set_gradient_checkpointing(self, module, value=False):
|
434 |
+
if isinstance(module, YiModel):
|
435 |
+
module.gradient_checkpointing = value
|
436 |
+
|
437 |
+
|
438 |
+
Yi_INPUTS_DOCSTRING = r"""
|
439 |
+
Args:
|
440 |
+
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
|
441 |
+
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
|
442 |
+
it.
|
443 |
+
|
444 |
+
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
|
445 |
+
[`PreTrainedTokenizer.__call__`] for details.
|
446 |
+
|
447 |
+
[What are input IDs?](../glossary#input-ids)
|
448 |
+
attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
|
449 |
+
Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
|
450 |
+
|
451 |
+
- 1 for tokens that are **not masked**,
|
452 |
+
- 0 for tokens that are **masked**.
|
453 |
+
|
454 |
+
[What are attention masks?](../glossary#attention-mask)
|
455 |
+
|
456 |
+
Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
|
457 |
+
[`PreTrainedTokenizer.__call__`] for details.
|
458 |
+
|
459 |
+
If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
|
460 |
+
`past_key_values`).
|
461 |
+
|
462 |
+
If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
|
463 |
+
and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
|
464 |
+
information on the default strategy.
|
465 |
+
|
466 |
+
- 1 indicates the head is **not masked**,
|
467 |
+
- 0 indicates the head is **masked**.
|
468 |
+
position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
|
469 |
+
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
|
470 |
+
config.n_positions - 1]`.
|
471 |
+
|
472 |
+
[What are position IDs?](../glossary#position-ids)
|
473 |
+
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
474 |
+
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
|
475 |
+
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
|
476 |
+
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
|
477 |
+
|
478 |
+
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
|
479 |
+
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
480 |
+
|
481 |
+
If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
|
482 |
+
don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
|
483 |
+
`decoder_input_ids` of shape `(batch_size, sequence_length)`.
|
484 |
+
inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
|
485 |
+
Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
|
486 |
+
is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
|
487 |
+
model's internal embedding lookup matrix.
|
488 |
+
use_cache (`bool`, *optional*):
|
489 |
+
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
|
490 |
+
`past_key_values`).
|
491 |
+
output_attentions (`bool`, *optional*):
|
492 |
+
Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
|
493 |
+
tensors for more detail.
|
494 |
+
output_hidden_states (`bool`, *optional*):
|
495 |
+
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
|
496 |
+
more detail.
|
497 |
+
return_dict (`bool`, *optional*):
|
498 |
+
Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
|
499 |
+
"""
|
500 |
+
|
501 |
+
|
502 |
+
@add_start_docstrings(
|
503 |
+
"The bare Yi Model outputting raw hidden-states without any specific head on top.",
|
504 |
+
Yi_START_DOCSTRING,
|
505 |
+
)
|
506 |
+
class YiModel(YiPreTrainedModel):
|
507 |
+
"""
|
508 |
+
Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`YiDecoderLayer`]
|
509 |
+
|
510 |
+
Args:
|
511 |
+
config: YiConfig
|
512 |
+
"""
|
513 |
+
|
514 |
+
def __init__(self, config: YiConfig):
|
515 |
+
super().__init__(config)
|
516 |
+
self.padding_idx = config.pad_token_id
|
517 |
+
self.vocab_size = config.vocab_size
|
518 |
+
|
519 |
+
self.embed_tokens = nn.Embedding(
|
520 |
+
config.vocab_size, config.hidden_size, self.padding_idx
|
521 |
+
)
|
522 |
+
self.layers = nn.ModuleList(
|
523 |
+
[YiDecoderLayer(config) for _ in range(config.num_hidden_layers)]
|
524 |
+
)
|
525 |
+
|
526 |
+
self.norm = YiRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
|
527 |
+
|
528 |
+
self.gradient_checkpointing = False
|
529 |
+
# Initialize weights and apply final processing
|
530 |
+
self.post_init()
|
531 |
+
|
532 |
+
def get_input_embeddings(self):
|
533 |
+
return self.embed_tokens
|
534 |
+
|
535 |
+
def set_input_embeddings(self, value):
|
536 |
+
self.embed_tokens = value
|
537 |
+
|
538 |
+
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
|
539 |
+
def _prepare_decoder_attention_mask(
|
540 |
+
self, attention_mask, input_ids, inputs_embeds, past_key_values_length
|
541 |
+
):
|
542 |
+
input_shape = input_ids.shape if input_ids is not None else inputs_embeds.shape[:-1]
|
543 |
+
# create causal mask
|
544 |
+
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
|
545 |
+
combined_attention_mask = None
|
546 |
+
if input_shape[-1] > 1:
|
547 |
+
combined_attention_mask = _make_causal_mask(
|
548 |
+
input_shape,
|
549 |
+
inputs_embeds.dtype,
|
550 |
+
device=inputs_embeds.device,
|
551 |
+
past_key_values_length=past_key_values_length,
|
552 |
+
)
|
553 |
+
|
554 |
+
if attention_mask is not None:
|
555 |
+
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
|
556 |
+
expanded_attn_mask = _expand_mask(
|
557 |
+
attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
|
558 |
+
).to(inputs_embeds.device)
|
559 |
+
combined_attention_mask = (
|
560 |
+
expanded_attn_mask
|
561 |
+
if combined_attention_mask is None
|
562 |
+
else expanded_attn_mask + combined_attention_mask
|
563 |
+
)
|
564 |
+
|
565 |
+
return combined_attention_mask
|
566 |
+
|
567 |
+
@add_start_docstrings_to_model_forward(Yi_INPUTS_DOCSTRING)
|
568 |
+
def forward(
|
569 |
+
self,
|
570 |
+
input_ids: torch.LongTensor = None,
|
571 |
+
attention_mask: Optional[torch.Tensor] = None,
|
572 |
+
position_ids: Optional[torch.LongTensor] = None,
|
573 |
+
past_key_values: Optional[List[torch.FloatTensor]] = None,
|
574 |
+
inputs_embeds: Optional[torch.FloatTensor] = None,
|
575 |
+
use_cache: Optional[bool] = None,
|
576 |
+
output_attentions: Optional[bool] = None,
|
577 |
+
output_hidden_states: Optional[bool] = None,
|
578 |
+
return_dict: Optional[bool] = None,
|
579 |
+
) -> Union[Tuple, BaseModelOutputWithPast]:
|
580 |
+
output_attentions = (
|
581 |
+
output_attentions
|
582 |
+
if output_attentions is not None
|
583 |
+
else self.config.output_attentions
|
584 |
+
)
|
585 |
+
output_hidden_states = (
|
586 |
+
output_hidden_states
|
587 |
+
if output_hidden_states is not None
|
588 |
+
else self.config.output_hidden_states
|
589 |
+
)
|
590 |
+
use_cache = use_cache if use_cache is not None else self.config.use_cache
|
591 |
+
|
592 |
+
return_dict = (
|
593 |
+
return_dict if return_dict is not None else self.config.use_return_dict
|
594 |
+
)
|
595 |
+
|
596 |
+
# retrieve input_ids and inputs_embeds
|
597 |
+
if input_ids is not None and inputs_embeds is not None:
|
598 |
+
raise ValueError(
|
599 |
+
"You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time"
|
600 |
+
)
|
601 |
+
elif input_ids is not None:
|
602 |
+
batch_size, seq_length = input_ids.shape
|
603 |
+
elif inputs_embeds is not None:
|
604 |
+
batch_size, seq_length, _ = inputs_embeds.shape
|
605 |
+
else:
|
606 |
+
raise ValueError(
|
607 |
+
"You have to specify either decoder_input_ids or decoder_inputs_embeds"
|
608 |
+
)
|
609 |
+
|
610 |
+
seq_length_with_past = seq_length
|
611 |
+
past_key_values_length = 0
|
612 |
+
|
613 |
+
if past_key_values is not None:
|
614 |
+
past_key_values_length = past_key_values[0][0].shape[2]
|
615 |
+
seq_length_with_past = seq_length_with_past + past_key_values_length
|
616 |
+
|
617 |
+
if position_ids is None:
|
618 |
+
device = input_ids.device if input_ids is not None else inputs_embeds.device
|
619 |
+
position_ids = torch.arange(
|
620 |
+
past_key_values_length,
|
621 |
+
seq_length + past_key_values_length,
|
622 |
+
dtype=torch.long,
|
623 |
+
device=device,
|
624 |
+
)
|
625 |
+
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
|
626 |
+
else:
|
627 |
+
position_ids = position_ids.view(-1, seq_length).long()
|
628 |
+
|
629 |
+
if inputs_embeds is None:
|
630 |
+
inputs_embeds = self.embed_tokens(input_ids)
|
631 |
+
|
632 |
+
if not is_flash_attn_available:
|
633 |
+
# embed positions
|
634 |
+
if attention_mask is None:
|
635 |
+
attention_mask = torch.ones(
|
636 |
+
(batch_size, seq_length_with_past),
|
637 |
+
dtype=torch.bool,
|
638 |
+
device=inputs_embeds.device,
|
639 |
+
)
|
640 |
+
attention_mask = self._prepare_decoder_attention_mask(
|
641 |
+
attention_mask,
|
642 |
+
input_ids,
|
643 |
+
inputs_embeds,
|
644 |
+
past_key_values_length,
|
645 |
+
)
|
646 |
+
else:
|
647 |
+
attention_mask = None
|
648 |
+
|
649 |
+
hidden_states = inputs_embeds
|
650 |
+
if self.gradient_checkpointing and self.training:
|
651 |
+
if use_cache:
|
652 |
+
logger.warning_once(
|
653 |
+
"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
|
654 |
+
)
|
655 |
+
use_cache = False
|
656 |
+
|
657 |
+
# decoder layers
|
658 |
+
all_hidden_states = () if output_hidden_states else None
|
659 |
+
all_self_attns = () if output_attentions else None
|
660 |
+
next_decoder_cache = () if use_cache else None
|
661 |
+
|
662 |
+
for idx, decoder_layer in enumerate(self.layers):
|
663 |
+
if output_hidden_states:
|
664 |
+
all_hidden_states += (hidden_states,)
|
665 |
+
|
666 |
+
past_key_value = (
|
667 |
+
past_key_values[idx] if past_key_values is not None else None
|
668 |
+
)
|
669 |
+
|
670 |
+
if self.gradient_checkpointing and self.training:
|
671 |
+
|
672 |
+
def create_custom_forward(module):
|
673 |
+
def custom_forward(*inputs):
|
674 |
+
# None for past_key_value
|
675 |
+
return module(*inputs, past_key_value, output_attentions)
|
676 |
+
|
677 |
+
return custom_forward
|
678 |
+
|
679 |
+
layer_outputs = torch.utils.checkpoint.checkpoint(
|
680 |
+
create_custom_forward(decoder_layer),
|
681 |
+
hidden_states,
|
682 |
+
attention_mask,
|
683 |
+
position_ids,
|
684 |
+
)
|
685 |
+
else:
|
686 |
+
layer_outputs = decoder_layer(
|
687 |
+
hidden_states,
|
688 |
+
attention_mask=attention_mask,
|
689 |
+
position_ids=position_ids,
|
690 |
+
past_key_value=past_key_value,
|
691 |
+
output_attentions=output_attentions,
|
692 |
+
use_cache=use_cache,
|
693 |
+
)
|
694 |
+
|
695 |
+
hidden_states = layer_outputs[0]
|
696 |
+
|
697 |
+
if use_cache:
|
698 |
+
next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
|
699 |
+
|
700 |
+
if output_attentions:
|
701 |
+
all_self_attns += (layer_outputs[1],)
|
702 |
+
|
703 |
+
hidden_states = self.norm(hidden_states)
|
704 |
+
# add hidden states from the last decoder layer
|
705 |
+
if output_hidden_states:
|
706 |
+
all_hidden_states += (hidden_states,)
|
707 |
+
|
708 |
+
next_cache = next_decoder_cache if use_cache else None
|
709 |
+
if not return_dict:
|
710 |
+
return tuple(
|
711 |
+
v
|
712 |
+
for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
|
713 |
+
if v is not None
|
714 |
+
)
|
715 |
+
return BaseModelOutputWithPast(
|
716 |
+
last_hidden_state=hidden_states,
|
717 |
+
past_key_values=next_cache,
|
718 |
+
hidden_states=all_hidden_states,
|
719 |
+
attentions=all_self_attns,
|
720 |
+
)
|
721 |
+
|
722 |
+
|
723 |
+
class YiForCausalLM(YiPreTrainedModel):
|
724 |
+
_tied_weights_keys = ["lm_head.weight"]
|
725 |
+
|
726 |
+
def __init__(self, config):
|
727 |
+
super().__init__(config)
|
728 |
+
self.model = YiModel(config)
|
729 |
+
|
730 |
+
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
|
731 |
+
|
732 |
+
# Initialize weights and apply final processing
|
733 |
+
self.post_init()
|
734 |
+
|
735 |
+
def get_input_embeddings(self):
|
736 |
+
return self.model.embed_tokens
|
737 |
+
|
738 |
+
def set_input_embeddings(self, value):
|
739 |
+
self.model.embed_tokens = value
|
740 |
+
|
741 |
+
def get_output_embeddings(self):
|
742 |
+
return self.lm_head
|
743 |
+
|
744 |
+
def set_output_embeddings(self, new_embeddings):
|
745 |
+
self.lm_head = new_embeddings
|
746 |
+
|
747 |
+
def set_decoder(self, decoder):
|
748 |
+
self.model = decoder
|
749 |
+
|
750 |
+
def get_decoder(self):
|
751 |
+
return self.model
|
752 |
+
|
753 |
+
@add_start_docstrings_to_model_forward(Yi_INPUTS_DOCSTRING)
|
754 |
+
@replace_return_docstrings(
|
755 |
+
output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC
|
756 |
+
)
|
757 |
+
def forward(
|
758 |
+
self,
|
759 |
+
input_ids: torch.LongTensor = None,
|
760 |
+
attention_mask: Optional[torch.Tensor] = None,
|
761 |
+
position_ids: Optional[torch.LongTensor] = None,
|
762 |
+
past_key_values: Optional[List[torch.FloatTensor]] = None,
|
763 |
+
inputs_embeds: Optional[torch.FloatTensor] = None,
|
764 |
+
labels: Optional[torch.LongTensor] = None,
|
765 |
+
use_cache: Optional[bool] = None,
|
766 |
+
output_attentions: Optional[bool] = None,
|
767 |
+
output_hidden_states: Optional[bool] = None,
|
768 |
+
return_dict: Optional[bool] = None,
|
769 |
+
) -> Union[Tuple, CausalLMOutputWithPast]:
|
770 |
+
r"""
|
771 |
+
Args:
|
772 |
+
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
|
773 |
+
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
|
774 |
+
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
|
775 |
+
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
|
776 |
+
|
777 |
+
Returns:
|
778 |
+
|
779 |
+
Example:
|
780 |
+
|
781 |
+
```python
|
782 |
+
>>> from transformers import AutoTokenizer, YiForCausalLM
|
783 |
+
|
784 |
+
>>> model = YiForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
|
785 |
+
>>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
|
786 |
+
|
787 |
+
>>> prompt = "Hey, are you conscious? Can you talk to me?"
|
788 |
+
>>> inputs = tokenizer(prompt, return_tensors="pt")
|
789 |
+
|
790 |
+
>>> # Generate
|
791 |
+
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
|
792 |
+
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
|
793 |
+
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
|
794 |
+
```"""
|
795 |
+
|
796 |
+
output_attentions = (
|
797 |
+
output_attentions
|
798 |
+
if output_attentions is not None
|
799 |
+
else self.config.output_attentions
|
800 |
+
)
|
801 |
+
output_hidden_states = (
|
802 |
+
output_hidden_states
|
803 |
+
if output_hidden_states is not None
|
804 |
+
else self.config.output_hidden_states
|
805 |
+
)
|
806 |
+
return_dict = (
|
807 |
+
return_dict if return_dict is not None else self.config.use_return_dict
|
808 |
+
)
|
809 |
+
|
810 |
+
# decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
|
811 |
+
outputs = self.model(
|
812 |
+
input_ids=input_ids,
|
813 |
+
attention_mask=attention_mask,
|
814 |
+
position_ids=position_ids,
|
815 |
+
past_key_values=past_key_values,
|
816 |
+
inputs_embeds=inputs_embeds,
|
817 |
+
use_cache=use_cache,
|
818 |
+
output_attentions=output_attentions,
|
819 |
+
output_hidden_states=output_hidden_states,
|
820 |
+
return_dict=return_dict,
|
821 |
+
)
|
822 |
+
|
823 |
+
hidden_states = outputs[0]
|
824 |
+
logits = self.lm_head(hidden_states)
|
825 |
+
|
826 |
+
loss = None
|
827 |
+
if labels is not None:
|
828 |
+
# Shift so that tokens < n predict n
|
829 |
+
shift_logits = logits[..., :-1, :].contiguous()
|
830 |
+
shift_labels = labels[..., 1:].contiguous()
|
831 |
+
# Flatten the tokens
|
832 |
+
loss_fct = CrossEntropyLoss()
|
833 |
+
shift_logits = shift_logits.view(-1, self.config.vocab_size)
|
834 |
+
shift_labels = shift_labels.view(-1)
|
835 |
+
# Enable model parallelism
|
836 |
+
shift_labels = shift_labels.to(shift_logits.device)
|
837 |
+
loss = loss_fct(shift_logits, shift_labels)
|
838 |
+
|
839 |
+
if not return_dict:
|
840 |
+
output = (logits,) + outputs[1:]
|
841 |
+
return (loss,) + output if loss is not None else output
|
842 |
+
|
843 |
+
return CausalLMOutputWithPast(
|
844 |
+
loss=loss,
|
845 |
+
logits=logits,
|
846 |
+
past_key_values=outputs.past_key_values,
|
847 |
+
hidden_states=outputs.hidden_states,
|
848 |
+
attentions=outputs.attentions,
|
849 |
+
)
|
850 |
+
|
851 |
+
def prepare_inputs_for_generation(
|
852 |
+
self,
|
853 |
+
input_ids,
|
854 |
+
past_key_values=None,
|
855 |
+
attention_mask=None,
|
856 |
+
inputs_embeds=None,
|
857 |
+
**kwargs,
|
858 |
+
):
|
859 |
+
if past_key_values:
|
860 |
+
input_ids = input_ids[:, -1:]
|
861 |
+
|
862 |
+
position_ids = kwargs.get("position_ids", None)
|
863 |
+
if attention_mask is not None and position_ids is None:
|
864 |
+
# create position_ids on the fly for batch generation
|
865 |
+
position_ids = attention_mask.long().cumsum(-1) - 1
|
866 |
+
position_ids.masked_fill_(attention_mask == 0, 1)
|
867 |
+
if past_key_values:
|
868 |
+
position_ids = position_ids[:, -1].unsqueeze(-1)
|
869 |
+
|
870 |
+
# if `inputs_embeds` are passed, we only want to use them in the 1st generation step
|
871 |
+
if inputs_embeds is not None and past_key_values is None:
|
872 |
+
model_inputs = {"inputs_embeds": inputs_embeds}
|
873 |
+
else:
|
874 |
+
model_inputs = {"input_ids": input_ids}
|
875 |
+
|
876 |
+
model_inputs.update(
|
877 |
+
{
|
878 |
+
"position_ids": position_ids,
|
879 |
+
"past_key_values": past_key_values,
|
880 |
+
"use_cache": kwargs.get("use_cache"),
|
881 |
+
"attention_mask": attention_mask,
|
882 |
+
}
|
883 |
+
)
|
884 |
+
return model_inputs
|
885 |
+
|
886 |
+
@staticmethod
|
887 |
+
def _reorder_cache(past_key_values, beam_idx):
|
888 |
+
reordered_past = ()
|
889 |
+
for layer_past in past_key_values:
|
890 |
+
reordered_past += (
|
891 |
+
tuple(
|
892 |
+
past_state.index_select(0, beam_idx.to(past_state.device))
|
893 |
+
for past_state in layer_past
|
894 |
+
),
|
895 |
+
)
|
896 |
+
return reordered_past
|
897 |
+
|
898 |
+
|
899 |
+
@add_start_docstrings(
|
900 |
+
"""
|
901 |
+
The Yi Model transformer with a sequence classification head on top (linear layer).
|
902 |
+
|
903 |
+
[`YiForSequenceClassification`] uses the last token in order to do the classification, as other causal models
|
904 |
+
(e.g. GPT-2) do.
|
905 |
+
|
906 |
+
Since it does classification on the last token, it requires to know the position of the last token. If a
|
907 |
+
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
|
908 |
+
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
|
909 |
+
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
|
910 |
+
each row of the batch).
|
911 |
+
""",
|
912 |
+
Yi_START_DOCSTRING,
|
913 |
+
)
|
914 |
+
class YiForSequenceClassification(YiPreTrainedModel):
|
915 |
+
def __init__(self, config):
|
916 |
+
super().__init__(config)
|
917 |
+
self.num_labels = config.num_labels
|
918 |
+
self.model = YiModel(config)
|
919 |
+
self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
|
920 |
+
|
921 |
+
# Initialize weights and apply final processing
|
922 |
+
self.post_init()
|
923 |
+
|
924 |
+
def get_input_embeddings(self):
|
925 |
+
return self.model.embed_tokens
|
926 |
+
|
927 |
+
def set_input_embeddings(self, value):
|
928 |
+
self.model.embed_tokens = value
|
929 |
+
|
930 |
+
@add_start_docstrings_to_model_forward(Yi_INPUTS_DOCSTRING)
|
931 |
+
def forward(
|
932 |
+
self,
|
933 |
+
input_ids: torch.LongTensor = None,
|
934 |
+
attention_mask: Optional[torch.Tensor] = None,
|
935 |
+
position_ids: Optional[torch.LongTensor] = None,
|
936 |
+
past_key_values: Optional[List[torch.FloatTensor]] = None,
|
937 |
+
inputs_embeds: Optional[torch.FloatTensor] = None,
|
938 |
+
labels: Optional[torch.LongTensor] = None,
|
939 |
+
use_cache: Optional[bool] = None,
|
940 |
+
output_attentions: Optional[bool] = None,
|
941 |
+
output_hidden_states: Optional[bool] = None,
|
942 |
+
return_dict: Optional[bool] = None,
|
943 |
+
) -> Union[Tuple, SequenceClassifierOutputWithPast]:
|
944 |
+
r"""
|
945 |
+
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
|
946 |
+
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
|
947 |
+
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
|
948 |
+
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
|
949 |
+
"""
|
950 |
+
return_dict = (
|
951 |
+
return_dict if return_dict is not None else self.config.use_return_dict
|
952 |
+
)
|
953 |
+
|
954 |
+
transformer_outputs = self.model(
|
955 |
+
input_ids,
|
956 |
+
attention_mask=attention_mask,
|
957 |
+
position_ids=position_ids,
|
958 |
+
past_key_values=past_key_values,
|
959 |
+
inputs_embeds=inputs_embeds,
|
960 |
+
use_cache=use_cache,
|
961 |
+
output_attentions=output_attentions,
|
962 |
+
output_hidden_states=output_hidden_states,
|
963 |
+
return_dict=return_dict,
|
964 |
+
)
|
965 |
+
hidden_states = transformer_outputs[0]
|
966 |
+
logits = self.score(hidden_states)
|
967 |
+
|
968 |
+
if input_ids is not None:
|
969 |
+
batch_size = input_ids.shape[0]
|
970 |
+
else:
|
971 |
+
batch_size = inputs_embeds.shape[0]
|
972 |
+
|
973 |
+
if self.config.pad_token_id is None and batch_size != 1:
|
974 |
+
raise ValueError(
|
975 |
+
"Cannot handle batch sizes > 1 if no padding token is defined."
|
976 |
+
)
|
977 |
+
if self.config.pad_token_id is None:
|
978 |
+
sequence_lengths = -1
|
979 |
+
else:
|
980 |
+
if input_ids is not None:
|
981 |
+
sequence_lengths = (
|
982 |
+
torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1
|
983 |
+
).to(logits.device)
|
984 |
+
else:
|
985 |
+
sequence_lengths = -1
|
986 |
+
|
987 |
+
pooled_logits = logits[
|
988 |
+
torch.arange(batch_size, device=logits.device), sequence_lengths
|
989 |
+
]
|
990 |
+
|
991 |
+
loss = None
|
992 |
+
if labels is not None:
|
993 |
+
labels = labels.to(logits.device)
|
994 |
+
if self.config.problem_type is None:
|
995 |
+
if self.num_labels == 1:
|
996 |
+
self.config.problem_type = "regression"
|
997 |
+
elif self.num_labels > 1 and (
|
998 |
+
labels.dtype == torch.long or labels.dtype == torch.int
|
999 |
+
):
|
1000 |
+
self.config.problem_type = "single_label_classification"
|
1001 |
+
else:
|
1002 |
+
self.config.problem_type = "multi_label_classification"
|
1003 |
+
|
1004 |
+
if self.config.problem_type == "regression":
|
1005 |
+
loss_fct = MSELoss()
|
1006 |
+
if self.num_labels == 1:
|
1007 |
+
loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
|
1008 |
+
else:
|
1009 |
+
loss = loss_fct(pooled_logits, labels)
|
1010 |
+
elif self.config.problem_type == "single_label_classification":
|
1011 |
+
loss_fct = CrossEntropyLoss()
|
1012 |
+
loss = loss_fct(
|
1013 |
+
pooled_logits.view(-1, self.num_labels), labels.view(-1)
|
1014 |
+
)
|
1015 |
+
elif self.config.problem_type == "multi_label_classification":
|
1016 |
+
loss_fct = BCEWithLogitsLoss()
|
1017 |
+
loss = loss_fct(pooled_logits, labels)
|
1018 |
+
if not return_dict:
|
1019 |
+
output = (pooled_logits,) + transformer_outputs[1:]
|
1020 |
+
return ((loss,) + output) if loss is not None else output
|
1021 |
+
|
1022 |
+
return SequenceClassifierOutputWithPast(
|
1023 |
+
loss=loss,
|
1024 |
+
logits=pooled_logits,
|
1025 |
+
past_key_values=transformer_outputs.past_key_values,
|
1026 |
+
hidden_states=transformer_outputs.hidden_states,
|
1027 |
+
attentions=transformer_outputs.attentions,
|
1028 |
+
)
|
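
The file above implements the full Yi decoder stack: YiRMSNorm, YiRotaryEmbedding (with rotary embeddings applied in apply_rotary_pos_emb), grouped-query YiAttention with an optional flash-attn fast path, YiDecoderLayer/YiModel, and the YiForCausalLM / YiForSequenceClassification heads. A minimal loading sketch follows. The local path is a placeholder, trust_remote_code=True is what lets transformers execute modeling_yi.py through the repo's auto_map, and since this commit also ships an AWQ quant_config.json (next file), the 4-bit weights may in practice need an AWQ-aware loader rather than plain from_pretrained; this is only an illustration of the custom-code loading flow, not a statement of the official recipe.

# Sketch: load the uploaded Yi checkpoint and generate a completion.
# Assumes this repository has been cloned to ./yi-model and that its
# config.json wires AutoModelForCausalLM to the custom YiForCausalLM class.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./yi-model"  # hypothetical local clone of this repository
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.float16,   # dtype/device settings are illustrative only
    device_map="auto",
    trust_remote_code=True,      # required to execute modeling_yi.py / tokenization_yi.py
)

inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
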
quant_config.json
ADDED
@@ -0,0 +1,6 @@
1 |
+
{
|
2 |
+
"zero_point": true,
|
3 |
+
"q_group_size": 128,
|
4 |
+
"w_bit": 4,
|
5 |
+
"version": "GEMM"
|
6 |
+
}
|
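
quant_config.json above follows the AWQ convention: 4-bit weights (w_bit), 128-element quantization groups (q_group_size), asymmetric zero-point quantization, and the GEMM kernel variant. A small sketch for reading and sanity-checking it, assuming the file sits in a local clone of the repo:

# Sketch: read the AWQ-style quantization config shipped with this commit and
# sanity-check the fields before handing the checkpoint to a 4-bit runtime.
import json

with open("quant_config.json") as f:  # path relative to a local clone (assumption)
    qcfg = json.load(f)

assert qcfg["w_bit"] == 4, "weights are expected to be 4-bit"
assert qcfg["version"] == "GEMM", "this checkpoint targets the GEMM kernel variant"
print(f"group size: {qcfg['q_group_size']}, zero-point: {qcfg['zero_point']}")
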
special_tokens_map.json
ADDED
@@ -0,0 +1,30 @@
1 |
+
{
|
2 |
+
"bos_token": {
|
3 |
+
"content": "<|startoftext|>",
|
4 |
+
"lstrip": false,
|
5 |
+
"normalized": true,
|
6 |
+
"rstrip": false,
|
7 |
+
"single_word": false
|
8 |
+
},
|
9 |
+
"eos_token": {
|
10 |
+
"content": "<|endoftext|>",
|
11 |
+
"lstrip": false,
|
12 |
+
"normalized": true,
|
13 |
+
"rstrip": false,
|
14 |
+
"single_word": false
|
15 |
+
},
|
16 |
+
"pad_token": {
|
17 |
+
"content": "<unk>",
|
18 |
+
"lstrip": false,
|
19 |
+
"normalized": true,
|
20 |
+
"rstrip": false,
|
21 |
+
"single_word": false
|
22 |
+
},
|
23 |
+
"unk_token": {
|
24 |
+
"content": "<unk>",
|
25 |
+
"lstrip": false,
|
26 |
+
"normalized": true,
|
27 |
+
"rstrip": false,
|
28 |
+
"single_word": false
|
29 |
+
}
|
30 |
+
}
|
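
special_tokens_map.json maps BOS to <|startoftext|>, EOS to <|endoftext|>, and reuses <unk> as both the unknown and the padding token. A short sketch that inspects the mapping from a local clone (file name as committed above):

# Sketch: inspect the special-token mapping and confirm that <unk> doubles as
# the padding token, as declared in special_tokens_map.json.
import json

with open("special_tokens_map.json") as f:  # local clone of this repo (assumption)
    tokens = json.load(f)

print(tokens["bos_token"]["content"])   # <|startoftext|>
print(tokens["eos_token"]["content"])   # <|endoftext|>
assert tokens["pad_token"]["content"] == tokens["unk_token"]["content"] == "<unk>"
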
tokenization_yi.py
ADDED
@@ -0,0 +1,255 @@
1 |
+
import os
|
2 |
+
from shutil import copyfile
|
3 |
+
from typing import Any, Dict, List, Optional, Tuple
|
4 |
+
|
5 |
+
import sentencepiece as spm
|
6 |
+
from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
|
7 |
+
from transformers.utils import logging
|
8 |
+
|
9 |
+
logger = logging.get_logger(__name__)
|
10 |
+
|
11 |
+
VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}
|
12 |
+
|
13 |
+
PRETRAINED_VOCAB_FILES_MAP = {
|
14 |
+
"vocab_file": {},
|
15 |
+
"tokenizer_file": {},
|
16 |
+
}
|
17 |
+
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {}
|
18 |
+
|
19 |
+
|
20 |
+
class YiTokenizer(PreTrainedTokenizer):
|
21 |
+
"""
|
22 |
+
Construct a Yi tokenizer. Based on a SentencePiece byte-pair-encoding model (the bundled `tokenizer.model`).
|
23 |
+
|
24 |
+
Args:
|
25 |
+
vocab_file (`str`):
|
26 |
+
Path to the vocabulary file.
|
27 |
+
"""
|
28 |
+
|
29 |
+
vocab_files_names = VOCAB_FILES_NAMES
|
30 |
+
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
|
31 |
+
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
|
32 |
+
model_input_names = ["input_ids", "attention_mask"]
|
33 |
+
|
34 |
+
def __init__(
|
35 |
+
self,
|
36 |
+
vocab_file,
|
37 |
+
unk_token="<unk>",
|
38 |
+
bos_token="<|startoftext|>",
|
39 |
+
eos_token="<|endoftext|>",
|
40 |
+
pad_token="<unk>",
|
41 |
+
sp_model_kwargs: Optional[Dict[str, Any]] = None,
|
42 |
+
add_bos_token=True,
|
43 |
+
add_eos_token=False,
|
44 |
+
clean_up_tokenization_spaces=False,
|
45 |
+
**kwargs,
|
46 |
+
):
|
47 |
+
self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
|
48 |
+
bos_token = (
|
49 |
+
AddedToken(bos_token, lstrip=False, rstrip=False)
|
50 |
+
if isinstance(bos_token, str)
|
51 |
+
else bos_token
|
52 |
+
)
|
53 |
+
eos_token = (
|
54 |
+
AddedToken(eos_token, lstrip=False, rstrip=False)
|
55 |
+
if isinstance(eos_token, str)
|
56 |
+
else eos_token
|
57 |
+
)
|
58 |
+
unk_token = (
|
59 |
+
AddedToken(unk_token, lstrip=False, rstrip=False)
|
60 |
+
if isinstance(unk_token, str)
|
61 |
+
else unk_token
|
62 |
+
)
|
63 |
+
pad_token = (
|
64 |
+
AddedToken(pad_token, lstrip=False, rstrip=False)
|
65 |
+
if isinstance(pad_token, str)
|
66 |
+
else pad_token
|
67 |
+
)
|
68 |
+
self.vocab_file = vocab_file
|
69 |
+
self.add_bos_token = add_bos_token
|
70 |
+
self.add_eos_token = add_eos_token
|
71 |
+
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
|
72 |
+
self.sp_model.Load(vocab_file)
|
73 |
+
super().__init__(
|
74 |
+
bos_token=bos_token,
|
75 |
+
eos_token=eos_token,
|
76 |
+
unk_token=unk_token,
|
77 |
+
pad_token=pad_token,
|
78 |
+
add_bos_token=add_bos_token,
|
79 |
+
add_eos_token=add_eos_token,
|
80 |
+
sp_model_kwargs=self.sp_model_kwargs,
|
81 |
+
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
|
82 |
+
**kwargs,
|
83 |
+
)
|
84 |
+
|
85 |
+
def __getstate__(self):
|
86 |
+
state = self.__dict__.copy()
|
87 |
+
state["sp_model"] = None
|
88 |
+
return state
|
89 |
+
|
90 |
+
def __setstate__(self, d):
|
91 |
+
self.__dict__ = d
|
92 |
+
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
|
93 |
+
self.sp_model.Load(self.vocab_file)
|
94 |
+
|
95 |
+
@property
|
96 |
+
def vocab_size(self):
|
97 |
+
"""Returns vocab size"""
|
98 |
+
return self.sp_model.get_piece_size()
|
99 |
+
|
100 |
+
def get_vocab(self):
|
101 |
+
"""Returns vocab as a dict"""
|
102 |
+
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
|
103 |
+
vocab.update(self.added_tokens_encoder)
|
104 |
+
return vocab
|
105 |
+
|
106 |
+
def _tokenize(self, text):
|
107 |
+
"""Returns a tokenized string."""
|
108 |
+
return self.sp_model.encode(text, out_type=str)
|
109 |
+
|
110 |
+
def _convert_token_to_id(self, token):
|
111 |
+
"""Converts a token (str) in an id using the vocab."""
|
112 |
+
return self.sp_model.piece_to_id(token)
|
113 |
+
|
114 |
+
def _convert_id_to_token(self, index):
|
115 |
+
"""Converts an index (integer) in a token (str) using the vocab."""
|
116 |
+
token = self.sp_model.IdToPiece(index)
|
117 |
+
return token
|
118 |
+
|
119 |
+
def convert_tokens_to_string(self, tokens):
|
120 |
+
"""Converts a sequence of tokens (string) in a single string."""
|
121 |
+
current_sub_tokens = []
|
122 |
+
out_string = ""
|
123 |
+
prev_is_special = False
|
124 |
+
for i, token in enumerate(tokens):
|
125 |
+
# make sure that special tokens are not decoded using sentencepiece model
|
126 |
+
if token in self.all_special_tokens:
|
127 |
+
if not prev_is_special and i != 0:
|
128 |
+
out_string += " "
|
129 |
+
out_string += self.sp_model.decode(current_sub_tokens) + token
|
130 |
+
prev_is_special = True
|
131 |
+
current_sub_tokens = []
|
132 |
+
else:
|
133 |
+
current_sub_tokens.append(token)
|
134 |
+
prev_is_special = False
|
135 |
+
out_string += self.sp_model.decode(current_sub_tokens)
|
136 |
+
return out_string
|
137 |
+
|
138 |
+
def save_vocabulary(
|
139 |
+
self, save_directory, filename_prefix: Optional[str] = None
|
140 |
+
) -> Tuple[str]:
|
141 |
+
"""
|
142 |
+
Save the vocabulary and special tokens file to a directory.
|
143 |
+
|
144 |
+
Args:
|
145 |
+
save_directory (`str`):
|
146 |
+
The directory in which to save the vocabulary.
|
147 |
+
|
148 |
+
Returns:
|
149 |
+
`Tuple(str)`: Paths to the files saved.
|
150 |
+
"""
|
151 |
+
if not os.path.isdir(save_directory):
|
152 |
+
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
|
153 |
+
return
|
154 |
+
out_vocab_file = os.path.join(
|
155 |
+
save_directory,
|
156 |
+
(filename_prefix + "-" if filename_prefix else "")
|
157 |
+
+ VOCAB_FILES_NAMES["vocab_file"],
|
158 |
+
)
|
159 |
+
|
160 |
+
if os.path.abspath(self.vocab_file) != os.path.abspath(
|
161 |
+
out_vocab_file
|
162 |
+
) and os.path.isfile(self.vocab_file):
|
163 |
+
copyfile(self.vocab_file, out_vocab_file)
|
164 |
+
elif not os.path.isfile(self.vocab_file):
|
165 |
+
with open(out_vocab_file, "wb") as fi:
|
166 |
+
content_spiece_model = self.sp_model.serialized_model_proto()
|
167 |
+
fi.write(content_spiece_model)
|
168 |
+
|
169 |
+
return (out_vocab_file,)
|
170 |
+
|
171 |
+
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
|
172 |
+
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
|
173 |
+
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
|
174 |
+
|
175 |
+
output = bos_token_id + token_ids_0 + eos_token_id
|
176 |
+
|
177 |
+
if token_ids_1 is not None:
|
178 |
+
output = output + bos_token_id + token_ids_1 + eos_token_id
|
179 |
+
|
180 |
+
return output
|
181 |
+
|
182 |
+
def get_special_tokens_mask(
|
183 |
+
self,
|
184 |
+
token_ids_0: List[int],
|
185 |
+
token_ids_1: Optional[List[int]] = None,
|
186 |
+
already_has_special_tokens: bool = False,
|
187 |
+
) -> List[int]:
|
188 |
+
"""
|
189 |
+
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
|
190 |
+
special tokens using the tokenizer `prepare_for_model` method.
|
191 |
+
|
192 |
+
Args:
|
193 |
+
token_ids_0 (`List[int]`):
|
194 |
+
List of IDs.
|
195 |
+
token_ids_1 (`List[int]`, *optional*):
|
196 |
+
Optional second list of IDs for sequence pairs.
|
197 |
+
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
|
198 |
+
Whether or not the token list is already formatted with special tokens for the model.
|
199 |
+
|
200 |
+
Returns:
|
201 |
+
`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
|
202 |
+
"""
|
203 |
+
if already_has_special_tokens:
|
204 |
+
return super().get_special_tokens_mask(
|
205 |
+
token_ids_0=token_ids_0,
|
206 |
+
token_ids_1=token_ids_1,
|
207 |
+
already_has_special_tokens=True,
|
208 |
+
)
|
209 |
+
|
210 |
+
bos_token_id = [1] if self.add_bos_token else []
|
211 |
+
eos_token_id = [1] if self.add_eos_token else []
|
212 |
+
|
213 |
+
if token_ids_1 is None:
|
214 |
+
return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id
|
215 |
+
return (
|
216 |
+
bos_token_id
|
217 |
+
+ ([0] * len(token_ids_0))
|
218 |
+
+ eos_token_id
|
219 |
+
+ bos_token_id
|
220 |
+
+ ([0] * len(token_ids_1))
|
221 |
+
+ eos_token_id
|
222 |
+
)
|
223 |
+
|
224 |
+
def create_token_type_ids_from_sequences(
|
225 |
+
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
|
226 |
+
) -> List[int]:
|
227 |
+
"""
|
228 |
+
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. A Yi
|
229 |
+
sequence pair mask has the following format:
|
230 |
+
|
231 |
+
```
|
232 |
+
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
|
233 |
+
| first sequence | second sequence |
|
234 |
+
```
|
235 |
+
|
236 |
+
if token_ids_1 is None, only returns the first portion of the mask (0s).
|
237 |
+
|
238 |
+
Args:
|
239 |
+
token_ids_0 (`List[int]`):
|
240 |
+
List of ids.
|
241 |
+
token_ids_1 (`List[int]`, *optional*):
|
242 |
+
Optional second list of IDs for sequence pairs.
|
243 |
+
|
244 |
+
Returns:
|
245 |
+
`List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
|
246 |
+
"""
|
247 |
+
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
|
248 |
+
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
|
249 |
+
|
250 |
+
output = [0] * len(bos_token_id + token_ids_0 + eos_token_id)
|
251 |
+
|
252 |
+
if token_ids_1 is not None:
|
253 |
+
output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)
|
254 |
+
|
255 |
+
return output
|
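
tokenization_yi.py wraps a SentencePiece model: _tokenize and _convert_token_to_id delegate to sp_model, build_inputs_with_special_tokens prepends BOS (and optionally appends EOS), and save_vocabulary copies or re-serializes tokenizer.model. A usage sketch that drives the class directly against the bundled vocabulary; the working directory is assumed to be a local clone of this repo:

# Sketch: exercise YiTokenizer directly with the committed tokenizer.model.
from tokenization_yi import YiTokenizer

tok = YiTokenizer(vocab_file="tokenizer.model", add_bos_token=True)

ids = tok.encode("Hello Yi")          # BOS id is prepended because add_bos_token=True
print(ids[0] == tok.bos_token_id)     # first id should be the BOS id
print(tok.convert_ids_to_tokens(ids))
print(tok.decode(ids, skip_special_tokens=True))
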
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
|
|
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
|
3 |
+
size 1033105
|
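
tokenizer.model is stored as a Git LFS pointer (a sha256 oid plus a byte size). A sketch for verifying that a downloaded copy matches the pointer recorded above:

# Sketch: check a local tokenizer.model against the LFS pointer in this commit.
import hashlib
import os

expected_oid = "386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39"
expected_size = 1033105

with open("tokenizer.model", "rb") as f:
    blob = f.read()

assert os.path.getsize("tokenizer.model") == expected_size == len(blob)
assert hashlib.sha256(blob).hexdigest() == expected_oid
print("tokenizer.model matches the LFS pointer")
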
tokenizer_config.json
ADDED
@@ -0,0 +1,9 @@
1 |
+
{
|
2 |
+
"auto_map": {
|
3 |
+
"AutoTokenizer": ["tokenization_yi.YiTokenizer", null]
|
4 |
+
},
|
5 |
+
"add_bos_token": false,
|
6 |
+
"add_eos_token": false,
|
7 |
+
"model_max_length": 200000,
|
8 |
+
"tokenizer_class": "YiTokenizer"
|
9 |
+
}
|
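
tokenizer_config.json is what makes the custom tokenizer discoverable: the auto_map entry points AutoTokenizer at tokenization_yi.YiTokenizer, add_bos_token/add_eos_token are turned off (overriding the class defaults), and model_max_length is set to 200000. A sketch of resolving it through transformers; the local path is a placeholder and trust_remote_code=True is required so the custom class is imported from the repository:

# Sketch: resolve the custom tokenizer through the auto_map declared above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./yi-model", trust_remote_code=True)

print(type(tok).__name__)         # YiTokenizer
print(tok.model_max_length)       # 200000, from tokenizer_config.json
print(tok("Hello Yi").input_ids)  # no BOS/EOS added: add_bos_token/add_eos_token are false here
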