twodgirl committed
Commit e2aa741
1 Parent(s): 87294e4

Upload inference lite files.

onediffusion/LICENSE ADDED
@@ -0,0 +1,407 @@
1
+ Attribution-NonCommercial 4.0 International
2
+
3
+ =======================================================================
4
+
5
+ Creative Commons Corporation ("Creative Commons") is not a law firm and
6
+ does not provide legal services or legal advice. Distribution of
7
+ Creative Commons public licenses does not create a lawyer-client or
8
+ other relationship. Creative Commons makes its licenses and related
9
+ information available on an "as-is" basis. Creative Commons gives no
10
+ warranties regarding its licenses, any material licensed under their
11
+ terms and conditions, or any related information. Creative Commons
12
+ disclaims all liability for damages resulting from their use to the
13
+ fullest extent possible.
14
+
15
+ Using Creative Commons Public Licenses
16
+
17
+ Creative Commons public licenses provide a standard set of terms and
18
+ conditions that creators and other rights holders may use to share
19
+ original works of authorship and other material subject to copyright
20
+ and certain other rights specified in the public license below. The
21
+ following considerations are for informational purposes only, are not
22
+ exhaustive, and do not form part of our licenses.
23
+
24
+ Considerations for licensors: Our public licenses are
25
+ intended for use by those authorized to give the public
26
+ permission to use material in ways otherwise restricted by
27
+ copyright and certain other rights. Our licenses are
28
+ irrevocable. Licensors should read and understand the terms
29
+ and conditions of the license they choose before applying it.
30
+ Licensors should also secure all rights necessary before
31
+ applying our licenses so that the public can reuse the
32
+ material as expected. Licensors should clearly mark any
33
+ material not subject to the license. This includes other CC-
34
+ licensed material, or material used under an exception or
35
+ limitation to copyright. More considerations for licensors:
36
+ wiki.creativecommons.org/Considerations_for_licensors
37
+
38
+ Considerations for the public: By using one of our public
39
+ licenses, a licensor grants the public permission to use the
40
+ licensed material under specified terms and conditions. If
41
+ the licensor's permission is not necessary for any reason--for
42
+ example, because of any applicable exception or limitation to
43
+ copyright--then that use is not regulated by the license. Our
44
+ licenses grant only permissions under copyright and certain
45
+ other rights that a licensor has authority to grant. Use of
46
+ the licensed material may still be restricted for other
47
+ reasons, including because others have copyright or other
48
+ rights in the material. A licensor may make special requests,
49
+ such as asking that all changes be marked or described.
50
+ Although not required by our licenses, you are encouraged to
51
+ respect those requests where reasonable. More considerations
52
+ for the public:
53
+ wiki.creativecommons.org/Considerations_for_licensees
54
+
55
+ =======================================================================
56
+
57
+ Creative Commons Attribution-NonCommercial 4.0 International Public
58
+ License
59
+
60
+ By exercising the Licensed Rights (defined below), You accept and agree
61
+ to be bound by the terms and conditions of this Creative Commons
62
+ Attribution-NonCommercial 4.0 International Public License ("Public
63
+ License"). To the extent this Public License may be interpreted as a
64
+ contract, You are granted the Licensed Rights in consideration of Your
65
+ acceptance of these terms and conditions, and the Licensor grants You
66
+ such rights in consideration of benefits the Licensor receives from
67
+ making the Licensed Material available under these terms and
68
+ conditions.
69
+
70
+
71
+ Section 1 -- Definitions.
72
+
73
+ a. Adapted Material means material subject to Copyright and Similar
74
+ Rights that is derived from or based upon the Licensed Material
75
+ and in which the Licensed Material is translated, altered,
76
+ arranged, transformed, or otherwise modified in a manner requiring
77
+ permission under the Copyright and Similar Rights held by the
78
+ Licensor. For purposes of this Public License, where the Licensed
79
+ Material is a musical work, performance, or sound recording,
80
+ Adapted Material is always produced where the Licensed Material is
81
+ synched in timed relation with a moving image.
82
+
83
+ b. Adapter's License means the license You apply to Your Copyright
84
+ and Similar Rights in Your contributions to Adapted Material in
85
+ accordance with the terms and conditions of this Public License.
86
+
87
+ c. Copyright and Similar Rights means copyright and/or similar rights
88
+ closely related to copyright including, without limitation,
89
+ performance, broadcast, sound recording, and Sui Generis Database
90
+ Rights, without regard to how the rights are labeled or
91
+ categorized. For purposes of this Public License, the rights
92
+ specified in Section 2(b)(1)-(2) are not Copyright and Similar
93
+ Rights.
94
+ d. Effective Technological Measures means those measures that, in the
95
+ absence of proper authority, may not be circumvented under laws
96
+ fulfilling obligations under Article 11 of the WIPO Copyright
97
+ Treaty adopted on December 20, 1996, and/or similar international
98
+ agreements.
99
+
100
+ e. Exceptions and Limitations means fair use, fair dealing, and/or
101
+ any other exception or limitation to Copyright and Similar Rights
102
+ that applies to Your use of the Licensed Material.
103
+
104
+ f. Licensed Material means the artistic or literary work, database,
105
+ or other material to which the Licensor applied this Public
106
+ License.
107
+
108
+ g. Licensed Rights means the rights granted to You subject to the
109
+ terms and conditions of this Public License, which are limited to
110
+ all Copyright and Similar Rights that apply to Your use of the
111
+ Licensed Material and that the Licensor has authority to license.
112
+
113
+ h. Licensor means the individual(s) or entity(ies) granting rights
114
+ under this Public License.
115
+
116
+ i. NonCommercial means not primarily intended for or directed towards
117
+ commercial advantage or monetary compensation. For purposes of
118
+ this Public License, the exchange of the Licensed Material for
119
+ other material subject to Copyright and Similar Rights by digital
120
+ file-sharing or similar means is NonCommercial provided there is
121
+ no payment of monetary compensation in connection with the
122
+ exchange.
123
+
124
+ j. Share means to provide material to the public by any means or
125
+ process that requires permission under the Licensed Rights, such
126
+ as reproduction, public display, public performance, distribution,
127
+ dissemination, communication, or importation, and to make material
128
+ available to the public including in ways that members of the
129
+ public may access the material from a place and at a time
130
+ individually chosen by them.
131
+
132
+ k. Sui Generis Database Rights means rights other than copyright
133
+ resulting from Directive 96/9/EC of the European Parliament and of
134
+ the Council of 11 March 1996 on the legal protection of databases,
135
+ as amended and/or succeeded, as well as other essentially
136
+ equivalent rights anywhere in the world.
137
+
138
+ l. You means the individual or entity exercising the Licensed Rights
139
+ under this Public License. Your has a corresponding meaning.
140
+
141
+
142
+ Section 2 -- Scope.
143
+
144
+ a. License grant.
145
+
146
+ 1. Subject to the terms and conditions of this Public License,
147
+ the Licensor hereby grants You a worldwide, royalty-free,
148
+ non-sublicensable, non-exclusive, irrevocable license to
149
+ exercise the Licensed Rights in the Licensed Material to:
150
+
151
+ a. reproduce and Share the Licensed Material, in whole or
152
+ in part, for NonCommercial purposes only; and
153
+
154
+ b. produce, reproduce, and Share Adapted Material for
155
+ NonCommercial purposes only.
156
+
157
+ 2. Exceptions and Limitations. For the avoidance of doubt, where
158
+ Exceptions and Limitations apply to Your use, this Public
159
+ License does not apply, and You do not need to comply with
160
+ its terms and conditions.
161
+
162
+ 3. Term. The term of this Public License is specified in Section
163
+ 6(a).
164
+
165
+ 4. Media and formats; technical modifications allowed. The
166
+ Licensor authorizes You to exercise the Licensed Rights in
167
+ all media and formats whether now known or hereafter created,
168
+ and to make technical modifications necessary to do so. The
169
+ Licensor waives and/or agrees not to assert any right or
170
+ authority to forbid You from making technical modifications
171
+ necessary to exercise the Licensed Rights, including
172
+ technical modifications necessary to circumvent Effective
173
+ Technological Measures. For purposes of this Public License,
174
+ simply making modifications authorized by this Section 2(a)
175
+ (4) never produces Adapted Material.
176
+
177
+ 5. Downstream recipients.
178
+
179
+ a. Offer from the Licensor -- Licensed Material. Every
180
+ recipient of the Licensed Material automatically
181
+ receives an offer from the Licensor to exercise the
182
+ Licensed Rights under the terms and conditions of this
183
+ Public License.
184
+
185
+ b. No downstream restrictions. You may not offer or impose
186
+ any additional or different terms or conditions on, or
187
+ apply any Effective Technological Measures to, the
188
+ Licensed Material if doing so restricts exercise of the
189
+ Licensed Rights by any recipient of the Licensed
190
+ Material.
191
+
192
+ 6. No endorsement. Nothing in this Public License constitutes or
193
+ may be construed as permission to assert or imply that You
194
+ are, or that Your use of the Licensed Material is, connected
195
+ with, or sponsored, endorsed, or granted official status by,
196
+ the Licensor or others designated to receive attribution as
197
+ provided in Section 3(a)(1)(A)(i).
198
+
199
+ b. Other rights.
200
+
201
+ 1. Moral rights, such as the right of integrity, are not
202
+ licensed under this Public License, nor are publicity,
203
+ privacy, and/or other similar personality rights; however, to
204
+ the extent possible, the Licensor waives and/or agrees not to
205
+ assert any such rights held by the Licensor to the limited
206
+ extent necessary to allow You to exercise the Licensed
207
+ Rights, but not otherwise.
208
+
209
+ 2. Patent and trademark rights are not licensed under this
210
+ Public License.
211
+
212
+ 3. To the extent possible, the Licensor waives any right to
213
+ collect royalties from You for the exercise of the Licensed
214
+ Rights, whether directly or through a collecting society
215
+ under any voluntary or waivable statutory or compulsory
216
+ licensing scheme. In all other cases the Licensor expressly
217
+ reserves any right to collect such royalties, including when
218
+ the Licensed Material is used other than for NonCommercial
219
+ purposes.
220
+
221
+
222
+ Section 3 -- License Conditions.
223
+
224
+ Your exercise of the Licensed Rights is expressly made subject to the
225
+ following conditions.
226
+
227
+ a. Attribution.
228
+
229
+ 1. If You Share the Licensed Material (including in modified
230
+ form), You must:
231
+
232
+ a. retain the following if it is supplied by the Licensor
233
+ with the Licensed Material:
234
+
235
+ i. identification of the creator(s) of the Licensed
236
+ Material and any others designated to receive
237
+ attribution, in any reasonable manner requested by
238
+ the Licensor (including by pseudonym if
239
+ designated);
240
+
241
+ ii. a copyright notice;
242
+
243
+ iii. a notice that refers to this Public License;
244
+
245
+ iv. a notice that refers to the disclaimer of
246
+ warranties;
247
+
248
+ v. a URI or hyperlink to the Licensed Material to the
249
+ extent reasonably practicable;
250
+
251
+ b. indicate if You modified the Licensed Material and
252
+ retain an indication of any previous modifications; and
253
+
254
+ c. indicate the Licensed Material is licensed under this
255
+ Public License, and include the text of, or the URI or
256
+ hyperlink to, this Public License.
257
+
258
+ 2. You may satisfy the conditions in Section 3(a)(1) in any
259
+ reasonable manner based on the medium, means, and context in
260
+ which You Share the Licensed Material. For example, it may be
261
+ reasonable to satisfy the conditions by providing a URI or
262
+ hyperlink to a resource that includes the required
263
+ information.
264
+
265
+ 3. If requested by the Licensor, You must remove any of the
266
+ information required by Section 3(a)(1)(A) to the extent
267
+ reasonably practicable.
268
+
269
+ 4. If You Share Adapted Material You produce, the Adapter's
270
+ License You apply must not prevent recipients of the Adapted
271
+ Material from complying with this Public License.
272
+
273
+
274
+ Section 4 -- Sui Generis Database Rights.
275
+
276
+ Where the Licensed Rights include Sui Generis Database Rights that
277
+ apply to Your use of the Licensed Material:
278
+
279
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right
280
+ to extract, reuse, reproduce, and Share all or a substantial
281
+ portion of the contents of the database for NonCommercial purposes
282
+ only;
283
+
284
+ b. if You include all or a substantial portion of the database
285
+ contents in a database in which You have Sui Generis Database
286
+ Rights, then the database in which You have Sui Generis Database
287
+ Rights (but not its individual contents) is Adapted Material; and
288
+
289
+ c. You must comply with the conditions in Section 3(a) if You Share
290
+ all or a substantial portion of the contents of the database.
291
+
292
+ For the avoidance of doubt, this Section 4 supplements and does not
293
+ replace Your obligations under this Public License where the Licensed
294
+ Rights include other Copyright and Similar Rights.
295
+
296
+
297
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
298
+
299
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
300
+ EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
301
+ AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
302
+ ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
303
+ IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
304
+ WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
305
+ PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
306
+ ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
307
+ KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
308
+ ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
309
+
310
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
311
+ TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
312
+ NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
313
+ INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
314
+ COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
315
+ USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
316
+ ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
317
+ DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
318
+ IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
319
+
320
+ c. The disclaimer of warranties and limitation of liability provided
321
+ above shall be interpreted in a manner that, to the extent
322
+ possible, most closely approximates an absolute disclaimer and
323
+ waiver of all liability.
324
+
325
+
326
+ Section 6 -- Term and Termination.
327
+
328
+ a. This Public License applies for the term of the Copyright and
329
+ Similar Rights licensed here. However, if You fail to comply with
330
+ this Public License, then Your rights under this Public License
331
+ terminate automatically.
332
+
333
+ b. Where Your right to use the Licensed Material has terminated under
334
+ Section 6(a), it reinstates:
335
+
336
+ 1. automatically as of the date the violation is cured, provided
337
+ it is cured within 30 days of Your discovery of the
338
+ violation; or
339
+
340
+ 2. upon express reinstatement by the Licensor.
341
+
342
+ For the avoidance of doubt, this Section 6(b) does not affect any
343
+ right the Licensor may have to seek remedies for Your violations
344
+ of this Public License.
345
+
346
+ c. For the avoidance of doubt, the Licensor may also offer the
347
+ Licensed Material under separate terms or conditions or stop
348
+ distributing the Licensed Material at any time; however, doing so
349
+ will not terminate this Public License.
350
+
351
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
352
+ License.
353
+
354
+
355
+ Section 7 -- Other Terms and Conditions.
356
+
357
+ a. The Licensor shall not be bound by any additional or different
358
+ terms or conditions communicated by You unless expressly agreed.
359
+
360
+ b. Any arrangements, understandings, or agreements regarding the
361
+ Licensed Material not stated herein are separate from and
362
+ independent of the terms and conditions of this Public License.
363
+
364
+
365
+ Section 8 -- Interpretation.
366
+
367
+ a. For the avoidance of doubt, this Public License does not, and
368
+ shall not be interpreted to, reduce, limit, restrict, or impose
369
+ conditions on any use of the Licensed Material that could lawfully
370
+ be made without permission under this Public License.
371
+
372
+ b. To the extent possible, if any provision of this Public License is
373
+ deemed unenforceable, it shall be automatically reformed to the
374
+ minimum extent necessary to make it enforceable. If the provision
375
+ cannot be reformed, it shall be severed from this Public License
376
+ without affecting the enforceability of the remaining terms and
377
+ conditions.
378
+
379
+ c. No term or condition of this Public License will be waived and no
380
+ failure to comply consented to unless expressly agreed to by the
381
+ Licensor.
382
+
383
+ d. Nothing in this Public License constitutes or may be interpreted
384
+ as a limitation upon, or waiver of, any privileges and immunities
385
+ that apply to the Licensor or You, including from the legal
386
+ processes of any jurisdiction or authority.
387
+
388
+ =======================================================================
389
+
390
+ Creative Commons is not a party to its public
391
+ licenses. Notwithstanding, Creative Commons may elect to apply one of
392
+ its public licenses to material it publishes and in those instances
393
+ will be considered the “Licensor.” The text of the Creative Commons
394
+ public licenses is dedicated to the public domain under the CC0 Public
395
+ Domain Dedication. Except for the limited purpose of indicating that
396
+ material is shared under a Creative Commons public license or as
397
+ otherwise permitted by the Creative Commons policies published at
398
+ creativecommons.org/policies, Creative Commons does not authorize the
399
+ use of the trademark "Creative Commons" or any other trademark or logo
400
+ of Creative Commons without its prior written consent including,
401
+ without limitation, in connection with any unauthorized modifications
402
+ to any of its public licenses or any other arrangements,
403
+ understandings, or agreements concerning use of licensed material. For
404
+ the avoidance of doubt, this paragraph does not form part of the
405
+ public licenses.
406
+
407
+ Creative Commons may be contacted at creativecommons.org.
onediffusion/__init__.py ADDED
File without changes
onediffusion/dataset/__init__.py ADDED
File without changes
onediffusion/dataset/utils.py ADDED
@@ -0,0 +1,175 @@
1
+
2
+ ASPECT_RATIO_2880 = {
3
+ '0.25': [1408.0, 5760.0], '0.26': [1408.0, 5568.0], '0.27': [1408.0, 5376.0], '0.28': [1408.0, 5184.0],
4
+ '0.32': [1600.0, 4992.0], '0.33': [1600.0, 4800.0], '0.34': [1600.0, 4672.0], '0.4': [1792.0, 4480.0],
5
+ '0.42': [1792.0, 4288.0], '0.47': [1920.0, 4096.0], '0.49': [1920.0, 3904.0], '0.51': [1920.0, 3776.0],
6
+ '0.55': [2112.0, 3840.0], '0.59': [2112.0, 3584.0], '0.68': [2304.0, 3392.0], '0.72': [2304.0, 3200.0],
7
+ '0.78': [2496.0, 3200.0], '0.83': [2496.0, 3008.0], '0.89': [2688.0, 3008.0], '0.93': [2688.0, 2880.0],
8
+ '1.0': [2880.0, 2880.0], '1.07': [2880.0, 2688.0], '1.12': [3008.0, 2688.0], '1.21': [3008.0, 2496.0],
9
+ '1.28': [3200.0, 2496.0], '1.39': [3200.0, 2304.0], '1.47': [3392.0, 2304.0], '1.7': [3584.0, 2112.0],
10
+ '1.82': [3840.0, 2112.0], '2.03': [3904.0, 1920.0], '2.13': [4096.0, 1920.0], '2.39': [4288.0, 1792.0],
11
+ '2.5': [4480.0, 1792.0], '2.92': [4672.0, 1600.0], '3.0': [4800.0, 1600.0], '3.12': [4992.0, 1600.0],
12
+ '3.68': [5184.0, 1408.0], '3.82': [5376.0, 1408.0], '3.95': [5568.0, 1408.0], '4.0': [5760.0, 1408.0]
13
+ }
14
+
15
+ ASPECT_RATIO_2048 = {
16
+ '0.25': [1024.0, 4096.0], '0.26': [1024.0, 3968.0], '0.27': [1024.0, 3840.0], '0.28': [1024.0, 3712.0],
17
+ '0.32': [1152.0, 3584.0], '0.33': [1152.0, 3456.0], '0.35': [1152.0, 3328.0], '0.4': [1280.0, 3200.0],
18
+ '0.42': [1280.0, 3072.0], '0.48': [1408.0, 2944.0], '0.5': [1408.0, 2816.0], '0.52': [1408.0, 2688.0],
19
+ '0.57': [1536.0, 2688.0], '0.6': [1536.0, 2560.0], '0.68': [1664.0, 2432.0], '0.72': [1664.0, 2304.0],
20
+ '0.78': [1792.0, 2304.0], '0.82': [1792.0, 2176.0], '0.88': [1920.0, 2176.0], '0.94': [1920.0, 2048.0],
21
+ '1.0': [2048.0, 2048.0], '1.07': [2048.0, 1920.0], '1.13': [2176.0, 1920.0], '1.21': [2176.0, 1792.0],
22
+ '1.29': [2304.0, 1792.0], '1.38': [2304.0, 1664.0], '1.46': [2432.0, 1664.0], '1.67': [2560.0, 1536.0],
23
+ '1.75': [2688.0, 1536.0], '2.0': [2816.0, 1408.0], '2.09': [2944.0, 1408.0], '2.4': [3072.0, 1280.0],
24
+ '2.5': [3200.0, 1280.0], '2.89': [3328.0, 1152.0], '3.0': [3456.0, 1152.0], '3.11': [3584.0, 1152.0],
25
+ '3.62': [3712.0, 1024.0], '3.75': [3840.0, 1024.0], '3.88': [3968.0, 1024.0], '4.0': [4096.0, 1024.0]
26
+ }
27
+
28
+ ASPECT_RATIO_1024 = {
29
+ '0.25': [512., 2048.], '0.26': [512., 1984.], '0.27': [512., 1920.], '0.28': [512., 1856.],
30
+ '0.32': [576., 1792.], '0.33': [576., 1728.], '0.35': [576., 1664.], '0.4': [640., 1600.],
31
+ '0.42': [640., 1536.], '0.48': [704., 1472.], '0.5': [704., 1408.], '0.52': [704., 1344.],
32
+ '0.57': [768., 1344.], '0.6': [768., 1280.], '0.68': [832., 1216.], '0.72': [832., 1152.],
33
+ '0.78': [896., 1152.], '0.82': [896., 1088.], '0.88': [960., 1088.], '0.94': [960., 1024.],
34
+ '1.0': [1024., 1024.], '1.07': [1024., 960.], '1.13': [1088., 960.], '1.21': [1088., 896.],
35
+ '1.29': [1152., 896.], '1.38': [1152., 832.], '1.46': [1216., 832.], '1.67': [1280., 768.],
36
+ '1.75': [1344., 768.], '2.0': [1408., 704.], '2.09': [1472., 704.], '2.4': [1536., 640.],
37
+ '2.5': [1600., 640.], '2.89': [1664., 576.], '3.0': [1728., 576.], '3.11': [1792., 576.],
38
+ '3.62': [1856., 512.], '3.75': [1920., 512.], '3.88': [1984., 512.], '4.0': [2048., 512.],
39
+ }
40
+
41
+ ASPECT_RATIO_512 = {
42
+ '0.25': [256.0, 1024.0], '0.26': [256.0, 992.0], '0.27': [256.0, 960.0], '0.28': [256.0, 928.0],
43
+ '0.32': [288.0, 896.0], '0.33': [288.0, 864.0], '0.35': [288.0, 832.0], '0.4': [320.0, 800.0],
44
+ '0.42': [320.0, 768.0], '0.48': [352.0, 736.0], '0.5': [352.0, 704.0], '0.52': [352.0, 672.0],
45
+ '0.57': [384.0, 672.0], '0.6': [384.0, 640.0], '0.68': [416.0, 608.0], '0.72': [416.0, 576.0],
46
+ '0.78': [448.0, 576.0], '0.82': [448.0, 544.0], '0.88': [480.0, 544.0], '0.94': [480.0, 512.0],
47
+ '1.0': [512.0, 512.0], '1.07': [512.0, 480.0], '1.13': [544.0, 480.0], '1.21': [544.0, 448.0],
48
+ '1.29': [576.0, 448.0], '1.38': [576.0, 416.0], '1.46': [608.0, 416.0], '1.67': [640.0, 384.0],
49
+ '1.75': [672.0, 384.0], '2.0': [704.0, 352.0], '2.09': [736.0, 352.0], '2.4': [768.0, 320.0],
50
+ '2.5': [800.0, 320.0], '2.89': [832.0, 288.0], '3.0': [864.0, 288.0], '3.11': [896.0, 288.0],
51
+ '3.62': [928.0, 256.0], '3.75': [960.0, 256.0], '3.88': [992.0, 256.0], '4.0': [1024.0, 256.0]
52
+ }
53
+
54
+
55
+ ASPECT_RATIO_384 = {
56
+ '0.25': [192.0, 768.0],
57
+ '0.26': [192.0, 736.0],
58
+ '0.27': [208.0, 768.0],
59
+ '0.28': [208.0, 736.0],
60
+ '0.33': [240.0, 720.0],
61
+ '0.4': [256.0, 640.0],
62
+ '0.42': [304.0, 720.0],
63
+ '0.48': [368.0, 768.0],
64
+ '0.5': [384.0, 768.0],
65
+ '0.52': [384.0, 736.0],
66
+ '0.57': [384.0, 672.0],
67
+ '0.6': [384.0, 640.0],
68
+ '0.73': [384.0, 528.0],
69
+ '0.77': [384.0, 496.0],
70
+ '0.83': [384.0, 464.0],
71
+ '0.89': [384.0, 432.0],
72
+ '0.92': [384.0, 416.0],
73
+ '1.0': [384.0, 384.0],
74
+ '1.09': [384.0, 352.0],
75
+ '1.14': [384.0, 336.0],
76
+ '1.2': [384.0, 320.0],
77
+ '1.26': [384.0, 304.0],
78
+ '1.33': [384.0, 288.0],
79
+ '1.41': [384.0, 272.0],
80
+ '1.6': [384.0, 240.0],
81
+ '1.71': [384.0, 224.0],
82
+ '2.0': [384.0, 192.0],
83
+ '2.4': [384.0, 160.0],
84
+ '2.88': [368.0, 128.0],
85
+ '3.0': [384.0, 128.0],
86
+ '3.43': [384.0, 112.0],
87
+ '4.0': [384.0, 96.0]
88
+ }
89
+
90
+ ASPECT_RATIO_256 = {
91
+ '0.25': [128.0, 512.0], '0.26': [128.0, 496.0], '0.27': [128.0, 480.0], '0.28': [128.0, 464.0],
92
+ '0.32': [144.0, 448.0], '0.33': [144.0, 432.0], '0.35': [144.0, 416.0], '0.4': [160.0, 400.0],
93
+ '0.42': [160.0, 384.0], '0.48': [176.0, 368.0], '0.5': [176.0, 352.0], '0.52': [176.0, 336.0],
94
+ '0.57': [192.0, 336.0], '0.6': [192.0, 320.0], '0.68': [208.0, 304.0], '0.72': [208.0, 288.0],
95
+ '0.78': [224.0, 288.0], '0.82': [224.0, 272.0], '0.88': [240.0, 272.0], '0.94': [240.0, 256.0],
96
+ '1.0': [256.0, 256.0], '1.07': [256.0, 240.0], '1.13': [272.0, 240.0], '1.21': [272.0, 224.0],
97
+ '1.29': [288.0, 224.0], '1.38': [288.0, 208.0], '1.46': [304.0, 208.0], '1.67': [320.0, 192.0],
98
+ '1.75': [336.0, 192.0], '2.0': [352.0, 176.0], '2.09': [368.0, 176.0], '2.4': [384.0, 160.0],
99
+ '2.5': [400.0, 160.0], '2.89': [416.0, 144.0], '3.0': [432.0, 144.0], '3.11': [448.0, 144.0],
100
+ '3.62': [464.0, 128.0], '3.75': [480.0, 128.0], '3.88': [496.0, 128.0], '4.0': [512.0, 128.0]
101
+ }
102
+
103
+ ASPECT_RATIO_256_TEST = {
104
+ '0.25': [128.0, 512.0], '0.28': [128.0, 464.0],
105
+ '0.32': [144.0, 448.0], '0.33': [144.0, 432.0], '0.35': [144.0, 416.0], '0.4': [160.0, 400.0],
106
+ '0.42': [160.0, 384.0], '0.48': [176.0, 368.0], '0.5': [176.0, 352.0], '0.52': [176.0, 336.0],
107
+ '0.57': [192.0, 336.0], '0.6': [192.0, 320.0], '0.68': [208.0, 304.0], '0.72': [208.0, 288.0],
108
+ '0.78': [224.0, 288.0], '0.82': [224.0, 272.0], '0.88': [240.0, 272.0], '0.94': [240.0, 256.0],
109
+ '1.0': [256.0, 256.0], '1.07': [256.0, 240.0], '1.13': [272.0, 240.0], '1.21': [272.0, 224.0],
110
+ '1.29': [288.0, 224.0], '1.38': [288.0, 208.0], '1.46': [304.0, 208.0], '1.67': [320.0, 192.0],
111
+ '1.75': [336.0, 192.0], '2.0': [352.0, 176.0], '2.09': [368.0, 176.0], '2.4': [384.0, 160.0],
112
+ '2.5': [400.0, 160.0], '3.0': [432.0, 144.0],
113
+ '4.0': [512.0, 128.0]
114
+ }
115
+
116
+ ASPECT_RATIO_512_TEST = {
117
+ '0.25': [256.0, 1024.0], '0.28': [256.0, 928.0],
118
+ '0.32': [288.0, 896.0], '0.33': [288.0, 864.0], '0.35': [288.0, 832.0], '0.4': [320.0, 800.0],
119
+ '0.42': [320.0, 768.0], '0.48': [352.0, 736.0], '0.5': [352.0, 704.0], '0.52': [352.0, 672.0],
120
+ '0.57': [384.0, 672.0], '0.6': [384.0, 640.0], '0.68': [416.0, 608.0], '0.72': [416.0, 576.0],
121
+ '0.78': [448.0, 576.0], '0.82': [448.0, 544.0], '0.88': [480.0, 544.0], '0.94': [480.0, 512.0],
122
+ '1.0': [512.0, 512.0], '1.07': [512.0, 480.0], '1.13': [544.0, 480.0], '1.21': [544.0, 448.0],
123
+ '1.29': [576.0, 448.0], '1.38': [576.0, 416.0], '1.46': [608.0, 416.0], '1.67': [640.0, 384.0],
124
+ '1.75': [672.0, 384.0], '2.0': [704.0, 352.0], '2.09': [736.0, 352.0], '2.4': [768.0, 320.0],
125
+ '2.5': [800.0, 320.0], '3.0': [864.0, 288.0],
126
+ '4.0': [1024.0, 256.0]
127
+ }
128
+
129
+ ASPECT_RATIO_1024_TEST = {
130
+ '0.25': [512., 2048.], '0.28': [512., 1856.],
131
+ '0.32': [576., 1792.], '0.33': [576., 1728.], '0.35': [576., 1664.], '0.4': [640., 1600.],
132
+ '0.42': [640., 1536.], '0.48': [704., 1472.], '0.5': [704., 1408.], '0.52': [704., 1344.],
133
+ '0.57': [768., 1344.], '0.6': [768., 1280.], '0.68': [832., 1216.], '0.72': [832., 1152.],
134
+ '0.78': [896., 1152.], '0.82': [896., 1088.], '0.88': [960., 1088.], '0.94': [960., 1024.],
135
+ '1.0': [1024., 1024.], '1.07': [1024., 960.], '1.13': [1088., 960.], '1.21': [1088., 896.],
136
+ '1.29': [1152., 896.], '1.38': [1152., 832.], '1.46': [1216., 832.], '1.67': [1280., 768.],
137
+ '1.75': [1344., 768.], '2.0': [1408., 704.], '2.09': [1472., 704.], '2.4': [1536., 640.],
138
+ '2.5': [1600., 640.], '3.0': [1728., 576.],
139
+ '4.0': [2048., 512.],
140
+ }
141
+
142
+ ASPECT_RATIO_2048_TEST = {
143
+ '0.25': [1024.0, 4096.0], '0.26': [1024.0, 3968.0],
144
+ '0.32': [1152.0, 3584.0], '0.33': [1152.0, 3456.0], '0.35': [1152.0, 3328.0], '0.4': [1280.0, 3200.0],
145
+ '0.42': [1280.0, 3072.0], '0.48': [1408.0, 2944.0], '0.5': [1408.0, 2816.0], '0.52': [1408.0, 2688.0],
146
+ '0.57': [1536.0, 2688.0], '0.6': [1536.0, 2560.0], '0.68': [1664.0, 2432.0], '0.72': [1664.0, 2304.0],
147
+ '0.78': [1792.0, 2304.0], '0.82': [1792.0, 2176.0], '0.88': [1920.0, 2176.0], '0.94': [1920.0, 2048.0],
148
+ '1.0': [2048.0, 2048.0], '1.07': [2048.0, 1920.0], '1.13': [2176.0, 1920.0], '1.21': [2176.0, 1792.0],
149
+ '1.29': [2304.0, 1792.0], '1.38': [2304.0, 1664.0], '1.46': [2432.0, 1664.0], '1.67': [2560.0, 1536.0],
150
+ '1.75': [2688.0, 1536.0], '2.0': [2816.0, 1408.0], '2.09': [2944.0, 1408.0], '2.4': [3072.0, 1280.0],
151
+ '2.5': [3200.0, 1280.0], '3.0': [3456.0, 1152.0],
152
+ '4.0': [4096.0, 1024.0]
153
+ }
154
+
155
+ ASPECT_RATIO_2880_TEST = {
156
+ '0.25': [2048.0, 8192.0], '0.26': [2048.0, 7936.0],
157
+ '0.32': [2304.0, 7168.0], '0.33': [2304.0, 6912.0], '0.35': [2304.0, 6656.0], '0.4': [2560.0, 6400.0],
158
+ '0.42': [2560.0, 6144.0], '0.48': [2816.0, 5888.0], '0.5': [2816.0, 5632.0], '0.52': [2816.0, 5376.0],
159
+ '0.57': [3072.0, 5376.0], '0.6': [3072.0, 5120.0], '0.68': [3328.0, 4864.0], '0.72': [3328.0, 4608.0],
160
+ '0.78': [3584.0, 4608.0], '0.82': [3584.0, 4352.0], '0.88': [3840.0, 4352.0], '0.94': [3840.0, 4096.0],
161
+ '1.0': [4096.0, 4096.0], '1.07': [4096.0, 3840.0], '1.13': [4352.0, 3840.0], '1.21': [4352.0, 3584.0],
162
+ '1.29': [4608.0, 3584.0], '1.38': [4608.0, 3328.0], '1.46': [4864.0, 3328.0], '1.67': [5120.0, 3072.0],
163
+ '1.75': [5376.0, 3072.0], '2.0': [5632.0, 2816.0], '2.09': [5888.0, 2816.0], '2.4': [6144.0, 2560.0],
164
+ '2.5': [6400.0, 2560.0], '3.0': [6912.0, 2304.0],
165
+ '4.0': [8192.0, 2048.0],
166
+ }
+
+ def get_chunks(lst, n):
+     for i in range(0, len(lst), n):
+         yield lst[i:i + n]
+
+ def get_closest_ratio(height: float, width: float, ratios: dict):
+     aspect_ratio = height / width
+     closest_ratio = min(ratios.keys(), key=lambda ratio: abs(float(ratio) - aspect_ratio))
+     return ratios[closest_ratio], float(closest_ratio)
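
The buckets above map a rounded aspect ratio to a [height, width] pair, and get_closest_ratio snaps an arbitrary size to the nearest bucket. A minimal usage sketch, assuming the package imports under the paths shown in this commit; the 720x1280 input is only an illustrative value:

from onediffusion.dataset.utils import ASPECT_RATIO_1024, get_closest_ratio

# 720 / 1280 = 0.5625, so the nearest key is '0.57' -> bucket [768., 1344.] (height, width)
(bucket_h, bucket_w), ratio = get_closest_ratio(720.0, 1280.0, ASPECT_RATIO_1024)
print(bucket_h, bucket_w, ratio)  # 768.0 1344.0 0.57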
onediffusion/nextdit/__init__.py ADDED
File without changes
onediffusion/nextdit/layers.py ADDED
@@ -0,0 +1,128 @@
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import numpy as np
+ from typing import Callable, Optional
+ import warnings
+
+ try:
+     from apex.normalization import FusedRMSNorm as RMSNorm
+ except ImportError:
+     warnings.warn("Cannot import apex RMSNorm, switch to vanilla implementation")
+
+     class RMSNorm(torch.nn.Module):
+         def __init__(self, dim: int, eps: float = 1e-6):
+             """
+             Initialize the RMSNorm normalization layer.
+             Args:
+                 dim (int): The dimension of the input tensor.
+                 eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.
+             Attributes:
+                 eps (float): A small value added to the denominator for numerical stability.
+                 weight (nn.Parameter): Learnable scaling parameter.
+             """
+             super().__init__()
+             self.eps = eps
+             self.weight = nn.Parameter(torch.ones(dim))
+
+         def _norm(self, x):
+             """
+             Apply the RMSNorm normalization to the input tensor.
+             Args:
+                 x (torch.Tensor): The input tensor.
+             Returns:
+                 torch.Tensor: The normalized tensor.
+             """
+             return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
+
+         def forward(self, x):
+             """
+             Forward pass through the RMSNorm layer.
+             Args:
+                 x (torch.Tensor): The input tensor.
+             Returns:
+                 torch.Tensor: The output tensor after applying RMSNorm.
+             """
+             output = self._norm(x.float()).type_as(x)
+             return output * self.weight
+
+
+ def modulate(x, scale):
+     return x * (1 + scale.unsqueeze(1))
+
+
+ class LLamaFeedForward(nn.Module):
+     """
+     Corresponds to the FeedForward layer in Next DiT.
+     """
+     def __init__(
+         self,
+         dim: int,
+         hidden_dim: int,
+         multiple_of: int,
+         ffn_dim_multiplier: Optional[float] = None,
+         zeros_initialize: bool = True,
+         dtype: torch.dtype = torch.float32,
+     ):
+         super().__init__()
+         self.dim = dim
+         self.hidden_dim = hidden_dim
+         self.multiple_of = multiple_of
+         self.ffn_dim_multiplier = ffn_dim_multiplier
+         self.zeros_initialize = zeros_initialize
+         self.dtype = dtype
+
+         # Compute hidden_dim based on the given formula
+         hidden_dim_calculated = int(2 * self.hidden_dim / 3)
+         if self.ffn_dim_multiplier is not None:
+             hidden_dim_calculated = int(self.ffn_dim_multiplier * hidden_dim_calculated)
+         hidden_dim_calculated = self.multiple_of * ((hidden_dim_calculated + self.multiple_of - 1) // self.multiple_of)
+
+         # Define linear layers
+         self.w1 = nn.Linear(self.dim, hidden_dim_calculated, bias=False)
+         self.w2 = nn.Linear(hidden_dim_calculated, self.dim, bias=False)
+         self.w3 = nn.Linear(self.dim, hidden_dim_calculated, bias=False)
+
+         # Initialize weights
+         if self.zeros_initialize:
+             nn.init.zeros_(self.w2.weight)
+         else:
+             nn.init.xavier_uniform_(self.w2.weight)
+         nn.init.xavier_uniform_(self.w1.weight)
+         nn.init.xavier_uniform_(self.w3.weight)
+
+     def _forward_silu_gating(self, x1, x3):
+         return F.silu(x1) * x3
+
+     def forward(self, x):
+         return self.w2(self._forward_silu_gating(self.w1(x), self.w3(x)))
+
+
+ class FinalLayer(nn.Module):
+     """
+     The final layer of Next-DiT.
+     """
+     def __init__(self, hidden_size: int, patch_size: int, out_channels: int):
+         super().__init__()
+         self.hidden_size = hidden_size
+         self.patch_size = patch_size
+         self.out_channels = out_channels
+
+         # LayerNorm without learnable parameters (elementwise_affine=False)
+         self.norm_final = nn.LayerNorm(self.hidden_size, eps=1e-6, elementwise_affine=False)
+         self.linear = nn.Linear(self.hidden_size, np.prod(self.patch_size) * self.out_channels, bias=True)
+         nn.init.zeros_(self.linear.weight)
+         nn.init.zeros_(self.linear.bias)
+
+         self.adaLN_modulation = nn.Sequential(
+             nn.SiLU(),
+             nn.Linear(self.hidden_size, self.hidden_size),
+         )
+         # Initialize the last layer with zeros
+         nn.init.zeros_(self.adaLN_modulation[1].weight)
+         nn.init.zeros_(self.adaLN_modulation[1].bias)
+
+     def forward(self, x, c):
+         scale = self.adaLN_modulation(c)
+         x = modulate(self.norm_final(x), scale)
+         x = self.linear(x)
+         return x
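
A quick sanity check of the SwiGLU sizing above, assuming the module imports as laid out in this commit; dim=512 is only an example configuration, and the 4 * dim factor mirrors how the transformer block constructs the feed-forward:

import torch
from onediffusion.nextdit.layers import LLamaFeedForward

# hidden_dim is passed as 4 * dim; the module keeps 2/3 of it and rounds up to a
# multiple of `multiple_of`: 4*512 = 2048 -> int(2*2048/3) = 1365 -> 1536 (multiple of 256).
ffn = LLamaFeedForward(dim=512, hidden_dim=4 * 512, multiple_of=256)
x = torch.randn(2, 16, 512)   # (batch, tokens, dim)
print(ffn.w1.out_features)    # 1536
print(ffn(x).shape)           # torch.Size([2, 16, 512]); all zeros while w2 is zero-initialized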
onediffusion/nextdit/modeling_nextdit.py ADDED
@@ -0,0 +1,482 @@
1
+ from diffusers.configuration_utils import ConfigMixin, register_to_config
2
+ from diffusers.models.modeling_utils import ModelMixin
3
+ import einops
4
+ import numpy as np
5
+ import torch
6
+ import torch.nn as nn
7
+ import torch.nn.functional as F
8
+ from .layers import LLamaFeedForward, RMSNorm
9
+
10
+
11
+ def modulate(x, scale):
12
+ return x * (1 + scale)
13
+
14
+ class TimestepEmbedder(nn.Module):
15
+ """
16
+ Embeds scalar timesteps into vector representations.
17
+ """
18
+ def __init__(self, hidden_size, frequency_embedding_size=256):
19
+ super().__init__()
20
+ self.hidden_size = hidden_size
21
+ self.frequency_embedding_size = frequency_embedding_size
22
+ self.mlp = nn.Sequential(
23
+ nn.Linear(self.frequency_embedding_size, self.hidden_size),
24
+ nn.SiLU(),
25
+ nn.Linear(self.hidden_size, self.hidden_size),
26
+ )
27
+
28
+ @staticmethod
29
+ def timestep_embedding(t, dim, max_period=10000):
30
+ """
31
+ Create sinusoidal timestep embeddings.
32
+ :param t: a 1-D Tensor of N indices, one per batch element.
33
+ :param dim: the dimension of the output.
34
+ :param max_period: controls the minimum frequency of the embeddings.
35
+ :return: an (N, D) Tensor of positional embeddings.
36
+ """
37
+ half = dim // 2
38
+ freqs = torch.exp(
39
+ -np.log(max_period) * torch.arange(0, half, dtype=t.dtype) / half
40
+ ).to(t.device)
41
+ args = t[:, :, None] * freqs[None, :]
42
+ embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
43
+ if dim % 2:
44
+ embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :, :1])], dim=-1)
45
+ return embedding
46
+
47
+ def forward(self, t):
48
+ t_freq = self.timestep_embedding(t, self.frequency_embedding_size)
49
+ t_freq = t_freq.to(self.mlp[0].weight.dtype)
50
+ return self.mlp(t_freq)
51
+
52
+ class FinalLayer(nn.Module):
53
+ def __init__(self, hidden_size, num_patches, out_channels):
54
+ super().__init__()
55
+ self.norm_final = nn.LayerNorm(hidden_size, eps=1e-6, elementwise_affine=False)
56
+ self.linear = nn.Linear(hidden_size, num_patches * out_channels)
57
+ self.adaLN_modulation = nn.Sequential(
58
+ nn.SiLU(),
59
+ nn.Linear(min(hidden_size, 1024), hidden_size),
60
+ )
61
+
62
+ def forward(self, x, c):
63
+ scale = self.adaLN_modulation(c)
64
+ x = modulate(self.norm_final(x), scale)
65
+ x = self.linear(x)
66
+ return x
67
+
68
+ class Attention(nn.Module):
69
+ def __init__(
70
+ self,
71
+ dim,
72
+ n_heads,
73
+ n_kv_heads=None,
74
+ qk_norm=False,
75
+ y_dim=0,
76
+ base_seqlen=None,
77
+ proportional_attn=False,
78
+ attention_dropout=0.0,
79
+ max_position_embeddings=384,
80
+ ):
81
+ super().__init__()
82
+ self.dim = dim
83
+ self.n_heads = n_heads
84
+ self.n_kv_heads = n_kv_heads or n_heads
85
+ self.qk_norm = qk_norm
86
+ self.y_dim = y_dim
87
+ self.base_seqlen = base_seqlen
88
+ self.proportional_attn = proportional_attn
89
+ self.attention_dropout = attention_dropout
90
+ self.max_position_embeddings = max_position_embeddings
91
+
92
+ self.head_dim = dim // n_heads
93
+
94
+ self.wq = nn.Linear(dim, n_heads * self.head_dim, bias=False)
95
+ self.wk = nn.Linear(dim, self.n_kv_heads * self.head_dim, bias=False)
96
+ self.wv = nn.Linear(dim, self.n_kv_heads * self.head_dim, bias=False)
97
+
98
+ if y_dim > 0:
99
+ self.wk_y = nn.Linear(y_dim, self.n_kv_heads * self.head_dim, bias=False)
100
+ self.wv_y = nn.Linear(y_dim, self.n_kv_heads * self.head_dim, bias=False)
101
+ self.gate = nn.Parameter(torch.zeros(n_heads))
102
+
103
+ self.wo = nn.Linear(n_heads * self.head_dim, dim, bias=False)
104
+
105
+ if qk_norm:
106
+ self.q_norm = nn.LayerNorm(self.n_heads * self.head_dim)
107
+ self.k_norm = nn.LayerNorm(self.n_kv_heads * self.head_dim)
108
+ if y_dim > 0:
109
+ self.ky_norm = nn.LayerNorm(self.n_kv_heads * self.head_dim, eps=1e-6)
110
+ else:
111
+ self.ky_norm = nn.Identity()
112
+ else:
113
+ self.q_norm = nn.Identity()
114
+ self.k_norm = nn.Identity()
115
+ self.ky_norm = nn.Identity()
116
+
117
+
118
+ @staticmethod
119
+ def apply_rotary_emb(xq, xk, freqs_cis):
120
+ # xq, xk: [batch_size, seq_len, n_heads, head_dim]
121
+ # freqs_cis: [1, seq_len, 1, head_dim]
122
+ xq_ = xq.float().reshape(*xq.shape[:-1], -1, 2)
123
+ xk_ = xk.float().reshape(*xk.shape[:-1], -1, 2)
124
+
125
+ xq_complex = torch.view_as_complex(xq_)
126
+ xk_complex = torch.view_as_complex(xk_)
127
+
128
+ freqs_cis = freqs_cis.unsqueeze(2)
129
+
130
+ # Apply freqs_cis
131
+ xq_out = xq_complex * freqs_cis
132
+ xk_out = xk_complex * freqs_cis
133
+
134
+ # Convert back to real numbers
135
+ xq_out = torch.view_as_real(xq_out).flatten(-2)
136
+ xk_out = torch.view_as_real(xk_out).flatten(-2)
137
+
138
+ return xq_out.type_as(xq), xk_out.type_as(xk)
139
+
140
+ def forward(
141
+ self,
142
+ x,
143
+ x_mask,
144
+ freqs_cis,
145
+ y=None,
146
+ y_mask=None,
147
+ init_cache=False,
148
+ ):
149
+ bsz, seqlen, _ = x.size()
150
+ xq = self.wq(x)
151
+ xk = self.wk(x)
152
+ xv = self.wv(x)
153
+
154
+ if x_mask is None:
155
+ x_mask = torch.ones(bsz, seqlen, dtype=torch.bool, device=x.device)
156
+ inp_dtype = xq.dtype
157
+
158
+ xq = self.q_norm(xq)
159
+ xk = self.k_norm(xk)
160
+
161
+ xq = xq.view(bsz, seqlen, self.n_heads, self.head_dim)
162
+ xk = xk.view(bsz, seqlen, self.n_kv_heads, self.head_dim)
163
+ xv = xv.view(bsz, seqlen, self.n_kv_heads, self.head_dim)
164
+
165
+ if self.n_kv_heads != self.n_heads:
166
+ n_rep = self.n_heads // self.n_kv_heads
167
+ xk = xk.repeat_interleave(n_rep, dim=2)
168
+ xv = xv.repeat_interleave(n_rep, dim=2)
169
+
170
+ freqs_cis = freqs_cis.to(xq.device)
171
+ xq, xk = self.apply_rotary_emb(xq, xk, freqs_cis)
172
+
173
+ output = (
174
+ F.scaled_dot_product_attention(
175
+ xq.permute(0, 2, 1, 3),
176
+ xk.permute(0, 2, 1, 3),
177
+ xv.permute(0, 2, 1, 3),
178
+ attn_mask=x_mask.bool().view(bsz, 1, 1, seqlen).expand(-1, self.n_heads, seqlen, -1),
179
+ scale=None,
180
+ )
181
+ .permute(0, 2, 1, 3)
182
+ .to(inp_dtype)
183
+ )
184
+
185
+
186
+ if hasattr(self, "wk_y"):
187
+ yk = self.ky_norm(self.wk_y(y)).view(bsz, -1, self.n_kv_heads, self.head_dim)
188
+ yv = self.wv_y(y).view(bsz, -1, self.n_kv_heads, self.head_dim)
189
+ n_rep = self.n_heads // self.n_kv_heads
190
+ # if n_rep >= 1:
191
+ # yk = yk.unsqueeze(3).repeat(1, 1, 1, n_rep, 1).flatten(2, 3)
192
+ # yv = yv.unsqueeze(3).repeat(1, 1, 1, n_rep, 1).flatten(2, 3)
193
+ if n_rep >= 1:
194
+ yk = einops.repeat(yk, "b l h d -> b l (repeat h) d", repeat=n_rep)
195
+ yv = einops.repeat(yv, "b l h d -> b l (repeat h) d", repeat=n_rep)
196
+ output_y = F.scaled_dot_product_attention(
197
+ xq.permute(0, 2, 1, 3),
198
+ yk.permute(0, 2, 1, 3),
199
+ yv.permute(0, 2, 1, 3),
200
+ y_mask.view(bsz, 1, 1, -1).expand(bsz, self.n_heads, seqlen, -1).to(torch.bool),
201
+ ).permute(0, 2, 1, 3)
202
+ output_y = output_y * self.gate.tanh().view(1, 1, -1, 1)
203
+ output = output + output_y
204
+
205
+ output = output.flatten(-2)
206
+ output = self.wo(output)
207
+
208
+ return output.to(inp_dtype)
209
+
210
+ class TransformerBlock(nn.Module):
211
+ """
212
+ Corresponds to the Transformer block in the JAX code.
213
+ """
214
+ def __init__(
215
+ self,
216
+ dim,
217
+ n_heads,
218
+ n_kv_heads,
219
+ multiple_of,
220
+ ffn_dim_multiplier,
221
+ norm_eps,
222
+ qk_norm,
223
+ y_dim,
224
+ max_position_embeddings,
225
+ ):
226
+ super().__init__()
227
+ self.attention = Attention(dim, n_heads, n_kv_heads, qk_norm, y_dim=y_dim, max_position_embeddings=max_position_embeddings)
228
+ self.feed_forward = LLamaFeedForward(
229
+ dim=dim,
230
+ hidden_dim=4 * dim,
231
+ multiple_of=multiple_of,
232
+ ffn_dim_multiplier=ffn_dim_multiplier,
233
+ )
234
+ self.attention_norm1 = RMSNorm(dim, eps=norm_eps)
235
+ self.attention_norm2 = RMSNorm(dim, eps=norm_eps)
236
+ self.ffn_norm1 = RMSNorm(dim, eps=norm_eps)
237
+ self.ffn_norm2 = RMSNorm(dim, eps=norm_eps)
238
+ self.adaLN_modulation = nn.Sequential(
239
+ nn.SiLU(),
240
+ nn.Linear(min(dim, 1024), 4 * dim),
241
+ )
242
+ self.attention_y_norm = RMSNorm(y_dim, eps=norm_eps)
243
+
244
+ def forward(
245
+ self,
246
+ x,
247
+ x_mask,
248
+ freqs_cis,
249
+ y,
250
+ y_mask,
251
+ adaln_input=None,
252
+ ):
253
+ if adaln_input is not None:
254
+ scales_gates = self.adaLN_modulation(adaln_input)
255
+ # TODO: Duong - check the dimension of chunking
256
+ # scale_msa, gate_msa, scale_mlp, gate_mlp = scales_gates.chunk(4, dim=-1)
257
+ scale_msa, gate_msa, scale_mlp, gate_mlp = scales_gates.chunk(4, dim=-1)
258
+ x = x + torch.tanh(gate_msa) * self.attention_norm2(
259
+ self.attention(
260
+ modulate(self.attention_norm1(x), scale_msa), # ok
261
+ x_mask,
262
+ freqs_cis,
263
+ self.attention_y_norm(y), # ok
264
+ y_mask,
265
+ )
266
+ )
267
+ x = x + torch.tanh(gate_mlp) * self.ffn_norm2(
268
+ self.feed_forward(
269
+ modulate(self.ffn_norm1(x), scale_mlp),
270
+ )
271
+ )
272
+ else:
273
+ x = x + self.attention_norm2(
274
+ self.attention(
275
+ self.attention_norm1(x),
276
+ x_mask,
277
+ freqs_cis,
278
+ self.attention_y_norm(y),
279
+ y_mask,
280
+ )
281
+ )
282
+ x = x + self.ffn_norm2(self.feed_forward(self.ffn_norm1(x)))
283
+ return x
284
+
285
+
286
+ class NextDiT(ModelMixin, ConfigMixin):
287
+ """
288
+ Diffusion model with a Transformer backbone for joint image-video training.
289
+ """
290
+ @register_to_config
291
+ def __init__(
292
+ self,
293
+ input_size=(1, 32, 32),
294
+ patch_size=(1, 2, 2),
295
+ in_channels=16,
296
+ hidden_size=4096,
297
+ depth=32,
298
+ num_heads=32,
299
+ num_kv_heads=None,
300
+ multiple_of=256,
301
+ ffn_dim_multiplier=None,
302
+ norm_eps=1e-5,
303
+ pred_sigma=False,
304
+ caption_channels=4096,
305
+ qk_norm=False,
306
+ norm_type="rms",
307
+ model_max_length=120,
308
+ rotary_max_length=384,
309
+ rotary_max_length_t=None
310
+ ):
311
+ super().__init__()
312
+ self.input_size = input_size
313
+ self.patch_size = patch_size
314
+ self.in_channels = in_channels
315
+ self.hidden_size = hidden_size
316
+ self.depth = depth
317
+ self.num_heads = num_heads
318
+ self.num_kv_heads = num_kv_heads or num_heads
319
+ self.multiple_of = multiple_of
320
+ self.ffn_dim_multiplier = ffn_dim_multiplier
321
+ self.norm_eps = norm_eps
322
+ self.pred_sigma = pred_sigma
323
+ self.caption_channels = caption_channels
324
+ self.qk_norm = qk_norm
325
+ self.norm_type = norm_type
326
+ self.model_max_length = model_max_length
327
+ self.rotary_max_length = rotary_max_length
328
+ self.rotary_max_length_t = rotary_max_length_t
329
+ self.out_channels = in_channels * 2 if pred_sigma else in_channels
330
+
331
+ self.x_embedder = nn.Linear(np.prod(self.patch_size) * in_channels, hidden_size)
332
+
333
+ self.t_embedder = TimestepEmbedder(min(hidden_size, 1024))
334
+ self.y_embedder = nn.Sequential(
335
+ nn.LayerNorm(caption_channels, eps=1e-6),
336
+ nn.Linear(caption_channels, min(hidden_size, 1024)),
337
+ )
338
+
339
+ self.layers = nn.ModuleList([
340
+ TransformerBlock(
341
+ dim=hidden_size,
342
+ n_heads=num_heads,
343
+ n_kv_heads=self.num_kv_heads,
344
+ multiple_of=multiple_of,
345
+ ffn_dim_multiplier=ffn_dim_multiplier,
346
+ norm_eps=norm_eps,
347
+ qk_norm=qk_norm,
348
+ y_dim=caption_channels,
349
+ max_position_embeddings=rotary_max_length,
350
+ )
351
+ for _ in range(depth)
352
+ ])
353
+
354
+ self.final_layer = FinalLayer(
355
+ hidden_size=hidden_size,
356
+ num_patches=np.prod(patch_size),
357
+ out_channels=self.out_channels,
358
+ )
359
+
360
+ assert (hidden_size // num_heads) % 6 == 0, "3d rope needs head dim to be divisible by 6"
361
+
362
+ self.freqs_cis = self.precompute_freqs_cis(
363
+ hidden_size // num_heads,
364
+ self.rotary_max_length,
365
+ end_t=self.rotary_max_length_t
366
+ )
367
+
368
+ def to(self, *args, **kwargs):
369
+ self = super().to(*args, **kwargs)
370
+ # self.freqs_cis = self.freqs_cis.to(*args, **kwargs)
371
+ return self
372
+
373
+ @staticmethod
374
+ def precompute_freqs_cis(
375
+ dim: int,
376
+ end: int,
377
+ end_t: int = None,
378
+ theta: float = 10000.0,
379
+ scale_factor: float = 1.0,
380
+ scale_watershed: float = 1.0,
381
+ timestep: float = 1.0,
382
+ ):
383
+ if timestep < scale_watershed:
384
+ linear_factor = scale_factor
385
+ ntk_factor = 1.0
386
+ else:
387
+ linear_factor = 1.0
388
+ ntk_factor = scale_factor
389
+
390
+ theta = theta * ntk_factor
391
+ freqs = 1.0 / (theta ** (torch.arange(0, dim, 6)[: (dim // 6)] / dim)) / linear_factor
392
+
393
+ timestep = torch.arange(end, dtype=torch.float32)
394
+ freqs = torch.outer(timestep, freqs).float()
395
+ freqs_cis = torch.exp(1j * freqs)
396
+
397
+ if end_t is not None:
398
+ freqs_t = 1.0 / (theta ** (torch.arange(0, dim, 6)[: (dim // 6)] / dim)) / linear_factor
399
+ timestep_t = torch.arange(end_t, dtype=torch.float32)
400
+ freqs_t = torch.outer(timestep_t, freqs_t).float()
401
+ freqs_cis_t = torch.exp(1j * freqs_t)
402
+ freqs_cis_t = freqs_cis_t.view(end_t, 1, 1, dim // 6).repeat(1, end, end, 1)
403
+ else:
404
+ end_t = end
405
+ freqs_cis_t = freqs_cis.view(end_t, 1, 1, dim // 6).repeat(1, end, end, 1)
406
+
407
+ freqs_cis_h = freqs_cis.view(1, end, 1, dim // 6).repeat(end_t, 1, end, 1)
408
+ freqs_cis_w = freqs_cis.view(1, 1, end, dim // 6).repeat(end_t, end, 1, 1)
409
+ freqs_cis = torch.cat([freqs_cis_t, freqs_cis_h, freqs_cis_w], dim=-1).view(end_t, end, end, -1)
410
+ return freqs_cis
411
+
412
+ def forward(
413
+ self,
414
+ samples,
415
+ timesteps,
416
+ encoder_hidden_states,
417
+ encoder_attention_mask,
418
+ scale_factor: float = 1.0, # scale_factor for rotary embedding
419
+ scale_watershed: float = 1.0, # scale_watershed for rotary embedding
420
+ ):
421
+ if samples.ndim == 4: # B C H W
422
+ samples = samples[:, None, ...] # B F C H W
423
+
424
+ precomputed_freqs_cis = None
425
+ if scale_factor != 1 or scale_watershed != 1:
426
+ precomputed_freqs_cis = self.precompute_freqs_cis(
427
+ self.hidden_size // self.num_heads,
428
+ self.rotary_max_length,
429
+ end_t=self.rotary_max_length_t,
430
+ scale_factor=scale_factor,
431
+ scale_watershed=scale_watershed,
432
+ timestep=torch.max(timesteps.cpu()).item()
433
+ )
434
+
435
+ if len(timesteps.shape) == 5:
436
+ t, *_ = self.patchify(timesteps, precomputed_freqs_cis)
437
+ timesteps = t.mean(dim=-1)
438
+ elif len(timesteps.shape) == 1:
439
+ timesteps = timesteps[:, None, None, None, None].expand_as(samples)
440
+ t, *_ = self.patchify(timesteps, precomputed_freqs_cis)
441
+ timesteps = t.mean(dim=-1)
442
+ samples, T, H, W, freqs_cis = self.patchify(samples, precomputed_freqs_cis)
443
+ samples = self.x_embedder(samples)
444
+ t = self.t_embedder(timesteps)
445
+
446
+ encoder_attention_mask_float = encoder_attention_mask[..., None].float()
447
+ encoder_hidden_states_pool = (encoder_hidden_states * encoder_attention_mask_float).sum(dim=1) / (encoder_attention_mask_float.sum(dim=1) + 1e-8)
448
+ encoder_hidden_states_pool = encoder_hidden_states_pool.to(samples.dtype)
449
+ y = self.y_embedder(encoder_hidden_states_pool)
450
+ y = y.unsqueeze(1).expand(-1, samples.size(1), -1)
451
+
452
+ adaln_input = t + y
453
+
454
+ for block in self.layers:
455
+ samples = block(samples, None, freqs_cis, encoder_hidden_states, encoder_attention_mask, adaln_input)
456
+
457
+ samples = self.final_layer(samples, adaln_input)
458
+ samples = self.unpatchify(samples, T, H, W)
459
+
460
+ return samples
461
+
462
+ def patchify(self, x, precompute_freqs_cis=None):
463
+ # pytorch is C, H, W
464
+ B, T, C, H, W = x.size()
465
+ pT, pH, pW = self.patch_size
466
+ x = x.view(B, T // pT, pT, C, H // pH, pH, W // pW, pW)
467
+ x = x.permute(0, 1, 4, 6, 2, 5, 7, 3)
468
+ x = x.reshape(B, -1, pT * pH * pW * C)
469
+ if precompute_freqs_cis is None:
470
+ freqs_cis = self.freqs_cis[: T // pT, :H // pH, :W // pW].reshape(-1, * self.freqs_cis.shape[3:])[None].to(x.device)
471
+ else:
472
+ freqs_cis = precompute_freqs_cis[: T // pT, :H // pH, :W // pW].reshape(-1, * precompute_freqs_cis.shape[3:])[None].to(x.device)
473
+ return x, T // pT, H // pH, W // pW, freqs_cis
474
+
475
+ def unpatchify(self, x, T, H, W):
476
+ B = x.size(0)
477
+ C = self.out_channels
478
+ pT, pH, pW = self.patch_size
479
+ x = x.view(B, T, H, W, pT, pH, pW, C)
480
+ x = x.permute(0, 1, 4, 7, 2, 5, 3, 6)
481
+ x = x.reshape(B, T * pT, C, H * pH, W * pW)
482
+ return x
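
A smoke-test sketch of the model above with a deliberately tiny, illustrative configuration (the in-file defaults are hidden_size=4096, depth=32). The sizes are chosen so head_dim = 96 / 4 = 24 stays divisible by 6, as the 3D RoPE assertion requires, and rotary_max_length is kept small so the precomputed freqs_cis table stays cheap:

import torch
from onediffusion.nextdit.modeling_nextdit import NextDiT

model = NextDiT(
    input_size=(1, 16, 16), patch_size=(1, 2, 2), in_channels=4,
    hidden_size=96, depth=2, num_heads=4, caption_channels=64,
    rotary_max_length=16,
)
latents = torch.randn(2, 4, 16, 16)             # B C H W; a frame axis is added internally
timesteps = torch.tensor([0.5, 0.9])            # one timestep per sample
text_states = torch.randn(2, 8, 64)             # B L caption_channels
text_mask = torch.ones(2, 8, dtype=torch.bool)
out = model(latents, timesteps, text_states, text_mask)
print(out.shape)                                # torch.Size([2, 1, 4, 16, 16])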
onediffusion/pipeline/__init__.py ADDED
File without changes
onediffusion/pipeline/image_processor.py ADDED
@@ -0,0 +1,672 @@
1
+ # Copyright 2024 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import math
16
+ import warnings
17
+ from typing import List, Optional, Tuple, Union
18
+
19
+ import numpy as np
20
+ import PIL.Image
21
+ import torch
22
+ import torch.nn.functional as F
23
+ import torchvision.transforms as T
24
+ from PIL import Image, ImageFilter, ImageOps
25
+
26
+ from diffusers.configuration_utils import ConfigMixin, register_to_config
27
+ from diffusers.utils import CONFIG_NAME, PIL_INTERPOLATION, deprecate
28
+
29
+ # from onediffusion.dataset.transforms import CenterCropResizeImage
30
+
31
+ PipelineImageInput = Union[
32
+ PIL.Image.Image,
33
+ np.ndarray,
34
+ torch.Tensor,
35
+ List[PIL.Image.Image],
36
+ List[np.ndarray],
37
+ List[torch.Tensor],
38
+ ]
39
+
40
+ PipelineDepthInput = PipelineImageInput
41
+
42
+
43
+ def is_valid_image(image):
44
+ return isinstance(image, PIL.Image.Image) or isinstance(image, (np.ndarray, torch.Tensor)) and image.ndim in (2, 3)
45
+
46
+
47
+ def is_valid_image_imagelist(images):
48
+ # check if the image input is one of the supported formats for image and image list:
49
+ # it can be either one of below 3
50
+ # (1) a 4d pytorch tensor or numpy array,
51
+ # (2) a valid image: PIL.Image.Image, 2-d np.ndarray or torch.Tensor (grayscale image), 3-d np.ndarray or torch.Tensor
52
+ # (3) a list of valid image
53
+ if isinstance(images, (np.ndarray, torch.Tensor)) and images.ndim == 4:
54
+ return True
55
+ elif is_valid_image(images):
56
+ return True
57
+ elif isinstance(images, list):
58
+ return all(is_valid_image(image) for image in images)
59
+ return False
60
+
61
+
62
+ class VaeImageProcessorOneDiffuser(ConfigMixin):
63
+ """
64
+ Image processor for VAE.
65
+
66
+ Args:
67
+ do_resize (`bool`, *optional*, defaults to `True`):
68
+ Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept
69
+ `height` and `width` arguments from [`image_processor.VaeImageProcessor.preprocess`] method.
70
+ vae_scale_factor (`int`, *optional*, defaults to `8`):
71
+ VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
72
+ resample (`str`, *optional*, defaults to `lanczos`):
73
+ Resampling filter to use when resizing the image.
74
+ do_normalize (`bool`, *optional*, defaults to `True`):
75
+ Whether to normalize the image to [-1,1].
76
+ do_binarize (`bool`, *optional*, defaults to `False`):
77
+ Whether to binarize the image to 0/1.
78
+ do_convert_rgb (`bool`, *optional*, defaults to `False`):
79
+ Whether to convert the images to RGB format.
80
+ do_convert_grayscale (`bool`, *optional*, defaults to `False`):
81
+ Whether to convert the images to grayscale format.
82
+ """
83
+
84
+ config_name = CONFIG_NAME
85
+
86
+ @register_to_config
87
+ def __init__(
88
+ self,
89
+ do_resize: bool = True,
90
+ vae_scale_factor: int = 8,
91
+ vae_latent_channels: int = 4,
92
+ resample: str = "lanczos",
93
+ do_normalize: bool = True,
94
+ do_binarize: bool = False,
95
+ do_convert_rgb: bool = False,
96
+ do_convert_grayscale: bool = False,
97
+ ):
98
+ super().__init__()
99
+ if do_convert_rgb and do_convert_grayscale:
100
+ raise ValueError(
101
+ "`do_convert_rgb` and `do_convert_grayscale` can not both be set to `True`,"
102
+ " if you intended to convert the image into RGB format, please set `do_convert_grayscale = False`.",
103
+ " if you intended to convert the image into grayscale format, please set `do_convert_rgb = False`",
104
+ )
105
+
106
+ @staticmethod
107
+ def numpy_to_pil(images: np.ndarray) -> List[PIL.Image.Image]:
108
+ """
109
+ Convert a numpy image or a batch of images to a PIL image.
110
+ """
111
+ if images.ndim == 3:
112
+ images = images[None, ...]
113
+ images = (images * 255).round().astype("uint8")
114
+ if images.shape[-1] == 1:
115
+ # special case for grayscale (single channel) images
116
+ pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
117
+ else:
118
+ pil_images = [Image.fromarray(image) for image in images]
119
+
120
+ return pil_images
121
+
122
+ @staticmethod
123
+ def pil_to_numpy(images: Union[List[PIL.Image.Image], PIL.Image.Image]) -> np.ndarray:
124
+ """
125
+ Convert a PIL image or a list of PIL images to NumPy arrays.
126
+ """
127
+ if not isinstance(images, list):
128
+ images = [images]
129
+ images = [np.array(image).astype(np.float32) / 255.0 for image in images]
130
+ images = np.stack(images, axis=0)
131
+
132
+ return images
133
+
134
+ @staticmethod
135
+ def numpy_to_pt(images: np.ndarray) -> torch.Tensor:
136
+ """
137
+ Convert a NumPy image to a PyTorch tensor.
138
+ """
139
+ if images.ndim == 3:
140
+ images = images[..., None]
141
+
142
+ images = torch.from_numpy(images.transpose(0, 3, 1, 2))
143
+ return images
144
+
145
+ @staticmethod
146
+ def pt_to_numpy(images: torch.Tensor) -> np.ndarray:
147
+ """
148
+ Convert a PyTorch tensor to a NumPy image.
149
+ """
150
+ images = images.cpu().permute(0, 2, 3, 1).float().numpy()
151
+ return images
152
+
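# Illustrative round trip through the static converters above: PIL (HWC uint8) -> NumPy
# (NHWC float32 in [0, 1]) -> PyTorch (NCHW float32) -> back to PIL. The 64x64 red image is a
# hypothetical input.
from PIL import Image

pil = Image.new("RGB", (64, 64), color=(255, 0, 0))
arr = VaeImageProcessorOneDiffuser.pil_to_numpy(pil)    # (1, 64, 64, 3), values in [0, 1]
pt = VaeImageProcessorOneDiffuser.numpy_to_pt(arr)      # (1, 3, 64, 64)
back = VaeImageProcessorOneDiffuser.numpy_to_pil(VaeImageProcessorOneDiffuser.pt_to_numpy(pt))[0]
assert back.size == pil.size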
153
+ @staticmethod
154
+ def normalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, torch.Tensor]:
155
+ """
156
+ Normalize an image array to [-1,1].
157
+ """
158
+ return 2.0 * images - 1.0
159
+
160
+ @staticmethod
161
+ def denormalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, torch.Tensor]:
162
+ """
163
+ Denormalize an image array to [0,1].
164
+ """
165
+ return (images / 2 + 0.5).clamp(0, 1)
166
+
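# Illustrative sketch: normalize maps [0, 1] pixel values to the [-1, 1] range expected by the
# VAE, and denormalize inverts it for display, so the pair round-trips any [0, 1] input.
import torch

x = torch.rand(1, 3, 8, 8)
y = VaeImageProcessorOneDiffuser.denormalize(VaeImageProcessorOneDiffuser.normalize(x))
assert torch.allclose(x, y)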
167
+ @staticmethod
168
+ def convert_to_rgb(image: PIL.Image.Image) -> PIL.Image.Image:
169
+ """
170
+ Converts a PIL image to RGB format.
171
+ """
172
+ image = image.convert("RGB")
173
+
174
+ return image
175
+
176
+ @staticmethod
177
+ def convert_to_grayscale(image: PIL.Image.Image) -> PIL.Image.Image:
178
+ """
179
+ Converts a PIL image to grayscale format.
180
+ """
181
+ image = image.convert("L")
182
+
183
+ return image
184
+
185
+ @staticmethod
186
+ def blur(image: PIL.Image.Image, blur_factor: int = 4) -> PIL.Image.Image:
187
+ """
188
+ Applies Gaussian blur to an image.
189
+ """
190
+ image = image.filter(ImageFilter.GaussianBlur(blur_factor))
191
+
192
+ return image
193
+
194
+ @staticmethod
195
+ def get_crop_region(mask_image: PIL.Image.Image, width: int, height: int, pad=0):
196
+ """
197
+ Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect
198
+ ratio of the original image; for example, if user drew mask in a 128x32 region, and the dimensions for
199
+ processing are 512x512, the region will be expanded to 128x128.
200
+
201
+ Args:
202
+ mask_image (PIL.Image.Image): Mask image.
203
+ width (int): Width of the image to be processed.
204
+ height (int): Height of the image to be processed.
205
+ pad (int, optional): Padding to be added to the crop region. Defaults to 0.
206
+
207
+ Returns:
208
+ tuple: (x1, y1, x2, y2) representing a rectangular region that contains all masked areas in an image and
209
+ matches the original aspect ratio.
210
+ """
211
+
212
+ mask_image = mask_image.convert("L")
213
+ mask = np.array(mask_image)
214
+
215
+ # 1. find a rectangular region that contains all masked areas in an image
216
+ h, w = mask.shape
217
+ crop_left = 0
218
+ for i in range(w):
219
+ if not (mask[:, i] == 0).all():
220
+ break
221
+ crop_left += 1
222
+
223
+ crop_right = 0
224
+ for i in reversed(range(w)):
225
+ if not (mask[:, i] == 0).all():
226
+ break
227
+ crop_right += 1
228
+
229
+ crop_top = 0
230
+ for i in range(h):
231
+ if not (mask[i] == 0).all():
232
+ break
233
+ crop_top += 1
234
+
235
+ crop_bottom = 0
236
+ for i in reversed(range(h)):
237
+ if not (mask[i] == 0).all():
238
+ break
239
+ crop_bottom += 1
240
+
241
+ # 2. add padding to the crop region
242
+ x1, y1, x2, y2 = (
243
+ int(max(crop_left - pad, 0)),
244
+ int(max(crop_top - pad, 0)),
245
+ int(min(w - crop_right + pad, w)),
246
+ int(min(h - crop_bottom + pad, h)),
247
+ )
248
+
249
+ # 3. expands crop region to match the aspect ratio of the image to be processed
250
+ ratio_crop_region = (x2 - x1) / (y2 - y1)
251
+ ratio_processing = width / height
252
+
253
+ if ratio_crop_region > ratio_processing:
254
+ desired_height = (x2 - x1) / ratio_processing
255
+ desired_height_diff = int(desired_height - (y2 - y1))
256
+ y1 -= desired_height_diff // 2
257
+ y2 += desired_height_diff - desired_height_diff // 2
258
+ if y2 >= mask_image.height:
259
+ diff = y2 - mask_image.height
260
+ y2 -= diff
261
+ y1 -= diff
262
+ if y1 < 0:
263
+ y2 -= y1
264
+ y1 -= y1
265
+ if y2 >= mask_image.height:
266
+ y2 = mask_image.height
267
+ else:
268
+ desired_width = (y2 - y1) * ratio_processing
269
+ desired_width_diff = int(desired_width - (x2 - x1))
270
+ x1 -= desired_width_diff // 2
271
+ x2 += desired_width_diff - desired_width_diff // 2
272
+ if x2 >= mask_image.width:
273
+ diff = x2 - mask_image.width
274
+ x2 -= diff
275
+ x1 -= diff
276
+ if x1 < 0:
277
+ x2 -= x1
278
+ x1 -= x1
279
+ if x2 >= mask_image.width:
280
+ x2 = mask_image.width
281
+
282
+ return x1, y1, x2, y2
283
+
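# Illustrative sketch of get_crop_region with hypothetical sizes: a 16x8 masked box inside a
# 64x64 mask is expanded to a square so it matches the 1:1 aspect ratio of a 512x512 target.
import numpy as np
from PIL import Image

mask = np.zeros((64, 64), dtype=np.uint8)
mask[28:36, 24:40] = 255          # 8 rows x 16 columns of masked pixels
x1, y1, x2, y2 = VaeImageProcessorOneDiffuser.get_crop_region(Image.fromarray(mask), 512, 512)
assert (x2 - x1) == (y2 - y1)     # region expanded to match the processing aspect ratio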
284
+ def _resize_and_fill(
285
+ self,
286
+ image: PIL.Image.Image,
287
+ width: int,
288
+ height: int,
289
+ ) -> PIL.Image.Image:
290
+ """
291
+ Resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center
292
+ the image within the dimensions, filling empty with data from image.
293
+
294
+ Args:
295
+ image: The image to resize.
296
+ width: The width to resize the image to.
297
+ height: The height to resize the image to.
298
+ """
299
+
300
+ ratio = width / height
301
+ src_ratio = image.width / image.height
302
+
303
+ src_w = width if ratio < src_ratio else image.width * height // image.height
304
+ src_h = height if ratio >= src_ratio else image.height * width // image.width
305
+
306
+ resized = image.resize((src_w, src_h), resample=PIL_INTERPOLATION["lanczos"])
307
+ res = Image.new("RGB", (width, height))
308
+ res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
309
+
310
+ if ratio < src_ratio:
311
+ fill_height = height // 2 - src_h // 2
312
+ if fill_height > 0:
313
+ res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0))
314
+ res.paste(
315
+ resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)),
316
+ box=(0, fill_height + src_h),
317
+ )
318
+ elif ratio > src_ratio:
319
+ fill_width = width // 2 - src_w // 2
320
+ if fill_width > 0:
321
+ res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0))
322
+ res.paste(
323
+ resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)),
324
+ box=(fill_width + src_w, 0),
325
+ )
326
+
327
+ return res
328
+
329
+ def _resize_and_crop(
330
+ self,
331
+ image: PIL.Image.Image,
332
+ width: int,
333
+ height: int,
334
+ ) -> PIL.Image.Image:
335
+ """
336
+ Resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center
337
+ the image within the dimensions, cropping the excess.
338
+
339
+ Args:
340
+ image: The image to resize.
341
+ width: The width to resize the image to.
342
+ height: The height to resize the image to.
343
+ """
344
+ ratio = width / height
345
+ src_ratio = image.width / image.height
346
+
347
+ src_w = width if ratio > src_ratio else image.width * height // image.height
348
+ src_h = height if ratio <= src_ratio else image.height * width // image.width
349
+
350
+ resized = image.resize((src_w, src_h), resample=PIL_INTERPOLATION["lanczos"])
351
+ res = Image.new("RGB", (width, height))
352
+ res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
353
+ return res
354
+
355
+ def resize(
356
+ self,
357
+ image: Union[PIL.Image.Image, np.ndarray, torch.Tensor],
358
+ height: int,
359
+ width: int,
360
+ resize_mode: str = "default", # "default", "fill", "crop"
361
+ ) -> Union[PIL.Image.Image, np.ndarray, torch.Tensor]:
362
+ """
363
+ Resize image.
364
+
365
+ Args:
366
+ image (`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`):
367
+ The image input, can be a PIL image, numpy array or pytorch tensor.
368
+ height (`int`):
369
+ The height to resize to.
370
+ width (`int`):
371
+ The width to resize to.
372
+ resize_mode (`str`, *optional*, defaults to `default`):
373
+ The resize mode to use, can be one of `default`, `fill`, or `crop`. If `default`, will resize the image to fit
374
+ within the specified width and height, and it may not maintain the original aspect ratio. If `fill`,
375
+ will resize the image to fit within the specified width and height, maintaining the aspect ratio, and
376
+ then center the image within the dimensions, filling empty with data from image. If `crop`, will resize
377
+ the image to fit within the specified width and height, maintaining the aspect ratio, and then center
378
+ the image within the dimensions, cropping the excess. Note that resize_mode `fill` and `crop` are only
379
+ supported for PIL image input.
380
+
381
+ Returns:
382
+ `PIL.Image.Image`, `np.ndarray` or `torch.Tensor`:
383
+ The resized image.
384
+ """
385
+ if resize_mode != "default" and not isinstance(image, PIL.Image.Image):
386
+ raise ValueError(f"Only PIL image input is supported for resize_mode {resize_mode}")
387
+ if isinstance(image, PIL.Image.Image):
388
+ if resize_mode == "default":
389
+ image = image.resize((width, height), resample=PIL_INTERPOLATION[self.config.resample])
390
+ elif resize_mode == "fill":
391
+ image = self._resize_and_fill(image, width, height)
392
+ elif resize_mode == "crop":
393
+ image = self._resize_and_crop(image, width, height)
394
+ else:
395
+ raise ValueError(f"resize_mode {resize_mode} is not supported")
396
+
397
+ elif isinstance(image, torch.Tensor):
398
+ image = torch.nn.functional.interpolate(
399
+ image,
400
+ size=(height, width),
401
+ )
402
+ elif isinstance(image, np.ndarray):
403
+ image = self.numpy_to_pt(image)
404
+ image = torch.nn.functional.interpolate(
405
+ image,
406
+ size=(height, width),
407
+ )
408
+ image = self.pt_to_numpy(image)
409
+ return image
410
+
411
+ def binarize(self, image: PIL.Image.Image) -> PIL.Image.Image:
412
+ """
413
+ Create a mask.
414
+
415
+ Args:
416
+ image (`PIL.Image.Image`):
417
+ The image input, should be a PIL image.
418
+
419
+ Returns:
420
+ `PIL.Image.Image`:
421
+ The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1.
422
+ """
423
+ image[image < 0.5] = 0
424
+ image[image >= 0.5] = 1
425
+
426
+ return image
427
+
428
+ def get_default_height_width(
429
+ self,
430
+ image: Union[PIL.Image.Image, np.ndarray, torch.Tensor],
431
+ height: Optional[int] = None,
432
+ width: Optional[int] = None,
433
+ ) -> Tuple[int, int]:
434
+ """
435
+ This function returns the height and width that are downscaled to the nearest integer multiple of
436
+ `vae_scale_factor`.
437
+
438
+ Args:
439
+ image(`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`):
440
+ The image input, can be a PIL image, numpy array or pytorch tensor. If it is a numpy array, it should have
441
+ shape `[batch, height, width]` or `[batch, height, width, channel]`; if it is a pytorch tensor, it should
442
+ have shape `[batch, channel, height, width]`.
443
+ height (`int`, *optional*, defaults to `None`):
444
+ The height in preprocessed image. If `None`, will use the height of `image` input.
445
+ width (`int`, *optional*, defaults to `None`):
446
+ The width of the preprocessed image. If `None`, will use the width of the `image` input.
447
+ """
448
+
449
+ if height is None:
450
+ if isinstance(image, PIL.Image.Image):
451
+ height = image.height
452
+ elif isinstance(image, torch.Tensor):
453
+ height = image.shape[2]
454
+ else:
455
+ height = image.shape[1]
456
+
457
+ if width is None:
458
+ if isinstance(image, PIL.Image.Image):
459
+ width = image.width
460
+ elif isinstance(image, torch.Tensor):
461
+ width = image.shape[3]
462
+ else:
463
+ width = image.shape[2]
464
+
465
+ width, height = (
466
+ x - x % self.config.vae_scale_factor for x in (width, height)
467
+ ) # resize to integer multiple of vae_scale_factor
468
+
469
+ return height, width
470
+
471
+ def preprocess(
472
+ self,
473
+ image: PipelineImageInput,
474
+ height: Optional[int] = None,
475
+ width: Optional[int] = None,
476
+ do_crop: bool = False,
477
+ ) -> torch.Tensor:
478
+ """
479
+ Preprocess the image input.
480
+
481
+ Args:
482
+ image (`pipeline_image_input`):
483
+ The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of
484
+ supported formats.
485
+ height (`int`, *optional*, defaults to `None`):
486
+ The height of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default
487
+ height.
488
+ width (`int`, *optional*, defaults to `None`):
489
+ The width of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default width.
490
+ do_crop (`bool`, *optional*, defaults to `False`):
491
+ Whether to center-crop the image to the target size. If `True`, each image is converted to RGB and
492
+ center-cropped to (`height`, `width`); if `False`, it is resized to (`height`, `width`) without
493
+ preserving the aspect ratio. In both cases the result is normalized with mean 0.5 and std 0.5.
500
+ """
501
+ supported_formats = (PIL.Image.Image, np.ndarray, torch.Tensor)
502
+
503
+ # Expand the missing dimension for 3-dimensional pytorch tensor or numpy array that represents grayscale image
504
+ if self.config.do_convert_grayscale and isinstance(image, (torch.Tensor, np.ndarray)) and image.ndim == 3:
505
+ if isinstance(image, torch.Tensor):
506
+ # if image is a pytorch tensor could have 2 possible shapes:
507
+ # 1. batch x height x width: we should insert the channel dimension at position 1
508
+ # 2. channel x height x width: we should insert batch dimension at position 0,
509
+ # however, since both channel and batch dimension has same size 1, it is same to insert at position 1
510
+ # for simplicity, we insert a dimension of size 1 at position 1 for both cases
511
+ image = image.unsqueeze(1)
512
+ else:
513
+ # if it is a numpy array, it could have 2 possible shapes:
514
+ # 1. batch x height x width: insert channel dimension on last position
515
+ # 2. height x width x channel: insert batch dimension on first position
516
+ if image.shape[-1] == 1:
517
+ image = np.expand_dims(image, axis=0)
518
+ else:
519
+ image = np.expand_dims(image, axis=-1)
520
+
521
+ if isinstance(image, list) and isinstance(image[0], np.ndarray) and image[0].ndim == 4:
522
+ warnings.warn(
523
+ "Passing `image` as a list of 4d np.ndarray is deprecated."
524
+ "Please concatenate the list along the batch dimension and pass it as a single 4d np.ndarray",
525
+ FutureWarning,
526
+ )
527
+ image = np.concatenate(image, axis=0)
528
+ if isinstance(image, list) and isinstance(image[0], torch.Tensor) and image[0].ndim == 4:
529
+ warnings.warn(
530
+ "Passing `image` as a list of 4d torch.Tensor is deprecated."
531
+ "Please concatenate the list along the batch dimension and pass it as a single 4d torch.Tensor",
532
+ FutureWarning,
533
+ )
534
+ image = torch.cat(image, axis=0)
535
+
536
+ if not is_valid_image_imagelist(image):
537
+ raise ValueError(
538
+ f"Input is in incorrect format. Currently, we only support {', '.join(str(x) for x in supported_formats)}"
539
+ )
540
+ if not isinstance(image, list):
541
+ image = [image]
542
+
543
+ if isinstance(image[0], PIL.Image.Image):
544
+ pass
545
+ elif isinstance(image[0], np.ndarray):
546
+ image = self.numpy_to_pil(image)
547
+ elif isinstance(image[0], torch.Tensor):
548
+ image = self.pt_to_numpy(image)
549
+ image = self.numpy_to_pil(image)
550
+
551
+ if do_crop:
552
+ transforms = T.Compose([
553
+ T.Lambda(lambda image: image.convert('RGB')),
554
+ T.ToTensor(),
555
+ T.CenterCrop((height, width)),
556
+ T.Normalize([.5], [.5]),
557
+ ])
558
+ else:
559
+ transforms = T.Compose([
560
+ T.Lambda(lambda image: image.convert('RGB')),
561
+ T.ToTensor(),
562
+ T.Resize((height, width)),
563
+ T.Normalize([.5], [.5]),
564
+ ])
565
+ image = torch.stack([transforms(i) for i in image])
566
+
567
+ # expected range [0,1], normalize to [-1,1]
568
+ do_normalize = self.config.do_normalize
569
+ if do_normalize and image.min() < 0:
570
+ warnings.warn(
571
+ "Passing `image` as torch tensor with value range in [-1,1] is deprecated. The expected value range for image tensor is [0,1] "
572
+ f"when passing as pytorch tensor or numpy Array. You passed `image` with value range [{image.min()},{image.max()}]",
573
+ FutureWarning,
574
+ )
575
+ do_normalize = False
576
+ if do_normalize:
577
+ image = self.normalize(image)
578
+
579
+ if self.config.do_binarize:
580
+ image = self.binarize(image)
581
+
582
+ return image
583
+
584
+ def postprocess(
585
+ self,
586
+ image: torch.Tensor,
587
+ output_type: str = "pil",
588
+ do_denormalize: Optional[List[bool]] = None,
589
+ ) -> Union[PIL.Image.Image, np.ndarray, torch.Tensor]:
590
+ """
591
+ Postprocess the image output from tensor to `output_type`.
592
+
593
+ Args:
594
+ image (`torch.Tensor`):
595
+ The image input, should be a pytorch tensor with shape `B x C x H x W`.
596
+ output_type (`str`, *optional*, defaults to `pil`):
597
+ The output type of the image, can be one of `pil`, `np`, `pt`, `latent`.
598
+ do_denormalize (`List[bool]`, *optional*, defaults to `None`):
599
+ Whether to denormalize the image to [0,1]. If `None`, will use the value of `do_normalize` in the
600
+ `VaeImageProcessor` config.
601
+
602
+ Returns:
603
+ `PIL.Image.Image`, `np.ndarray` or `torch.Tensor`:
604
+ The postprocessed image.
605
+ """
606
+ if not isinstance(image, torch.Tensor):
607
+ raise ValueError(
608
+ f"Input for postprocessing is in incorrect format: {type(image)}. We only support pytorch tensor"
609
+ )
610
+ if output_type not in ["latent", "pt", "np", "pil"]:
611
+ deprecation_message = (
612
+ f"the output_type {output_type} is outdated and has been set to `np`. Please make sure to set it to one of these instead: "
613
+ "`pil`, `np`, `pt`, `latent`"
614
+ )
615
+ deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
616
+ output_type = "np"
617
+
618
+ if output_type == "latent":
619
+ return image
620
+
621
+ if do_denormalize is None:
622
+ do_denormalize = [self.config.do_normalize] * image.shape[0]
623
+
624
+ image = torch.stack(
625
+ [self.denormalize(image[i]) if do_denormalize[i] else image[i] for i in range(image.shape[0])]
626
+ )
627
+
628
+ if output_type == "pt":
629
+ return image
630
+
631
+ image = self.pt_to_numpy(image)
632
+
633
+ if output_type == "np":
634
+ return image
635
+
636
+ if output_type == "pil":
637
+ return self.numpy_to_pil(image)
638
+
639
+ def apply_overlay(
640
+ self,
641
+ mask: PIL.Image.Image,
642
+ init_image: PIL.Image.Image,
643
+ image: PIL.Image.Image,
644
+ crop_coords: Optional[Tuple[int, int, int, int]] = None,
645
+ ) -> PIL.Image.Image:
646
+ """
647
+ overlay the inpaint output to the original image
648
+ """
649
+
650
+ width, height = image.width, image.height
651
+
652
+ init_image = self.resize(init_image, width=width, height=height)
653
+ mask = self.resize(mask, width=width, height=height)
654
+
655
+ init_image_masked = PIL.Image.new("RGBa", (width, height))
656
+ init_image_masked.paste(init_image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(mask.convert("L")))
657
+ init_image_masked = init_image_masked.convert("RGBA")
658
+
659
+ if crop_coords is not None:
660
+ x, y, x2, y2 = crop_coords
661
+ w = x2 - x
662
+ h = y2 - y
663
+ base_image = PIL.Image.new("RGBA", (width, height))
664
+ image = self.resize(image, height=h, width=w, resize_mode="crop")
665
+ base_image.paste(image, (x, y))
666
+ image = base_image.convert("RGB")
667
+
668
+ image = image.convert("RGBA")
669
+ image.alpha_composite(init_image_masked)
670
+ image = image.convert("RGB")
671
+
672
+ return image
onediffusion/pipeline/onediffusion.py ADDED
@@ -0,0 +1,1079 @@
1
+ from dataclasses import dataclass
2
+ from diffusers import AutoencoderKL, FlowMatchEulerDiscreteScheduler
3
+ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
4
+ from diffusers.utils import (
5
+ CONFIG_NAME,
6
+ DEPRECATED_REVISION_ARGS,
7
+ BaseOutput,
8
+ PushToHubMixin,
9
+ deprecate,
10
+ is_accelerate_available,
11
+ is_accelerate_version,
12
+ is_torch_npu_available,
13
+ is_torch_version,
14
+ logging,
15
+ numpy_to_pil,
16
+ replace_example_docstring,
17
+ )
18
+ from diffusers.models.modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT, ModelMixin
19
+ from diffusers.utils.torch_utils import randn_tensor
20
+ from diffusers.utils import BaseOutput
21
+ # from diffusers.image_processor import VaeImageProcessor
22
+ import einops
23
+ import inspect
24
+ import numpy as np
25
+ import PIL
26
+ import torch
27
+ from transformers import T5EncoderModel, T5Tokenizer
28
+ from typing import Any, Callable, Dict, List, Optional, Union
29
+ from PIL import Image
30
+
31
+ from ..nextdit.modeling_nextdit import NextDiT
32
+ from ..dataset.utils import *
33
+ # from ..dataset.multitask.multiview import calculate_rays
34
+ from ..pipeline.image_processor import VaeImageProcessorOneDiffuser
35
+
36
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
37
+
38
+ SUPPORTED_DEVICE_MAP = ["balanced"]
39
+
40
+ EXAMPLE_DOC_STRING = """
41
+ Examples:
42
+ ```py
43
+ >>> import torch
44
+ >>> from one_diffusion import OneDiffusionPipeline
45
+
46
+ >>> pipe = OneDiffusionPipeline.from_pretrained("path_to_one_diffuser_model")
47
+ >>> pipe = pipe.to("cuda")
48
+
49
+ >>> prompt = "A beautiful sunset over the ocean"
50
+ >>> image = pipe(prompt).images[0]
51
+ >>> image.save("beautiful_sunset.png")
52
+ ```
53
+ """
54
+
55
+ def create_c2w_matrix(azimuth_deg, elevation_deg, distance=1.0, target=np.array([0, 0, 0])):
56
+ """
57
+ Create a Camera-to-World (C2W) matrix from azimuth and elevation angles.
58
+
59
+ Parameters:
60
+ - azimuth_deg: Azimuth angle in degrees.
61
+ - elevation_deg: Elevation angle in degrees.
62
+ - distance: Distance from the target point.
63
+ - target: The point the camera is looking at in world coordinates.
64
+
65
+ Returns:
66
+ - C2W: A 4x4 NumPy array representing the Camera-to-World transformation matrix.
67
+ """
68
+ # Convert angles from degrees to radians
69
+ azimuth = np.deg2rad(azimuth_deg)
70
+ elevation = np.deg2rad(elevation_deg)
71
+
72
+ # Spherical to Cartesian conversion for camera position
73
+ x = distance * np.cos(elevation) * np.cos(azimuth)
74
+ y = distance * np.cos(elevation) * np.sin(azimuth)
75
+ z = distance * np.sin(elevation)
76
+ camera_position = np.array([x, y, z])
77
+
78
+ # Define the forward vector (from camera to target)
79
+ target = 2*camera_position - target
80
+ forward = target - camera_position
81
+ forward /= np.linalg.norm(forward)
82
+
83
+ # Define the world up vector
84
+ world_up = np.array([0, 0, 1])
85
+
86
+ # Compute the right vector
87
+ right = np.cross(world_up, forward)
88
+ if np.linalg.norm(right) < 1e-6:
89
+ # Handle the singularity when forward is parallel to world_up
90
+ world_up = np.array([0, 1, 0])
91
+ right = np.cross(world_up, forward)
92
+ right /= np.linalg.norm(right)
93
+
94
+ # Recompute the orthogonal up vector
95
+ up = np.cross(forward, right)
96
+
97
+ # Construct the rotation matrix
98
+ rotation = np.vstack([right, up, forward]).T # 3x3
99
+
100
+ # Construct the full C2W matrix
101
+ C2W = np.eye(4)
102
+ C2W[:3, :3] = rotation
103
+ C2W[:3, 3] = camera_position
104
+
105
+ return C2W
106
+
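# Illustrative check of create_c2w_matrix with hypothetical angles: the translation column is the
# camera position at the requested distance and the rotation block is orthonormal.
import numpy as np

_c2w = create_c2w_matrix(azimuth_deg=30, elevation_deg=10, distance=2.0)
_R, _t = _c2w[:3, :3], _c2w[:3, 3]
assert np.isclose(np.linalg.norm(_t), 2.0)            # camera sits on a sphere of radius `distance`
assert np.allclose(_R.T @ _R, np.eye(3), atol=1e-6)   # rotation block is a valid rotation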
107
+ @dataclass
108
+ class OneDiffusionPipelineOutput(BaseOutput):
109
+ """
110
+ Output class for the OneDiffusion pipeline.
111
+
112
+ Args:
113
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
114
+ List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
115
+ num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline.
116
+ """
117
+
118
+ images: Union[List[Image.Image], np.ndarray]
119
+ latents: Optional[torch.Tensor] = None
120
+
121
+
122
+ def retrieve_latents(
123
+ encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample"
124
+ ):
125
+ if hasattr(encoder_output, "latent_dist") and sample_mode == "sample":
126
+ return encoder_output.latent_dist.sample(generator)
127
+ elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax":
128
+ return encoder_output.latent_dist.mode()
129
+ elif hasattr(encoder_output, "latents"):
130
+ return encoder_output.latents
131
+ else:
132
+ raise AttributeError("Could not access latents of provided encoder_output")
133
+
134
+
135
+ def calculate_shift(
136
+ image_seq_len,
137
+ base_seq_len: int = 256,
138
+ max_seq_len: int = 4096,
139
+ base_shift: float = 0.5,
140
+ max_shift: float = 1.16,
141
+ # max_clip: float = 1.5,
142
+ ):
143
+ m = (max_shift - base_shift) / (max_seq_len - base_seq_len) # 0.000169270833
144
+ b = base_shift - m * base_seq_len # 0.5-0.0433333332
145
+ mu = image_seq_len * m + b
146
+ # mu = min(mu, max_clip)
147
+ return mu
148
+
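# Illustrative sketch: calculate_shift linearly interpolates the flow-matching shift `mu` between
# base_shift and max_shift as the latent token count grows. Assuming an 8x VAE and 2x2 spatial
# patches, a 1024x1024 image gives image_seq_len = (1024 / 8 / 2) ** 2 = 4096.
assert abs(calculate_shift(256) - 0.5) < 1e-6     # smallest sequence length -> base_shift
assert abs(calculate_shift(4096) - 1.16) < 1e-6   # largest sequence length  -> max_shift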
149
+
150
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps
151
+ def retrieve_timesteps(
152
+ scheduler,
153
+ num_inference_steps: Optional[int] = None,
154
+ device: Optional[Union[str, torch.device]] = None,
155
+ timesteps: Optional[List[int]] = None,
156
+ sigmas: Optional[List[float]] = None,
157
+ **kwargs,
158
+ ):
159
+ """
160
+ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles
161
+ custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`.
162
+
163
+ Args:
164
+ scheduler (`SchedulerMixin`):
165
+ The scheduler to get timesteps from.
166
+ num_inference_steps (`int`):
167
+ The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps`
168
+ must be `None`.
169
+ device (`str` or `torch.device`, *optional*):
170
+ The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
171
+ timesteps (`List[int]`, *optional*):
172
+ Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed,
173
+ `num_inference_steps` and `sigmas` must be `None`.
174
+ sigmas (`List[float]`, *optional*):
175
+ Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed,
176
+ `num_inference_steps` and `timesteps` must be `None`.
177
+
178
+ Returns:
179
+ `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the
180
+ second element is the number of inference steps.
181
+ """
182
+ if timesteps is not None and sigmas is not None:
183
+ raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values")
184
+ if timesteps is not None:
185
+ accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
186
+ if not accepts_timesteps:
187
+ raise ValueError(
188
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
189
+ f" timestep schedules. Please check whether you are using the correct scheduler."
190
+ )
191
+ scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs)
192
+ timesteps = scheduler.timesteps
193
+ num_inference_steps = len(timesteps)
194
+ elif sigmas is not None:
195
+ accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
196
+ if not accept_sigmas:
197
+ raise ValueError(
198
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
199
+ f" sigmas schedules. Please check whether you are using the correct scheduler."
200
+ )
201
+ scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs)
202
+ timesteps = scheduler.timesteps
203
+ num_inference_steps = len(timesteps)
204
+ else:
205
+ scheduler.set_timesteps(num_inference_steps, device=device, **kwargs)
206
+ timesteps = scheduler.timesteps
207
+ return timesteps, num_inference_steps
208
+
209
+
210
+
211
+ class OneDiffusionPipeline(DiffusionPipeline):
212
+ r"""
213
+ Pipeline for text-to-image generation using OneDiffuser.
214
+
215
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
216
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
217
+
218
+ Args:
219
+ transformer ([`NextDiT`]):
220
+ Conditional transformer (NextDiT) architecture to denoise the encoded image latents.
221
+ vae ([`AutoencoderKL`]):
222
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
223
+ text_encoder ([`T5EncoderModel`]):
224
+ Frozen text-encoder. OneDiffuser uses the T5 model as text encoder.
225
+ tokenizer (`T5Tokenizer`):
226
+ Tokenizer of class T5Tokenizer.
227
+ scheduler ([`FlowMatchEulerDiscreteScheduler`]):
228
+ A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
229
+ """
230
+
231
+ def __init__(
232
+ self,
233
+ transformer: NextDiT,
234
+ vae: AutoencoderKL,
235
+ text_encoder: T5EncoderModel,
236
+ tokenizer: T5Tokenizer,
237
+ scheduler: FlowMatchEulerDiscreteScheduler,
238
+ ):
239
+ super().__init__()
240
+ self.register_modules(
241
+ transformer=transformer,
242
+ vae=vae,
243
+ text_encoder=text_encoder,
244
+ tokenizer=tokenizer,
245
+ scheduler=scheduler,
246
+ )
247
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
248
+ self.image_processor = VaeImageProcessorOneDiffuser(vae_scale_factor=self.vae_scale_factor)
249
+
250
+ def enable_vae_slicing(self):
251
+ self.vae.enable_slicing()
252
+
253
+ def disable_vae_slicing(self):
254
+ self.vae.disable_slicing()
255
+
256
+ def enable_sequential_cpu_offload(self, gpu_id=0):
257
+ if is_accelerate_available():
258
+ from accelerate import cpu_offload
259
+ else:
260
+ raise ImportError("Please install accelerate via `pip install accelerate`")
261
+
262
+ device = torch.device(f"cuda:{gpu_id}")
263
+
264
+ for cpu_offloaded_model in [self.transformer, self.text_encoder, self.vae]:
265
+ if cpu_offloaded_model is not None:
266
+ cpu_offload(cpu_offloaded_model, device)
267
+
268
+ @property
269
+ def _execution_device(self):
270
+ if self.device != torch.device("meta") or not hasattr(self.transformer, "_hf_hook"):
271
+ return self.device
272
+ for module in self.transformer.modules():
273
+ if (
274
+ hasattr(module, "_hf_hook")
275
+ and hasattr(module._hf_hook, "execution_device")
276
+ and module._hf_hook.execution_device is not None
277
+ ):
278
+ return torch.device(module._hf_hook.execution_device)
279
+ return self.device
280
+
281
+ def encode_prompt(
282
+ self,
283
+ prompt,
284
+ device,
285
+ num_images_per_prompt,
286
+ do_classifier_free_guidance,
287
+ negative_prompt=None,
288
+ max_length=300,
289
+ ):
290
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
291
+
292
+ text_inputs = self.tokenizer(
293
+ prompt,
294
+ padding="max_length",
295
+ max_length=max_length,
296
+ truncation=True,
297
+ add_special_tokens=True,
298
+ return_tensors="pt",
299
+ )
300
+ text_input_ids = text_inputs.input_ids
301
+ attention_mask = text_inputs.attention_mask
302
+
303
+ untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
304
+
305
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
306
+ removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1])
307
+ logger.warning(
308
+ "The following part of your input was truncated because the text encoder can only handle sequences up to"
309
+ f" {max_length} tokens: {removed_text}"
310
+ )
311
+
312
+ text_encoder_output = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask.to(device))
313
+ prompt_embeds = text_encoder_output[0].to(torch.float32)
314
+
315
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
316
+ bs_embed, seq_len, _ = prompt_embeds.shape
317
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
318
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
319
+
320
+ # duplicate attention mask for each generation per prompt
321
+ attention_mask = attention_mask.repeat(1, num_images_per_prompt)
322
+ attention_mask = attention_mask.view(bs_embed * num_images_per_prompt, -1)
323
+
324
+ # get unconditional embeddings for classifier free guidance
325
+ if do_classifier_free_guidance:
326
+ uncond_tokens: List[str]
327
+ if negative_prompt is None:
328
+ uncond_tokens = [""] * batch_size
329
+ elif type(prompt) is not type(negative_prompt):
330
+ raise TypeError(
331
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
332
+ f" {type(prompt)}."
333
+ )
334
+ elif isinstance(negative_prompt, str):
335
+ uncond_tokens = [negative_prompt]
336
+ elif batch_size != len(negative_prompt):
337
+ raise ValueError(
338
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
339
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
340
+ " the batch size of `prompt`."
341
+ )
342
+ else:
343
+ uncond_tokens = negative_prompt
344
+
345
+ max_length = text_input_ids.shape[-1]
346
+ uncond_input = self.tokenizer(
347
+ uncond_tokens,
348
+ padding="max_length",
349
+ max_length=max_length,
350
+ truncation=True,
351
+ return_tensors="pt",
352
+ )
353
+
354
+ uncond_encoder_output = self.text_encoder(uncond_input.input_ids.to(device), attention_mask=uncond_input.attention_mask.to(device))
355
+ negative_prompt_embeds = uncond_encoder_output[0].to(torch.float32)
356
+
357
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
358
+ seq_len = negative_prompt_embeds.shape[1]
359
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
360
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
361
+
362
+ # duplicate unconditional attention mask for each generation per prompt
363
+ uncond_attention_mask = uncond_input.attention_mask.repeat(1, num_images_per_prompt)
364
+ uncond_attention_mask = uncond_attention_mask.view(batch_size * num_images_per_prompt, -1)
365
+
366
+ # For classifier free guidance, we need to do two forward passes.
367
+ # Here we concatenate the unconditional and text embeddings into a single batch
368
+ # to avoid doing two forward passes
369
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
370
+ attention_mask = torch.cat([uncond_attention_mask, attention_mask])
371
+
372
+ return prompt_embeds.to(device), attention_mask.to(device)
373
+
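# Illustrative sketch of the classifier-free-guidance batching used above: the negative and
# positive embeddings are stacked as [uncond; cond] along the batch axis, matching the
# torch.cat([latents] * 2) / noise_pred.chunk(2) pattern in the denoising loops below.
# The tensors here are hypothetical stand-ins for the T5 hidden states.
import torch

cond = torch.randn(2, 300, 4096)
uncond = torch.randn(2, 300, 4096)
stacked = torch.cat([uncond, cond])     # (4, 300, 4096), unconditional half first
first, second = stacked.chunk(2)
assert torch.equal(first, uncond) and torch.equal(second, cond)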
374
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
375
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
376
+ if isinstance(generator, list) and len(generator) != batch_size:
377
+ raise ValueError(
378
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
379
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
380
+ )
381
+
382
+ if latents is None:
383
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
384
+ else:
385
+ latents = latents.to(device)
386
+
387
+ # scale the initial noise by the standard deviation required by the scheduler
388
+ latents = latents * self.scheduler.init_noise_sigma
389
+ return latents
390
+
391
+ @torch.no_grad()
392
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
393
+ def __call__(
394
+ self,
395
+ prompt: Union[str, List[str]] = None,
396
+ height: Optional[int] = None,
397
+ width: Optional[int] = None,
398
+ num_inference_steps: int = 50,
399
+ guidance_scale: float = 5.0,
400
+ negative_prompt: Optional[Union[str, List[str]]] = None,
401
+ num_images_per_prompt: Optional[int] = 1,
402
+ eta: float = 0.0,
403
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
404
+ latents: Optional[torch.FloatTensor] = None,
405
+ output_type: Optional[str] = "pil",
406
+ return_dict: bool = True,
407
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
408
+ callback_steps: int = 1,
409
+ forward_kwargs: Optional[Dict[str, Any]] = {},
410
+ **kwargs,
411
+ ):
412
+ r"""
413
+ Function invoked when calling the pipeline for generation.
414
+
415
+ Args:
416
+ prompt (`str` or `List[str]`, *optional*):
417
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
418
+ height (`int`, *optional*, defaults to self.transformer.config.sample_size):
419
+ The height in pixels of the generated image.
420
+ width (`int`, *optional*, defaults to self.transformer.config.sample_size):
421
+ The width in pixels of the generated image.
422
+ num_inference_steps (`int`, *optional*, defaults to 50):
423
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
424
+ expense of slower inference.
425
+ guidance_scale (`float`, *optional*, defaults to 5.0):
426
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
427
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
428
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
429
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
430
+ usually at the expense of lower image quality.
431
+ negative_prompt (`str` or `List[str]`, *optional*):
432
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
433
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
434
+ less than `1`).
435
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
436
+ The number of images to generate per prompt.
437
+ eta (`float`, *optional*, defaults to 0.0):
438
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
439
+ [`schedulers.DDIMScheduler`], will be ignored for others.
440
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
441
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
442
+ to make generation deterministic.
443
+ latents (`torch.FloatTensor`, *optional*):
444
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
445
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
446
+ tensor will be generated by sampling using the supplied random `generator`.
447
+ output_type (`str`, *optional*, defaults to `"pil"`):
448
+ The output format of the generated image. Choose between
449
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
450
+ return_dict (`bool`, *optional*, defaults to `True`):
451
+ Whether or not to return a [`OneDiffusionPipelineOutput`] instead of a
452
+ plain tuple.
453
+ callback (`Callable`, *optional*):
454
+ A function that will be called every `callback_steps` steps during inference. The function will be
455
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
456
+ callback_steps (`int`, *optional*, defaults to 1):
457
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
458
+ called at every step.
459
+
460
+ Examples:
461
+
462
+ Returns:
463
+ [`OneDiffusionPipelineOutput`] or `tuple`:
464
+ [`OneDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the
465
+ first element is a list with the generated images and the second element is `None`.
468
+ """
469
+ height = height or self.transformer.config.input_size[-2] * 8 # TODO: Hardcoded downscale factor of vae
470
+ width = width or self.transformer.config.input_size[-1] * 8
471
+
472
+ # check inputs. Raise error if not correct
473
+ self.check_inputs(prompt, height, width, callback_steps)
474
+
475
+ # define call parameters
476
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
477
+ device = self._execution_device
478
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
479
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf
480
+ do_classifier_free_guidance = guidance_scale > 1.0
481
+
482
+ encoder_hidden_states, encoder_attention_mask = self.encode_prompt(
483
+ prompt,
484
+ device,
485
+ num_images_per_prompt,
486
+ do_classifier_free_guidance,
487
+ negative_prompt,
488
+ )
489
+
490
+ # set timesteps
491
+ # # self.scheduler.set_timesteps(num_inference_steps, device=device)
492
+ # timesteps = self.scheduler.timesteps
493
+ timesteps = None
494
+
495
+ # prepare latent variables
496
+ num_channels_latents = self.transformer.config.in_channels
497
+ latents = self.prepare_latents(
498
+ batch_size * num_images_per_prompt,
499
+ num_channels_latents,
500
+ height,
501
+ width,
502
+ self.dtype,
503
+ device,
504
+ generator,
505
+ latents,
506
+ )
507
+
508
+ # prepare extra step kwargs
509
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
510
+
511
+ # 5. Prepare timesteps
512
+ sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
513
+ image_seq_len = latents.shape[-1] * latents.shape[-2] / self.transformer.config.patch_size[-1] / self.transformer.config.patch_size[-2]
514
+ mu = calculate_shift(
515
+ image_seq_len,
516
+ self.scheduler.config.base_image_seq_len,
517
+ self.scheduler.config.max_image_seq_len,
518
+ self.scheduler.config.base_shift,
519
+ self.scheduler.config.max_shift,
520
+ )
521
+ timesteps, num_inference_steps = retrieve_timesteps(
522
+ self.scheduler,
523
+ num_inference_steps,
524
+ device,
525
+ timesteps,
526
+ sigmas,
527
+ mu=mu,
528
+ )
529
+ num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
530
+ self._num_timesteps = len(timesteps)
531
+
532
+ # denoising loop
533
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
534
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
535
+ for i, t in enumerate(timesteps):
536
+ # expand the latents if we are doing classifier free guidance
537
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
538
+ # latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
539
+
540
+ # predict the noise residual
541
+ noise_pred = self.transformer(
542
+ samples=latent_model_input.to(self.dtype),
543
+ timesteps=torch.tensor([t] * latent_model_input.shape[0], device=device),
544
+ encoder_hidden_states=encoder_hidden_states.to(self.dtype),
545
+ encoder_attention_mask=encoder_attention_mask,
546
+ **forward_kwargs
547
+ )
548
+
549
+ # perform guidance
550
+ if do_classifier_free_guidance:
551
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
552
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
553
+
554
+ # compute the previous noisy sample x_t -> x_t-1
555
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
556
+
557
+ # call the callback, if provided
558
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
559
+ progress_bar.update()
560
+ if callback is not None and i % callback_steps == 0:
561
+ callback(i, t, latents)
562
+
563
+ # scale and decode the image latents with vae
564
+ latents = 1 / self.vae.config.scaling_factor * latents
565
+ if latents.ndim == 5:
566
+ latents = latents.squeeze(1)
567
+ image = self.vae.decode(latents.to(self.vae.dtype)).sample
568
+
569
+ image = (image / 2 + 0.5).clamp(0, 1)
570
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
571
+
572
+ if output_type == "pil":
573
+ image = self.numpy_to_pil(image)
574
+
575
+ if not return_dict:
576
+ return (image, None)
577
+
578
+ return OneDiffusionPipelineOutput(images=image)
579
+
580
+ @torch.no_grad()
581
+ def img2img(
582
+ self,
583
+ prompt: Union[str, List[str]] = None,
584
+ image: Union[PIL.Image.Image, List[PIL.Image.Image]] = None,
585
+ height: Optional[int] = None,
586
+ width: Optional[int] = None,
587
+ num_inference_steps: int = 50,
588
+ guidance_scale: float = 5.0,
589
+ denoise_mask: Optional[List[int]] = [1, 0],
590
+ negative_prompt: Optional[Union[str, List[str]]] = None,
591
+ num_images_per_prompt: Optional[int] = 1,
592
+ eta: float = 0.0,
593
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
594
+ latents: Optional[torch.FloatTensor] = None,
595
+ output_type: Optional[str] = "pil",
596
+ return_dict: bool = True,
597
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
598
+ callback_steps: int = 1,
599
+ do_crop: bool = True,
600
+ is_multiview: bool = False,
601
+ multiview_azimuths: Optional[List[int]] = [0, 30, 60, 90],
602
+ multiview_elevations: Optional[List[int]] = [0, 0, 0, 0],
603
+ multiview_distances: float = 1.7,
604
+ multiview_c2ws: Optional[List[torch.Tensor]] = None,
605
+ multiview_intrinsics: Optional[torch.Tensor] = None,
606
+ multiview_focal_length: float = 1.3887,
607
+ forward_kwargs: Optional[Dict[str, Any]] = {},
608
+ noise_scale: float = 1.0,
609
+ **kwargs,
610
+ ):
611
+ # Convert single image to list for consistent handling
612
+ if isinstance(image, PIL.Image.Image):
613
+ image = [image]
614
+
615
+ if height is None or width is None:
616
+ closest_ar = get_closest_ratio(height=image[0].size[1], width=image[0].size[0], ratios=ASPECT_RATIO_512)
617
+ height, width = int(closest_ar[0][0]), int(closest_ar[0][1])
618
+
619
+ if not isinstance(multiview_distances, list) and not isinstance(multiview_distances, tuple):
620
+ multiview_distances = [multiview_distances] * len(multiview_azimuths)
621
+
622
+ # height = height or self.transformer.config.input_size[-2] * 8 # TODO: Hardcoded downscale factor of vae
623
+ # width = width or self.transformer.config.input_size[-1] * 8
624
+
625
+ # 1. check inputs. Raise error if not correct
626
+ self.check_inputs(prompt, height, width, callback_steps)
627
+
628
+ # Additional input validation for image list
629
+ if not all(isinstance(img, PIL.Image.Image) for img in image):
630
+ raise ValueError("All elements in image list must be PIL.Image objects")
631
+
632
+ # 2. define call parameters
633
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
634
+ device = self._execution_device
635
+ do_classifier_free_guidance = guidance_scale > 1.0
636
+
637
+ # 3. Encode input prompt
638
+ encoder_hidden_states, encoder_attention_mask = self.encode_prompt(
639
+ prompt,
640
+ device,
641
+ num_images_per_prompt,
642
+ do_classifier_free_guidance,
643
+ negative_prompt,
644
+ )
645
+
646
+ # 4. Preprocess all images
647
+ if image is not None and len(image) > 0:
648
+ processed_image = self.image_processor.preprocess(image, height=height, width=width, do_crop=do_crop)
649
+ else:
650
+ processed_image = None
651
+
652
+ # # Stack processed images along the sequence dimension
653
+ # if len(processed_images) > 1:
654
+ # processed_image = torch.cat(processed_images, dim=0)
655
+ # else:
656
+ # processed_image = processed_images[0]
657
+
658
+ timesteps = None
659
+
660
+ # 6. prepare latent variables
661
+ num_channels_latents = self.transformer.config.in_channels
662
+ if processed_image is not None:
663
+ cond_latents = self.prepare_latents(
664
+ batch_size * num_images_per_prompt,
665
+ num_channels_latents,
666
+ height,
667
+ width,
668
+ self.dtype,
669
+ device,
670
+ generator,
671
+ latents,
672
+ image=processed_image,
673
+ )
674
+ else:
675
+ cond_latents = None
676
+
677
+ # 7. prepare extra step kwargs
678
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
679
+ denoise_mask = torch.tensor(denoise_mask, device=device)
680
+ denoise_indices = torch.where(denoise_mask == 1)[0]
681
+ cond_indices = torch.where(denoise_mask == 0)[0]
682
+ seq_length = denoise_mask.shape[0]
683
+
684
+ latents = self.prepare_init_latents(
685
+ batch_size * num_images_per_prompt,
686
+ seq_length,
687
+ num_channels_latents,
688
+ height,
689
+ width,
690
+ self.dtype,
691
+ device,
692
+ generator,
693
+ )
694
+
695
+ # 5. Prepare timesteps
696
+ sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
697
+ # image_seq_len = latents.shape[1] * latents.shape[-1] * latents.shape[-2] / self.transformer.config.patch_size[-1] / self.transformer.config.patch_size[-2]
698
+ image_seq_len = noise_scale * sum(denoise_mask) * latents.shape[-1] * latents.shape[-2] / self.transformer.config.patch_size[-1] / self.transformer.config.patch_size[-2]
699
+ # image_seq_len = 256
700
+ mu = calculate_shift(
701
+ image_seq_len,
702
+ self.scheduler.config.base_image_seq_len,
703
+ self.scheduler.config.max_image_seq_len,
704
+ self.scheduler.config.base_shift,
705
+ self.scheduler.config.max_shift,
706
+ )
707
+ timesteps, num_inference_steps = retrieve_timesteps(
708
+ self.scheduler,
709
+ num_inference_steps,
710
+ device,
711
+ timesteps,
712
+ sigmas,
713
+ mu=mu,
714
+ )
715
+ num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
716
+ self._num_timesteps = len(timesteps)
717
+
718
+ if is_multiview:
719
+ raise Exception('Multiview is not supported in this demo.')
720
+ cond_indices_images = [index // 2 for index in cond_indices if index % 2 == 0]
721
+ cond_indices_rays = [index // 2 for index in cond_indices if index % 2 == 1]
722
+
723
+ multiview_elevations = [element for element in multiview_elevations if element is not None]
724
+ multiview_azimuths = [element for element in multiview_azimuths if element is not None]
725
+ multiview_distances = [element for element in multiview_distances if element is not None]
726
+
727
+ if multiview_c2ws is None:
728
+ multiview_c2ws = [
729
+ torch.tensor(create_c2w_matrix(azimuth, elevation, distance)) for azimuth, elevation, distance in zip(multiview_azimuths, multiview_elevations, multiview_distances)
730
+ ]
731
+ c2ws = torch.stack(multiview_c2ws).float()
732
+ else:
733
+ c2ws = torch.Tensor(multiview_c2ws).float()
734
+
735
+ c2ws[:, 0:3, 1:3] *= -1
736
+ c2ws = c2ws[:, [1, 0, 2, 3], :]
737
+ c2ws[:, 2, :] *= -1
738
+
739
+ w2cs = torch.inverse(c2ws)
740
+ if multiview_intrinsics is None:
741
+ multiview_intrinsics = torch.Tensor([[[multiview_focal_length, 0, 0.5], [0, multiview_focal_length, 0.5], [0, 0, 1]]]).repeat(c2ws.shape[0], 1, 1)
742
+ K = multiview_intrinsics
743
+ Rs = w2cs[:, :3, :3]
744
+ Ts = w2cs[:, :3, 3]
745
+ sizes = torch.Tensor([[1, 1]]).repeat(c2ws.shape[0], 1)
746
+
747
+ assert height == width
748
+ cond_rays = calculate_rays(K, sizes, Rs, Ts, height // 8)
749
+ cond_rays = cond_rays.reshape(-1, height // 8, width // 8, 6)
750
+ # padding = (0, 10)
751
+ # cond_rays = torch.nn.functional.pad(cond_rays, padding, "constant", 0)
752
+ cond_rays = torch.cat([cond_rays, cond_rays, cond_rays[..., :4]], dim=-1) * 1.658
753
+ cond_rays = cond_rays[None].repeat(batch_size * num_images_per_prompt, 1, 1, 1, 1)
754
+ cond_rays = cond_rays.permute(0, 1, 4, 2, 3)
755
+ cond_rays = cond_rays.to(device, dtype=self.dtype)
756
+
757
+ latents = einops.rearrange(latents, "b (f n) c h w -> b f n c h w", n=2)
758
+ if cond_latents is not None:
759
+ latents[:, cond_indices_images, 0] = cond_latents
760
+ latents[:, cond_indices_rays, 1] = cond_rays
761
+ latents = einops.rearrange(latents, "b f n c h w -> b (f n) c h w")
762
+ else:
763
+ if cond_latents is not None:
764
+ latents[:, cond_indices] = cond_latents
765
+
766
+ # denoising loop
767
+ num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
768
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
769
+ for i, t in enumerate(timesteps):
770
+ # expand the latents if we are doing classifier free guidance
771
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
772
+ input_t = torch.broadcast_to(einops.repeat(torch.Tensor([t]).to(device), "1 -> 1 f 1 1 1", f=latent_model_input.shape[1]), latent_model_input.shape).clone()
773
+
774
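+ # Conditioning positions receive the final (lowest-noise) timestep so the model treats them as clean inputs.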
+ if is_multiview:
775
+ input_t = einops.rearrange(input_t, "b (f n) c h w -> b f n c h w", n=2)
776
+ input_t[:, cond_indices_images, 0] = self.scheduler.timesteps[-1]
777
+ input_t[:, cond_indices_rays, 1] = self.scheduler.timesteps[-1]
778
+ input_t = einops.rearrange(input_t, "b f n c h w -> b (f n) c h w")
779
+ else:
780
+ input_t[:, cond_indices] = self.scheduler.timesteps[-1]
781
+
782
+ # predict the noise residual
783
+ noise_pred = self.transformer(
784
+ samples=latent_model_input.to(self.dtype),
785
+ timesteps=input_t,
786
+ encoder_hidden_states=encoder_hidden_states.to(self.dtype),
787
+ encoder_attention_mask=encoder_attention_mask,
788
+ **forward_kwargs
789
+ )
790
+
791
+ # perform guidance
792
+ if do_classifier_free_guidance:
793
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
794
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
795
+
796
+ # compute the previous noisy sample x_t -> x_t-1
797
+ bs, n_frame = noise_pred.shape[:2]
798
+ noise_pred = einops.rearrange(noise_pred, "b f c h w -> (b f) c h w")
799
+ latents = einops.rearrange(latents, "b f c h w -> (b f) c h w")
800
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
801
+ latents = einops.rearrange(latents, "(b f) c h w -> b f c h w", b=bs, f=n_frame)
802
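+ # Re-impose the conditioning latents after each scheduler step so they remain unchanged across denoising.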
+ if is_multiview:
803
+ latents = einops.rearrange(latents, "b (f n) c h w -> b f n c h w", n=2)
804
+ if cond_latents is not None:
805
+ latents[:, cond_indices_images, 0] = cond_latents
806
+ latents[:, cond_indices_rays, 1] = cond_rays
807
+ latents = einops.rearrange(latents, "b f n c h w -> b (f n) c h w")
808
+ else:
809
+ if cond_latents is not None:
810
+ latents[:, cond_indices] = cond_latents
811
+
812
+ # call the callback, if provided
813
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
814
+ progress_bar.update()
815
+ if callback is not None and i % callback_steps == 0:
816
+ callback(i, t, latents)
817
+
818
+ decoded_latents = latents / 1.658
819
+ # scale and decode the image latents with vae
820
+ latents = 1 / self.vae.config.scaling_factor * latents
821
+ if latents.ndim == 5:
822
+ latents = latents[:, denoise_indices]
823
+ latents = einops.rearrange(latents, "b f c h w -> (b f) c h w")
824
+ image = self.vae.decode(latents.to(self.vae.dtype)).sample
825
+
826
+ image = (image / 2 + 0.5).clamp(0, 1)
827
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
828
+
829
+ if output_type == "pil":
830
+ image = self.numpy_to_pil(image)
831
+
832
+ if not return_dict:
833
+ return (image, None)
834
+
835
+ return OneDiffusionPipelineOutput(images=image, latents=decoded_latents)
836
+
837
+ def prepare_extra_step_kwargs(self, generator, eta):
838
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
839
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
840
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
841
+ # and should be between [0, 1]
842
+
843
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
844
+ extra_step_kwargs = {}
845
+ if accepts_eta:
846
+ extra_step_kwargs["eta"] = eta
847
+
848
+ # check if the scheduler accepts generator
849
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
850
+ if accepts_generator:
851
+ extra_step_kwargs["generator"] = generator
852
+ return extra_step_kwargs
853
+
854
+ def check_inputs(self, prompt, height, width, callback_steps):
855
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
856
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
857
+
858
+ if height % 16 != 0 or width % 16 != 0:
859
+ raise ValueError(f"`height` and `width` have to be divisible by 16 but are {height} and {width}.")
860
+
861
+ if (callback_steps is None) or (
862
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
863
+ ):
864
+ raise ValueError(
865
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
866
+ f" {type(callback_steps)}."
867
+ )
868
+
869
+ def get_timesteps(self, num_inference_steps, strength, device):
870
+ # get the original timestep using init_timestep
871
+ init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
872
+
873
+ t_start = max(num_inference_steps - init_timestep, 0)
874
+ timesteps = self.scheduler.timesteps[t_start:]
875
+
876
+ return timesteps, num_inference_steps - t_start
877
+
878
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None, image=None):
879
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
880
+ if isinstance(generator, list) and len(generator) != batch_size:
881
+ raise ValueError(
882
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
883
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
884
+ )
885
+
886
+ if latents is None:
887
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
888
+ else:
889
+ latents = latents.to(device)
890
+
891
+ if image is None:
892
+ # scale the initial noise by the standard deviation required by the scheduler
893
+ # latents = latents * self.scheduler.init_noise_sigma
894
+ return latents
895
+
896
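+ # Encode the conditioning image(s) with the VAE and scale by the VAE scaling factor to form conditioning latents.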
+ image = image.to(device=device, dtype=dtype)
897
+
898
+ if isinstance(generator, list) and len(generator) != batch_size:
899
+ raise ValueError(
900
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
901
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
902
+ )
903
+ elif isinstance(generator, list):
904
+ if image.shape[0] < batch_size and batch_size % image.shape[0] == 0:
905
+ image = torch.cat([image] * (batch_size // image.shape[0]), dim=0)
906
+ elif image.shape[0] < batch_size and batch_size % image.shape[0] != 0:
907
+ raise ValueError(
908
+ f"Cannot duplicate `image` of batch size {image.shape[0]} to effective batch_size {batch_size} "
909
+ )
910
+ init_latents = [
911
+ retrieve_latents(self.vae.encode(image[i : i + 1]), generator=generator[i])
912
+ for i in range(batch_size)
913
+ ]
914
+ init_latents = torch.cat(init_latents, dim=0)
915
+ else:
916
+ init_latents = retrieve_latents(self.vae.encode(image.to(self.vae.dtype)), generator=generator)
917
+
918
+ init_latents = self.vae.config.scaling_factor * init_latents
919
+ init_latents = init_latents.to(device=device, dtype=dtype)
920
+
921
+ init_latents = einops.rearrange(init_latents, "(bs views) c h w -> bs views c h w", bs=batch_size, views=init_latents.shape[0]//batch_size)
922
+ # latents = einops.rearrange(latents, "b c h w -> b 1 c h w")
923
+ # latents = torch.concat([latents, init_latents], dim=1)
924
+ return init_latents
925
+
926
+ def prepare_init_latents(self, batch_size, seq_length, num_channels_latents, height, width, dtype, device, generator, latents=None):
927
+ shape = (batch_size, seq_length, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
928
+ if isinstance(generator, list) and len(generator) != batch_size:
929
+ raise ValueError(
930
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
931
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
932
+ )
933
+
934
+ if latents is None:
935
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
936
+ else:
937
+ latents = latents.to(device)
938
+
939
+ return latents
940
+
941
+ @torch.no_grad()
942
+ def generate(
943
+ self,
944
+ prompt: Union[str, List[str]],
945
+ num_inference_steps: int = 50,
946
+ guidance_scale: float = 5.0,
947
+ negative_prompt: Optional[Union[str, List[str]]] = None,
948
+ num_images_per_prompt: Optional[int] = 1,
949
+ height: Optional[int] = None,
950
+ width: Optional[int] = None,
951
+ eta: float = 0.0,
952
+ generator: Optional[torch.Generator] = None,
953
+ latents: Optional[torch.FloatTensor] = None,
954
+ output_type: Optional[str] = "pil",
955
+ return_dict: bool = True,
956
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
957
+ callback_steps: Optional[int] = 1,
958
+ ):
959
+ """
960
+ Function for image generation using the OneDiffusionPipeline.
961
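+
+ Example (illustrative sketch; the checkpoint path and device placement are placeholders):
+ pipe = OneDiffusionPipeline.from_pretrained("path/to/onediffusion").to("cuda")
+ output = pipe.generate(prompt="a photo of a cat", height=1024, width=1024)
+ output.images[0].save("cat.png")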
+ """
962
+ return self(
963
+ prompt=prompt,
964
+ num_inference_steps=num_inference_steps,
965
+ guidance_scale=guidance_scale,
966
+ negative_prompt=negative_prompt,
967
+ num_images_per_prompt=num_images_per_prompt,
968
+ height=height,
969
+ width=width,
970
+ eta=eta,
971
+ generator=generator,
972
+ latents=latents,
973
+ output_type=output_type,
974
+ return_dict=return_dict,
975
+ callback=callback,
976
+ callback_steps=callback_steps,
977
+ )
978
+
979
+ @staticmethod
980
+ def numpy_to_pil(images):
981
+ """
982
+ Convert a numpy image or a batch of images to a PIL image.
983
+ """
984
+ if images.ndim == 3:
985
+ images = images[None, ...]
986
+ images = (images * 255).round().astype("uint8")
987
+ if images.shape[-1] == 1:
988
+ # special case for grayscale (single channel) images
989
+ pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
990
+ else:
991
+ pil_images = [Image.fromarray(image) for image in images]
992
+
993
+ return pil_images
994
+
995
+ @classmethod
996
+ def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
997
+ model_path = pretrained_model_name_or_path
998
+ cache_dir = kwargs.pop("cache_dir", None)
999
+ force_download = kwargs.pop("force_download", False)
1000
+ proxies = kwargs.pop("proxies", None)
1001
+ local_files_only = kwargs.pop("local_files_only", None)
1002
+ token = kwargs.pop("token", None)
1003
+ revision = kwargs.pop("revision", None)
1004
+ from_flax = kwargs.pop("from_flax", False)
1005
+ torch_dtype = kwargs.pop("torch_dtype", None)
1006
+ custom_pipeline = kwargs.pop("custom_pipeline", None)
1007
+ custom_revision = kwargs.pop("custom_revision", None)
1008
+ provider = kwargs.pop("provider", None)
1009
+ sess_options = kwargs.pop("sess_options", None)
1010
+ device_map = kwargs.pop("device_map", None)
1011
+ max_memory = kwargs.pop("max_memory", None)
1012
+ offload_folder = kwargs.pop("offload_folder", None)
1013
+ offload_state_dict = kwargs.pop("offload_state_dict", False)
1014
+ low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
1015
+ variant = kwargs.pop("variant", None)
1016
+ use_safetensors = kwargs.pop("use_safetensors", None)
1017
+ use_onnx = kwargs.pop("use_onnx", None)
1018
+ load_connected_pipeline = kwargs.pop("load_connected_pipeline", False)
1019
+
1020
+ if low_cpu_mem_usage and not is_accelerate_available():
1021
+ low_cpu_mem_usage = False
1022
+ logger.warning(
1023
+ "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the"
1024
+ " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install"
1025
+ " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip"
1026
+ " install accelerate\n```\n."
1027
+ )
1028
+
1029
+ if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"):
1030
+ raise NotImplementedError(
1031
+ "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set"
1032
+ " `low_cpu_mem_usage=False`."
1033
+ )
1034
+
1035
+ if device_map is not None and not is_torch_version(">=", "1.9.0"):
1036
+ raise NotImplementedError(
1037
+ "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set"
1038
+ " `device_map=None`."
1039
+ )
1040
+
1041
+ if device_map is not None and not is_accelerate_available():
1042
+ raise NotImplementedError(
1043
+ "Using `device_map` requires the `accelerate` library. Please install it using: `pip install accelerate`."
1044
+ )
1045
+
1046
+ if device_map is not None and not isinstance(device_map, str):
1047
+ raise ValueError("`device_map` must be a string.")
1048
+
1049
+ if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
1050
+ raise NotImplementedError(
1051
+ f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}"
1052
+ )
1053
+
1054
+ if device_map is not None and device_map in SUPPORTED_DEVICE_MAP:
1055
+ if is_accelerate_version("<", "0.28.0"):
1056
+ raise NotImplementedError("Device placement requires `accelerate` version `0.28.0` or later.")
1057
+
1058
+ if low_cpu_mem_usage is False and device_map is not None:
1059
+ raise ValueError(
1060
+ f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and"
1061
+ " dispatching. Please make sure to set `low_cpu_mem_usage=True`."
1062
+ )
1063
+
1064
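+ # Load each sub-model from its subfolder: the transformer is kept in float32 while the T5 text encoder is loaded in float16.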
+ transformer = NextDiT.from_pretrained(f"{model_path}", subfolder="transformer", torch_dtype=torch.float32, cache_dir=cache_dir)
1065
+ vae = AutoencoderKL.from_pretrained(f"{model_path}", subfolder="vae", cache_dir=cache_dir)
1066
+ text_encoder = T5EncoderModel.from_pretrained(f"{model_path}", subfolder="text_encoder", torch_dtype=torch.float16, cache_dir=cache_dir)
1067
+ tokenizer = T5Tokenizer.from_pretrained(model_path, subfolder="tokenizer", cache_dir=cache_dir)
1068
+ scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(model_path, subfolder="scheduler", cache_dir=cache_dir)
1069
+
1070
+ pipeline = cls(
1071
+ transformer=transformer,
1072
+ vae=vae,
1073
+ text_encoder=text_encoder,
1074
+ tokenizer=tokenizer,
1075
+ scheduler=scheduler,
1076
+ **kwargs
1077
+ )
1078
+
1079
+ return pipeline