<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
<br>
</p>
# Diffusers
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).

The library has three main components:

- State-of-the-art [diffusion pipelines](api/pipelines/overview) for inference with just a few lines of code.
- Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality.
- Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, to create your own end-to-end diffusion systems (a minimal sketch of all three follows this list).
<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">결과물을 μƒμ„±ν•˜κ³ , λ‚˜λ§Œμ˜ diffusion μ‹œμŠ€ν…œμ„ κ΅¬μΆ•ν•˜κ³ , ν™•μ‚° λͺ¨λΈμ„ ν›ˆλ ¨ν•˜λŠ” 데 ν•„μš”ν•œ κΈ°λ³Έ κΈ°μˆ μ„ λ°°μ›Œλ³΄μ„Έμš”. πŸ€— Diffusersλ₯Ό 처음 μ‚¬μš©ν•˜λŠ” 경우 μ—¬κΈ°μ—μ„œ μ‹œμž‘ν•˜λŠ” 것이 μ’‹μŠ΅λ‹ˆλ‹€!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">νŒŒμ΄ν”„λΌμΈ, λͺ¨λΈ, μŠ€μΌ€μ€„λŸ¬λ₯Ό λ‘œλ“œν•˜λŠ” 데 도움이 λ˜λŠ” μ‹€μš©μ μΈ κ°€μ΄λ“œμž…λ‹ˆλ‹€. λ˜ν•œ νŠΉμ • μž‘μ—…μ— νŒŒμ΄ν”„λΌμΈμ„ μ‚¬μš©ν•˜κ³ , 좜λ ₯ 생성 방식을 μ œμ–΄ν•˜κ³ , μΆ”λ‘  속도에 맞게 μ΅œμ ν™”ν•˜κ³ , λ‹€μ–‘ν•œ ν•™μŠ΅ 기법을 μ‚¬μš©ν•˜λŠ” 방법도 배울 수 μžˆμŠ΅λ‹ˆλ‹€.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">λΌμ΄λΈŒλŸ¬λ¦¬κ°€ μ™œ 이런 λ°©μ‹μœΌλ‘œ μ„€κ³„λ˜μ—ˆλŠ”μ§€ μ΄ν•΄ν•˜κ³ , 라이브러리 μ΄μš©μ— λŒ€ν•œ 윀리적 κ°€μ΄λ“œλΌμΈκ³Ό μ•ˆμ „ κ΅¬ν˜„μ— λŒ€ν•΄ μžμ„Ένžˆ μ•Œμ•„λ³΄μ„Έμš”.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
<p class="text-gray-700">πŸ€— Diffusers 클래슀 및 λ©”μ„œλ“œμ˜ μž‘λ™ 방식에 λŒ€ν•œ 기술 μ„€λͺ….</p>
</a>
</div>
</div>
## Supported pipelines
| Pipeline | Paper/Repository | Tasks |
|---|---|:---:|
| [alt_diffusion](./api/pipelines/alt_diffusion) | [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
| [audio_diffusion](./api/pipelines/audio_diffusion) | [Audio Diffusion](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation |
| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation |
| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
| [dance_diffusion](./api/pipelines/dance_diffusion) | [Dance Diffusion](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
| [ddpm](./api/pipelines/ddpm) | [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
| [ddim](./api/pipelines/ddim) | [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
| [if](./api/pipelines/if) | [**IF**](./api/pipelines/if) | Image Generation |
| [if_img2img](./api/pipelines/if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
| [if_inpainting](./api/pipelines/if) | [**IF**](./api/pipelines/if) | Text-Guided Image Inpainting |
| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
| [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
| [paint_by_example](./api/pipelines/paint_by_example) | [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
| [pndm](./api/pipelines/pndm) | [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
| [score_sde_ve](./api/pipelines/score_sde_ve) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [score_sde_vp](./api/pipelines/score_sde_vp) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
| [semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [Semantic Guidance](https://arxiv.org/abs/2301.12247) | Text-Guided Generation |
| [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation |
| [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation |
| [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting |
| [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [MultiDiffusion](https://multidiffusion.github.io/) | Text-to-Panorama Generation |
| [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | Text-Guided Image Editing|
| [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [Zero-shot Image-to-Image Translation](https://pix2pixzero.github.io/) | Text-Guided Image Editing |
| [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
| [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation, Unconditional Image Generation |
| [stable_diffusion_image_variation](./api/pipelines/stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
| [stable_diffusion_latent_upscale](./api/pipelines/stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation |
| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [Safe Stable Diffusion](https://arxiv.org/abs/2211.05105) | Text-Guided Generation |
| [stable_unclip](./api/pipelines/stable_unclip) | Stable unCLIP | Text-to-Image Generation |
| [stable_unclip](./api/pipelines/stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation |
| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) (implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |