aashish1904 committed • cd24305
Parent(s): 21f3834
Upload README.md with huggingface_hub

README.md (ADDED)
---
license: other
license_name: kohaku-license-1.0
datasets:
- laion/conceptual-captions-12m-webdataset
- CaptionEmporium/coyo-hd-11m-llavanext
- KBlueLeaf/danbooru2023-metadata-database
- graph-based-captions/GBC10M
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/TIPO-500M-GGUF

This is a quantized version of [KBlueLeaf/TIPO-500M](https://huggingface.co/KBlueLeaf/TIPO-500M) created using llama.cpp.
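Since this repo ships GGUF quants, one way to run them locally is llama.cpp's CLI; a minimal sketch (the exact `.gguf` filename and the prompt are illustrative assumptions — use a file actually listed in this repo, and see the tech report for the prompt format TIPO expects):

```shell
# Fetch one quant from this repo (filename is illustrative)
huggingface-cli download QuantFactory/TIPO-500M-GGUF TIPO-500M.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (build llama.cpp first)
./llama-cli -m TIPO-500M.Q4_K_M.gguf -p "scenery" -n 256
```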

# Original Model Card

# TIPO: Text to Image with text presampling for Prompt Optimization

A 500M-parameter LLaMA-architecture model trained for TIPO.<br>
Tech Report: https://hackmd.io/@KBlueLeaf/BJULOQBR0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630593e2fca1d8d92b81d2a1/fc9ovmARapQmgq9DZ7ApJ.png)

## Introduction

In this project, we introduce "TIPO" (**T**ext to **I**mage with text presampling for **P**rompt **O**ptimization), a framework designed to significantly enhance the quality and usability of Text-to-Image (T2I) generative models. TIPO uses Large Language Models (LLMs) to perform "Text Presampling" within the inference pipeline of text-to-image generative modeling. By refining and extending user input prompts, TIPO enables generative models to produce superior results with minimal user effort, making T2I systems more accessible and effective for a wider range of users.

## Usage

Use the updated version of the DTG extension (renamed to z-tipo-extension). The current version of z-tipo-extension supports stable-diffusion-webui, stable-diffusion-webui-forge, and ComfyUI. SD-Next has not been tested.
https://github.com/KohakuBlueleaf/z-tipo-extension

## Model arch and Training

This model uses the LLaMA architecture with 500M parameters; the training data is a combined version of Danbooru2023, GBC10M, and Coyo-HD-11M.<br>
The total number of tokens seen is around 30B.<br>
For more information, please refer to the tech report and the following table.

|                   | TIPO-200M | TIPO-500M |
| ----------------- | --------- | --------- |
| Arch              | LLaMA | LLaMA |
| Max ctx length    | 1024 | 1024 |
| Batch Size        | 2048 | 3584 |
| Training dataset  | Danbooru, GBC10M, 5 epoch<br />Danbooru, GBC10M, Coyo11M, 3 epoch | Danbooru, GBC10M, Coyo11M, 5 epoch |
| Real Token Seen*  | 40B token | 30B token |
| Training Hardware | RTX 3090 x 4 | H100 x 8 |
| Training Time     | 420 hour` | 100 hour` |
| URL               | [KBlueLeaf/TIPO-200M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M) | [KBlueLeaf/TIPO-500M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-500M) |

*: We count only non-padding tokens as "tokens seen", since the training data has a very wide range of lengths.<br/>
`: Since the training data is fairly short, it takes more time to reach the same number of tokens seen than in general LLM pretraining.<br/>
For reference, with a 4096 max ctx length and almost all of the data reaching that length, a 200M model may need only 2 days to see 10B tokens on RTX 3090 x 4.
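The "non-padding token" count above comes straight from the padded batches; a minimal sketch of the idea (pad id and batch values are illustrative, not from the actual training code):

```python
import numpy as np

PAD_ID = 0  # illustrative pad token id; the real tokenizer defines its own

# toy batch of two sequences, right-padded to length 6
batch = np.array([
    [5, 12, 7, 0, 0, 0],   # 3 real tokens
    [3, 9, 4, 8, 2, 0],    # 5 real tokens
])

# count every position that is not padding
non_padding_tokens = int((batch != PAD_ID).sum())
print(non_padding_tokens)  # 8
```

Summing this count over all training batches gives the "Real Token Seen" figure, as opposed to multiplying batch size by max context length.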

### Evaluation

We evaluated TIPO on several metrics:

#### 1. Aesthetic Score (Higher is Better)

We compute the Aesthetic Score using the **Aesthetic Predictor V2.5**. This metric is calculated on the short/truncated long test.

![Aesthetic Score Distribution](https://hackmd.io/_uploads/HkJphkSCA.png)

*Figure 1: Aesthetic Score distribution.*

#### 2. AI Corrupt Score (Higher is Better)

The AI Corrupt Score is obtained from the **AICorruptMetrics** in **sdeval**.

This metric is calculated on the short/truncated long test.

![AI Corrupt Score Distribution](https://hackmd.io/_uploads/SJlktvE0R.png)

*Figure 2: AI Corrupt Score distribution.*

#### 3. Frechet Dino Distance (FDD) on Scenery Tag Test

We use FDD on the Scenery Tag Test to show that when input prompts target a narrower distribution, the model struggles to generate images that reflect the true distribution. With **TIPO**, this issue is mitigated.

| FDD Model    | `<meta> scenery` only | `<meta> scenery` + TIPO |
|--------------|-----------------------|-------------------------|
| DinoV2 ViT-S | 0.1917                | **0.1786**              |
| DinoV2 ViT-B | 0.2002                | **0.1755**              |
| DinoV2 ViT-L | 0.2017                | **0.1863**              |
| DinoV2 ViT-G | 0.2359                | **0.2096**              |

*Table 1: Frechet Dino Distance (FDD) on Scenery Tag Test.*
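Fréchet distances like those above reduce to a closed form on Gaussian fits of two feature sets: ||μ₁-μ₂||² + Tr(Σ₁+Σ₂-2(Σ₁Σ₂)^½). A minimal sketch of that computation (synthetic Gaussian features stand in for the DINOv2 embeddings used in the actual test):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Squared Fréchet distance between Gaussian fits of two feature sets (rows = samples)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
same = rng.normal(size=(512, 8))               # stand-in for reference image features
close = rng.normal(size=(512, 8))              # same distribution, different sample
shifted = rng.normal(loc=1.0, size=(512, 8))   # mismatched distribution

# a matched distribution scores lower than a shifted one
print(frechet_distance(same, close) < frechet_distance(same, shifted))
```

Lower is better: a generator whose outputs match the real feature distribution drives both the mean and covariance terms toward zero, which is why the TIPO column in Table 1 wins across all DinoV2 backbones.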

## LICENSE

This model is released under [Kohaku License 1.0](https://kblueleaf.net/documents/kohaku-license/?[Your%20Organization/Name]=KohakuBlueLeaf&[Year]=2024).<br>
You can check the URL provided above or the LICENSE file in this repo.

### Citation
```bibtex
@misc{yeh2024tipo,
      title  = {TIPO: Text to Image with text presampling for Prompt Optimization},
      author = {Yeh, Shih-Ying},
      year   = {2024},
      month  = {9},
      day    = {29},
      note   = {Technical report available at \url{https://hackmd.io/@KBlueLeaf/BJULOQBR0}.
                Model available at \url{https://huggingface.co/KBlueLeaf/TIPO-500M}.
                Source code available at \url{https://github.com/KohakuBlueleaf/KGen}},
}
```