KBlueLeaf committed on
Commit 6bd2441
1 Parent(s): 6c9fc1d

Update README.md

Files changed (1)
  1. README.md +39 -40
README.md CHANGED
@@ -27,58 +27,57 @@ Use the updated version of the DTG extension (renamed to z-tipo-extension); the current version of the TIPO extension supports stable-diffusion-webui and ComfyUI.
 https://github.com/KohakuBlueleaf/z-tipo-extension
 
 ## Model arch and Training
- This model uses the LLaMA architecture with 200M parameters; the training data is a combined version of Danbooru2023, GBC10M and Coyo-HD-11M.<br>
- The total number of tokens seen is around 40B.<br>
 For more information please refer to the tech report and the following table.
 
- | | TIPO-200M | TIPO-500M |
- | ----------------- | ----------------- | ----------------- |
- | Arch | LLaMA | LLaMA |
- | Max ctx length | 1024 | 1024 |
- | Batch Size | 2048 | 3584 |
- | Training dataset | Danbooru, GBC10M, 5 epochs<br />Danbooru, GBC10M, Coyo11M, 3 epochs | Danbooru, GBC10M, Coyo11M, 5 epochs |
- | Real Token Seen* | 40B tokens | 30B tokens |
- | Training Hardware | RTX 3090 x 4 | H100 x 8 |
- | Training Time | 420 hours` | 100 hours` |
- | URL | [KBlueLeaf/TIPO-200M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M) | [KBlueLeaf/TIPO-500M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-500M) |
-
- *: We only count "non-padding tokens" in the tokens seen, since the training data have a very large length range. <br/>
- `: Since the training data is pretty short, it costs more time to reach the same number of tokens seen than in general LLM pretraining. <br/>
 For reference, with 4096 as the max ctx length and almost all data reaching that length, you may only need 2 days to reach 10B tokens seen on RTX 3090 x 4 with a 200M model.
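The "non-padding token" accounting in the footnote above can be illustrated with a short script. A minimal sketch, assuming a Hugging Face tokenizer loaded from this repo; the captions and batch settings are placeholders, not the actual training pipeline:

```python
# Sketch: count only non-padding tokens across a batch of captions.
# Assumption: the repo ships a tokenizer usable via AutoTokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KBlueLeaf/TIPO-200M")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS as PAD

captions = [
    "1girl, scenery, night sky",
    "a wide mountain landscape at dusk with a small cabin by the lake",
]

batch = tokenizer(
    captions,
    padding=True,        # pad to the longest caption in the batch
    truncation=True,
    max_length=1024,     # max ctx length from the table above
    return_tensors="pt",
)

# attention_mask is 1 for real tokens and 0 for padding, so its sum is
# exactly the "non-padding tokens seen" for this batch.
non_padding_tokens = int(batch["attention_mask"].sum())
total_slots = batch["input_ids"].numel()
print(f"non-padding tokens: {non_padding_tokens} / {total_slots} slots")
```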
 
 ### Evaluation
- We have tested TIPO with several metrics:
-
- #### 1. Aesthetic Score (Higher is Better)
-
- We compute the Aesthetic Score using the **Aesthetic Predictor V2.5**. This metric is calculated on the short/truncated-long test.
-
- ![Aesthetic Score Distribution](https://hackmd.io/_uploads/HkJphkSCA.png)
-
- *Figure 1: Aesthetic Score distribution.*
-
- #### 2. AI Corrupt Score (Higher is Better)
-
- The AI Corrupt Score is obtained from the **AICorruptMetrics** in **sdeval**.
-
- This metric is calculated on the short/truncated-long test.
-
- ![AI Corrupt Score Distribution](https://hackmd.io/_uploads/SJlktvE0R.png)
-
- *Figure 2: AI Corrupt Score distribution.*
-
- #### 3. Frechet Dino Distance (FDD) on Scenery Tag Test
-
- We use FDD on the Scenery Tag Test to demonstrate that when input prompts cover only a narrow distribution, the model struggles to generate images that reflect the true distribution. With **TIPO**, this issue is mitigated.
-
- | FDD Model | `<meta> scenery` only | `<meta> scenery` + TIPO |
- | ---------------- | --------------------- | ----------------------- |
- | DinoV2 ViT-S | 0.1917 | **0.1786** |
- | DinoV2 ViT-B | 0.2002 | **0.1755** |
- | DinoV2 ViT-L | 0.2017 | **0.1863** |
- | DinoV2 ViT-G | 0.2359 | **0.2096** |
-
- *Table 1: Frechet Dino Distance (FDD) on Scenery Tag Test.*
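As the name says, FDD is a Fréchet distance computed on DINOv2 image features. A minimal sketch of that computation, assuming the public `facebookresearch/dinov2` torch.hub checkpoints and ImageNet-normalized image tensors; the exact preprocessing and sample counts behind Table 1 are not specified here:

```python
# Sketch: Frechet distance between DINOv2 feature distributions of two image sets.
# FD(X, Y) = ||mu_x - mu_y||^2 + Tr(C_x + C_y - 2 * (C_x C_y)^(1/2))
import numpy as np
import torch
from scipy import linalg

# Assumption: the official DINOv2 ViT-S/14 from torch.hub; the other DinoV2
# variants in Table 1 would be used the same way.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

@torch.no_grad()
def dino_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224), normalized with ImageNet mean/std."""
    return model(images).cpu().numpy()  # (N, feature_dim) CLS features

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # strip numerical noise from sqrtm
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Usage: feats from reference images vs. feats from generated images.
# fdd = frechet_distance(dino_features(ref_batch), dino_features(gen_batch))
```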
 
 
 
 
 ## LICENSE
  This model is released under [Kohaku License 1.0](https://kblueleaf.net/documents/kohaku-license/?[Your%20Organization/Name]=KohakuBlueLeaf&[Year]=2024)<br>
 
 https://github.com/KohakuBlueleaf/z-tipo-extension
 
 ## Model arch and Training
+
+ This model uses the LLaMA architecture with 200M parameters; the training data is a combined version of Danbooru2023 and Coyo-HD-11M. <br>
+ The total number of tokens seen is around 50B. <br>
 For more information please refer to the tech report and the following table.
 
+ | | TIPO-200M | TIPO-200M-ft | TIPO-500M |
+ | ----------------- | ----------------- | ----------------- | ----------------- |
+ | Arch | LLaMA | LLaMA | LLaMA |
+ | Max ctx length | 1024 | 1024 | 1024 |
+ | Batch Size | 2048 | 2048 | 3584 |
+ | Training dataset | Danbooru, GBC10M, 5 epochs<br />Danbooru, GBC10M, Coyo11M, 3 epochs | Danbooru (pixtral), Coyo11M, 2 epochs | Danbooru, GBC10M, Coyo11M, 5 epochs |
+ | Real Token Seen* | 40B tokens | 50B tokens (10B more on top of TIPO-200M) | 30B tokens |
+ | Training Hardware | RTX 3090 x 4 | RTX 3090 x 4 | H100 x 8 |
+ | Training Time | 420 hours` | 120 hours` | 100 hours` |
+ | Huggingface | You are here | [KBlueLeaf/TIPO-200M-ft · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M-ft) | [KBlueLeaf/TIPO-500M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-500M) |
+
+ *: We only count "non-padding tokens" in the tokens seen, since the training data have a very large length range. <br>
+ `: Since the training data is pretty short, it costs more time to reach the same number of tokens seen than in general LLM pretraining. <br>
 For reference, with 4096 as the max ctx length and almost all data reaching that length, you may only need 2 days to reach 10B tokens seen on RTX 3090 x 4 with a 200M model.
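A minimal usage sketch for this checkpoint, assuming it loads with the standard `transformers` causal-LM classes; the input string and sampling settings below are placeholders, and the actual prompt format TIPO expects is defined by the tech report and the z-tipo-extension linked above:

```python
# Sketch: load TIPO-200M as a plain causal LM and sample a continuation.
# Assumption: the repo works with AutoModelForCausalLM/AutoTokenizer;
# the input text is a placeholder, not the official TIPO prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KBlueLeaf/TIPO-200M"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo).to(device).eval()

prompt = "1girl, scenery"  # placeholder input tags
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,  # stay well under the 1024 ctx length above
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For regular use with stable-diffusion-webui or ComfyUI, the z-tipo-extension linked above is the intended entry point; this raw `generate()` call is mainly for quick inspection.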
 
 ### Evaluation
+ **Evaluations are done with the TIPO-200M model.** <br>
+ We have tested TIPO against other models on several tests and metrics:
+
+ #### Scenery tag test
+
+ In this test we use a single "scenery" tag as input (together with certain meta tags). <br>
+ We test each prompt-gen method to see whether it can reach the desired output distribution while maintaining image quality.
+
+ | Scenery Tag Test | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
+ | ---- | ---- | ---- | ---- | ---- | ---- |
+ | FDD ↓ | 0.3558 | 0.5414 | 0.3247 | *0.2350* | **0.2282** |
+ | Aesthetic ↑ | 5.0569 | **6.3676** | 6.1609 | 5.9468 | *6.2571* |
+ | AI Corrupt ↑ | 0.4257 | *0.7490* | 0.5024 | 0.5669 | **0.9195** |
+
+ #### Short/Truncated Long test
+
+ In this test we use short captions or manually truncated captions from GBC10M and CoyoHD11M. <br>
+ This test examines how well each prompt-gen method handles almost-complete prompts (a toy truncation example is sketched after the tables below).
+
+ | Short | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
+ | ---- | ---- | ---- | ---- | ---- | ---- |
+ | FDD ↓ | 0.0957 | 0.1668 | *0.0980* | 0.1783 | 0.1168 |
+ | Aesthetic ↑ | 5.8370 | **6.0589** | 5.8213 | 5.7963 | *5.8531* |
+ | AI Corrupt ↑ | 0.7113 | 0.6985 | 0.7064 | 0.6314 | **0.7131** |
+
+ | Truncated Long | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
+ | ---- | ---- | ---- | ---- | ---- | ---- |
+ | FDD ↓ | 0.0955 | 0.1683 | *0.1247* | 0.2096 | 0.1210 |
+ | Aesthetic ↑ | 5.7497 | **6.0168** | 5.8191 | 5.7759 | *5.8364* |
+ | AI Corrupt ↑ | 0.6868 | 0.6712 | 0.6741 | 0.5925 | **0.7130** |
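A toy illustration of how a "truncated long" input can be derived from a full caption; the actual truncation rule used for the tables above is not specified, so the first-sentence cut here is only an assumption:

```python
# Sketch: build a "truncated long" input by keeping only the first sentence.
# Assumption: the real evaluation may truncate differently (e.g. by token count).
def truncate_caption(caption: str) -> str:
    first_sentence = caption.split(". ")[0].strip()
    return first_sentence if first_sentence.endswith(".") else first_sentence + "."

long_caption = (
    "A quiet mountain lake at sunrise, surrounded by pine trees. "
    "Mist drifts over the water and a small wooden boat is tied to a pier."
)
print(truncate_caption(long_caption))
# -> "A quiet mountain lake at sunrise, surrounded by pine trees."
```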
 
 ## LICENSE
  This model is released under [Kohaku License 1.0](https://kblueleaf.net/documents/kohaku-license/?[Your%20Organization/Name]=KohakuBlueLeaf&[Year]=2024)<br>