---
license: creativeml-openrail-m
base_model: "ptx0/pixart-900m-1024-ft-large"
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - full

inference: true

---

# pixart-900m-1024-ft

This is a full-rank finetune derived from [ptx0/pixart-900m-1024-ft-large](https://huggingface.co/ptx0/pixart-900m-1024-ft-large).



The main validation prompt used during training was:

```
ethnographic photography of teddy bear at a picnic, ears tucked behind a cozy hoodie looking darkly off to the stormy picnic skies
```

## Validation settings
- CFG: `4.5`
- CFG Rescale: `0.0`
- Steps: `25`
- Sampler: `None`
- Seed: `42`
- Resolutions: `1024x1024,1344x768,916x1152`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
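
For reference, a minimal sketch of how a validation grid like this could be reproduced with `diffusers`. This is not the actual validation harness; the model id and the assumption that the resolution strings follow a `WIDTHxHEIGHT` format are illustrative only.

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical reproduction of the validation grid; the model id and the
# resolution parsing are assumptions, not taken from the training run.
pipeline = DiffusionPipeline.from_pretrained('pixart-900m-1024-ft')
pipeline.to('cuda' if torch.cuda.is_available() else 'cpu')

prompt = ('ethnographic photography of teddy bear at a picnic, ears tucked '
          'behind a cozy hoodie looking darkly off to the stormy picnic skies')

for resolution in '1024x1024,1344x768,916x1152'.split(','):
    width, height = (int(v) for v in resolution.split('x'))
    image = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=4.5,
        guidance_rescale=0.0,
        width=width,
        height=height,
        generator=torch.Generator(device=pipeline.device).manual_seed(42),
    ).images[0]
    image.save(f'validation_{width}x{height}.png')
```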




<Gallery />

The text encoder **was not** trained.
You may reuse the base model's text encoder for inference.
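
Since the text encoder was left untouched, it can be loaded from the base repository and passed into the fine-tuned pipeline. A minimal sketch, assuming the base repository follows the usual diffusers layout with a `text_encoder` subfolder:

```python
from transformers import T5EncoderModel
from diffusers import DiffusionPipeline

# The text encoder was not trained, so it can be shared with the base model.
# The repository layout (text_encoder subfolder, T5-style encoder) is an
# assumption about the base checkpoint, not something stated in this card.
text_encoder = T5EncoderModel.from_pretrained(
    'ptx0/pixart-900m-1024-ft-large', subfolder='text_encoder'
)
pipeline = DiffusionPipeline.from_pretrained(
    'pixart-900m-1024-ft', text_encoder=text_encoder
)
```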


## Training settings

- Training epochs: 1
- Training steps: 49000
- Learning rate: 1e-06
- Effective batch size: 192
  - Micro-batch size: 24
  - Gradient accumulation steps: 1
  - Number of GPUs: 8
- Prediction type: epsilon
- Rescaled betas zero SNR: False
- Optimizer: AdamW, stochastic bf16
- Precision: Pure BF16
- Xformers: Not used


## Datasets

### photo-concept-bucket
- Repeats: 0
- Total number of images: ~567360
- Total number of aspect buckets: 4
- Resolution: 1.0 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: random
### midjourney-v6-520k-raw
- Repeats: 0
- Total number of images: ~390912
- Total number of aspect buckets: 1
- Resolution: 1.0 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### sfwbooru
- Repeats: 0
- Total number of images: ~233664
- Total number of aspect buckets: 1
- Resolution: 1.0 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### nijijourney-v6-520k-raw
- Repeats: 0
- Total number of images: ~416064
- Total number of aspect buckets: 1
- Resolution: 1.0 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### dalle3
- Repeats: 0
- Total number of images: ~1889680
- Total number of aspect buckets: 1
- Resolution: 1.0 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
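
Taken together, the mix covers roughly 3.5 million images, each seen once per epoch since every dataset uses 0 repeats. A quick sanity check of that figure from the approximate counts listed above:

```python
# Approximate per-dataset image counts, copied from the list above.
dataset_sizes = {
    'photo-concept-bucket': 567_360,
    'midjourney-v6-520k-raw': 390_912,
    'sfwbooru': 233_664,
    'nijijourney-v6-520k-raw': 416_064,
    'dalle3': 1_889_680,
}
print(sum(dataset_sizes.values()))  # 3497680 -> roughly 3.5M images per epoch
```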


## Inference


```python
import torch
from diffusers import DiffusionPipeline

model_id = 'pixart-900m-1024-ft'
prompt = 'ethnographic photography of teddy bear at a picnic, ears tucked behind a cozy hoodie looking darkly off to the stormy picnic skies'
negative_prompt = 'blurry, cropped, ugly'

# Use CUDA if available, then Apple Silicon (MPS), otherwise fall back to CPU.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'

pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.to(device)

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1152,
    height=768,
    guidance_scale=4.5,
    guidance_rescale=0.0,
).images[0]
image.save("output.png", format="PNG")
```
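
Since training used pure BF16 weights, on hardware with bfloat16 support you may wish to load the pipeline in that precision to reduce memory use, e.g. `DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)`. The full-precision load shown above remains the safe default if you are unsure about your device.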