---
license: creativeml-openrail-m
---

SD 1.5 DreamBooth models fine-tuned on Anything v3. Each checkpoint was trained on roughly (number of steps divided by 90) images scraped from boorus, with the text encoder trained for around (number of images times 12) steps. This repo focused on .ckpt files before moving on to safetensors and LoRA (~11/2022).
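
As a worked example of that rule of thumb: nes2-1350 was trained for 1,350 steps, which works out to about 1350 / 90 = 15 images and roughly 15 × 12 = 180 text-encoder steps.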

Textual inversion embedding versions can be found [here](https://huggingface.co/922-CA/gfl-ddlc-TI-tests) (trained off the same or similar datasets).
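
For reference, a minimal sketch of how one of these checkpoints could be loaded with diffusers and prompted with its keyword (the local filename and the extra prompt tags are placeholders; the keyword for each checkpoint is listed in its entry below):

```python
# Minimal usage sketch (not an official workflow): load a downloaded .ckpt
# with diffusers and prompt it with that checkpoint's keyword.
import torch
from diffusers import StableDiffusionPipeline

# "./ca1-1260.ckpt" is a hypothetical local path; substitute whichever
# checkpoint from this repo you downloaded.
pipe = StableDiffusionPipeline.from_single_file(
    "./ca1-1260.ckpt",
    torch_dtype=torch.float16,
).to("cuda")

# Each checkpoint answers to its own keyword, e.g. "kcpdweh" for ca1-1260.
image = pipe("kcpdweh, 1girl, solo", num_inference_steps=28).images[0]
image.save("kcpdweh.png")
```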

# nes2-1350 (Negev)
* Fine-tuned 12/05/2022
* Keyword: negebave
* First attempt

# ma1-850 (FMG9)
* Fine-tuned ~12/2022
* Keyword: magmgbg
* First attempt

# ca1-1260 (KAC-PDW)
* Fine-tuned 12/06/2022
* Keyword: kcpdweh
* First attempt

# p3-1800 (P90)
* Fine-tuned ~12/2022
* Keyword: hrstlleds
* Third attempt

# pn1-1350 (P90)
* Fine-tuned ~12/2022
* Keyword: hrstlleds
* Fourth attempt, may get best results with TI
  
# pr2 (Persicaria)
* Fine-tuned 12/06/2022
* Keyword: prsheheas
* First attempt, may get best results with TI

# nexl1_11142023_test
* Keyword: negev \(girls' frontline\)
* One of the first attempts; works best with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI, strength at ~1.5 seems to give best results (may be underfitted)
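
A rough diffusers equivalent of the ComfyUI setup above, assuming the nexl1 LoRA file and a local copy of the Kohaku XL beta base checkpoint (both filenames here are placeholders); the 1.5 scale mirrors the ~1.5 strength noted in the entry:

```python
# Sketch only: apply the nexl1 LoRA on top of Kohaku XL beta at ~1.5 strength.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "./kohaku-xl-beta.safetensors",   # placeholder path to the Kohaku XL beta base
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("./nexl1_11142023_test.safetensors")  # placeholder LoRA filename

# The backslash-escaped parentheses in the keyword are webui/ComfyUI prompt
# syntax; in diffusers the plain booru tag is fine.
image = pipe(
    "negev (girls' frontline), 1girl, solo",
    cross_attention_kwargs={"scale": 1.5},  # roughly the ~1.5 strength used in ComfyUI
    num_inference_steps=28,
).images[0]
image.save("negev_xl.png")
```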

# pxl1_11182023_test-000150
* Keyword: p90 \(girls' frontline\)
* Fine-tuned for longer (cut off at ~150 epochs vs. 50 epochs for the previous attempt); tested with [this model](https://civitai.com/models/162577/kohaku-xl-beta?modelVersionId=203416)
* Tested with ComfyUI, may be overfitting to undesirable features (changing dataset for next test)
  
MTBA: previews, plus any future LoRAs or models trained off better bases (hopefully more SDXL as well).