mdmachine committed on
Commit 4cdb047
1 Parent(s): 7a86fe5

Update README.md

Files changed (1)
  1. README.md +20 -1
README.md CHANGED
@@ -11,6 +11,11 @@ tags:
 - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
 - `split_einsum` version is compatible with all compute unit options, including the Neural Engine.<br>
 - `original` version is only compatible with the CPU & GPU option.<br>
+ - Custom resolution versions are tagged accordingly.<br>
+ - `vae`-tagged files have a VAE embedded in the model.<br>
+ - Descriptions are posted as-is from the original model source. Not all features and/or results may be available in Core ML format.<br>
+ - This model was converted with `vae-encoder` for i2i (image-to-image).
+ - Models that are 32-bit will have "fp32" in the filename.

 # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).

@@ -41,4 +46,18 @@ So what's the difference between Vivid and all my other models?

 This model adds a lot more detail and realism to the images created with it, and not just with portraits but landscapes as well. The other thing this model is better at is taking Textual Inversion embeddings. Lucid and Retro are both very resistant to TI embeddings, but Vivid is transformed very easily with a good embedding.

- What are you waiting for? Go get some great results from simple prompts.
+ What are you waiting for? Go get some great results from simple prompts.
+
+ What's new in v2.0?
+
+ Wow wow wow... two big model releases have kept me busy testing and prompting.
+
+ F222 was replaced by Hassan's newest model release.
+
+ The new H&A 3DKX update replaced the older version.
+
+ wavymulder's portrait+ was added.
+
+ Dreamlike was updated in the mix as well.
+
+ The end result is a much more realistic and vivid outcome. I used the same prompt to generate the new preview images as was used in v1.0.
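
The compute-unit bullets in the first hunk map directly onto flags in Apple's reference CLI. As a minimal sketch (assuming the Python pipeline from [apple/ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion) is installed and the converted model files sit in a hypothetical `./coreml-model` directory), generation might look like the following; the flag names follow that repo's README, so verify them against your installed version:

```bash
# split_einsum models can target any compute unit, including the Neural Engine.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a vivid, detailed portrait" \
    -i ./coreml-model \
    -o ./outputs \
    --compute-unit ALL \
    --seed 93

# original models are limited to CPU and GPU.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a vivid, detailed portrait" \
    -i ./coreml-model \
    -o ./outputs \
    --compute-unit CPU_AND_GPU \
    --seed 93
```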
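
Similarly, the `vae-encoder` (i2i) and unet-chunking notes come from conversion-time options. The sketch below shows roughly how such a package could be produced with the `torch2coreml` script from the same repo; `<hf-model-id>` and `./output-mlpackages` are placeholders, and the exact flag set may vary between versions of the converter:

```bash
# Convert a Stable Diffusion checkpoint to Core ML, including the VAE encoder
# (enables image-to-image) and splitting the unet into chunks (needed for the
# Neural Engine on some devices).
python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version <hf-model-id> \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-vae-encoder \
    --chunk-unet \
    --attention-implementation SPLIT_EINSUM \
    -o ./output-mlpackages
```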