This model is so good!
Have you made any more merges of it? I'm blown away by the quality of this model . . . A new Protogen mix could be built off this that would slay 5.8, etc.
Why not share on CivitAI, so we can share our work?
Thanks! The only merge I've done of it is Memento:
https://huggingface.co/Yntec/Memento
Here's the sample prompt with Memento:
It trades colorfulness for realism and reach (the picture of Santa near the bottom of the page couldn't be produced with Dreamlike).
I made 4 more models that included Dreamlike, but I never released them because these were their outputs:
And this was Memento's output:
All of them failed to improve it, so now I hope to make a merge in the future that includes Dreamlike and improves over Memento's output. I've slowed down the releases of my merges because I don't release anything that isn't better than the ones I already have. The more I have, the harder it is to improve on them.
Why not share on CivitAI, so we can share our work?
Because of their censorship. You couldn't upload the sample pics of this model to Civitai, as they have a strict policy of "no photorealistic images of minors". When I tried their image generator with my prompts, they just bounced because they contained some word it didn't like, and I've uploaded images that immediately got flagged for review (of course, if the images would violate their strict policy they have to handle it like this, but I disagree with it and won't support them).
However, the license of these models allows you to just upload them to Civitai yourself if you want to share them. I bet they would get a lot more hearts over there anyway, ha!
I see.
Whatever you've done to "photoreal" vastly improved it. I don't have quite the tech or know-how atm to build or merge models quite like you do. Ironically, my artistic goal is not photorealism at all; I like to push a model into noisy feedback with complex prompts and higher CFG values to achieve a more granulated graphic look similar to woodcuts/old master prints/serigraphs, etc. Also, I only use a few of the older sampler-schedulers, as I don't like the look of the noise on the new DPM+ etc. varieties very much when they are pushed. Very few models can hold up to this kind of abuse without getting ridiculous, ugly, or incoherent. This model has among the highest CFG resistance I've seen for an SD1.5 model, plus ClipSkip doesn't seem necessary. I will try out your Memento model to see what it can do.
As far as merge ideas, I think something could be made that would improve upon Darkstorm's Protogen 5.x formulas. The model-merging weights are posted on them, https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release, all except for his Open Gen, which feels like a predecessor of SDXL. Up until now these have been my most reliable models for image creation, but I've started exploring the models he based his on. This is how I found yours. It looks like you've already improved on most of the base models he used for his merge.
I would put this on CivitAI, but honestly I'm quite busy, and that place is like a bazaar, lol: a few gems, mostly a lot of trash. I did attempt a simple merge with the newest "Counterfeitv3.0fixfp16" model, which I'm playing with and like so far. Still, I don't have the skills you do, and I bet you could improve it even more.
There was a great idea that BIRDL had with their National Gallery of Art-based model (search NGA), but it tends to pop that Getty Images stuff into the outputs too much, lol, and has weak CFG resistance. I'm also looking at the WikiArt models too . . . (but WikiCommons has much better images for models to be trained on!)
Ah yes, most of my work has focused on improving photorealism, I guess because of the challenge, and because some models are awesome and I love them, but there are some giant gaps in their knowledge, so they just can't draw some things you ask them to (example: a girl with Santa. A concept so simple, and yet most anime models will give you two girls in Christmas attire, but never Santa!). Others that can draw those things are only good at those things, so I find a prompt that showcases it and then merge their block weights, using binary search until I narrow down the relevant ones, so the first model increases its range and ends up looking like the second model with the first model's style.
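In plain Python the block-weight idea would look roughly like this. This is just a sketch under assumptions: single-file SD1.5 checkpoints in the original LDM key layout ("model.diffusion_model.*"), placeholder file names, and a made-up starting guess of which blocks to take from the second model; the actual work is regenerating a fixed test prompt and halving the candidate block set each round until only the relevant blocks remain.

```python
# Rough sketch of per-block merging between two SD1.5 checkpoints.
# File names and the starting block guess below are placeholders.
import re
from safetensors.torch import load_file, save_file

A = load_file("style_model.safetensors")      # placeholder: model whose style we keep
B = load_file("knowledge_model.safetensors")  # placeholder: model that knows the missing concept

def block_of(key):
    """Map a UNet key to a coarse block id: in0..in11, mid, out0..out11."""
    m = re.match(r"model\.diffusion_model\.input_blocks\.(\d+)\.", key)
    if m:
        return f"in{m.group(1)}"
    if key.startswith("model.diffusion_model.middle_block."):
        return "mid"
    m = re.match(r"model\.diffusion_model\.output_blocks\.(\d+)\.", key)
    if m:
        return f"out{m.group(1)}"
    return None  # text encoder, VAE, embeddings, etc. stay from A

# Binary-search narrowing: start with half the blocks taken from B, render the
# test prompt, then keep or discard halves of this set depending on whether the
# wanted concept survived.
blocks_from_B = {"mid"} | {f"out{i}" for i in range(6, 12)}  # current guess

merged = {key: (B[key] if block_of(key) in blocks_from_B and key in B else tensor)
          for key, tensor in A.items()}
save_file(merged, "block_merge_test.safetensors")
```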
Great to hear what you can do with Dreamlike. As for Protogen, my favorite has always been v2.2; it's the most unique in the animation department, and I lit it up even more with my merge of it and the Glow LoRA, in case you want to check it out: https://huggingface.co/Yntec/PotaytoPotahto - I have always planned to make my own unofficial merge of Protogen, and to merge it with my favorite models like Deliberate, DreamShaper and ReVAnimated, so I'll add Dreamlike to the list, though I need some goal, something they can't do that they could do together.
I remember when I found the Noosphere model and was so blown away by it that I made several merges with it, perhaps I should do that with Dreamlike and see if I can improve it by luck!
What do you think of the different VAEs for this? I see you had a recent one with MoistMixV2 baked in. I haven't tried that one yet. I've been using the ridiculously huge CounterfeitV2.5vae, which has been really effective for me.
One idea for a model would be emulating Protogen's Infinity/Nova with all your versions of those base models merged together. That Protogen Infinity has never worked well for me, but I liked Nova sometimes.
I've also built a small database of images I've culled from Wikicommons of the highest-resolution/cleaned/sharpened images of Goya etchings and old master engravings. I've been looking for a partner on building a model that can accurately do this, but everyone has balked so far. Aquatint is a bit of a difficult texture to emulate, for instance, plus recreating the deckled paper texture at the same time, etc.
Doing a simple merge with your Dreamlike nearly fixed all the problems with the model I mentioned before. At least they tried to build a model entirely on old master/public domain museum masterpieces (as an artist with a BFA, this kind of ethics is important to me): https://huggingface.co/BirdL/NGA_Art_SD-V1.5 Still, if I had some real know-how I could do a better merge that would solve all the incongruities.
I only check VAEs when the one a model comes with is problematic: it's too desaturated, or faces look really ugly with it. Then I test the model with all the VAEs and pick the best one. The MoistMixV2 VAE is my favorite for anime or animation, especially for models that are very lazy with people's eyes; MoistMixV2 just redraws those eyes and solves any problems! It's also the VAE with the best saturation and contrast, without the bleeding colors of other VAEs like kl-f8-anime2 or the NAI VAE. Here's the link:
https://huggingface.co/MoistMix/MoistMixV2/resolve/main/MoistMixV2.vae.pt
Of course that's not its real name, and I have no idea where it comes from, but I've done blind tests where I don't know which VAEs are used, and MoistMixV2 keeps coming out on top. My version of Deliberate improves over the original with it: https://huggingface.co/Yntec/Deliberate (even though the improvement is microscopic).
The problem with it is that it breaks photorealism: it draws anime eyes on small faces, so it's not suitable for that.
For that I found the 840K VAE is the best: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
If a model already uses it, chances are swapping its VAE with anything else will break faces, though there are some exceptions, and I have some models whose outputs improved with a different VAE.
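If you want to bake a VAE into a checkpoint yourself, the idea is just replacing the checkpoint's own VAE tensors. Here's a rough sketch under assumptions: the checkpoint is a single safetensors file in the LDM key layout (VAE weights under "first_stage_model."), the standalone VAE ships the same keys without that prefix, and the file names are placeholders.

```python
# Rough sketch of baking a standalone VAE into a single-file SD1.5 checkpoint.
import torch
from safetensors.torch import load_file, save_file

ckpt = load_file("model.safetensors")                        # placeholder checkpoint
vae = load_file("vae-ft-mse-840000-ema-pruned.safetensors")  # the 840K VAE linked above
# For a .pt VAE (e.g. MoistMixV2.vae.pt) the weights are usually wrapped, roughly:
# vae = torch.load("MoistMixV2.vae.pt", map_location="cpu")["state_dict"]

baked = dict(ckpt)
replaced = 0
for key, tensor in vae.items():
    target = f"first_stage_model.{key}"
    if target in baked and baked[target].shape == tensor.shape:
        baked[target] = tensor  # overwrite the checkpoint's VAE weight
        replaced += 1

print(f"replaced {replaced} VAE tensors")
save_file(baked, "model_with_baked_vae.safetensors")
```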
And now, let's keep talking about Dreamlike... its recipe is no secret and it's so simple it's ridiculous... https://huggingface.co/Yntec/dreamlike-photoreal-remix/discussions/3
As the story goes, what I noticed when merging models at 50%-50% was that their effects got diluted: each model is already about 90% SD1.5, so a 50/50 merge keeps 45% of SD1.5 from one and 45% from the other, and the 10% that made each model unique gets shrunk to 5% to fit in there. To avoid this you extract the "essence" of a model: you subtract 100% of SD1.5, which leaves a model that is unusable on its own, but you can merge it back with something else to keep its difference from SD1.5 at full strength. For DreamlikeRemix I did that to keep DreamlikePhotoReal 1.0 and DreamlikePhotoReal 2.0 at their maximum, and Dreamlike was just the essence merged back with DreamlikePhotoReal 2.0 at 50/50.
I've never tried it, but I suppose merging the essence (instead of merging Dreamlike itself, you subtract SD1.5 from DreamlikePhotoReal 2.0 and merge that) with any other model would produce similar results, and it could be done quickly.
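In plain Python the essence/add-difference idea would look roughly like this. It's only a sketch with placeholder file names, not the exact DreamlikeRemix recipe; a plain 50/50 weighted sum would instead average every tensor, which halves whatever each model added on top of SD1.5 and is exactly the dilution described above.

```python
# Rough sketch of an "essence" / add-difference merge: subtract the SD1.5 base
# from a donor model, then add that difference to another model at full strength.
import torch
from safetensors.torch import load_file, save_file

base = load_file("v1-5-pruned-emaonly.safetensors")       # SD1.5 base
donor = load_file("dreamlike-photoreal-2.0.safetensors")  # model whose essence we extract
target = load_file("some_other_model.safetensors")        # placeholder: model receiving the essence

alpha = 1.0  # 1.0 keeps the donor's difference from SD1.5 at full strength

merged = {}
for key, t in target.items():
    if (key in donor and key in base and t.is_floating_point()
            and t.shape == donor[key].shape == base[key].shape):
        essence = donor[key].float() - base[key].float()      # donor minus SD1.5
        merged[key] = (t.float() + alpha * essence).to(torch.float16)
    else:
        merged[key] = t  # keys missing from either model pass through untouched

save_file(merged, "add_difference_merge.safetensors")
```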
The thing with merging many models together is that either they become diluted as above, or they become overtrained and give outputs that look like the CFG is too high. Since then I've abandoned such methods and switched to merging block weights with great results... for my tastes.
In the department of using image databases to create new models I can't help: I have never trained any models, and all my models are just merges of models by others. If you have a safetensors file or a LoRA, I could help you reach something you want to do. Or you could post example images of problems and we could try to solve them.
What I do is go to a space like this: https://huggingface.co/spaces/Yntec/Diffusion60XX and send a prompt for what I want to achieve, see which model is already close to it and which other model doesn't have the problem, and merge them in a way that makes them produce the wanted output. Once they can do that, they also deliver in all other fields automatically.
Cool. Thanks for the info. If push comes to shove I might try that. I'm really trying to find ways of using models built only on artists' work in the public domain: old masters, museum collections, etc. Anime models like Counterfeit and Anything can do some interesting drawing effects, but mostly they make things too cartoony.
Funnily enough, one of my favorite series of XL models is the RealCartoonXL series, because those models are so resistant to haloing at high CFG, draw well, and blend well with others to create more artistic effects.
Hey Crowyote, I attempted to merge Protogen with Dreamlike for the Prodigy model at: https://huggingface.co/Yntec/Prodigy - but despite it being the best I could deliver, it didn't seem to work out, with outputs like these:
For prompts like "Chibi girl. Cinematic wallpaper by gehry." and "lvngvncnt wa-vy style masterpiece, Cartoon pretty CUTE girl, 1958, fantasy, protocol 2, Portrait of a happy family, pastel, pencil, painting, traditional elementary, sound on the eastmancolor, fanbox of LEATHER /, smooth, faved, happy female, DETAILED CHIBI EYES, russian, seven, werner brand, mark arian" (which work well in other models.)
But then I tried to check what Dreamlike did with the first one and I got this...
Ouch! It seems Dreamlike has huge gaps in what it can draw, and it's not suitable for merging. Memento fills those gaps by ignoring part of your prompt and not really delivering things like cartoon or anime, but Prodigy just gives up and blurs things out.
<EDIT
Not that DreamlikePhotoReal2.0 is any better...
Ha...
/EDIT>
I'll try to fix Prodigy somehow, but it seems Dreamlike is already good for what it is and can't really be used to improve the outputs of other models.
I gotta disagree. I think the problem is merging with Protogen. I've never been able to get a successful merger using any of its forms with another model. I suspect it's because they are at the edge of over-engineered mergers themselves.
My suggestion was for you to rebuild Protogen using your improved versions of the various base models.
I've had some success using simple mergers of your Dreamlike to improve other base models like ArtSeekmega and counterfeitv3fixfp16 (or whatever it's called). I haven't tried using any complicated mergers with "supermerger" yet.
Another thing to consider is that I think Protogen works best with the 1st-gen samplers like heun, euler, lms, etc., and it's not for everybody, but what I like about it is that it can create graphic art (not comics or anime, etc. - I mean woodcut, etching, serigraph-type looks) when pushed to high CFG; the noise tends to organize itself into bolder patterns rather than just a halo effect. I guess everyone wants a model to make chibis and waifus these days, but I just have little interest in that personally.
Thanks, I'll keep that in mind. I would also like to see what you are generating, with examples, and examples of pictures that you'd want a rebuilt Protogen to generate, because I've never merged more than 4 models together; when I try adding a 5th one, results start to deteriorate. As Protogen's author explained, there's only so much space in a model: when you add more to it, what is added overwrites something else and the model forgets it, so the trick is to carefully choose what is forgotten to improve what matters. But other than photorealism/chibis/waifus I have no idea what I should be aiming for (and that's why most of the models I merge excel at the same things, but may continue to provide bad outputs for other things, because I never generate those and I have no idea what they are).
I didn't mean that as any slight. These are very popular forms of image generation, and sometimes I do make some for fun in my own style. Your models have given an interesting twist to the type of work I'm generating, which is fine art prints. I'll tell you what, I'll give Prodigy a shot and see if I can make it work in the next few days.
I agree any merge with Protogen is overkill. Have you checked out the Rev Animated v2 Rebirth yet on CivitAI? I can't get it to work with Invoke at all at this point, but it looks interesting. https://civitai.com/models/7371/rev-animated?modelVersionId=425083
I'll tell you what I'll give Prodigy a shot and see if I can make it work in the next few days.
I appreciate your help and really want to implement your ideas into my workflow. I'm currently working on a new version of Prodigy with the best layers of DreamlikePhotoReal2.0, maximizing its details; Prodigy will become ProdigyAlpha and the new one will take its place.
Have you checked out the Rev Animated v2 Rebirth yet on CivitAI?
Not yet. ReVAnimated v10 remains one of my favorite models, as later versions became a bit too generic for my taste, but I'm always happy to see new spins on classic models.
Contact me on discord to discuss further. crowyotertx3080
I don't support Discord or any closed community where things remain secret and hidden and people can't search for them and benefit. I quite like how we don't even have private messages on Hugging Face, and what we say remains public.
Discord is the worst thing that has happened to the internet, as people have to keep making the same discoveries and reinventing the wheel over and over because the information is kept in a group or in private messages between individuals. Future generations investigating these subjects may find our thread and discover something valuable that saves them time. I can't go and read what is being posted in those private Discord groups, or what has been discussed between individuals, but at least I will not participate in the creation of more information that remains closed.
For the purpose of collaboration and the chances of new people perhaps joining us and helping in this venture, I want any progress to remain accessible to all, and if not, I'm fine with no new information being generated at all.
I just don't share my images on such a wide-open forum. I'm not opposed to discussing things. I do occasionally post my work on Instagram, though.
Oh, right, I understand; I also don't share them unless they're samples for the models I release. But what about the prompts? I mean, you don't need to share images if I can recreate them at home; that's something incredible that image models allow. Maybe they're secret as well (in that case you wouldn't want me to know about them, not even on a Discord, as I could leak them). Just curious.