---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
base_model:
- Laxhar/noobai-XL_v1.0
pipeline_tag: text-to-image
tags:
- safetensors
- diffusers
- stable-diffusion
- stable-diffusion-xl
- art
library_name: diffusers
---

# NoobAI XL V-Pred 0.5

# Model Introduction

This image generation model, based on Laxhar/noobai-XL_v1.0, leverages the full Danbooru and e621 datasets with native tags and natural language captioning. It is implemented as a v-prediction model (distinct from eps-prediction) and therefore requires specific parameter configurations, detailed in the following sections.

Special thanks to my teammate euge for the coding work, and we are grateful for the technical support from many helpful community members.

# ⚠️ IMPORTANT NOTICE ⚠️

## **THIS MODEL WORKS DIFFERENTLY FROM EPS MODELS!**

## **PLEASE READ THE GUIDE CAREFULLY!**

## Model Details

- **Developed by**: [Laxhar Lab](https://huggingface.co/Laxhar)
- **Model Type**: Diffusion-based text-to-image generative model
- **Fine-tuned from**: Laxhar/noobai-XL_v1.0
- **Sponsored by**: [Lanyun Cloud](https://cloud.lanyun.net)

---

# How to Use the Model

## Method I: [reForge](https://github.com/Panchovix/stable-diffusion-webui-reForge/tree/dev_upstream)

1. Install reForge by following the instructions in the repository;
2. Switch to the `dev_upstream_experimental` branch by running `git checkout dev_upstream_experimental`;
3. Launch the reForge WebUI;
4. Find the "_Advanced Model Sampling for Forge_" accordion at the bottom of the "_txt2img_" tab;
5. Enable "_Enable Advanced Model Sampling_";
6. Select "_v_prediction_" in the "_Discrete Sampling Type_" checkbox group;
7. Generate images!

## Method II: [ComfyUI](https://github.com/comfyanonymous/ComfyUI)

Sample workflow with nodes: [comfy_ui_workflow_sample](/Laxhar/noobai-XL-Vpred-0.5/blob/main/comfy_ui_sample.png)

## Method III: [WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)

Note that the dev branch is not stable and **may contain bugs**.

1. (If you haven't installed WebUI) Clone the repository:

   ```bash
   git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
   ```

2. Switch to the `dev` branch:

   ```bash
   git switch dev
   ```

3. Pull the latest updates:

   ```bash
   git pull
   ```

4. Launch WebUI and use the model as usual.

**Note**: Please make sure Git is installed and your environment is properly configured on your machine.

## Method IV: [Diffusers](https://huggingface.co/docs/diffusers/en/index)

```python
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = """masterpiece, best quality, artist:john_kafka, artist:nixeu, artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]

image.save("output.png")
```
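Since the notice above is all about v-prediction, it can help to confirm the scheduler swap actually took effect before generating; a checkpoint trained for v-prediction but sampled as eps-prediction typically produces washed-out or very noisy images. Below is a minimal sanity check, continuing from the `pipe` object in the Diffusers example above (and assuming a diffusers release where `EulerDiscreteScheduler` accepts `rescale_betas_zero_snr`):

```python
# Continuing from the `pipe` created in the Diffusers example above:
# verify the scheduler was reconfigured for v-prediction before generating.
config = pipe.scheduler.config
assert config.prediction_type == "v_prediction", config.prediction_type
assert config.rescale_betas_zero_snr is True
print(type(pipe.scheduler).__name__, config.prediction_type)  # EulerDiscreteScheduler v_prediction
```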
---

# Recommended Settings

## Parameters

- CFG: 4 ~ 5
- Steps: 28 ~ 35
- Sampling Method: **Euler** (⚠️ Other samplers will not work properly)
- Resolution: Total area around 1024x1024. Best to choose from: 768x1344, **832x1216**, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768

## Prompts

- Prompt Prefix:

```
masterpiece, best quality, newest, absurdres, highres, safe,
```

- Negative Prompt:

```
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
```

# Usage Guidelines

## Caption

```
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>, <other tags>
```

## Quality Tags

For quality tags, we evaluated image popularity through the following process:

- Data normalization based on various sources and ratings.
- Application of time-based decay coefficients according to date recency.
- Ranking of images within the entire dataset based on this processing.

Our ultimate goal is to ensure that quality tags effectively track user preferences in recent years.

| Percentile Range | Quality Tags   |
| :--------------- | :------------- |
| > 95th           | masterpiece    |
| > 85th, <= 95th  | best quality   |
| > 60th, <= 85th  | good quality   |
| > 30th, <= 60th  | normal quality |
| <= 30th          | worst quality  |

## Aesthetic Tags

| Tag             | Description |
| :-------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| very awa        | Top 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) |
| worst aesthetic | All the bottom 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) and [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2) |
| ...             | ... |

## Date Tags

There are two types of date tags: **year tags** and **period tags**. For year tags, use the `year xxxx` format, e.g., `year 2021`. For period tags, please refer to the following table:

| Year Range | Period tag |
| :--------- | :--------- |
| 2005-2010  | old        |
| 2011-2014  | early      |
| 2014-2017  | mid        |
| 2018-2020  | recent     |
| 2021-2024  | newest     |
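To make the guidelines above concrete, here is a minimal, hypothetical sketch of assembling a prompt from the recommended prefix, the caption ordering, and a date tag. The character, artist, and general tags are placeholders borrowed from the Diffusers example earlier, not recommendations from the model authors:

```python
# Hypothetical prompt-assembly sketch based on the Usage Guidelines above.
quality_prefix = "masterpiece, best quality, newest, absurdres, highres, safe"


def period_tag(year: int) -> str:
    """Map a year to its period tag, following the Date Tags table.
    The table's 2014 overlap is resolved to "early" here."""
    if year <= 2010:
        return "old"
    if year <= 2014:
        return "early"
    if year <= 2017:
        return "mid"
    if year <= 2020:
        return "recent"
    return "newest"


subject = "1girl"
character = r"arlecchino \(genshin impact\)"  # placeholder character tag
artists = "artist:john_kafka"                 # placeholder artist tag
general = "black theme, limited palette, high contrast"  # placeholder general tags

prompt = ", ".join([quality_prefix, subject, character, artists, general, period_tag(2023)])
negative_prompt = (
    "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, "
    "bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"
)
print(prompt)
```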
## Dataset

- The latest Danbooru images up to the training date (approximately before 2024-10-23)
- E621 images from the [e621-2024-webp-4Mpixel](https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel) dataset on Hugging Face

**Communication**

- **QQ Groups:**
  - 875042008
  - 914818692
  - 635772191
- **Discord:** [Laxhar Dream Lab SDXL NOOB](https://discord.com/invite/DKnFjKEEvH)

# Model License

This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.

## I. Usage Restrictions

- Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.
- Prohibited generation of unethical or offensive content.
- Prohibited violation of laws and regulations in the user's jurisdiction.

## II. Commercial Prohibition

We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.

## III. Open Source Community

For the open source community, you need to:

- Open source derivative models, merged models, LoRAs, and products based on the above models.
- Share work details such as synthesis formulas, prompts, and workflows.
- Follow the fair-ai-public-license to ensure derivative works remain open source.

## IV. Disclaimer

Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.

# Participants and Contributors

## Participants

- **L_A_X:** [Civitai](https://civitai.com/user/L_A_X) | [Liblib.art](https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69) | [Huggingface](https://huggingface.co/LAXMAYDAY)
- **li_li:** [Civitai](https://civitai.com/user/li_li) | [Huggingface](https://huggingface.co/heziiiii)
- **nebulae:** [Civitai](https://civitai.com/user/kitarz) | [Huggingface](https://huggingface.co/NebulaeWis)
- **Chenkin:** [Civitai](https://civitai.com/user/Chenkin) | [Huggingface](https://huggingface.co/windsingai)
- **Euge:** [Civitai](https://civitai.com/user/Euge_) | [Huggingface](https://huggingface.co/Eugeoter) | [Github](https://github.com/Eugeoter)

## Contributors

- **Narugo1992**: Thanks to [narugo1992](https://github.com/narugo1992) and the [deepghs](https://huggingface.co/deepghs) team for open-sourcing various training sets, image processing tools, and models.
- **Mikubill**: Thanks to [Mikubill](https://github.com/Mikubill) for the [Naifu](https://github.com/Mikubill/naifu) trainer.
- **Onommai**: Thanks to [OnommAI](https://onomaai.com/) for open-sourcing a powerful base model.
- **V-Prediction**: Thanks to the following individuals for their detailed instructions and experiments.
  - adsfssdf
  - [bluvoll](https://civitai.com/user/bluvoll)
  - [bvhari](https://github.com/bvhari)
  - [catboxanon](https://github.com/catboxanon)
  - [parsee-mizuhashi](https://huggingface.co/parsee-mizuhashi)
  - [very-aesthetic](https://github.com/very-aesthetic)
- **Community**: [aria1th261](https://civitai.com/user/aria1th261), [neggles](https://github.com/neggles/neurosis), [sdtana](https://huggingface.co/sdtana), [chewing](https://huggingface.co/chewing), [irldoggo](https://github.com/irldoggo), [reoe](https://huggingface.co/reoe), [kblueleaf](https://civitai.com/user/kblueleaf), [Yidhar](https://github.com/Yidhar), ageless, 白玲可, Creeper, KaerMorh, 吟游诗人, SeASnAkE, [zwh20081](https://civitai.com/user/zwh20081), Wenaka⁧~喵, 稀里哗啦, 幸运二副, 昨日の約, 445, [EBIX](https://civitai.com/user/EBIX), [Sopp](https://huggingface.co/goyishsoyish), [Y_X](https://civitai.com/user/Y_X), [Minthybasis](https://civitai.com/user/Minthybasis), [Rakosz](https://civitai.com/user/Rakosz)