{
"-Need to add resources here": "-需要在这里添加资源",
"(Experimental, Optional) Since the latent is close to a normal distribution, it may be a good idea to specify a value around 1/10 the noise offset.": "(实验性,可选)由于潜在变量接近正态分布,指定一个接近噪声偏移1/10的值可能是个好主意。",
"(Optional) Add training comment to be included in metadata": "(可选)增加训练注释到元数据",
"(Optional) Enforce number of epoch": "(可选)强制每批数量",
"(Optional) Save only the specified number of models (old models will be deleted)": "(可选)仅保存指定数量的模型(旧模型将被删除)",
"(Optional) Save only the specified number of states (old models will be deleted)": "(可选)仅保存指定数量的状态(旧模型将被删除)",
"(Optional) Stable Diffusion base model": "(可选)稳定扩散基础模型",
"(Optional) Stable Diffusion model": "(可选)稳定扩散模型",
"(Optional) The model is saved every specified steps": "(可选)模型每隔指定的步数保存一次",
"(Optional)": "(可选)",
"About SDXL training": "关于SDXL培训",
"Adaptive noise scale": "自适应噪声比例",
"Additional parameters": "额外参数",
"Advanced options": "高级选项",
"Advanced parameters": "高级参数",
"Advanced": "增强",
"ashleykleynhans runpod docker builds": "ashleykleynhans runpod docker构建",
"Automatically determine the dim(rank) from the weight file.": "从权重文件自动确定dim(排名)。",
"Autosave": "自动保存",
"Basic Captioning": "基本字幕",
"Basic": "基础",
"Batch size": "批量大小",
"BLIP Captioning": "BLIP字幕",
"Bucket resolution steps": "桶分辨率步骤",
"Built with Gradio": "使用Gradio构建",
"Cache latents to disk": "缓存潜变量到磁盘",
"Cache latents": "缓存潜变量",
"Caption file extension": "标题文件扩展名",
"Caption text": "标题文本",
"caption": "标题",
"Change History": "更改历史",
"Class prompt": "Class类提示",
"Color augmentation": "颜色增强",
"Configuration file": "配置文件",
"constant_with_warmup": "带预热的常数",
"constant": "常数",
"Conv Dimension (Rank)": "卷积维度(Rank)",
"Conv Dimension": "卷积维度",
"Convert model": "转换模型",
"Copy info to Folders Tab": "复制信息到文件夹",
"cosine_with_restarts": "带重启的余弦函数学习率的方法",
"cosine": "余弦函数",
"CrossAttention": "交叉注意力",
"DANGER!!! -- Insecure folder renaming -- DANGER!!!": "危险!!!-- 不安全的文件夹重命名 -- 危险!!!",
"Dataset folder": "数据集文件夹",
"Dataset preparation": "数据集准备",
"Dataset Preparation": "数据集准备",
"Dataset repeats": "数据集重复",
"Desired LoRA rank": "期望的LoRA秩",
"Destination training directory": "训练结果目录",
"Device": "设备",
"DIM from weights": "从权重获取DIM",
"Directory containing the images to caption": "包含要添加标题的图像的目录",
"Directory containing the training images": "直接包含训练图片",
"Directory where formatted training and regularisation folders will be placed": "训练和正则化文件会被取代",
"Disable CP decomposition": "禁用CP分解",
"Do not copy other files in the input folder to the output folder": "不要将输入文件夹中的其他文件复制到输出文件夹",
"Do not copy other files": "不复制其他文件",
"Don't upscale bucket resolution": "不要放大桶分辨率",
"Dreambooth/LoRA Dataset balancing": "Dreambooth/LoRA数据集平衡",
"Dreambooth/LoRA Folder preparation": "Dreambooth/LoRA文件准备",
"Dropout caption every n epochs": "每n个时代丢弃标题",
"DyLoRA model": "DyLoRA模型",
"Dynamic method": "动态方法",
"Dynamic parameter": "动态参数",
"e.g., \"by some artist\". Leave empty if you only want to add a prefix or postfix.": "例如,\"由某个艺术家创作\"。如果您只想添加前缀或后缀,请留空。",
"e.g., \"by some artist\". Leave empty if you want to replace with nothing.": "例如,\"由某个艺术家创作\"。如果您想用空白替换,请留空。",
"Enable buckets": "启用数据容器buckets",
"Enable for Hugging Face's stabilityai models": "启用Hugging Face的stabilityai模型",
"Enter one sample prompt per line to generate multiple samples per cycle. Optional specifiers include: --w (width), --h (height), --d (seed), --l (cfg scale), --s (sampler steps) and --n (negative prompt). To modify sample prompts during training, edit the prompt.txt file in the samples directory.": "每行输入一个样本提示以生成每个周期的多个样本。可选指定符包括:--w(宽度),--h(高度),--d(种子),--l(cfg比例),--s(采样器步骤)和--n(负提示)。要在训练期间修改样本提示,请编辑样本目录中的prompt.txt文件。",
"Epoch": "数量增加",
"Error": "错误",
"Example of the optimizer settings for Adafactor with the fixed learning rate:": "具有固定学习率的Adafactor优化器设置的示例:",
"Extract DyLoRA": "提取DyLoRA",
"Extract LoRA model": "提取LoRA模型",
"Extract LoRA": "提取LoRA",
"Extract LyCORIS LoCon": "提取LyCORIS LoCon",
"Extract LyCORIS LoCON": "提取LyCORIS LoCON",
"FileNotFoundError": "FileNotFoundError",
"Find text": "查找文本",
"Finetune": "微调",
"Finetuned model": "微调模型",
"Finetuning Resource Guide": "微调资源指南",
"fixed": "固定",
"Flip augmentation": "翻转增强",
"float16": "float16",
"Folders": "文件夹",
"Full bf16 training (experimental)": "完全bf16训练(实验性)",
"Full fp16 training (experimental)": "完全fp16训练(实验性)",
"Generate caption files for the grouped images based on their folder name": "根据其文件夹名称为分组图片生成标题文件",
"Generate caption metadata": "生成标题元数据",
"Generate Captions": "生成标题",
"Generate image buckets metadata": "生成图像存储桶元数据",
"GIT Captioning": "GIT字幕",
"Gradient accumulate steps": "渐变积累步骤",
"Gradient checkpointing": "渐变检查点",
"Group size": "Group大小",
"Guidelines for SDXL Finetuning": "SDXL微调指南",
"Guides": "指南",
"How to Create a LoRA Part 1: Dataset Preparation:": "如何创建LoRA第1部分:数据集准备:",
"If unchecked, tensorboard will be used as the default for logging.": "如果未选中,tensorboard将用作日志记录的默认选项。",
"If you have valuable resources to add, kindly create a PR on Github.": "如果您有有价值的资源要添加,请在Github上创建一个PR。",
"Ignore Imported Tags Above Word Count": "忽略高于字数计数的导入标签",
"Image folder to caption": "要添加标题的图像文件夹",
"Image folder": "图片文件夹",
"Include images in subfolders as well": "同时包括子文件夹中的图片",
"Include Subfolders": "包括子文件夹",
"Init word": "初始化词",
"Input folder": "输入文件夹",
"Install Location": "安装位置",
"Installation": "安装",
"Instance prompt": "实例提示",
"Keep n tokens": "保留n个令牌",
"Launching the GUI on Linux and macOS": "在Linux和macOS上启动GUI",
"Launching the GUI on Windows": "在Windows上启动GUI",
"Learning rate": "学习率",
"linear": "线性",
"Linux and macOS Upgrade": "Linux和macOS升级",
"Linux and macOS": "Linux和macOS",
"Linux Pre-requirements": "Linux预先要求",
"Load": "加载",
"Loading...": "载入中...",
"Local docker build": "本地Docker构建",
"Logging folder": "日志文件夹",
"LoRA model \"A\"": "LoRA模型“A”",
"LoRA model \"B\"": "LoRA模型“B”",
"LoRA model \"C\"": "LoRA模型“C”",
"LoRA model \"D\"": "LoRA模型“D”",
"LoRA model": "LoRA模型",
"LoRA network weights": "LoRA网络权重",
"LoRA": "LoRA",
"LR number of cycles": "学习率周期数",
"LR power": "学习率功率",
"LR scheduler extra arguments": "学习率调度器额外参数",
"LR Scheduler": "学习率调度器",
"LR warmup (% of steps)": "学习率预热(%的步数)",
"LyCORIS model": "LyCORIS模型",
"Macos is not great at the moment.": "目前MacOS的支持不是很好。",
"Manual Captioning": "手动字幕",
"Manual installation": "手动安装",
"Max bucket resolution": "最大存储桶分辨率",
"Max length": "最大长度",
"Max num workers for DataLoader": "DataLoader的最大工作人员数量",
"Max resolution": "最大分辨率",
"Max Timestep": "最大时间步",
"Max Token Length": "最大令牌长度",
"Max train epoch": "每批数量",
"Max train steps": "最大训练步数",
"Maximum bucket resolution": "最大数据容器存储桶分辨率",
"Maximum size in pixel a bucket can be (>= 64)": "可以达到的最大像素尺寸(>= 64)",
"Memory efficient attention": "内存高效注意力",
"Merge LoRA (SVD)": "合并LoRA(SVD)",
"Merge LoRA": "合并LoRA",
"Merge LyCORIS": "合并LyCORIS",
"Merge model": "合并模型",
"Merge precision": "合并精度",
"Merge ratio model A": "模型A合并比例",
"Merge ratio model B": "模型B合并比例",
"Merge ratio model C": "模型C合并比例",
"Merge ratio model D": "模型D合并比例",
"Min bucket resolution": "最小数据容器存储桶分辨率",
"Min length": "最小长度",
"Min SNR gamma": "最小SNR伽玛",
"Min Timestep": "最小时间步",
"Minimum bucket resolution": "最小数据容器存储桶分辨率",
"Minimum size in pixel a bucket can be": "数据容器存储桶的最小像素大小",
"Mixed precision": "混合精度",
"Mnimum difference": "最小差异",
"Mode": "模式",
"Model A merge ratio (eg: 0.5 mean 50%)": "模型A合并比率(例如:0.5意味着50%)",
"Model B merge ratio (eg: 0.5 mean 50%)": "模型B合并比率(例如:0.5意味着50%)",
"Model C merge ratio (eg: 0.5 mean 50%)": "模型C合并比率(例如:0.5意味着50%)",
"Model D merge ratio (eg: 0.5 mean 50%)": "模型D合并比率(例如:0.5意味着50%)",
"Model output folder": "模型输出文件夹",
"Model output name": "模型输出文件夹",
"Model Quick Pick": "快速选择模型",
"Module dropout": "模块丢失",
"Network Dimension (Rank)": "网络维度(秩)",
"Network Dimension": "网络维度",
"Network dropout": "网络丢失",
"No module called tkinter": "没有名为tkinter的模块",
"No token padding": "无令牌填充",
"Noise offset type": "噪声偏移类型",
"Noise offset": "噪声偏移",
"Number of beams": "beam的数量 - 由于同时考虑多个解决方案,beam搜索能够减少错误累积,从而提高最终解决方案的质量。",
"Number of CPU threads per core": "每个核心的CPU线程数",
"Number of images to group together": "要一起分组的图像数量",
"Number of updates steps to accumulate before performing a backward/update pass": "执行反向/更新传递之前需要积累的更新步骤数",
"object template": "对象模板",
"Only for SD v2 models. By scaling the loss according to the time step, the weights of global noise prediction and local noise prediction become the same, and the improvement of details may be expected.": "仅适用于SD v2模型。通过根据时间步长缩放损失,全局噪声预测和局部噪声预测的权重变得相同,可以期望细节的改进。",
"Open": "打开",
"Optimizer extra arguments": "优化器额外参数",
"Optimizer": "优化器",
"Optional: CUDNN 8.6": "可选:CUDNN 8.6",
"Original": "原始",
"Output folder": "输出文件夹",
"Output": "输出",
"Overwrite existing captions in folder": "覆盖文件夹中现有的标题",
"Page File Limit": "页面文件限制",
"PagedAdamW8bit": "分页AdamW8位",
"PagedLion8bit": "分页Lion8位",
"Parameters": "参数",
"path for the checkpoint file to save...": "保存检查点文件的路径...",
"path for the LoRA file to save...": "保存LoRA文件的路径...",
"path for the new LoRA file to save...": "保存新LoRA文件的路径...",
"path to \"last-state\" state folder to resume from": "从中恢复的“最后状态”状态文件夹的路径",
"Path to the DyLoRA model to extract from": "要从中提取的DyLoRA模型的路径",
"Path to the finetuned model to extract": "要提取的微调模型的路径",
"Path to the LoRA A model": "LoRA A模型的路径",
"Path to the LoRA B model": "LoRA B模型的路径",
"Path to the LoRA C model": "LoRA C模型的路径",
"Path to the LoRA D model": "LoRA D模型的路径",
"Path to the LoRA model to verify": "要验证的LoRA模型的路径",
"Path to the LoRA to resize": "要调整大小的LoRA的路径",
"Path to the LyCORIS model": "LyCORIS模型的路径",
"path where to save the extracted LoRA model...": "保存提取出的LoRA模型的路径...",
"Persistent data loader": "持久数据加载器",
"polynomial": "多项式",
"Postfix to add to BLIP caption": "添加到BLIP标题的后缀",
"Postfix to add to caption": "添加到标题的后缀",
"Pre-built Runpod template": "预构建的Runpod模板",
"Prefix to add to BLIP caption": "添加到BLIP标题的前缀",
"Prefix to add to caption": "添加到标题的前缀",
"Prepare training data": "准备训练数据",
"Print training command": "打印训练命令",
"Prior loss weight": "先验损失权重",
"Prodigy": "神童",
"Provide a SD file path IF you want to merge it with LoRA files": "如果您想将其与LoRA文件合并,请提供SD文件路径",
"Provide a SD file path that you want to merge with the LyCORIS file": "提供您想与LyCORIS文件合并的SD文件路径",
"PyTorch 2 seems to use slightly less GPU memory than PyTorch 1.": "PyTorch 2似乎使用的GPU内存比PyTorch 1略少。",
"Quick Tags": "快速标签",
"Random crop instead of center crop": "随机裁剪而非中心裁剪",
"Rank dropout": "排名丢失",
"Rate of caption dropout": "标题丢失率",
"Recommended value of 0.5 when used": "使用时推荐值为0.5",
"Recommended value of 5 when used": "使用时推荐值为5",
"recommended values are 0.05 - 0.15": "推荐值为0.05 - 0.15",
"Regularisation folder": "正则化文件夹",
"Regularisation images": "正则化图像",
"Repeats": "重复",
"Replacement text": "替换文本",
"Required bitsandbytes >= 0.36.0": "所需的bitsandbytes >= 0.36.0",
"Resize LoRA": "调整LoRA尺寸",
"Resize model": "调整模型大小",
"Resolution (width,height)": "分辨率(宽度,高度)",
"Resource Contributions": "资源贡献",
"Resume from saved training state": "从保存的训练状态恢复",
"Resume TI training": "恢复TI训练",
"Runpod": "Runpod",
"Sample every n epochs": "每n个时代采样一次",
"Sample every n steps": "每n步采样一次",
"Sample image generation during training": "培训期间的样本图像生成",
"Sample prompts": "样本提示",
"Sample sampler": "样本采样器",
"Samples": "样例",
"Save dtype": "保存数据类型",
"Save every N epochs": "每N个epochs保存",
"Save every N steps": "每N步保存一次",
"Save last N steps state": "保存最后N步状态",
"Save last N steps": "保存最后N步",
"Save precision": "保存精度",
"Save to": "保存到",
"Save trained model as": "保存训练模型为",
"Save training state": "保存训练状态",
"Save": "保存",
"Scale v prediction loss": "缩放v预测损失",
"Scale weight norms": "缩放权重规范",
"SD Model": "SD模型",
"SDXL model": "SDXL模型",
"Set the Max resolution to at least 1024x1024, as this is the standard resolution for SDXL. ": "将 最大分辨率 设置为至少 1024x1024,因为这是 SDXL 的标准分辨率。",
"Set the Max resolution to at least 1024x1024, as this is the standard resolution for SDXL.": "将最大分辨率设置为至少1024x1024,因为这是SDXL的标准分辨率。",
"Setup": "设置",
"SGDNesterov": "SGD Nesterov",
"SGDNesterov8bit": "SGD Nesterov 8位",
"Shuffle caption": "随机标题",
"Source LoRA": "源LoRA",
"Source model type": "源模型类型",
"Source model": "模型来源",
"Sparsity": "稀疏性",
"Stable Diffusion base model": "稳定扩散基础模型",
"Stable Diffusion original model: ckpt or safetensors file": "稳定扩散原始模型:ckpt或safetensors文件",
"Start tensorboard": "开始 tensorboard",
"Start training": "开始训练",
"Starting GUI Service": "启动GUI服务",
"Stop tensorboard": "结束 tensorboard",
"Stop text encoder training": "停止文本编码器训练",
"Stop training": "停止训练",
"style template": "样式模板",
"sv_fro": "sv_fro",
"Target model folder": "目标模型文件夹",
"Target model name": "目标模型名称",
"Target model precision": "目标模型精度",
"Target model type": "目标模型类型",
"Template": "模板",
"Text Encoder learning rate": "文本编码器学习率",
"The fine-tuning can be done with 24GB GPU memory with the batch size of 1.": "微调可以在具有1个批量大小的24GB GPU内存上完成。",
"The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.": "该GUI允许您设置训练参数,并生成并运行训练模型所需的CLI命令。",
"This guide is a resource compilation to facilitate the development of robust LoRA models.": "该指南是一个资源汇编,以促进强大LoRA模型的开发。",
"This section provide Dreambooth tools to help setup your dataset…": "这些选择帮助设置自己的数据集",
"This section provide LoRA tools to help setup your dataset…": "本节提供LoRA工具以帮助您设置数据集...",
"This section provide Various Finetuning guides and information…": "本节提供各种微调指南和信息",
"This utility allows quick captioning and tagging of images.": "此工具允许快速地为图像添加标题和标签。",
"This utility allows you to create simple caption files for each image in a folder.": "此工具允许您为文件夹中的每个图像创建简单的标题文件。",
"This utility can be used to convert from one stable diffusion model format to another.": "该工具可用于将一个稳定扩散模型格式转换为另一种格式",
"This utility can extract a DyLoRA network from a finetuned model.": "该工具可以从微调模型中提取DyLoRA网络。",
"This utility can extract a LoRA network from a finetuned model.": "该工具可以从微调模型中提取LoRA网络。",
"This utility can extract a LyCORIS LoCon network from a finetuned model.": "该工具可以从微调模型中提取LyCORIS LoCon网络。",
"This utility can merge a LyCORIS model into a SD checkpoint.": "该工具可以将LyCORIS模型合并到SD检查点中。",
"This utility can merge two LoRA networks together into a new LoRA.": "该工具可以将两个LoRA网络合并为一个新的LoRA。",
"This utility can merge up to 4 LoRA together or alternatively merge up to 4 LoRA into a SD checkpoint.": "该工具可以合并多达4个LoRA,或者选择性地将多达4个LoRA合并到SD检查点中。",
"This utility can resize a LoRA.": "该工具可以调整LoRA的大小。",
"This utility can verify a LoRA network to make sure it is properly trained.": "该工具可以验证LoRA网络以确保其得到适当的训练。",
"This utility uses BLIP to caption files for each image in a folder.": "此工具使用BLIP为文件夹中的每张图像添加标题。",
"This utility will create the necessary folder structure for the training images and optional regularization images needed for the kohys_ss Dreambooth/LoRA method to function correctly.": "为训练文件创建文件夹",
"This utility will ensure that each concept folder in the dataset folder is used equally during the training process of the dreambooth machine learning model, regardless of the number of images in each folder. It will do this by renaming the concept folders to indicate the number of times they should be repeated during training.": "此工具将确保在训练dreambooth机器学习模型的过程中,数据集文件夹中的每个概念文件夹都将被平等地使用,无论每个文件夹中有多少图像。它将通过重命名概念文件夹来指示在训练期间应重复使用它们的次数。",
"This utility will group images in a folder based on their aspect ratio.": "此工具将根据它们的纵横比将文件夹中的图像分组。",
"This utility will use GIT to caption files for each images in a folder.": "此工具将使用GIT为文件夹中的每张图像添加标题。",
"This utility will use WD14 to caption files for each images in a folder.": "此工具将使用WD14为文件夹中的每张图像添加标题。",
"Tips for SDXL training": "SDXL培训提示",
"Token string": "令牌字符串",
"Train a custom model using kohya finetune python code": "使用kohya微调Python代码训练个性化模型",
"Train a custom model using kohya train network LoRA python code…": "使用kohya训练网络LoRA Python代码训练自定义模型",
"Train batch size": "训练批次大小",
"Train Network": "训练网络",
"Train text encoder": "训练文本编码器",
"Train U-Net only.": "仅训练 U-Net",
"Training config folder": "训练配置文件夹",
"Training Image folder": "训练图像文件夹",
"Training images": "训练图像",
"Training steps per concept per epoch": "每个周期每个概念的训练步骤",
"Training": "训练",
"Troubleshooting": "故障排除",
"Tutorials": "教程",
"Unet learning rate": "Unet学习率",
"UNet linear projection": "UNet 线性投影",
"Upgrading": "升级",
"Use --cache_text_encoder_outputs option and caching latents.": "使用 --cache_text_encoder_outputs 选项和缓存潜在变量。",
"Use Adafactor optimizer. RMSprop 8bit or Adagrad 8bit may work. AdamW 8bit doesn’t seem to work.": "使用 Adafactor 优化器。 RMSprop 8bit 或 Adagrad 8bit 可能有效。 AdamW 8bit 好像不行。",
"Use beam search": "使用beam搜索-启发式图搜索算法,beam搜索可以用来生成更准确和自然的文本。",
"Use gradient checkpointing.": "使用梯度检查点。",
"Use latent files": "使用潜在文件",
"Use sparse biais": "使用稀疏偏见",
"Users can obtain and/or generate an api key in the their user settings on the website: https://wandb.ai/login": "用户可以在以下网站的用户设置中获取和/或生成API密钥:https://wandb.ai/login",
"V Pred like loss": "v预测损失",
"Values greater than 0 will make the model more img2img focussed. 0 = image only": "大于0的值会使模型更加聚焦在 img2img 上。0 = 仅图像。这应该表示时间步参数,大于0会使模型更加侧重 img2img 生成,0则仅关注图像生成。",
"Values lower than 1000 will make the model more img2img focussed. 1000 = noise only": "小于1000的值会使模型更加聚焦在 img2img 上。1000 = 仅噪声。这也应该表示时间步参数,小于1000会使模型更加侧重 img2img 生成,1000则仅从噪声生成图像。",
"Vectors": "向量",
"Verbose": "详细输出",
"WANDB API Key": "WANDB API 密钥。",
"WARNING! The use of this utility on the wrong folder can lead to unexpected folder renaming!!!": "警告!在错误的文件夹上使用此工具可能导致意外的文件夹重命名!",
"WD14 Captioning": "WD14字幕",
"Windows Upgrade": "Windows升级",
"Train a custom model using kohya dreambooth python code…": "使用kohya的dreambooth Python代码训练个性化模型",
"Training comment": "训练注释",
"Train a TI using kohya textual inversion python code…": "使用kohya的文本反转Python代码训练TI模型",
"Train a custom model using kohya finetune python code…": "使用kohya的微调Python代码训练个性化模型"
}