The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
/workspace/thumbs_up/train_dreambooth_lora_sdxl.py:122: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
  def resize_image(image, size, interpolation=Image.BILINEAR):
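The DeprecationWarning above is raised by the default argument of the resize helper at line 122. A minimal sketch of a version-tolerant fix, assuming the helper simply wraps PIL's resize (only its signature is visible in the traceback, so the body below is an assumption): resolve the resampling constant from Image.Resampling where it exists and fall back to the old attribute on older Pillow releases.

```python
from PIL import Image

# Pillow >= 9.1 exposes resampling filters under Image.Resampling;
# older releases only provide the module-level constants that trigger
# the DeprecationWarning shown in the log above.
try:
    _BILINEAR = Image.Resampling.BILINEAR
except AttributeError:  # Pillow < 9.1
    _BILINEAR = Image.BILINEAR


def resize_image(image, size, interpolation=_BILINEAR):
    # Assumed body: the traceback only shows the signature at line 122.
    return image.resize(size, resample=interpolation)
```

Resolving the constant once at import time keeps the same default behaviour while silencing the warning on newer Pillow versions.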
10/13/2023 10:53:26 - INFO - __main__ - Current working directory: /workspace/thumbs_up
10/13/2023 10:53:26 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: fp16
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'dynamic_thresholding_ratio', 'clip_sample_range', 'variance_type', 'thresholding'} was not found in config. Values will be initialized to default values.
{'dropout', 'attention_type'} was not found in config. Values will be initialized to default values.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Some weights of ViTModel were not initialized from the model checkpoint at facebook/dino-vits16 and are newly initialized: ['pooler.dense.bias', 'pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
wandb: Currently logged in as: berglund. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.15.12
wandb: Run data is saved locally in /workspace/thumbs_up/wandb/run-20231013_105346-9l8ww0bd
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run dandy-wildflower-54
wandb: ⭐️ View project at https://wandb.ai/berglund/dreambooth-lora-sd-xl
wandb: 🚀 View run at https://wandb.ai/berglund/dreambooth-lora-sd-xl/runs/9l8ww0bd
10/13/2023 10:53:47 - INFO - __main__ - ***** Running training *****
10/13/2023 10:53:47 - INFO - __main__ -   Num examples = 21
10/13/2023 10:53:47 - INFO - __main__ -   Num batches each epoch = 11
10/13/2023 10:53:47 - INFO - __main__ -   Num Epochs = 55
10/13/2023 10:53:47 - INFO - __main__ -   Instantaneous batch size per device = 2
10/13/2023 10:53:47 - INFO - __main__ -   Total train batch size (w. parallel, distributed & accumulation) = 2
10/13/2023 10:53:47 - INFO - __main__ -   Gradient Accumulation steps = 1
10/13/2023 10:53:47 - INFO - __main__ -   Total optimization steps = 600
Steps:   0%|          | 0/600 [00:00<?, ?it/s]
... thumbs up", "a photo of Barack Obama wearing a vest showing thumbs up", "a photo of a black man at the beach showing thumbs up".
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
... thumbs up", "a photo of Barack Obama wearing a vest showing thumbs up", "a photo of a black man at the beach showing thumbs up".
Loading pipeline components...:   0%|          | 0/7 [00:00
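For reference, the schedule printed above is internally consistent: 21 examples at a per-device batch size of 2 give ceil(21 / 2) = 11 batches per epoch, and with no gradient accumulation it takes ceil(600 / 11) = 55 epochs to reach 600 optimization steps. A minimal sketch of that bookkeeping, mirroring how the stock diffusers training scripts derive the epoch count from the step budget (variable names here are illustrative, not taken from this script):

```python
import math

num_examples = 21       # "Num examples" in the log
batch_size = 2          # "Instantaneous batch size per device"
grad_accum_steps = 1    # "Gradient Accumulation steps"
max_train_steps = 600   # "Total optimization steps"

# Batches per epoch with a non-dropping dataloader: ceil(21 / 2) = 11.
batches_per_epoch = math.ceil(num_examples / batch_size)

# Optimizer updates per epoch, accounting for gradient accumulation.
updates_per_epoch = math.ceil(batches_per_epoch / grad_accum_steps)

# Epochs needed to reach the requested step budget: ceil(600 / 11) = 55,
# matching "Num Epochs = 55" in the log.
num_epochs = math.ceil(max_train_steps / updates_per_epoch)

print(batches_per_epoch, num_epochs)  # -> 11 55
```

Assuming this script follows the stock diffusers logic, "Num Epochs = 55" is derived from the requested number of optimization steps rather than passed explicitly, so changing the step budget or the batch size will shift the epoch count accordingly.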