Fix deprecated float16/fp16 variant loading through new `variant` API.

#4
by patrickvonplaten - opened

Hey CompVis πŸ‘‹,

Your model repository seems to contain an fp16 branch for loading the model in float16 precision. Loading fp16 versions from a branch instead of the main branch is deprecated and will eventually be forbidden. Instead, we strongly recommend saving fp16 versions of the model as `.fp16.` variant files directly on the `main` branch, which this PR enables. This PR makes sure that your model repository lets users correctly download float16-precision model weights by adding fp16 model weights in both safetensors and PyTorch bin format:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/ldm-super-resolution-4x-openimages", torch_dtype=torch.float16, variant="fp16"
)
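
For reference, here is a minimal sketch of how such `.fp16.` variant files can be produced with `save_pretrained`; the output directory name is only an example, and this assumes a diffusers version whose `save_pretrained` accepts a `variant` argument:

import torch
from diffusers import DiffusionPipeline

# Load the pipeline in float16 precision.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/ldm-super-resolution-4x-openimages", torch_dtype=torch.float16
)

# Saving with variant="fp16" writes the weights as *.fp16.* files
# (e.g. diffusion_pytorch_model.fp16.safetensors), which can then live
# on the main branch instead of a separate fp16 branch.
pipe.save_pretrained("ldm-super-resolution-4x-openimages-fp16", variant="fp16")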

For more information, please have a look at: https://huggingface.co/docs/diffusers/using-diffusers/loading#checkpoint-variants.
We made sure that you can safely merge this pull request.

Best, the 🧨 Diffusers team.

patrickvonplaten changed pull request status to merged