Missing config.json file for Llama-3.2-1B-Instruct-QLORA_INT4_EO8
Hello,
I’ve been trying to use the meta-llama/Llama-3.2-1B-Instruct-QLORA_INT4_EO8 model, but I noticed that the config.json file is missing from the repository. Could you please confirm whether this file is needed for proper use of the model, or if we should be using a different configuration file or method?
If the config.json is not required, could you advise on what alternative setup or file should be used to initialize the model correctly?
Thank you for your assistance!
The weights were uploaded in their "original" (Meta) format, and they need to be converted to the Hugging Face format before they can be used with the Transformers pipelines. I'm sure the reformatted version will be uploaded soon.
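As a rough sketch, the Transformers repository ships a conversion script for Llama checkpoints in Meta's original format. Something along these lines may work for standard (non-quantized) checkpoints — though the QLoRA INT4 variant here may not be supported by it, and the paths and flags below are illustrative, so check the script's `--help` for your Transformers version:

```shell
# Sketch only: convert a Meta-format Llama checkpoint to the Hugging Face
# layout (which is what produces the config.json that from_pretrained
# looks for). Paths are placeholders; flag support varies by version.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/meta/checkpoint \
    --output_dir /path/to/hf/output \
    --llama_version 3.2
```

After a successful conversion, the output directory should contain a `config.json` alongside the weight files, and can be passed to `from_pretrained` or the pipelines directly.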
I've downloaded the model from Meta's official download portal.
But when I try loading that checkpoint, I get the same error: "OSError: /Users//.llama/checkpoints/Llama3.2-1B-Instruct:int4-qlora-eo8 does not appear to have a file named config.json. Checkout 'https://huggingface.co//Users//.llama/checkpoints/Llama3.2-1B-Instruct:int4-qlora-eo8/tree/None' for available files."