Estimation of hardware requirements to finetune Pixtral models
Hi,
My current GPU (a single RTX 4090, 24 GB) can only handle NF4 precision when it comes to fine-tuning. I am willing to upgrade, but I am not sure exactly how much memory a successful fine-tuning run on a Pixtral model requires.
Also, I will be serving the tuned model via vLLM and would therefore also be using the conversion script to get the consolidated model ready.
Could anyone share their findings or point me in the right direction? It would save me time and energy.
I thank you in advance.
Best regards.
Hi,
Some guidance here: https://huggingface.co/docs/transformers/en/perf_train_gpu_one. As a rule of thumb, a full fine-tune with the AdamW optimizer needs roughly 16-18 bytes per parameter (weights, gradients and optimizer states), so the number of parameters in billions times 16-18 gives the required memory in GB. For Pixtral 12B that means roughly 12 * 16 = 192 GB.
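A minimal sketch of that back-of-the-envelope calculation in Python (the 16-18 bytes/parameter figure is only the rule of thumb from the guide above, not an exact measurement, and actual usage also depends on sequence length, batch size and activation checkpointing):

```python
def estimate_full_finetune_gb(num_params_billion: float, bytes_per_param: float = 16.0) -> float:
    """Rough GPU memory estimate (GB) for a full fine-tune with AdamW.

    ~16-18 bytes/param covers weights, gradients and optimizer states;
    1e9 params * N bytes is approximately N GB.
    """
    return num_params_billion * bytes_per_param


print(estimate_full_finetune_gb(12))      # Pixtral 12B, lower bound -> ~192 GB
print(estimate_full_finetune_gb(12, 18))  # upper bound -> ~216 GB
```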
One can significantly reduce this with techniques like QLoRA: https://huggingface.co/blog/4bit-transformers-bitsandbytes.
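If you go the QLoRA route, a minimal sketch with transformers + peft + bitsandbytes could look like the following. The model id, LoRA hyperparameters and target modules are assumptions here, so adjust them to your checkpoint and setup:

```python
# Sketch: load Pixtral (HF format) in 4-bit NF4 and attach LoRA adapters.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistral-community/pixtral-12b"  # assumption: HF-transformers format checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Prepare the quantized base model and wrap it with LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # assumption: target the language-model attention projections
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```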
For converting to the vLLM format, see this thread: https://huggingface.co/mistral-community/pixtral-12b/discussions/4.
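Once the adapters are merged and the checkpoint is converted per that thread, serving with vLLM's offline Python API can be as simple as the sketch below; the model path is a placeholder and the mistral tokenizer mode is an assumption that applies to the consolidated (Mistral-format) checkpoint:

```python
# Sketch: load the consolidated checkpoint with vLLM and run a quick text-only smoke test.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/consolidated-pixtral",  # placeholder path to your converted model
    tokenizer_mode="mistral",               # assumption: consolidated checkpoints use the mistral tokenizer
)
outputs = llm.generate(["Hello, Pixtral!"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```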