Efficient training with HF or LLaVA format?

#16
by raja22 - opened

Hi,

Thanks for open-sourcing this amazing dataset with 24M samples!

When I try to load the full dataset at once using load_dataset, it consumes a lot of CPU memory, probably because of the PIL.Image objects. We also tried streaming a 2-4M-sample subset from the .parquet files as a POC (on a single 8xH100 node), but we are still facing issues.
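For reference, roughly what we are attempting (a minimal sketch; the repo id is a placeholder and the collate logic is schema-dependent):

```python
# Minimal sketch of a streaming setup; "HuggingFaceM4/the-dataset" is a
# placeholder repo id -- substitute the real one.
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("HuggingFaceM4/the-dataset", split="train", streaming=True)

# With streaming=True we get an IterableDataset: samples (and their PIL
# images) are decoded lazily as they are consumed, so resident CPU memory
# stays bounded instead of holding millions of decoded rows at once.
loader = DataLoader(
    ds,
    batch_size=8,
    num_workers=4,                   # workers split the parquet shards
    collate_fn=lambda batch: batch,  # replace with processor/tokenizer logic
)

for batch in loader:
    break  # forward/backward pass would go here
```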

  • Did you also use streaming mode when training with this dataset? Could you please share the data-loading code / dataloader setup you used, or point to any other open-source training scripts you are aware of?
  • Or do you recommend saving the images to disk and converting to a LLaVA-style format (see the sketch after this list)? Which would be better in terms of training performance?
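For the second option, this is the kind of conversion I have in mind (a rough sketch; the repo id, the "images"/"texts" column names, and the LLaVA-style field layout are assumptions to adapt):

```python
# Rough sketch: materialize a streamed slice to disk in a LLaVA-style layout
# (an images/ folder plus a JSON list with "image" and "conversations").
# The repo id and the "images"/"texts" columns are assumptions; adapt them
# to the dataset's actual schema.
import json
import os

from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the-dataset", split="train", streaming=True)

os.makedirs("data/images", exist_ok=True)
records = []
for i, sample in enumerate(ds.take(1000)):  # small slice for a POC
    img_path = f"data/images/{i:08d}.jpg"
    sample["images"][0].convert("RGB").save(img_path)
    turn = sample["texts"][0]  # assumed: {"user": ..., "assistant": ...}
    records.append({
        "id": str(i),
        "image": img_path,
        "conversations": [
            {"from": "human", "value": "<image>\n" + turn["user"]},
            {"from": "gpt", "value": turn["assistant"]},
        ],
    })

with open("data/llava_format.json", "w") as f:
    json.dump(records, f)
```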

Thanks in advance!
cc @shubhamagarwal92

HuggingFaceM4 org

Hi! We trained using our nanoVLM project (https://github.com/huggingface/nanoVLM) and managed to get 8 nodes * 8xH100 working properly with it. Totally agree that it's a bit painful, although in that codebase the data loading is already optimized and our data-loading times are very low (30 ms). We're working on getting it to work while streaming directly from the Hub, which should reduce some of these pains at the cost of higher network usage.
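If you want to check data-loading overhead on your own setup against the 30 ms figure above, a generic timing helper (not the nanoVLM code, just a sketch) could look like:

```python
# Generic sketch for measuring per-batch data-loading latency; it times only
# the fetch from the DataLoader, excluding the model's forward/backward pass.
import time

from torch.utils.data import DataLoader

def median_fetch_ms(loader: DataLoader, n_batches: int = 50) -> float:
    """Median milliseconds spent purely on fetching a batch from the loader."""
    it = iter(loader)
    times = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        next(it)
        times.append(time.perf_counter() - t0)
    return 1000 * sorted(times)[len(times) // 2]
```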
