user | created_at | body | issue_number | __index_level_0__ |
---|---|---|---|---|
avishaiElmakies
| 2025-07-10T14:12:10 |
> would you like to edit the function here as a "suggestion" and then i can commit it here directly and you will have a commit here
Unfortunately, this change is bigger than just a function change, since you need to propagate the relevant data through the trainer and make sure it is being used during every use of this function. While it is not hard, it does take some time to make sure everything is OK.
| 3,072 | 11,805 |
kashif
| 2025-07-11T06:55:30 |
@avishaiElmakies I have now fixed the log-prob calculation so that it takes the visual context into account
| 3,072 | 11,806 |
avishaiElmakies
| 2025-07-11T08:19:37 |
@kashif thanks! Looks good!
Is there a plan to add other modalities to GRPO as well?
| 3,072 | 11,807 |
sergiopaniego
| 2025-07-11T08:29:35 |
> @kashif thanks! Looks good! Is there a plan to add other modalities to GRPO as well?
Sure!! But we can keep only vision for this one and then add the other modalities in another one (even the [one](https://github.com/huggingface/trl/pull/3460) already created). This way, we can merge sooner!
| 3,072 | 11,808 |
avishaiElmakies
| 2025-07-11T08:32:04 |
Ok, thanks!
| 3,072 | 11,809 |
sergiopaniego
| 2025-07-11T16:00:16 |
A working example with the latest iteration for the review :)
[PR recipe](https://github.com/huggingface/cookbook/pull/312) in Cookbook || To see the notebook directly [here](https://app.reviewnb.com/huggingface/cookbook/pull/312/)
Fine tuned model in the notebook: [sergiopaniego/Qwen2.5-VL-3B-Instruct-Thinking](https://huggingface.co/sergiopaniego/Qwen2.5-VL-3B-Instruct-Thinking) || [Tensorboard](https://huggingface.co/sergiopaniego/Qwen2.5-VL-3B-Instruct-Thinking/tensorboard)
| 3,072 | 11,810 |
HuggingFaceDocBuilderDev
| 2025-07-14T17:59:10 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3072). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,072 | 11,811 |
qgallouedec
| 2025-07-19T07:05:30 |
FYI I'm refactoring a bit to simplify the whole approach. I'll push my updated version tomorrow on this branch
| 3,072 | 11,812 |
Revist
| 2025-07-22T14:23:54 |
Hi guys, thank you for the great work!
I am trying to use this PR with "llava-hf/llava-v1.6-mistral-7b-hf", however I get the error "jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'text'". Is this caused by a wrong dataset format or a bug in the PR?
```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml examples/scripts/grpo_vlm.py --model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf --output_dir grpo-Qwen2.5-VL-3B-Instruct_test --learning_rate 1e-5 --gradient_checkpointing --torch_dtype bfloat16 --max_prompt_length 2048 --max_completion_length 1024 --use_vllm --vllm_mode colocate --use_peft --lora_target_modules "q_proj", "v_proj" --log_completions
```
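For reference, a hedged guess at the kind of conversational example the VLM chat templates expect (the field names here are illustrative assumptions, not necessarily what `grpo_vlm.py` produces):
```python
from datasets import Dataset
from PIL import Image

# Content entries are typed dicts with a "text" field for the text parts,
# which appears to be the attribute the jinja template looks up.
example = {
    "prompt": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "Describe the image."},
            ],
        }
    ],
    "image": Image.new("RGB", (64, 64)),  # stand-in image
}
dataset = Dataset.from_list([example])
```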
| 3,072 | 11,813 |
EauDeData
| 2025-07-23T08:36:27 |
Thank you for this incredibly important merge. I think the whole DL community is happy today :)
From the initial PR I thought we did not have documentation, yet it passed the documentation check on merge; where can we find a clearer example of how to use it? I cannot find the updated docs...
Again, sincere thanks for your work.
| 3,072 | 11,814 |
kashif
| 2025-07-23T08:49:40 |
thanks @EauDeData you can find the docs here: https://huggingface.co/docs/trl/main/en/grpo_trainer#vision-language-model-vlm-training
| 3,072 | 11,815 |
ghubnerr
| 2025-07-26T21:07:18 |
Hi everyone! Sorry to bring this back - I noticed that the VLM support has a very strict required format for the element spec, where it expects a dict containing the `"prompt"` and `"image"` keys. This removes the user's control over where to insert the `<start_of_image>` tag, for example. I can tell that this decision was made because the `maybe_apply_chat_template` function returns a string, processed by the `processor_class`'s `apply_chat_template` method (with `tokenize=False`), making images incompatible with that format.
With the `AutoProcessor`, one can actually return tensors using the `apply_chat_template` method, which lets you control the image placement better. An example with `Gemma3Processor`:
```py
prompt = "This <img> is the image."
```
```py
messages = [
{"role": "system", "content": [{"type": "text", "text": system_prompt}]},
{"role": "user", "content": [
{"type": "text", "text": prompt}
{"type": "image", "image": Image.open(BytesIO(image_bytes)),
]},
```
Which could be called like this:
```py
formatted_mm_tokens = processor.apply_chat_template(
conversation=messages,
add_generation_prompt=True,
do_pan_and_scan=True,
tokenize=True, # <-- Do tokenize
)
```
If this format is provided, this could potentially replace these [lines](https://github.com/huggingface/trl/blob/eee9ec94efbbadb3652aa428827b052b58f36ac7/trl/trainer/grpo_trainer.py#L1321C9-L1330C10) inside an if statement.
The `AutoProcessor` automatically identifies an image in the content and creates a multi-modal token array. It would be really great to have this sort of control. I'd be happy to work on this later -- I'm currently in the middle of an internship, but will be available soon.
| 3,072 | 11,816 |
qgallouedec
| 2025-07-26T21:29:39 |
Hi, thanks for reporting, contributions are very welcome to fix this
| 3,072 | 11,817 |
MohamedAliRashad
| 2025-07-30T20:59:07 |
Is this PR in a working state right now, or did it have breaking changes?
| 3,072 | 11,818 |
HuggingFaceDocBuilderDev
| 2025-03-13T12:37:40 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3070). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,070 | 11,819 |
burtenshaw
| 2025-03-13T12:41:59 |
@qgallouedec Amazing work, thanks! A few questions:
- Is padding like this necessary for all trainers?
- do we need to patch other token ids like this?
```python
processor.pad_token_id = processor.tokenizer.pad_token_id
processor.bos_token_id = processor.tokenizer.bos_token_id
processor.eos_token_id = processor.tokenizer.eos_token_id
```
- did you see how unsloth is dealing with missing token id's: https://huggingface.co/unsloth/gemma-3-4b-it/commit/90fe72f525abc73ff7283c23e6ceccea5d4273bb . Do you think we should open a PR for changes on the hub repo?
| 3,070 | 11,820 |
qgallouedec
| 2025-03-13T12:46:12 |
> Is padding like this necessary for all trainers?
Usually tokenizers have a `pad` method; here, the Gemma processor doesn't. But maybe we shouldn't use the processor and should instead load the tokenizer directly? Checking.
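For illustration, a minimal sketch of that fallback (the checkpoint name and attribute access are assumptions):
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")  # hypothetical checkpoint
tokenizer = processor.tokenizer  # the inner tokenizer does expose `pad`

batch = tokenizer.pad(
    [{"input_ids": [1, 2, 3]}, {"input_ids": [4, 5]}],
    padding=True,
    return_tensors="pt",
)  # returns padded input_ids plus an attention_mask
```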
| 3,070 | 11,821 |
NanoCode012
| 2025-03-13T12:33:06 |
Thanks for the approval, @kashif . Would it be possible to trigger the workflow as well?
| 3,069 | 11,822 |
HuggingFaceDocBuilderDev
| 2025-03-13T12:41:55 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,069 | 11,823 |
NanoCode012
| 2025-03-23T15:20:33 |
@kashif , may I ask if this PR may be merged anytime soon?
| 3,069 | 11,824 |
NanoCode012
| 2025-03-13T09:16:32 |
This issue also exists for CPOTrainer.
Repro:
```
accelerate launch examples/scripts/cpo.py --dataset_name trl-lib/ultrafeedback_binarized --model_name_or_path=gpt2 --per_device_train_batch_size 4 --max_steps 1000 --learning_rate 8e-6 --gradient_accumulation_steps 1 --logging_steps 10 --eval_steps 500 --output_dir="gpt2-aligned-cpo" --warmup_steps 150 --report_to none --bf16 --logging_first_step --no_remove_unused_columns
```
Commenting out the lines below allows it to run (as does reducing with `.mean()`):
https://github.com/huggingface/trl/blob/4871c82b0cd1caae72522182f9171ea069481250/trl/trainer/cpo_trainer.py#L838-L843
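For reference, a hedged sketch of the `.mean()` workaround (variable and metric names are assumptions mirroring the linked lines, not the exact CPOTrainer code):
```python
import torch

# Stand-in tensors for the per-token logits returned by the forward pass
# (shapes are illustrative only).
policy_chosen_logits = torch.randn(4, 128, 32000)
policy_rejected_logits = torch.randn(4, 128, 32000)

metrics = {}
# Reducing to scalars before logging, as DPO does, avoids gathering ragged
# per-token tensors across processes, which is what crashes here.
metrics["logits/chosen"] = policy_chosen_logits.detach().mean().cpu()
metrics["logits/rejected"] = policy_rejected_logits.detach().mean().cpu()
```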
| 3,068 | 11,825 |
NanoCode012
| 2025-03-13T09:20:08 |
Other trainers that return logits metrics do not have this issue:
- BCO: Uses sum logits
- DPO: Uses mean logits
- KTO: Uses sum logits
| 3,068 | 11,826 |
qgallouedec
| 2025-03-13T23:11:17 |
This may help you #3076
| 3,067 | 11,827 |
qgallouedec
| 2025-03-13T23:12:34 |
The answer is yes and no. It's still triangular but the samples can "contaminate". That's a known issue #1230
| 3,067 | 11,828 |
tchang1997
| 2025-03-13T23:14:28 |
As per [the tutorial](https://huggingface.co/docs/trl/main/en/grpo_trainer) I use `accelerate launch` and set `--num-processes [N_GPUS]` to do multi-GPU training. You may also need to play with [`deepspeed`](https://huggingface.co/docs/trl/main/en/deepspeed_integration) settings.
These can all be `pip install`-ed — note that you may need to run `accelerate config` first to set things up.
| 3,066 | 11,829 |
tjoymeed
| 2025-03-14T03:22:03 |
Does it support combining the VRAM, i.e. 40 GB x 8 = 320 GB total?
| 3,066 | 11,830 |
tchang1997
| 2025-03-14T17:24:07 |
In theory, that's completely dependent on your hardware, not these packages. `accelerate` simply lets you do distributed training across GPUs easily, and `deepspeed` has some flags you can set to make training even more memory-efficient.
| 3,066 | 11,831 |
tjoymeed
| 2025-03-14T17:31:56 |
The hardware is not the problem. What flags can I set to get the combined 40 GB x 8 = 320 GB of VRAM?
| 3,066 | 11,832 |
tchang1997
| 2025-03-14T18:01:35 |
Try `accelerate config` — it'll walk you through some prompts to answer questions about your setup, and auto-set those flags. You can rerun that at any time if you need to change things. It'll also make a `deepspeed` config which you can later edit — see [here](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed) for more info.
| 3,066 | 11,833 |
qgallouedec
| 2025-04-05T17:04:30 |
You are probably looking for DeepSpeed ZeRO-3; check our doc: https://huggingface.co/docs/trl/main/deepspeed_integration
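As a rough sketch of what ZeRO-3 gives you (an illustration using accelerate's plugin API, assuming `deepspeed` is installed; in practice `accelerate config` writes this for you):
```python
from accelerate import Accelerator, DeepSpeedPlugin

# ZeRO stage 3 shards optimizer states, gradients, and parameters across GPUs,
# which is what lets 8 x 40 GB cards behave like one larger memory pool for training.
ds_plugin = DeepSpeedPlugin(zero_stage=3, gradient_accumulation_steps=1)
accelerator = Accelerator(deepspeed_plugin=ds_plugin)
```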
| 3,066 | 11,834 |
VProv
| 2025-03-26T16:33:29 |
Relevant to this PR too
https://github.com/huggingface/trl/pull/2568#issuecomment-2755022960
| 3,065 | 11,835 |
HuggingFaceDocBuilderDev
| 2025-03-12T11:18:46 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,062 | 11,836 |
iamansinha
| 2025-03-13T04:54:05 |
Currently, I think for trl>0.14, `per_device_train_batch_size` means number of generations per device, and not the number of prompts per device.
Refer to this illustration given at this [line](https://github.com/huggingface/trl/blob/4871c82b0cd1caae72522182f9171ea069481250/trl/trainer/grpo_trainer.py#L597) in the code comments.
So, the number of prompts per device is equal to `per_device_train_batch_size / num_generations`
For your example, the minimum `per_device_train_batch_size` should be 2, so that with `num_processes=4` (4 GPUs) and `use_vllm=False`, each GPU generates 2 responses, giving 8 generations in total for one prompt sample.
And, if you want to generate all 8 generations of one prompt per GPU, then you need to set `per_device_train_batch_size` same as `num_generations`. Similarly, for all generations of `n` number of prompts per GPU, set `per_device_train_batch_size = n * num_generations`.
Hope this helps!
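To make the arithmetic concrete, a small sketch (names mirror the config fields; the numbers are the ones from this example):
```python
num_processes = 4                # GPUs
per_device_train_batch_size = 2  # generations per device, not prompts
num_generations = 8

prompts_per_device = per_device_train_batch_size / num_generations  # 0.25: each GPU holds 2 of one prompt's 8 generations
generations_per_step = num_processes * per_device_train_batch_size  # 8

# The global batch must split evenly into groups of num_generations completions.
assert generations_per_step % num_generations == 0
```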
| 3,061 | 11,837 |
YueChenkkk
| 2025-03-14T11:42:23 |
I think this constraint ensures all the generations are consumed in a single backward step. Otherwise the buffer mechanism will be way more complicated.
| 3,061 | 11,838 |
tonghuikang
| 2025-03-23T02:23:38 |
Does this mean that the size of `n_generations` (which is the G in GRPO) is limited by the number of GPUs you have?
I would like to try a huge number for `n_generations` though.
| 3,061 | 11,839 |
qgallouedec
| 2025-03-23T03:12:02 |
No, it means that it's limited by num GPUs x per-device batch size.
| 3,061 | 11,840 |
tonghuikang
| 2025-03-23T05:18:49 |
Can `per_device_train_batch_size` be a large number not limited by GPU memory size?
I set
```
num_generations=16,
per_device_train_batch_size=16,
```
the run is ok, but when I set
```
num_generations=32,
per_device_train_batch_size=32,
```
It ran out of memory in the first training step; it seems that I cannot do num_generations=32 without more GPUs.
```
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 936.00 MiB. GPU 0 has a total capacity of 139.81 GiB of which 684.00 MiB is free. Process 69 has 139.13 GiB memory in use. Of the allocated memory 137.16 GiB is allocated by PyTorch, and 649.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
| 3,061 | 11,841 |
qgallouedec
| 2025-04-01T15:45:57 |
> Can `per_device_train_batch_size` be a large number not limited by GPU memory size?
No, it can't: the larger the batch size, the more memory is needed.
| 3,061 | 11,842 |
kashif
| 2025-03-12T12:11:31 |
@benyaminjami so with this implementation... what happens when `seq_kd` is False? are we then still doing the GKD loss?
| 3,058 | 11,843 |
qgallouedec
| 2025-03-12T11:13:49 |
Can you provide the change that you've made? It's not clear from your explanation
| 3,057 | 0 |
vagitablebirdcode
| 2025-03-12T12:06:15 |
It is easy, just change the `SamplingParams` as follows:
```python
self.sampling_params = SamplingParams(n=self.args.num_generations)
```
Then it can generate `n` results for every input. Further, I think vLLM can process `per_device_train_batch_size * num_generations * gradient_accumulation_steps` inputs at once, which can accelerate the collection stage of training.
| 3,057 | 1 |
qgallouedec
| 2025-03-12T12:20:01 |
How is it different from the current code?
| 3,057 | 2 |
vagitablebirdcode
| 2025-03-12T12:32:49 |
I am still in the planning phase and haven't made any changes to the relevant code yet, as this improvement will be a large project. I found that the sampling in Trainer from transformers is based on batch_size and is collected step by step using accumulate_step. To generate per_device_train_batch_size * num_generations * gradient_accumulation_steps at once, we first need to modify the dataset sampler.
After the improvement, each iteration should pass in gradient_accumulation_steps * per_device_train_batch_size samples and use llm.generate to collect the results. Finally, the results should be evenly distributed across devices for update calculations.
| 3,057 | 3 |
qgallouedec
| 2025-03-12T12:54:39 |
Sorry but it's even less clear. What is the suggested change? I still can't see the difference between what you're describing and the current implementation
> It is easy, just change the `SamplingParams` as follows:
> ```python
> self.sampling_params = SamplingParams(n=self.args.num_generations)
> ```
> Then it can generate `n` result for every input. Further, I think vllm can process `per_device_train_batch_size * num_generations * gradient_accumulation_steps` inputs in one time, which can accelerate the training in the collection stage.
https://github.com/huggingface/trl/blob/fd9e5a7cabc8b7def9b64042cb147616aa0d1d04/trl/trainer/grpo_trainer.py#L525
| 3,057 | 4 |
vagitablebirdcode
| 2025-03-12T13:04:26 |
I'm very sorry, I didn't notice the main branch and was only looking at the 0.15.2 branch and a few PR branches. The code you mentioned is not present in those branches. In fact, the code on the main branch already implements my idea.
Thank you very much for your response! I will go ahead and close this issue.
| 3,057 | 5 |
HuggingFaceDocBuilderDev
| 2025-03-11T23:24:21 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3056). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,056 | 6 |
Pclanglais
| 2025-03-11T23:20:28 |
Same question for a different issue. I'm using a model with special tokens signalling different text parts, and I'm unable to access them without setting `skip_special_tokens=False`.
| 3,054 | 7 |
qgallouedec
| 2025-03-11T23:30:39 |
Thanks for the suggestion. In fact it's already been suggested in #2728, and I think this solution should actually be avoided: https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424
| 3,054 | 8 |
mtoslalibu
| 2025-03-12T13:43:46 |
> Thanks for the suggestion. In fact it's already been suggested in [#2728](https://github.com/huggingface/trl/pull/2728), and I think this solution should actually be avoided: [#2728 (comment)](https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424)
Thank you for your response. I will introduce the batch-related parameters (like max-num-seq) one by one, then. The motivation is that batch size has a strong impact on inference duration, and tuning it can reduce GRPO training duration.
| 3,054 | 9 |
qgallouedec
| 2025-03-11T16:58:24 |
@loricxy0707 can you confirm that this fixes your issue?
| 3,053 | 10 |
HuggingFaceDocBuilderDev
| 2025-03-11T17:01:10 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3053). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,053 | 11 |
HuggingFaceDocBuilderDev
| 2025-03-11T14:58:02 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3052). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,052 | 12 |
qgallouedec
| 2025-03-11T14:36:20 |
Hi, thanks for the question. Yes, we first generate, then compute the reward and the loss, then the weights are updated.
> From the looks, it feels like the parameter update is blocked until the first two steps are complete. Does that mean the GPUs (with the model weights loaded) remain idle until then?
With vLLM, yes; without it, these GPUs are used to generate.
> I believe it's the same behavior for both the approaches:
> - gathering the parameters (on a single GPU) from ds3 before generation
> - using a separate GPU with vllm for generation
Not exactly, because without vLLM the weights are gathered on all devices, so all devices generate.
| 3,050 | 13 |
yash-malik
| 2025-03-11T16:29:01 |
Thanks for the answer! That makes sense!
| 3,050 | 14 |
Rocketknight1
| 2025-03-11T12:20:54 |
cc @zucchini-nlp @qgallouedec
| 3,051 | 15 |
qgallouedec
| 2025-03-11T13:25:59 |
This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it.
| 3,051 | 16 |
SabaPivot
| 2025-03-13T07:04:02 |
> This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it.
Sure. https://github.com/om-ai-lab/VLM-R1
Team om-ai-lab has implemented the GRPO Trainer for QWEN-VL series model.
Hope this helps.
| 3,051 | 17 |
qgallouedec
| 2025-03-11T14:37:42 |
Good point, I'll be happy to receive a PR for this :)
| 3,049 | 18 |
shirinyamani
| 2025-03-28T17:29:31 |
I've commented on your PR!
| 3,049 | 19 |
jamesbraza
| 2025-03-28T17:42:19 |
Hi @shirinyamani thanks for the PR comment, but I think you're misunderstanding here, can you reopen this issue? This issue still stands.
https://github.com/Future-House/trl/pull/9 was about resolving https://github.com/huggingface/trl/issues/3018 on a fork, and by happenstance I fixed this issue in that PR too. However, I am not going to open that PR into actual `trl` as it was too hacky.
| 3,049 | 20 |
qgallouedec
| 2025-03-11T13:40:08 |
Thanks for fixing it. Can you just apply the suggestion?
| 3,048 | 21 |
HuggingFaceDocBuilderDev
| 2025-03-11T17:25:09 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3048). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,048 | 22 |
HuggingFaceDocBuilderDev
| 2025-03-11T13:53:25 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,046 | 23 |
qgallouedec
| 2025-03-10T19:27:22 |
I see the issue; maybe we should not make the assumption that all prompts are always different.
The alternative is to do something like
`prompt[::self.num_generations]`
WDYT?
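A tiny illustration of that slicing (toy values; in the trainer the list would be the gathered, repeated prompts):
```python
# With num_generations = 4, each prompt appears 4 times in a row, so taking
# every 4th element recovers one copy per group even when prompts repeat.
prompts = ["a", "a", "a", "a", "b", "b", "b", "b"]
num_generations = 4
unique_per_group = prompts[::num_generations]  # ["a", "b"]
```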
| 3,045 | 24 |
shing100
| 2025-03-11T08:52:14 |
I have the same issue.
After updating trl, training uses far more VRAM.
SFT training of a 7.8B model on 2 nodes (8 x H100 each) uses a total of 454.08 GiB.
Liger-kernel + deepspeed zero3
micro batch size 1
sequence_len 8192
https://github.com/axolotl-ai-cloud/axolotl/issues/2387
| 3,044 | 25 |
maoulee
| 2025-03-11T11:49:38 |
> I have the same issue.
>
> After updating trl, training uses far more VRAM.
>
> SFT training of a 7.8B model on 2 nodes (8 x H100 each) uses a total of 454.08 GiB.
>
> Liger-kernel + deepspeed zero3 micro batch size 1 sequence_len 8192
>
> [axolotl-ai-cloud/axolotl#2387](https://github.com/axolotl-ai-cloud/axolotl/issues/2387)
Have you solved this problem in trl? I find this code works fine in unsloth, but it runs very slowly.
| 3,044 | 26 |
qgallouedec
| 2025-03-11T17:43:21 |
Can you provide the full traceback? From here it's hard to tell where the memory peak is.
| 3,044 | 27 |
maoulee
| 2025-03-13T03:53:15 |
> Can you provide the full traceback? From here it's hard to tell where the memory peak is.
I have solved this problem by using a function from unsloth-zoo: it lets vLLM load the LoRA weights instead of moving the full model weights into vLLM, which reduces the VRAM taken by the model weights.
Here is the terminal output:
```
INFO 03-13 11:20:41 gptq_marlin.py:202] Using MarlinLinearKernel for GPTQMarlinLinearMethod
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.22it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.21it/s]
INFO 03-13 11:20:42 model_runner.py:1115] Loading model weights took 0.4302 GB
INFO 03-13 11:20:42 punica_selector.py:18] Using PunicaWrapperGPU.
INFO 03-13 11:20:55 worker.py:267] Memory profiling takes 12.69 seconds
INFO 03-13 11:20:55 worker.py:267] the current vLLM instance can use total_gpu_memory (39.39GiB) x gpu_memory_utilization (0.20) = 7.88GiB
INFO 03-13 11:20:55 worker.py:267] model weights take 0.43GiB; non_torch_memory takes 0.09GiB; PyTorch activation peak memory takes 1.39GiB; the rest of the memory reserved for KV Cache is 5.97GiB.
INFO 03-13 11:20:55 executor_base.py:110] # CUDA blocks: 32588, # CPU blocks: 21845
INFO 03-13 11:20:55 executor_base.py:115] Maximum concurrency for 2500 tokens per request: 208.56x
INFO 03-13 11:21:14 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|████████████████████████████████████████| 35/35 [00:21<00:00, 1.64it/s]
INFO 03-13 11:21:36 model_runner.py:1562] Graph capturing finished in 22 secs, took 1.74 GiB
INFO 03-13 11:21:36 llm_engine.py:431] init engine (profile, create kv cache, warmup model) took 54.14 seconds
{'loss': 0.0, 'grad_norm': 2.7884418964385986, 'learning_rate': 1.0101010101010103e-07, 'rewards/reward_len': -321.578125, 'reward': -321.578125, 'reward_std': 314.4985647201538, 'completion_length': 229.140625, 'kl': 0.0, 'epoch': 0.01}
{'loss': -0.0, 'grad_norm': 1.232833981513977, 'learning_rate': 2.0202020202020205e-07, 'rewards/reward_len': -120.75, 'reward': -120.75, 'reward_std': 145.09556579589844, 'completion_length': 79.8125, 'kl': 0.0, 'epoch': 0.02}
{'loss': -0.0, 'grad_norm': 1.4564472436904907, 'learning_rate': 3.0303030303030305e-07, 'rewards/reward_len': -170.6875, 'reward': -170.6875, 'reward_std': 182.55911830067635, 'completion_length': 104.9375, 'kl': -5.692243576049805e-06, 'epoch': 0.03}
{'loss': -0.0, 'grad_norm': 3.2063918113708496, 'learning_rate': 4.040404040404041e-07, 'rewards/reward_len': -110.671875, 'reward': -110.671875, 'reward_std': 129.3709478378296, 'completion_length': 73.71875, 'kl': -8.501112461090088e-06, 'epoch': 0.04}
{'loss': -0.0, 'grad_norm': 1.7419143915176392, 'learning_rate': 5.05050505050505e-07, 'rewards/reward_len': -234.28125, 'reward': -234.28125, 'reward_std': 278.61364382505417, 'completion_length': 128.328125, 'kl': -7.413327693939209e-06, 'epoch': 0.05}
{'loss': -0.0, 'grad_norm': 2.447553873062134, 'learning_rate': 6.060606060606061e-07, 'rewards/reward_len': -201.859375, 'reward': -201.859375, 'reward_std': 169.03560876846313, 'completion_length': 157.59375, 'kl': -6.861984729766846e-06, 'epoch': 0.06}
{'loss': -0.0, 'grad_norm': 1.1706939935684204, 'learning_rate': 7.070707070707071e-07, 'rewards/reward_len': -75.9375, 'reward': -75.9375, 'reward_std': 133.00669565796852, 'completion_length': 55.546875, 'kl': -6.794929504394531e-06, 'epoch': 0.07}
{'loss': -0.0, 'grad_norm': 2.1840455532073975, 'learning_rate': 8.080808080808082e-07, 'rewards/reward_len': -399.328125, 'reward': -399.328125, 'reward_std': 241.75924617052078, 'completion_length': 297.390625, 'kl': -4.477798938751221e-06, 'epoch': 0.08}
{'loss': -0.0, 'grad_norm': 2.187257766723633, 'learning_rate': 9.090909090909091e-07, 'rewards/reward_len': -199.421875, 'reward': -199.421875, 'reward_std': 201.53497797250748, 'completion_length': 132.828125, 'kl': -6.563961505889893e-06, 'epoch': 0.09}
{'loss': 0.0, 'grad_norm': 1.8141218423843384, 'learning_rate': 1.01010101010101e-06, 'rewards/reward_len': -334.484375, 'reward': -334.484375, 'reward_std': 281.6256628036499, 'completion_length': 225.859375, 'kl': 1.1272728443145752e-05, 'epoch': 0.1}
{'loss': 0.0, 'grad_norm': 2.5700647830963135, 'learning_rate': 1.111111111111111e-06, 'rewards/reward_len': -163.3125, 'reward': -163.3125, 'reward_std': 170.36365354061127, 'completion_length': 118.921875, 'kl': 1.0117888450622559e-05, 'epoch': 0.11}
{'loss': 0.0, 'grad_norm': 1.258663535118103, 'learning_rate': 1.2121212121212122e-06, 'rewards/reward_len': -317.734375, 'reward': -317.734375, 'reward_std': 255.7184435725212, 'completion_length': 214.5, 'kl': 1.574307680130005e-05, 'epoch': 0.12}
{'loss': 0.0, 'grad_norm': 2.4687442779541016, 'learning_rate': 1.3131313131313134e-06, 'rewards/reward_len': -397.640625, 'reward': -397.640625, 'reward_std': 343.2056703567505, 'completion_length': 255.921875, 'kl': 0.0002644285559654236, 'epoch': 0.13}
{'loss': 0.0, 'grad_norm': 2.0361921787261963, 'learning_rate': 1.4141414141414143e-06, 'rewards/reward_len': -61.234375, 'reward': -61.234375, 'reward_std': 134.06728866696358, 'completion_length': 41.28125, 'kl': 0.000720784068107605, 'epoch': 0.14}
{'loss': 0.0, 'grad_norm': 2.076171875, 'learning_rate': 1.5151515151515152e-06, 'rewards/reward_len': -68.78125, 'reward': -68.78125, 'reward_std': 85.20245426893234, 'completion_length': 50.203125, 'kl': 0.0004588514566421509, 'epoch': 0.15}
{'loss': 0.0, 'grad_norm': 2.653731107711792, 'learning_rate': 1.6161616161616164e-06, 'rewards/reward_len': -244.984375, 'reward': -244.984375, 'reward_std': 229.60207390785217, 'completion_length': 167.515625, 'kl': 0.0006752237677574158, 'epoch': 0.16}
{'loss': 0.0, 'grad_norm': 1.4232606887817383, 'learning_rate': 1.7171717171717173e-06, 'rewards/reward_len': -433.109375, 'reward': -433.109375, 'reward_std': 422.8487824201584, 'completion_length': 293.953125, 'kl': 0.0012104883790016174, 'epoch': 0.17}
{'loss': 0.0001, 'grad_norm': 1.926514983177185, 'learning_rate': 1.8181818181818183e-06, 'rewards/reward_len': -183.265625, 'reward': -183.265625, 'reward_std': 192.7100260257721, 'completion_length': 130.8125, 'kl': 0.0017363205552101135, 'epoch': 0.18}
{'loss': 0.0001, 'grad_norm': 1.6588062047958374, 'learning_rate': 1.9191919191919192e-06, 'rewards/reward_len': -81.53125, 'reward': -81.53125, 'reward_std': 122.57226317375898, 'completion_length': 56.0, 'kl': 0.0016131997108459473, 'epoch': 0.19}
{'loss': 0.0001, 'grad_norm': 1.1836130619049072, 'learning_rate': 2.02020202020202e-06, 'rewards/reward_len': -78.203125, 'reward': -78.203125, 'reward_std': 164.13510417938232, 'completion_length': 54.578125, 'kl': 0.0036144256591796875, 'epoch': 0.2}
{'loss': 0.0003, 'grad_norm': 1.376534342765808, 'learning_rate': 2.1212121212121216e-06, 'rewards/reward_len': -221.0625, 'reward': -221.0625, 'reward_std': 193.60646617412567, 'completion_length': 159.625, 'kl': 0.006711140275001526, 'epoch': 0.21}
{'loss': 0.0004, 'grad_norm': 1.8582404851913452, 'learning_rate': 2.222222222222222e-06, 'rewards/reward_len': -147.4375, 'reward': -147.4375, 'reward_std': 201.59488809108734, 'completion_length': 104.296875, 'kl': 0.01008462905883789, 'epoch': 0.22}
{'loss': 0.0004, 'grad_norm': 2.769685745239258, 'learning_rate': 2.3232323232323234e-06, 'rewards/reward_len': -73.296875, 'reward': -73.296875, 'reward_std': 126.35262995958328, 'completion_length': 58.734375, 'kl': 0.010751724243164062, 'epoch': 0.23}
{'loss': 0.0004, 'grad_norm': 1.448876976966858, 'learning_rate': 2.4242424242424244e-06, 'rewards/reward_len': -326.9375, 'reward': -326.9375, 'reward_std': 302.11334347724915, 'completion_length': 230.59375, 'kl': 0.00894937664270401, 'epoch': 0.24}
{'loss': 0.0003, 'grad_norm': 6.789086818695068, 'learning_rate': 2.5252525252525258e-06, 'rewards/reward_len': -63.015625, 'reward': -63.015625, 'reward_std': 75.98726436495781, 'completion_length': 39.59375, 'kl': 0.00805211067199707, 'epoch': 0.25}
{'loss': 0.0004, 'grad_norm': 4.663589000701904, 'learning_rate': 2.6262626262626267e-06, 'rewards/reward_len': -78.328125, 'reward': -78.328125, 'reward_std': 89.43056464195251, 'completion_length': 54.546875, 'kl': 0.01078033447265625, 'epoch': 0.26}
3%|█▋ | 26/990 [21:29<12:05:37, 45.16s/it]
```
| 3,044 | 28 |
kashif
| 2025-03-13T07:45:55 |
thanks @abhigoyal1997 having a look now
| 3,043 | 29 |
kashif
| 2025-03-13T08:21:09 |
@abhigoyal1997 is the issue that the `beta` is not the same as the beta in the paper? Also, note that `F.kl_div` takes inputs q and p to calculate KL(p||q), which can cause confusion too.
| 3,043 | 30 |
kashif
| 2025-03-13T08:28:51 |
The paper has:

In TRL it is implemented as:
$$
D_{{JSD}(\beta)}(P \| Q) = \beta KL\Big(P \Big \| \beta Q + (1- \beta)P \Big) + (1 - \beta) KL\Big(Q \Big \| \beta Q + (1 - \beta) P \Big)
$$
You can see that when beta=0, the loss is KL(student || teacher), which is `F.kl_div(teacher, student)` in TRL, and when beta=1, the loss is KL(teacher || student), which is `F.kl_div(student, teacher)`, so there is a difference between the original and the TRL formulation.
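For concreteness, a hedged sketch of the TRL-style formula quoted above (log-space inputs, 0 < beta < 1; an illustration, not the actual GKD trainer code):
```python
import math

import torch
import torch.nn.functional as F


def generalized_jsd(student_logp, teacher_logp, beta):
    """JSD(beta) with P = teacher, Q = student and mixture M = beta*Q + (1-beta)*P."""
    mixture_logp = torch.logsumexp(
        torch.stack([student_logp + math.log(beta), teacher_logp + math.log(1 - beta)]),
        dim=0,
    )
    # F.kl_div(input, target, log_target=True) computes KL(target || input)
    kl_p_m = F.kl_div(mixture_logp, teacher_logp, log_target=True, reduction="batchmean")  # KL(P || M)
    kl_q_m = F.kl_div(mixture_logp, student_logp, log_target=True, reduction="batchmean")  # KL(Q || M)
    return beta * kl_p_m + (1 - beta) * kl_q_m


# Toy usage over a 5-token vocabulary
student_logp = F.log_softmax(torch.randn(2, 5), dim=-1)
teacher_logp = F.log_softmax(torch.randn(2, 5), dim=-1)
loss = generalized_jsd(student_logp, teacher_logp, beta=0.5)
```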
| 3,043 | 31 |
HuggingFaceDocBuilderDev
| 2025-03-13T09:23:28 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,043 | 32 |
abhigoyal1997
| 2025-03-13T10:51:28 |
> The paper has: 
>
> In TRL it is implemented as:
>
> $$D_{{JSD}(\beta)}(P \| Q) = \beta KL\Big(P \Big \| \beta Q + (1- \beta)P \Big) + (1 - \beta) KL\Big(Q \Big \| \beta Q + (1 - \beta) P \Big)$$
>
> You can see when beta=0, the loss is the KL(student || teacher) which is `F.kl_div(teacher, student)` in TRL and when beta=1, the loss is KL(teacher || student) which is `F.kl_div(student, teacher)` so there is a difference in the original vs. the TRL formulation
Hi Kashif, yes this was the problem. The mixture distribution was calculated with the wrong weights.
Thanks for reviewing and approving!
| 3,043 | 33 |
skoshx
| 2025-03-10T20:45:07 |
So it turns out this was just a skill issue: `max_length` defaults to `512`, so in `_forward` the outputs get truncated to zero length, causing the error.
There should probably be an assertion like:
```py
assert config.max_length > config.max_new_tokens, "`max_length` should be higher than `max_new_tokens` or your outputs will get truncated to zero length."
```
| 3,042 | 34 |
qgallouedec
| 2025-03-11T17:47:44 |
How much memory does your system have?
| 3,039 | 35 |
qgallouedec
| 2025-03-11T17:52:48 |
From the log, it's not clear where this memory peak occurs. Can you try to be even more precise about the looping pattern you used? I'll give it a try myself as well.
| 3,039 | 36 |
qgallouedec
| 2025-03-10T05:39:50 |
Thanks for reporting. TRL doesn't support Python 3.14. Currently, 3.13 should work but it is not officially supported, see #2593. The max supported version is 3.12.
| 3,038 | 37 |
debdeepsanyal
| 2025-03-09T07:00:54 |
Same issue. The code was working with the GRPOTrainer earlier, but now it throws this RuntimeError.
| 3,035 | 38 |
debdeepsanyal
| 2025-03-09T19:37:36 |
After some further checking, I think the problem occurs if I am using `device_map='auto'`. Someone kindly fix this portion.
| 3,035 | 39 |
stevebell117
| 2025-03-10T14:29:44 |
We also have `device_map='auto'`
| 3,035 | 40 |
qgallouedec
| 2025-03-08T18:24:24 |
I am encountering this issue as well. Any idea how to solve it?
| 3,034 | 41 |
dongdongzhaoUP
| 2025-03-12T13:47:13 |
Also
| 3,034 | 42 |
jenna-russell
| 2025-03-18T19:14:51 |
I also am encountering this issue
| 3,034 | 43 |
lilakk
| 2025-03-18T19:18:25 |
I've been encountering the same issue!
| 3,034 | 44 |
Bingogogogogo
| 2025-03-19T11:35:24 |
same issue
| 3,034 | 45 |
Vanchrn
| 2025-03-22T02:38:28 |
same
| 3,034 | 46 |
wofeishenling
| 2025-03-23T11:55:03 |
same issue
| 3,034 | 47 |
naajeehxe
| 2025-03-24T11:54:01 |
same here...
| 3,034 | 48 |
anakin87
| 2025-03-30T13:16:27 |
Related Transformers PR: https://github.com/huggingface/transformers/pull/36162
As a workaround, you can try installing `transformers==4.48.3`.
| 3,034 | 49 |
qgallouedec
| 2025-04-01T18:16:40 |
Could be related: https://github.com/huggingface/transformers/pull/36729
| 3,034 | 50 |
MrZhengXin
| 2025-04-11T03:01:40 |
How about setting `cache_implementation='dynamic'` in `GRPOConfig`?
https://github.com/huggingface/trl/blob/d625c5533a6b1c84d3565c8080857f6bb81c538a/trl/trainer/grpo_config.py#L80
```python
training_args = GRPOConfig(
# ...
cache_implementation="dynamic",
)
trainer = GRPOTrainer(
model=model,
processing_class=tokenizer,
args=training_args,
train_dataset=dataset,
# ...
)
```
This could be the issue of StaticCache: https://github.com/huggingface/transformers/issues/37189
| 3,034 | 51 |
singhalarchit
| 2025-04-14T20:12:06 |
@MrZhengXin .. this did not solve the issue. I am using Qwen-2.5 7B.
| 3,034 | 52 |
qgallouedec
| 2025-04-15T18:38:56 |
For context, this error only occurs when generating with transformers. So, to solve this problem and make generation faster at the same time, I recommend using vLLM instead; see the documentation: https://huggingface.co/docs/trl/en/grpo_trainer#speed-up-training-with-vllm-powered-generation
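For reference, a minimal sketch of the vLLM-backed setup described in the docs (field values are illustrative; check the linked page for the options available in your TRL version):
```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="grpo-vllm-demo",  # hypothetical output path
    use_vllm=True,                # generate with vLLM instead of transformers
    vllm_mode="colocate",         # run vLLM inside the training processes
)
```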
| 3,034 | 53 |
TriLoo
| 2025-04-18T06:25:30 |
same here
| 3,034 | 54 |
PolarisHsu
| 2025-04-18T07:01:08 |
same issue
| 3,034 | 55 |
ChrisKimZHT
| 2025-04-19T06:41:49 |
I'm encountering this issue during DPO training too. Setting `gradient_checkpointing` to `true` OR installing `transformers==4.48.3` can temporarily work around the problem.
| 3,034 | 56 |
p1kachu2233
| 2025-04-21T22:08:25 |
same
| 3,034 | 57 |
harveyaot
| 2025-04-24T07:15:54 |
In my case it was the default collator, which uses `padding_side='right'` without any dynamic check for whether flash_attn_2 is used.
After passing in a customized data collator, the issue was resolved.
[Code to update](https://github.com/huggingface/trl/blob/89556c8cbf1a816539167a46cdf285419e057fec/trl/trainer/sft_trainer.py#L131)
| 3,034 | 58 |
shon-otmazgin-wix
| 2025-05-19T20:10:33 |
Setting `use_cache=False` when creating the model solved my issue.
| 3,034 | 59 |
ahans30
| 2025-06-10T15:55:45 |
Thanks @harveyaot, that worked for me. Just don't use the default collator. Below is the fixed version that worked for me.
```python
import torch

from trl.trainer.sft_trainer import DataCollatorForLanguageModeling
from trl.trainer.utils import pad


class CollatorWithPaddingSideFixed(DataCollatorForLanguageModeling):
def torch_call(self, examples):
# Convert to tensor
input_ids = [torch.tensor(example["input_ids"]) for example in examples]
attention_mask = [torch.ones_like(ids) for ids in input_ids]
labels = [torch.tensor(example["input_ids"]) for example in examples]
if self.completion_only_loss and "completion_mask" in examples[0]:
completion_mask = [torch.tensor(example["completion_mask"]) for example in examples]
# Pad
output = {}
output["input_ids"] = pad(
input_ids,
padding_value=self.pad_token_id,
padding_side="left",
pad_to_multiple_of=self.pad_to_multiple_of,
)
output["attention_mask"] = pad(
attention_mask, padding_value=0, padding_side="left", pad_to_multiple_of=self.pad_to_multiple_of
)
output["labels"] = pad(
labels, padding_value=-100, padding_side="left", pad_to_multiple_of=self.pad_to_multiple_of
)
if self.completion_only_loss and "completion_mask" in examples[0]:
completion_mask = pad(
completion_mask, padding_value=0, padding_side="left", pad_to_multiple_of=self.pad_to_multiple_of
)
output["labels"][completion_mask == 0] = -100 # mask everything that is not in the completion
return output
```
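A hypothetical usage sketch (the constructor arguments are assumptions about the dataclass fields, not a documented API):
```python
# Pass the left-padding collator to SFTTrainer so flash_attn_2 sees left-padded batches.
collator = CollatorWithPaddingSideFixed(pad_token_id=tokenizer.pad_token_id)
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collator,
)
```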
| 3,034 | 60 |